In this post I want to show you how to set up a production-grade static site in AWS. The catch? We aren’t going to perform any click-ops in the AWS Console, and we’re going to eliminate every manual step we can.
Instead we are going to build “Living Infrastructure” using Infrastructure as Code (IaC). To make this enterprise-ready, we will decouple the setup into three distinct layers. This ensures that a change to our CSS won’t accidentally trigger a change to our DNS settings.
This is going to be similar to the previous blog post, but instead of deploying a plain static site we are going to deploy a static site built with Astro, which also means introducing a build step.
The Roadmap
Building this involves several steps, and we can break them down into three phases.
Phase 1: The Global Foundation
This is the "level 0" layer. These resources are created once and shared across your entire AWS account. This is going to be its own GitHub repo and we will ask CloudFormation to create the following resources for us.- Create a GitHub Connection: The bridge between your code and AWS.
- Create a Route 53 Hosted Zone (Because my registrar is not AWS)
- Set up an SSL Certificate
- Create a Git Sync IAM Role: The role that allows CloudFormation to build the resources on your behalf.
Note: If you have bought your domain using AWS then a Hosted Zone will be created for you.
Phase 2: The Site Infrastructure
This layer is specific to your site. It uses resources created by the Global Foundation in addition to creating its own resources. This is going to be its own repo.
- Create an S3 bucket (where the website files will live).
- Create a bucket policy and apply it to our S3 bucket.
- Create an artifact S3 bucket.
- Create a CodeBuild project to build the Astro site, plus its service role.
- Create a CodePipeline (the automated way to move code from GitHub to S3), plus its service role.
- Create a CloudFront Origin Access Control (OAC).
- Create a CloudFront distribution.
- Create a domain record.
Phase 3: Create the Stack
In the last phase we will create the Stack in the AWS Console using Sync from Git.
Introducing the Architect: AWS CloudFormation
Building your foundation is like drawing a blueprint for a house. You don't just start laying bricks; you define where the walls go, where the pipes run, and who has the keys. In AWS, our "blueprint" language of choice is CloudFormation. It allows us to write a simple text file (in YAML) that tells AWS exactly what we want.
Instead of logging into AWS Console and clicking “Create Bucket” you give CloudFormation a script. It reads the script, realizes you need a bucket, a certificate, and a DNS record, and then it builds them for you in perfect order.
Phase 1: Setting up the Global Foundation
We will start by creating a new Global Resources repository. This is the bedrock of our infrastructure. These resources are "global" because they can be consumed by any other part of our infrastructure. We are going to build this template step by step so you can see how each piece of the puzzle fits together.
This repo contains two files:
- The Template: your YAML file, the source of truth.
- The Stack: what CloudFormation creates. When you “deploy” your template, AWS groups all the resources under one stack.
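Before we write real resources, it helps to see the overall shape of a template. Here is a minimal, hypothetical template showing the four top-level sections we will use (only Resources is strictly required; the names here are placeholders):

```yaml
AWSTemplateFormatVersion: 2010-09-09
Description: A human-readable summary of what this stack builds.
Parameters:          # optional inputs, supplied at deploy time
  ExampleParam:
    Type: String
Resources:           # the only required section
  ExampleBucket:
    Type: 'AWS::S3::Bucket'
Outputs:             # optional values, exportable for other stacks
  ExampleBucketName:
    Value: !Ref ExampleBucket
```

Our real template will fill in these sections one resource at a time.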
Step 1: The GitHub Connection
First, we need to let AWS talk to GitHub. We begin by creating a new YAML file in our repo; let's call this file global-infra.yaml. The file starts by looking something like this:
AWSTemplateFormatVersion: 2010-09-09
Description: Foundational resources for yourdomain.com - Includes DNS (Route 53), SSL (ACM), GitHub Connections, and IAM roles for Git Sync.
Now we are ready to add resources to this file. CloudFormation will read this file and create the resources for us.
Resources:
  SharedGitHubConnection:
    Type: AWS::CodeConnections::Connection
    Properties:
      ConnectionName: github-org-connection
      ProviderType: GitHub
Here we are asking CloudFormation to create a GitHub connection resource.
Now is a good time to install the AWS Connector for GitHub and complete the necessary setup; the connection stays in a Pending state until you finish the handshake in the console.
The next thing our site will need is an SSL certificate, so let's ask CloudFormation to create that for us.
Resources:
  YourDomainCertificate:
    Type: 'AWS::CertificateManager::Certificate'
    Properties:
      DomainName: yourdomain.com
      SubjectAlternativeNames:
        - www.yourdomain.com
      ValidationMethod: DNS
      DomainValidationOptions:
        - DomainName: yourdomain.com
          HostedZoneId: !Ref YourDomainHostedZone
        - DomainName: www.yourdomain.com
          HostedZoneId: !Ref YourDomainHostedZone
Here we create a new resource AWS::CertificateManager::Certificate and label it YourDomainCertificate so we can reference it later.
Now we need a place to manage our domain records. In AWS this is called a Hosted Zone.
Let's add the Hosted Zone as a resource in our script.
Resources:
  YourDomainHostedZone:
    Type: 'AWS::Route53::HostedZone'
    Properties:
      Name: yourdomain.com
If you bought your domain on AWS, a Hosted Zone was likely created for you. To keep everything automated, we are going to import or recreate that zone in our script so that our code has full control over our DNS. Since I didn't buy my domain name on AWS, I need to create a Hosted Zone.
If your registrar is AWS, then AWS has already created your Hosted Zone, and all we need is a parameter:
Parameters:
  YourDomainHostedZoneId:
    Type: String
    Description: The ID of the Hosted Zone created by AWS when the domain was purchased.
Resources:
  YourDomainCertificate:
    Type: 'AWS::CertificateManager::Certificate'
    Properties:
      DomainName: yourdomain.com
      SubjectAlternativeNames:
        - www.yourdomain.com
      ValidationMethod: DNS
      DomainValidationOptions:
        - DomainName: yourdomain.com
          HostedZoneId: !Ref YourDomainHostedZoneId
        - DomainName: www.yourdomain.com
          HostedZoneId: !Ref YourDomainHostedZoneId
The parameter is populated in your deployment file, which we will create later.
Out of the box, CloudFormation doesn't have permission to create resources on your behalf, so we need to create an IAM Role with the proper permissions and policies so it can create all our resources for us.
Resources:
  CloudFormationGitSyncRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: CloudFormationGitSyncRole
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - cloudformation.sync.codeconnections.amazonaws.com
                - cloudformation.amazonaws.com
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AWSCloudFormationFullAccess
        - arn:aws:iam::aws:policy/AmazonS3FullAccess
        - arn:aws:iam::aws:policy/IAMFullAccess
        - arn:aws:iam::aws:policy/AWSCodePipeline_FullAccess
      Policies:
        - PolicyName: GitSyncExtraPermissions
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Sid: CodeBuildManagement
                Effect: Allow
                Action:
                  - 'codebuild:UpdateProject'
                  - 'codebuild:CreateProject'
                  - 'codebuild:DeleteProject'
                  - 'codebuild:BatchGetProjects'
                Resource: !Sub 'arn:aws:codebuild:${AWS::Region}:${AWS::AccountId}:project/YourDomainStaticBuild'
              - Sid: EventBridgeManagement
                Effect: Allow
                Action:
                  - 'events:PutRule'
                  - 'events:PutTargets'
                  - 'events:DescribeRule'
                Resource: '*'
              - Sid: PassConnectionPermission
                Effect: Allow
                Action:
                  - 'codeconnections:PassConnection'
                  - 'codeconnections:UseConnection'
                  - 'codestar-connections:PassConnection'
                  - 'codestar-connections:UseConnection'
                Resource: !Sub 'arn:aws:codeconnections:${AWS::Region}:${AWS::AccountId}:connection/*'
              - Sid: InfrastructurePermissions
                Effect: Allow
                Action:
                  - 'cloudfront:GetFunction'
                  - 'cloudfront:CreateFunction'
                  - 'cloudfront:DeleteFunction'
                  - 'cloudfront:DescribeFunction'
                  - 'cloudfront:PublishFunction'
                  - 'cloudfront:UpdateFunction'
                  - 'cloudfront:CreateOriginAccessControl'
                  - 'cloudfront:GetOriginAccessControl'
                  - 'cloudfront:UpdateOriginAccessControl'
                  - 'cloudfront:DeleteOriginAccessControl'
                  - 'cloudfront:CreateDistribution'
                  - 'cloudfront:GetDistribution'
                  - 'cloudfront:UpdateDistribution'
                  - 'cloudfront:DeleteDistribution'
                  - 'cloudfront:TagResource'
                  - 'route53:CreateHostedZone'
                  - 'route53:GetHostedZone'
                  - 'route53:ChangeResourceRecordSets'
                  - 'route53:ListResourceRecordSets'
                  - 'route53:GetChange'
                  - 'acm:DescribeCertificate'
                  - 'acm:ListCertificates'
                  - 'acm:RequestCertificate'
                Resource: '*'
The above role has some extra policies applied to it which were not present in the example from the previous blog post. We need these because CloudFormation must manage CloudFront Functions and Origin Access Control (OAC) for this setup.
The next thing to do is to export the ARNs of the resources we created, along with the Hosted Zone ID. We do this so another stack can reference these resources and start using them.
Outputs:
  ExportedConnectionArn:
    Value: !Ref SharedGitHubConnection
    Export:
      Name: "GlobalResourcesStack-GitHubConnectionArn"
  CertificateArn:
    Value: !Ref YourDomainCertificate
    Export:
      Name: GlobalResources-CertificateArn
  HostedZoneId:
    Value: !Ref YourDomainHostedZone
    Export:
      Name: GlobalResources-HostedZoneId
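Once these exports exist, any other stack in the same account and region can consume them with !ImportValue. For example, this is the kind of snippet our Phase 2 template will use (note that CloudFormation will refuse to delete or change an exported value while another stack imports it):

```yaml
# Somewhere in the Phase 2 (site) template:
ViewerCertificate:
  AcmCertificateArn: !ImportValue GlobalResources-CertificateArn
  SslSupportMethod: sni-only
```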
The last thing we need to do in this stack is to create a deployment file. This is arguably the simplest file in our setup.
Name this file deployment-file.yaml.
template-file-path: global-infra.yaml
If Route 53 is the registrar of your domain then you need to set the parameter in your deployment file
template-file-path: global-infra.yaml
parameters:
  YourDomainHostedZoneId: YourHostedZoneID
Find your Hosted Zone ID by navigating to Route 53 -> Hosted Zones -> select your zone -> expand Hosted zone details -> copy the Hosted zone ID and paste it in.
Let's push this to GitHub; our global resources repo is now complete. Let's move on to the next phase: creating our static site stack.
Phase 2: Setting up Astro Site Infrastructure
Now we are ready to create the infrastructure that will build the S3 bucket, CodePipeline, CloudFront distribution, and domain record needed to bring our site to the public.
We will begin by creating a new repo. Inside this repo, create a file called site-infra.yaml and add the following to the top of the file:
AWSTemplateFormatVersion: 2010-09-09
Description: Infrastructure for yourdomain.com - S3 Hosting, CloudFront CDN, and CI/CD Pipeline.
Now we need to create a resource which will tell CloudFormation to create an S3 bucket for us
Resources:
  S3WebsiteBucket:
    Type: 'AWS::S3::Bucket'
    DeletionPolicy: Retain
    UpdateReplacePolicy: Retain
    Properties:
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      VersioningConfiguration:
        Status: Enabled
      LifecycleConfiguration:
        Rules:
          - Id: AutoCleanupOldVersions
            Status: Enabled
            NoncurrentVersionExpiration:
              NoncurrentDays: 30
This tells CloudFormation to create an S3 bucket and make it private. We are keeping the bucket private and only allowing CloudFront to see the files via Origin Access Control (OAC). Versioning is enabled for this bucket so we can perform rollbacks easily. The old versions are kept for 30 days.
Now let's apply a policy to the bucket that allows CloudFront, and only CloudFront, to read the files inside it.
Resources:
  BucketPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      Bucket: !Ref S3WebsiteBucket
      PolicyDocument:
        Id: MyPolicy
        Version: 2012-10-17
        Statement:
          - Sid: AllowCloudFrontServicePrincipal
            Effect: Allow
            Principal:
              Service: cloudfront.amazonaws.com
            Action: 's3:GetObject'
            Resource: !Sub '${S3WebsiteBucket.Arn}/*'
            Condition:
              StringEquals:
                AWS:SourceArn: !Sub "arn:aws:cloudfront::${AWS::AccountId}:distribution/${CloudFrontDistribution}"
Here we reference the bucket using the !Ref intrinsic function, and the SourceArn condition ensures only our own distribution can read from it.
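Since we lean on intrinsic functions throughout these templates, here is a quick side-by-side of the three we use most. The commented values are illustrative, not real identifiers:

```yaml
# !Ref returns a resource's default identifier (for S3 buckets, the bucket name)
Bucket: !Ref S3WebsiteBucket              # e.g. "mystack-s3websitebucket-abc123"
# !GetAtt returns a named attribute of a resource
Resource: !GetAtt S3WebsiteBucket.Arn     # e.g. "arn:aws:s3:::mystack-s3websitebucket-abc123"
# !Sub substitutes ${...} variables inside a string
Resource: !Sub '${S3WebsiteBucket.Arn}/*' # ARN with a wildcard suffix appended
```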
The pipeline cannot take the code from your repo and deploy it directly to the website bucket. It needs an intermediate bucket where it stores the zipped version of our git repo. So let's create that bucket.
Resources:
  PipelineArtifactBucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      VersioningConfiguration:
        Status: Enabled
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
Here we create our artifact bucket, where the zipped version of the site will live.
Since our site is written in Astro, it requires a build step. Let's create that now with a new AWS::CodeBuild::Project resource.
Resources:
  BuildProject:
    Type: 'AWS::CodeBuild::Project'
    Properties:
      Name: YourDomainStaticBuild
      ServiceRole: !GetAtt CodeBuildServiceRole.Arn
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/amazonlinux2-x86_64-standard:5.0
        EnvironmentVariables:
          - Name: CLOUDFRONT_DIST_ID
            Value: !Ref CloudFrontDistribution
      Source:
        Type: CODEPIPELINE
        BuildSpec: |
          version: 0.2
          phases:
            install:
              runtime-versions:
                nodejs: 20
              commands:
                - npm install
            build:
              commands:
                - npm run build
            post_build:
              commands:
                - aws cloudfront create-invalidation --distribution-id $CLOUDFRONT_DIST_ID --paths "/*"
          artifacts:
            base-directory: dist
            files:
              - '**/*'
Here we create a new CodeBuild project that runs under a service role we will define shortly. It specifies a Linux environment and a buildspec with install, build, and post-build commands, and declares that the dist directory contains the final output.
The post-build create-invalidation command removes cached files from CloudFront before they expire, so the next time a viewer requests a file, CloudFront returns to the origin to fetch the latest version.
Now let's create the CodeBuild service role referenced in our build project resource.
Resources:
  CodeBuildServiceRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: codebuild.amazonaws.com
            Action: 'sts:AssumeRole'
      Policies:
        - PolicyName: CodeBuildAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: !Sub 'arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/codebuild/YourDomainStaticBuild*'
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:PutObject
                  - s3:GetBucketVersioning
                Resource:
                  - !Sub '${PipelineArtifactBucket.Arn}/*'
                  - !GetAtt PipelineArtifactBucket.Arn
              - Effect: Allow
                Action:
                  - 'cloudfront:CreateInvalidation'
                Resource: !Sub 'arn:aws:cloudfront::${AWS::AccountId}:distribution/*'
Next we create an IAM Role that grants permissions to the pipeline we are about to build. We aren't giving it full administrative access; instead, we practice the principle of least privilege. We grant only the specific permissions it needs: the ability to use our GitHub connection, the ability to read/write exactly two S3 buckets, and the ability to start our build project.
Resources:
  CodePipelineServiceRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: codepipeline.amazonaws.com
            Action: 'sts:AssumeRole'
      Policies:
        - PolicyName: PipelineAccessPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:GetObjectVersion
                  - s3:GetBucketVersioning
                  - s3:PutObject
                  - s3:ListBucket
                  - s3:DeleteObject
                Resource:
                  - !GetAtt S3WebsiteBucket.Arn
                  - !GetAtt PipelineArtifactBucket.Arn
                  - !Sub '${S3WebsiteBucket.Arn}/*'
                  - !Sub '${PipelineArtifactBucket.Arn}/*'
              - Effect: Allow
                Action: 'codestar-connections:UseConnection'
                Resource: !ImportValue "GlobalResourcesStack-GitHubConnectionArn"
              - Effect: Allow
                Action:
                  - 'codebuild:BatchGetBuilds'
                  - 'codebuild:StartBuild'
                Resource: !GetAtt BuildProject.Arn
This IAM role is a permanent part of our infrastructure. It sits quietly in our account, granting the pipeline the power to act only when a code change triggers a deployment.
With our security pass (the IAM role) in place, we can now build the conveyor belt: the pipeline. This is what it looks like:
Resources:
  StaticSitePipeline:
    Type: 'AWS::CodePipeline::Pipeline'
    Properties:
      RoleArn: !GetAtt CodePipelineServiceRole.Arn
      ArtifactStore:
        Type: S3
        Location: !Ref PipelineArtifactBucket
      Stages:
        - Name: Source
          Actions:
            - Name: GitHubSource
              ActionTypeId:
                Category: Source
                Owner: AWS
                Provider: CodeStarSourceConnection
                Version: '1'
              OutputArtifacts:
                - Name: SourceArtifact
              Configuration:
                ConnectionArn: !ImportValue "GlobalResourcesStack-GitHubConnectionArn"
                FullRepositoryId: github_username/yourrepo
                BranchName: main
                OutputArtifactFormat: CODE_ZIP
        - Name: Build
          Actions:
            - Name: AstroBuild
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: '1'
              InputArtifacts:
                - Name: SourceArtifact
              OutputArtifacts:
                - Name: BuildArtifact
              Configuration:
                ProjectName: !Ref BuildProject
        - Name: Deploy
          Actions:
            - Name: S3Deploy
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Provider: S3
                Version: '1'
              InputArtifacts:
                - Name: BuildArtifact
              Configuration:
                BucketName: !Ref S3WebsiteBucket
                Extract: 'true'
                CacheControl: max-age=0,no-cache,no-store,must-revalidate
The FullRepositoryId is where you specify your static site repo.
Here we create the pipeline, which uses the role we created earlier. We have defined three stages: source, build, and deploy. The source stage uses the GitHub connection created by the global resources stack; the build stage uses the build project to build our site; and the deploy stage takes the zipped file from the artifact bucket, unzips it, and moves the files into the website S3 bucket.
Our files now move into S3 automatically, but we are not done yet. The bucket we created does not allow public access to its files and directories, because we are going to use Origin Access Control instead. Let's add that resource.
Resources:
  CloudFrontOAC:
    Type: 'AWS::CloudFront::OriginAccessControl'
    Properties:
      OriginAccessControlConfig:
        Description: "OAC for yourdomain.com S3 Bucket"
        Name: !Sub "${AWS::StackName}-OAC"
        OriginAccessControlOriginType: s3
        SigningBehavior: always
        SigningProtocol: sigv4
What we have is an S3 bucket containing our files, but the bucket is not public; we use CloudFront Origin Access Control (OAC) to grant access to the files in the bucket. This means any request that tries to reach the bucket directly, bypassing CloudFront, is denied. There is one more wrinkle: with this setup, S3 only resolves index.html at the root, so a request for a subdirectory like /blog/ would fail. A small CloudFront Function fixes this by rewriting subdirectory requests to their index.html.
Resources:
  SubdirectoryIndexFunction:
    Type: 'AWS::CloudFront::Function'
    Properties:
      AutoPublish: true
      Name: !Sub "${AWS::StackName}-IndexRewrite"
      FunctionConfig:
        Comment: "Appends index.html to subdirectory requests"
        Runtime: cloudfront-js-1.0
      FunctionCode: |
        function handler(event) {
          var request = event.request;
          var uri = request.uri;
          if (uri.endsWith('/')) {
            request.uri += 'index.html';
          } else if (!uri.includes('.')) {
            request.uri += '/index.html';
          }
          return request;
        }
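To see exactly what this rewrite does, here is the same logic pulled out into plain JavaScript you can run locally with Node (the handler and event shape mirror the CloudFront Function above):

```javascript
// Same URI-rewrite logic as the CloudFront Function, runnable locally.
function handler(event) {
  var request = event.request;
  var uri = request.uri;
  if (uri.endsWith('/')) {
    request.uri += 'index.html';      // trailing slash: /blog/ -> /blog/index.html
  } else if (!uri.includes('.')) {
    request.uri += '/index.html';     // extensionless path: /blog -> /blog/index.html
  }
  return request;
}

// A few representative requests:
console.log(handler({ request: { uri: '/' } }).uri);            // /index.html
console.log(handler({ request: { uri: '/blog/' } }).uri);       // /blog/index.html
console.log(handler({ request: { uri: '/blog' } }).uri);        // /blog/index.html
console.log(handler({ request: { uri: '/styles.css' } }).uri);  // /styles.css (untouched)
```

Requests for real files (anything with a dot in the path) pass through unchanged, so assets like CSS and images are unaffected.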
Let's add a CloudFront distribution so we can serve the site over HTTPS with our SSL certificate. The update looks like this:
Resources:
  CloudFrontDistribution:
    Type: 'AWS::CloudFront::Distribution'
    Properties:
      DistributionConfig:
        Aliases:
          - yourdomain.com
          - www.yourdomain.com
        DefaultRootObject: index.html
        Enabled: true
        Origins:
          - DomainName: !GetAtt S3WebsiteBucket.RegionalDomainName
            Id: S3Origin
            OriginAccessControlId: !GetAtt CloudFrontOAC.Id
            S3OriginConfig:
              OriginAccessIdentity: ""
        DefaultCacheBehavior:
          TargetOriginId: S3Origin
          ViewerProtocolPolicy: redirect-to-https
          ForwardedValues:
            QueryString: false
          FunctionAssociations:
            - EventType: viewer-request
              FunctionARN: !GetAtt SubdirectoryIndexFunction.FunctionARN
        ViewerCertificate:
          AcmCertificateArn: !ImportValue GlobalResources-CertificateArn
          SslSupportMethod: sni-only
CloudFront sits in front of our S3 bucket and we have also added an SSL certificate.
We have our files in S3, our automation pipeline is ready, and CloudFront is standing by to
serve our site securely via SSL. The final piece of the puzzle is telling the internet
that yourdomain.com belongs to our CloudFront.
Resources:
  DomainRecord:
    Type: 'AWS::Route53::RecordSet'
    Properties:
      HostedZoneId: !ImportValue GlobalResources-HostedZoneId
      Name: yourdomain.com
      Type: A
      AliasTarget:
        DNSName: !GetAtt CloudFrontDistribution.DomainName
        HostedZoneId: Z2FDTNDATAQYW2
You’ll notice a strange ID Z2FDTNDATAQYW2. Don’t worry, I didn’t leak my private ID! This is a universal constant provided by AWS to represent CloudFront. It’s the same for everyone.
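One gap worth noting: the record above only covers the apex domain, yet our certificate and CloudFront aliases also include www.yourdomain.com. If you want www to resolve as well, a second RecordSet along these lines should work (a sketch under the same stack, not a verified part of the original template):

```yaml
  WwwDomainRecord:
    Type: 'AWS::Route53::RecordSet'
    Properties:
      HostedZoneId: !ImportValue GlobalResources-HostedZoneId
      Name: www.yourdomain.com
      Type: A
      AliasTarget:
        DNSName: !GetAtt CloudFrontDistribution.DomainName
        HostedZoneId: Z2FDTNDATAQYW2  # the same universal CloudFront zone ID
```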
With this step added, Phase 2 is complete. You have:
- S3: storage.
- IAM roles: permissions.
- CodeBuild: the build step.
- CodePipeline: automation.
- CloudFront: security and speed.
- Route 53: the address.
Now we have to create a deployment file for this stack. Create a new file called deployment-file.yaml; it looks like this:
template-file-path: site-infra.yaml
Let's push this to GitHub; our Astro site infrastructure repo is now complete.
The last thing to do is to create our stacks in AWS, and that process is the same Sync from Git flow described in Phase 3 of the previous blog post.
Git References
- Git repository for the global infrastructure.
- Git repository for the static site infrastructure.
Resources
- Working with CloudFormation
- How Git sync works with CloudFormation
- AWS::CodeConnections::Connection
- AWS::CertificateManager::Certificate
- AWS::Route53::HostedZone
- AWS::IAM::Role
- AWS::S3::Bucket
- UpdateReplacePolicy
- DeletionPolicy
- AWS::CloudFront::OriginAccessControl
- AWS::CodeBuild::Project
- AWS::CloudFront::Distribution
- Ref
- Fn::Sub
- Route 53 template snippets