Making a static site with Hugo and S3

What’s Hugo?

Hugo is a static site generator, part of a new wave of sites powered by the JAM stack - JavaScript, APIs, and Markup. JAM stack sites have no backend: they aren’t running Node, PHP, or a server-side CMS like WordPress or Drupal on top of them.

This means that sites that progressively load in assets - like single page applications (SPAs) - also don’t count. All rendering is done before the site is pushed to production, then static assets are served up by your hosting provider of choice.

The content format is usually Markdown (which is what this post is written in). Client-side JavaScript handles user interaction and can call out to APIs for any dynamic content.

Check out Hugo’s quick start to build a site in a few minutes. This tutorial focuses on the deployment side; it’s not about how to configure Hugo itself.
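
If you haven’t built the site yet, the gist of the quick start looks roughly like this (the theme and post names are just placeholders - substitute your own):

# A rough sketch of Hugo’s quick start; see the official docs for the current steps.
hugo new site my-site            # scaffold a new site skeleton
cd my-site
git init
# "ananke" is only an example theme; pick whichever you like.
git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke.git themes/ananke
echo 'theme = "ananke"' >> config.toml
hugo new posts/my-first-post.md  # create your first piece of content
hugo server -D                   # preview locally, drafts included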

Cost of running your own Hugo site on S3

These numbers were checked in June 2020 for us-east-2 (Ohio).

  • S3
    • First 50 TB/month of storage is $0.023/GB
    • $0.0004 per 1,000 GETs (each request for a page or image counts as one GET)
    • $0.09/GB for data transfer out, up to ~10 TB; the first GB each month is free
    • Assume…
      • 1 GB of data in storage = $0.023
      • You get 50,000 page hits/month
      • Each page loads ~30 assets (GETs): 30 * 50,000 pages = 1,500,000 GETs, billed per 1,000 = 1,500 * $0.0004 = $0.60
      • Average total page transfer (with browser caching) is 50 KB: 50 KB * 50,000 requests = 2.5 GB - 1 GB free = 1.5 GB * $0.09 = $0.135
  • Route 53 - https://aws.amazon.com/route53/pricing/
    • Hosted zone: $0.50 per hosted zone / month for the first 25 hosted zones
    • $0.40 per million queries for the first 1 billion queries / month
    • Assuming roughly 1 million DNS queries a month and a single hosted zone, that’s $0.90/month.

You’re going to pay $0.023 for storage, $0.60 for requests, $0.135 for data transfer, and $0.90 for DNS.

That sums to a whopping $1.66/month to host your own blog that’s handling 50,000 requests/month.
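
If you want to sanity-check that math, here’s the same back-of-the-envelope estimate as a quick shell calculation (the traffic numbers are the assumptions above, not measurements):

# Rough monthly cost estimate using the assumed traffic from the list above.
STORAGE=$(echo "1 * 0.023" | bc -l)                          # 1 GB stored
REQUESTS=$(echo "30 * 50000 / 1000 * 0.0004" | bc -l)        # ~30 GETs/page, 50,000 pages
TRANSFER=$(echo "(50 * 50000 / 1000000 - 1) * 0.09" | bc -l) # 50 KB/page out, first GB free
DNS="0.90"                                                   # hosted zone + DNS queries
echo "${STORAGE} + ${REQUESTS} + ${TRANSFER} + ${DNS}" | bc -l  # ≈ 1.66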

Great, I have a Hugo site. Now what?

Congrats on the new content. Time to get it to your loyal fan base.

The hugo CLI has a built-in way to deploy to S3, which is what we’re going to use to publish our site. This is the simplest way to make your site work. I had this dream of automated pipelines building the site every time I typed git push - and there are ways to do that - but I wanted the cheapest and fastest way to get this site up and running. The more steps I added for myself, the farther I was from actually writing anything. Don’t let technical details overwhelm your goal.

Get an AWS account

Sign up for an AWS (Amazon Web Services) account if you don’t have one.

If this is a brand new account, create an IAM user with programmatic access for your personal use. Note the access and secret keys. Install the aws CLI, and run aws configure. For region, I prefer us-east-2.
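
If you’d rather script that step than answer the interactive prompts, aws configure set does the same thing (the key values here are placeholders for your own):

# Same effect as running `aws configure` interactively.
aws configure set aws_access_key_id AKIAXXXXXXXXXXXXXXXX
aws configure set aws_secret_access_key "your-secret-key-here"
aws configure set region us-east-2
aws configure set output json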

Install the AWS CLI

The rest of this page assumes you have the AWS CLI v2 installed. See https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html for details on how to do so on your platform.

On OSX I had some problems installing the v2 CLI. StackOverflow helped, as per usual.
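
For reference, the documented macOS install is roughly the following (check the AWS page above for the current package URL):

# Download and install the AWS CLI v2 package, then confirm the version.
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /
aws --version   # should report aws-cli/2.x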

Create S3 buckets for hosting

First we’ll create the buckets, apply the correct bucket policies so people can access your site, then enable static website hosting. “Buckets?” you may be asking. Yes, we need two buckets. You want your site accessible from both “http://example.com” and “http://www.example.com”, which means you need one bucket named “example.com” and another named “www.example.com”. We’ll come back to this later.

These will become publicly visible S3 buckets. Do not put anything confidential or private (e.g. passwords) in them.

Replace “example.com” with whatever your custom URL will be. The buckets will exist in us-east-2, unless you change the region.

# Change SITE_DOMAIN to whatever your base URL will be.
SITE_DOMAIN="example.com"
aws s3api create-bucket --bucket www.${SITE_DOMAIN} --region us-east-2 --create-bucket-configuration LocationConstraint=us-east-2 --acl private
aws s3api create-bucket --bucket ${SITE_DOMAIN} --region us-east-2 --create-bucket-configuration LocationConstraint=us-east-2  --acl private

Great, now we have some buckets. Next we need to allow publicly readable objects. The buckets are set to private to start, since the public-read ACL would give anyone the ability to list every file in your bucket. Not a big deal, but let’s make it function like a website: first, turn off the bucket-level public access block so the bucket policy we add next can take effect.

aws s3api put-public-access-block --bucket www.${SITE_DOMAIN} --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false
aws s3api put-public-access-block --bucket ${SITE_DOMAIN} --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false

Great, now we have some buckets that can hold public objects. Next we add a bucket policy that makes every object publicly readable, so when Hugo uploads files we don’t have to change each file’s ACL individually.

POLICY='{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::www.'${SITE_DOMAIN}'/*"
        }
    ]
}'
aws s3api put-bucket-policy --bucket www.${SITE_DOMAIN} --policy "${POLICY}"
POLICY='{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::'${SITE_DOMAIN}'/*"
        }
    ]
}'
aws s3api put-bucket-policy --bucket ${SITE_DOMAIN} --policy "${POLICY}"

Now we can enable static website hosting on the S3 buckets. This is where you need to decide whether you want “http://example.com” or “http://www.example.com” to be your canonical URL. The example below assumes the “www.” bucket is the canonical one, and that the bare “example.com” bucket redirects to it.

WEBSITE_CONFIGURATION='{
    "IndexDocument": {
        "Suffix": "index.html"
    },
    "ErrorDocument": {
        "Key": "404.html"
    }
}'
aws s3api put-bucket-website --bucket www.${SITE_DOMAIN} --website-configuration "${WEBSITE_CONFIGURATION}"
REDIRECT_CONFIGURATION='{
    "RedirectAllRequestsTo": {
        "HostName": "www.'${SITE_DOMAIN}'",
        "Protocol": "http"
    }
}'
aws s3api put-bucket-website --bucket ${SITE_DOMAIN} --website-configuration "${REDIRECT_CONFIGURATION}"

Congrats, you now have two buckets - one to serve your content, and one to direct requests to that content.
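
If you want to double-check the setup before moving on, you can read both pieces of configuration back:

# Confirm the website configuration and public-read policy on the content bucket.
aws s3api get-bucket-website --bucket www.${SITE_DOMAIN}
aws s3api get-bucket-policy --bucket www.${SITE_DOMAIN}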

Deploying with Hugo

Time to deploy for the first time using Hugo!

Modify your config.toml file to add in the needed deployment section according to the Hugo deployment documentation.

This is what my section looks like:

[deployment]
# By default, files are uploaded in an arbitrary order.
# Files that match the regular expressions in the "Order" list
# will be uploaded first, in the listed order.
order = [".png$", ".jpg$", ".gif$"]

[[deployment.targets]]
name = "s3-bucket"
URL = "s3://www.example.com?region=us-east-2"

Note that “www.example.com” is the name of your bucket.

I chose to upload images first to ensure that any content renders correctly - imagine if your static HTML was uploaded before any images.

Run hugo && hugo deploy and everything should work.
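
Spelled out, those two steps are:

hugo            # build the site into ./public
hugo deploy     # upload ./public to the "s3-bucket" target in config.toml
# hugo deploy --dryRun   # optional: list what would be uploaded without uploading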

Checking your site

Your bucket should now have some content. You can preview what your site looks like at http://www.example.com.s3-website.us-east-2.amazonaws.com, where “www.example.com” is the name of your bucket. Replace “us-east-2” with whatever region you put your bucket in.
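
You can also check it from the command line; a 200 response from the website endpoint means the bucket is serving content (substitute your own bucket and region):

# Expect an HTTP 200 and Content-Type: text/html from the index page.
curl -I http://www.${SITE_DOMAIN}.s3-website.us-east-2.amazonaws.com/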

Using Route53 for a custom domain

Time to make your site official and link your domain name. We’re going to create some Route53 records to point at your bucket.

This assumes you have jq installed. If you don’t have it or don’t want it, just make sure to note the hosted zone ID in the output.

First create a new hosted zone where the DNS records will live. This will automatically add 2 record sets - the NS and SOA records.

HOSTED_ZONE_ID=$(aws route53 create-hosted-zone --name ${SITE_DOMAIN} --caller-reference ${SITE_DOMAIN}-1 | jq '.HostedZone.Id')

Now we need to add 2 more records, one for each bucket. If your buckets aren’t in us-east-2, you’ll need to look up the Route 53 hosted zone ID for your region’s S3 website endpoint. That can be found on the Amazon S3 endpoint configuration page.

# Hosted zone ID looks like "/hostedzone/Z0839880Y6HXZS7MJCHV", need to take just the ID itself without quotes
# xargs will strip the quotes, cut takes off the prefix
HOSTED_ZONE_ID=$(echo ${HOSTED_ZONE_ID} | xargs | cut -c 13-)
CHANGE_BATCH='{
  "Comment": "Records for the Hugo S3 bucket",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.'${SITE_DOMAIN}'",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2O1EMRO9K5GLX",
          "DNSName": "s3-website.us-east-2.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}'
aws route53 change-resource-record-sets --hosted-zone-id "${HOSTED_ZONE_ID}" --change-batch "${CHANGE_BATCH}"

CHANGE_BATCH='{
  "Comment": "Records for the Hugo S3 bucket",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "'${SITE_DOMAIN}'",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "'${HOSTED_ZONE_ID}'",
          "DNSName": "www.'${SITE_DOMAIN}'",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}'
aws route53 change-resource-record-sets --hosted-zone-id "${HOSTED_ZONE_ID}" --change-batch "${CHANGE_BATCH}"

Now we have A records. You may be wondering why we need an S3 bucket for the base domain (e.g. “example.com”) at all, since its A record just aliases the “www.example.com” record. The reason is that S3 website hosting routes each request by its Host header, so a bucket whose name exactly matches the requested hostname has to exist - if it doesn’t, AWS will not serve the request.
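
Once DNS has propagated, a quick way to see both pieces working is to check the redirect from the bare domain (the exact output will vary):

# The bare-domain bucket should answer with a 301 pointing at the www site.
curl -I http://${SITE_DOMAIN}/
# HTTP/1.1 301 Moved Permanently
# Location: http://www.example.com/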

The final piece is to set the nameservers correctly in your domain registrar. I don’t use AWS for that piece, so you’re on your own. That said, you can pull the new NS records for your hosted zone:

aws route53 list-resource-record-sets --hosted-zone-id ${HOSTED_ZONE_ID}
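
If you have jq handy, you can filter straight to the NS record:

aws route53 list-resource-record-sets --hosted-zone-id ${HOSTED_ZONE_ID} | jq '.ResourceRecordSets[] | select(.Type == "NS")'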

You should see your domain point to your new site once the old records hit their TTL.