I have several Hugo sites and I like deploying on my own infrastructure when possible.

I've gone through several possible configurations to get things running in my infra, such as kaniko, docker build and ko build. These options proved inefficient and often slow for something as simple as deploying a static website.

On top of that, in my CI, I had been using kubectl apply to reconcile new changes with a new image.

Using crane, I've been able to simplify and optimise my static website builds (Hugo, Vuejs) into fast and tiny website containers.

We end up with a tiny website container image of ~5MB compressed. This website, given its content and base image, comes to ~28MB compressed.
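For reference, the compressed size can be checked by summing the layer sizes in the image manifest. A minimal sketch with jq, using a made-up sample manifest standing in for real `crane manifest <image>` output:

```shell
# A made-up sample manifest; in practice, fetch the real one with
# `crane manifest <image>`.
MANIFEST='{
  "schemaVersion": 2,
  "config": {"size": 1234},
  "layers": [
    {"size": 3000000},
    {"size": 2000000}
  ]
}'

# Sum the compressed layer sizes (bytes) to approximate the image's
# compressed size.
echo "$MANIFEST" | jq '[.layers[].size] | add'
# → 5000000
```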

πŸ“¦ Base image

I use a web server I wrote back at Safe Surfer in 2020. It was written specifically for a few reasons

  • security: a single binary in the container to constrain the attack surface
  • features: a Vuejs history mode option, plus Go templating for index.html to pass values through from the environment
  • ease: simplified tooling

Over time, it also gained support for setting headers and redirecting requests based on path or domain.

Check out the repo at gitlab.com/BobyMCbobs/go-http-server.

πŸ”¨ Build and publish

First, build the site

hugo

Next, move the built site into a directory matching the path it will be served from in the container

mkdir -p ./output/var/run/
mv ./public/ ./output/var/run/ko/
chmod -R 0755 ./output/
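Before pushing anything, the layer tar can be inspected locally. A sketch of the same layout steps against placeholder content (standing in for Hugo's real ./public/ output):

```shell
# Build the layer layout in a scratch directory, then list the tar
# exactly as crane append would receive it. The site content here is
# a placeholder, not a real Hugo build.
WORKDIR="$(mktemp -d)"
mkdir -p "$WORKDIR/public"
echo '<h1>hello</h1>' > "$WORKDIR/public/index.html"

# Mirror the steps above: place the site at the path served from
# inside the container.
mkdir -p "$WORKDIR/output/var/run/"
mv "$WORKDIR/public/" "$WORKDIR/output/var/run/ko/"
chmod -R 0755 "$WORKDIR/output/"

# List what the new layer would contain.
(cd "$WORKDIR/output" && tar --exclude=".DS_Store" -cf - .) | tar -tf -
```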

Finally, use crane append to append a tar of the output folder to go-http-server and push it to a new tag

IMAGE="$(crane append \
    --base="registry.gitlab.com/bobymcbobs/go-http-server:latest" \
    --new_layer=<(cd output && tar --exclude=".DS_Store" -cf - .) \
    --new_tag="registry.example.com/some/image:latest")"

This command also captures the resulting image reference in a variable, usable for signing the result, like so

cosign sign -y --recursive "$IMAGE"

I would like to figure out how to consolidate the second code block above and the --new_layer tar command in the future to make it cleaner. Perhaps even writing a Go program to tie it all together.
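One possible consolidation, assuming GNU tar: --transform rewrites paths on the fly and --mode replaces the chmod step, so the intermediate ./output/ copy disappears. Note that this sketch emits no explicit entries for the parent directories ./var/ and ./var/run/, which container runtimes generally tolerate when extracting layers:

```shell
# Placeholder content standing in for Hugo's real ./public/ output.
mkdir -p public
echo '<h1>hello</h1>' > public/index.html

layer_tar() {
  # --transform rewrites ./foo to ./var/run/ko/foo while archiving,
  # and --mode stands in for chmod -R 0755. Requires GNU tar.
  tar --exclude=".DS_Store" \
      --transform 's,^\./,./var/run/ko/,' \
      --mode=0755 \
      -C ./public -cf - .
}

# List the layer contents; the result could be fed straight to crane:
#   crane append --base=... --new_layer=<(layer_tar) --new_tag=...
layer_tar | tar -tf -
```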

βš› Running in CI

My GitLab CI configuration is essentially

build:
  stage: build
  image:
    name: docker.io/alpine:3.19
    entrypoint: [""]
  variables:
    CONTAINER_REPO: "$CI_REGISTRY_IMAGE"
  id_tokens:
    SIGSTORE_ID_TOKEN:
      aud: "sigstore"
  retry: 2
  script:
    - echo 'https://dl-cdn.alpinelinux.org/alpine/edge/community' | tee -a /etc/apk/repositories
    - apk add --no-cache tar crane hugo git cosign bash openssl jq yq
    - export CONTAINER_REPO="$(echo ${CONTAINER_REPO} | tr '[:upper:]' '[:lower:]')"
    - crane auth login "${CI_REGISTRY}" -u "${CI_REGISTRY_USER}" -p "${CI_REGISTRY_PASSWORD}"
    - ./hack/publish.sh --sign

which installs dependencies, logs into the registry, then performs the build with signing.

Helpfully, the snippets from the previous section live in ./hack/publish.sh, so it can be called from both CI and locally.

Separately when a new tag is published, FluxCD picks up that tag and deploys it.
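The Flux side could look something like the following sketch, assuming the image-reflector and image-automation controllers are installed and a timestamp-suffixed tagging scheme is in use (names, namespaces, intervals and the tag pattern are illustrative, not taken from the original setup):

```yaml
# Sketch: Flux scans the registry for new tags via an ImageRepository,
# and an ImagePolicy picks the newest one to roll out.
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: my-site            # illustrative name
  namespace: flux-system
spec:
  image: registry.example.com/some/image
  interval: 5m
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: my-site
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-site
  filterTags:
    # Assumes tags like main-<sha>-<unix timestamp>; adjust to the
    # actual tagging scheme used by CI.
    pattern: '^main-[a-fA-F0-9]+-(?P<ts>.*)'
    extract: '$ts'
  policy:
    numerical:
      order: asc
```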

✨ Wrapping up

This approach means there's no packaging configuration lying around, next to nothing to maintain, and it's fast.

Ideally, there'd be no scripts for packaging at all; it would be called from a remote script somehow, or CI alone would take care of it via a reusable GitLab CI include, while still being self-hosted and self-defined.