@ wrote... (2 years, 6 months ago)

There are lots of posts about setting up CD with Jenkins and Kubernetes but I haven't found any describing how to do it with Nomad and Gitlab.

So here's how I did it…

I'm assuming that you've already got a working Gitlab (on premise) and Nomad cluster set up, and that you've already got CI (continuous integration) working with them.

Getting this working isn't particularly difficult; the “trick” is to run a nomad command that loads an updated job file.

Overall steps are:

  1. make a docker image with nomad in it
  2. update your .nomad job file
  3. update your project's .gitlab-ci.yml with a deploy step
  4. go have a beverage

Docker in Docker build image

Keeping track of all the images when building can get a bit tricky…

I've made an “image builder” based on Alpine Linux that has docker in it. It's called ${docker_registry}/public/alpine:3 in the examples below.

If you already have a similar builder image then you can skip this step.

FROM gliderlabs/alpine

RUN apk add --update --no-cache docker make bash curl jq

Build and push it:

docker build -t ${docker_registry}/public/alpine:3 -f Dockerfile .
docker push ${docker_registry}/public/alpine:3

or have an appropriate .gitlab-ci.yml do it for you.
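A minimal sketch of such a .gitlab-ci.yml, assuming your runner can run docker commands (a shell executor or docker:dind; adjust to your setup):

```yaml
variables:
  docker_registry: registry.example.com

stages:
  - build

build:
  stage: build
  script:
    - docker build -t ${docker_registry}/public/alpine:3 -f Dockerfile .
    - docker push ${docker_registry}/public/alpine:3
```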

Make nomad docker image

Make a new Gitlab project, in my case ops/nomad-deployer.

  1. download nomad
  2. make Dockerfile and .gitlab-ci.yml
  3. git commit and git push all the things
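Step 1 can be scripted. A sketch; NOMAD_VERSION is only an example value, pin whichever release you actually run (the real fetch is left commented so the snippet is safe to dry-run):

```shell
# sketch: put the nomad binary next to the Dockerfile before `docker build`
# NOMAD_VERSION is an example value; use the release you actually run
NOMAD_VERSION="0.9.5"
NOMAD_ZIP="nomad_${NOMAD_VERSION}_linux_amd64.zip"
NOMAD_URL="https://releases.hashicorp.com/nomad/${NOMAD_VERSION}/${NOMAD_ZIP}"
echo "would fetch: ${NOMAD_URL}"

# the actual fetch, once you're happy with the URL:
#   curl -fsSLo nomad.zip "${NOMAD_URL}"
#   unzip nomad.zip nomad    # leaves ./nomad for the COPY in the Dockerfile
```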


FROM alpine:3.9

RUN apk add --update --no-cache libc6-compat gettext
COPY nomad /usr/local/bin/nomad


variables:
  docker_registry: registry.example.com
  docker_image: public/nomad-deployer

stages:
  - build

build:
  stage: build
  image: ${docker_registry}/public/alpine:3
  script:
    - tag=${docker_registry}/${docker_image}:latest
    - echo "building and pushing tag $tag"
    - docker build --pull -t ${tag} -f Dockerfile .
    - docker push ${tag}

Update your project

And now for the fun part!

I'm assuming that you already have a .nomad file for your service in the current directory and that it's called project.nomad. Update the code below to match your situation.

The magic happens by having envsubst substitute CI_COMMIT_SHORT_SHA with the current git hash. You lose having a nice docker image version but gain not having to increment anything and always knowing exactly what code you're running.


job "project" {

  group "project" {

    task "project" {
      driver = "docker"

      config {
        image   = "registry.example.com/example/project:${CI_COMMIT_SHORT_SHA}"
      } # config
    } # task
  } # group
} # job
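To make the substitution concrete: rendering the file above with CI_COMMIT_SHORT_SHA=abc1234 turns the image line into a pinned tag. The sketch below emulates what envsubst '${CI_COMMIT_SHORT_SHA}' does using sed (so it runs without gettext installed); restricting envsubst to that one variable matters because it leaves any other ${...} strings in the job file untouched.

```shell
# emulate: envsubst '${CI_COMMIT_SHORT_SHA}' < project.nomad > job.nomad
# (sed is used here so the demo needs nothing beyond POSIX tools)
CI_COMMIT_SHORT_SHA=abc1234
line='image   = "registry.example.com/example/project:${CI_COMMIT_SHORT_SHA}"'
rendered=$(printf '%s\n' "$line" |
  sed "s/\${CI_COMMIT_SHORT_SHA}/${CI_COMMIT_SHORT_SHA}/g")
echo "$rendered"
# -> image   = "registry.example.com/example/project:abc1234"
```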


At the time of writing there is an open bug where gitlab doesn't properly expand CI_PROJECT_NAME but once it's fixed replace project with ${CI_PROJECT_NAME} below for a more generic .gitlab-ci.yml.

  1. change NOMAD_ADDR to match your situation
  2. change all the variables, actually…
  3. feel free to delete the - cat job.nomad line once everything is working
  4. note the when: manual line; it means you need to manually push a button in gitlab to deploy

variables:
  NOMAD_ADDR: http://hashi1.example.com:4646
  docker_registry: registry.example.com
  docker_image: example/project

stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: ${docker_registry}/public/alpine:3
  script:
    - tag=${docker_registry}/${docker_image}:${CI_COMMIT_SHORT_SHA}
    - echo "building and pushing tag $tag"
    - docker build --pull -t ${tag} -f Dockerfile .
    - docker push ${tag}

test:
  stage: test

deploy:
  stage: deploy
  image: registry.example.com/public/nomad-deployer:latest
  script:
    - envsubst '${CI_COMMIT_SHORT_SHA}' < project.nomad > job.nomad
    - cat job.nomad
    - nomad validate job.nomad
    - nomad plan job.nomad || if [ $? -eq 255 ]; then exit 255; else echo "success"; fi
    - nomad run job.nomad
  environment:
    name: production
  allow_failure: false
  when: manual
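The odd-looking || if [ $? -eq 255 ] on the nomad plan line is there because nomad plan exits 0 when there are no changes, 1 when the plan would change allocations, and 255 on error; only the error case should fail the pipeline. A sketch with nomad stubbed out by shell functions to show the flow:

```shell
# `nomad plan` exit codes: 0 = no changes, 1 = changes to apply, 255 = error.
# Only 255 should fail the CI job; nomad is stubbed here to demonstrate.
plan_step() {
  "$@" || if [ $? -eq 255 ]; then return 255; else echo "success"; fi
}
plan_with_changes() { return 1; }   # stands in for a plan that has diffs
plan_with_error()   { return 255; } # stands in for a failed plan

plan_step plan_with_changes          # prints "success"
plan_step plan_with_error || echo "deploy aborted (exit $?)"
```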

bringing it all together

  1. edit some sweet project code
  2. git commit and git push it
  3. open gitlab jobs url for your project
  4. push the play button to deploy your new version

If you have great tests and want to auto deploy then change:

when: manual

to:

only:
  - master

all done

Hopefully you've now got CD to accompany your CI.

Thanks to @henrikjohansen on https://gitter.im/hashicorp-nomad/Lobby for his code snippet that was the missing piece for me.

Category: tech, Tags: cd, ci, gitlab, hashistack, nomad
Comments: 7
Ishmael Rufus @ August 19, 2019 wrote... (2 years, 1 month ago)

This is a great starting point. However, I did run into issues with the Nomad deployer. Luckily it turns out someone figured out a way to do it using the frolvlad/alpine-glibc base container.

Then from there you would just need to pull in Nomad and certificates:

FROM frolvlad/alpine-glibc

# pass the release at build time: docker build --build-arg NOMAD_VERSION=<version>
ARG NOMAD_VERSION
ENV NOMAD_RELEASE https://releases.hashicorp.com/nomad/${NOMAD_VERSION}/nomad_${NOMAD_VERSION}_linux_amd64.zip

# ca-certificates is needed to verify releases.hashicorp.com's certificate
RUN apk add --update bash wget gettext tar ca-certificates
RUN wget ${NOMAD_RELEASE} -O nomad.zip -o /dev/null
RUN cd /usr/local/bin  \
    && unzip /nomad.zip \
    && nomad --version
RUN apk del wget tar ca-certificates \
    && rm -rf /var/cache/apk/*

I appreciate your article. It helped tremendously.

Kurt @ August 26, 2019 wrote... (2 years ago)

Thanks for your helpful comment.

Regarding the triple-quote issue you had, you need a blank line before starting a code block.

João Serpa @ October 15, 2019 wrote... (1 year, 11 months ago)

Hey, thanks for this, it has already helped me.

One question I have is: how do you get your nomad client to talk to the nomad server?

I'm wondering this because I want to keep the nomad API closed for private network use only, but in order to trigger a deploy from a Gitlab worker (whose IP I don't know) I think I need to have it open to the world… and I don't want that.

How did you do that?

Thanks a lot!

Kurt @ October 16, 2019 wrote... (1 year, 11 months ago)

We run gitlab on premise so everything is local.

I haven't looked into it much since it doesn't apply to my situation, but have a look at the ACL documentation. You'd have to open access to the world, but it would be password protected, just like any website really.
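If you do go the ACL route, the nomad CLI reads a NOMAD_TOKEN environment variable, so the deploy job can authenticate with a token stored in a masked Gitlab CI/CD variable. A sketch (NOMAD_DEPLOY_TOKEN is a hypothetical variable name):

```yaml
deploy:
  stage: deploy
  image: registry.example.com/public/nomad-deployer:latest
  script:
    # NOMAD_DEPLOY_TOKEN: a masked CI/CD variable holding a nomad ACL token
    - export NOMAD_TOKEN=${NOMAD_DEPLOY_TOKEN}
    - nomad run job.nomad
```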


João Serpa @ October 16, 2019 wrote... (1 year, 11 months ago)

Hey, Kurt! Thanks a lot for your answer.

I will try that solution!

albttx @ February 7, 2020 wrote... (1 year, 7 months ago)

I think you can set the docker_image_sha in a consul env variable. Then when you deploy, just update the consul variable with the new value and it will automatically update the job :)

(ps: set force_pull )

Kurt @ February 7, 2020 wrote... (1 year, 7 months ago)

Unfortunately not; consul env vars only get “injected” into consul templates and into the docker env itself.

Also, force_pull can be pretty wasteful so best to avoid if possible.
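For context, the “injected into consul templates” part refers to Nomad's template stanza: a value in consul only reaches the task when the job renders it, e.g. into the task's environment. A sketch (the consul key path is hypothetical):

```hcl
template {
  data        = "IMAGE_SHA={{ key \"project/image_sha\" }}"
  destination = "local/env.txt"
  env         = true
}
```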
