March 23, 2022 | 03:08

Drone CI/CD

Wanting to get back to writing more blog posts, I was annoyed by the fact that I had to build, push and deploy the updated posts every time something changed.

Since I already manage my whole blog with Hugo, in a Git repository hosted on my very own Gitea instance, I decided it was time to get into self-hosting CI/CD for my personal projects.

why drone?

I checked out several other solutions like GitLab, Jenkins, Bamboo and CircleCI, but all of them seemed a bit overkill for my needs.

Drone also has ready-to-use Helm charts for both the server and the runner (I'll explain both later on), which is great considering I already have nearly all my services running inside a Kubernetes cluster.

what makes drone different

Drone is based on Docker images: basically, every action in your pipeline is a Docker container. That makes the system incredibly flexible, because you can simply build plugins as new containers or use existing containers for your build steps, for example golang to build your app.
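To get an intuition for this model, you can approximate a pipeline step locally: run the step's commands in the step's image, with your checkout mounted as the working directory. This is a rough sketch, not how the runner is actually implemented; the image and command are just examples.

```shell
# Rough local equivalent of a Drone pipeline step: the step's
# commands run inside the step's image, with the repository
# mounted at Drone's default workspace path.
docker run --rm \
  -v "$PWD":/drone/src \
  -w /drone/src \
  golang go build ./...
```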

server and runner

Basically, the Drone server is the component you interact with, either via drone-cli or the web GUI. The server also receives the webhooks from your Git repo and dispatches jobs to runners, which do the actual work.

Runners come in a variety of shapes and sizes; for example, they can run..

  • in kubernetes
  • via SSH
  • in Docker
  • on AWS
  • on DigitalOcean

..and some more; you can find all of them in the Drone docs.

Obviously, I'll be using the Kubernetes runner.

deploying drone to kubernetes

I chose to deploy Drone via Helm 3 because it's easy and supported.

You can simply add the Drone Helm repo to your Helm 3 installation via the following commands:

helm repo add drone https://charts.drone.io
helm repo update

To deploy, I'd suggest checking the chart docs over at GitHub. Depending on your VCS, the steps may vary slightly.
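As a rough sketch, installing the server and the Kubernetes runner from the official charts looks like the following; the values files are placeholders for your own configuration, and the namespace is just an example.

```shell
# Install the Drone server and the Kubernetes runner from the
# official charts (the values files hold your own settings,
# e.g. Gitea server URL and OAuth credentials).
helm install drone drone/drone \
  --namespace drone --create-namespace \
  -f drone-values.yaml

helm install drone-runner-kube drone/drone-runner-kube \
  --namespace drone \
  -f runner-values.yaml
```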


the .drone.yml

This is the file which tells Drone how to handle your repository; it contains a pipeline with all the steps needed to build your project.

This website is built using the following .drone.yml (I redacted confidential information):

kind: pipeline
type: kubernetes
name: default

steps:
  - name: generatetag
    image: alpine
    commands:
      - echo "${DRONE_REPO_BRANCH}-${DRONE_COMMIT_SHA},latest" >> .tags

  - name: submodules
    image: alpine/git   # assumed; any image with git available works
    commands:
      - git submodule update --init --recursive

  - name: dockerbuild
    image: plugins/docker
    settings:
      # repo setting redacted; credentials come from Drone secrets
      username:
        from_secret: docker-username
      password:
        from_secret: docker-password

  - name: deploy
    image: myprivatedockerrepo/drone-plugins/k8s-deploy
    settings:
      # these keys map to the PLUGIN_* variables used by the plugin script
      token:
        from_secret: k8stoken
      server:
        from_secret: k8sserver
      namespace:
        from_secret: k8snamespace
      deployment: marschallsystems
      container: marschallsystems

As you see, the file isn't too complicated: every step has a name and an image. The image is the Docker image used for that step.

First, we use alpine to generate a .tags file which contains all target tags for the resulting image.
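You can reproduce what this step writes by setting the two variables Drone injects at build time yourself; the branch and commit values below are made up.

```shell
# Simulate the generatetag step locally with example values
# for the variables Drone injects at build time.
rm -f .tags
DRONE_REPO_BRANCH=main
DRONE_COMMIT_SHA=abc1234
echo "${DRONE_REPO_BRANCH}-${DRONE_COMMIT_SHA},latest" >> .tags
cat .tags
# -> main-abc1234,latest
```

The plugins/docker step later reads this comma-separated list and tags the image once per entry.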

The dockerbuild step builds the Docker image from the Dockerfile this blog is created from (I will cover this in another post and put a link here). This step also takes the tags from .tags, tags the image and pushes it. All credentials are supplied from Drone secrets, which can be set either via the web GUI or via drone-cli.

Finally, the image used in the Kubernetes deployment is updated via a little custom Drone plugin. Actually, this "plugin" is really just a bash one-liner in a Docker container.

deployment update plugin

To update the image, we just need to issue kubectl set image deployment/deploymentname containername=newimagename:tag.
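Drone exposes each plugin setting to the container as a PLUGIN_-prefixed environment variable, so the one-liner only has to substitute them into that command. A quick sketch with example values (the image name is hypothetical):

```shell
# Drone turns plugin settings into PLUGIN_* environment variables;
# the plugin script just substitutes them into the kubectl command.
PLUGIN_DEPLOYMENT=marschallsystems
PLUGIN_CONTAINER=marschallsystems
PLUGIN_REPO=myprivatedockerrepo/blog   # hypothetical image name
PLUGIN_TAG=main-abc1234
echo "kubectl set image deployment/${PLUGIN_DEPLOYMENT} ${PLUGIN_CONTAINER}=${PLUGIN_REPO}:${PLUGIN_TAG}"
# -> kubectl set image deployment/marschallsystems marschallsystems=myprivatedockerrepo/blog:main-abc1234
```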

building the image

As previously stated, the image is no big deal. It consists of a small shell script (here called deploy.sh) and a corresponding Dockerfile. These two files need to be in the following places:

├── src
│   └── deploy.sh
└── Dockerfile

#!/bin/sh
/bin/kubectl --token $PLUGIN_TOKEN --server $PLUGIN_SERVER --namespace $PLUGIN_NAMESPACE --insecure-skip-tls-verify set image deployment/$PLUGIN_DEPLOYMENT $PLUGIN_CONTAINER=$PLUGIN_REPO:$PLUGIN_TAG


FROM alpine
RUN apk add --no-cache curl ca-certificates
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
    chmod 755 kubectl && \
    mv kubectl /bin/
ADD src/deploy.sh /bin/
RUN chmod +x /bin/deploy.sh
# entrypoint so Drone runs the script when it starts the container
ENTRYPOINT ["/bin/deploy.sh"]

Now we can build the image, using the name referenced in the pipeline:

docker build . -t myprivatedockerrepo/drone-plugins/k8s-deploy

kubectl auth

Obviously we need some kind of authentication for the kubectl command to work, and we will leverage the power of drone secrets again.

To get a token for kubectl, we use a service account.

To do so, we apply the following YAML in the cluster (change the namespaces according to your deployment's needs):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: drone-deploy
  namespace: marschallsystems
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: drone-deploy
  namespace: marschallsystems
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get","list","patch","update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: drone-deploy
  namespace: marschallsystems
subjects:
  - kind: ServiceAccount
    name: drone-deploy
    namespace: marschallsystems
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: drone-deploy

This will create the service account and the corresponding Role, and bind them together. Now we need the exact name of the secret Kubernetes created for the service account; to find it, use kubectl get secrets -n yournamespace. After that, we can retrieve the token with the following command (replace the secret name to match yours):

kubectl get secrets drone-deploy-token-tn8t2 -o=jsonpath="{.data.token}" | base64 -d
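The base64 -d at the end is needed because Kubernetes stores secret data base64-encoded. A quick local illustration of the round trip, with a made-up token value:

```shell
# Kubernetes secret values are base64-encoded; decoding them
# recovers the original token. Example with a fake token.
token_b64=$(printf 'my-fake-sa-token' | base64)
printf '%s' "$token_b64" | base64 -d
# -> my-fake-sa-token
```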

Congratulations, you now have a token which you can save in Drone as a secret with the name k8stoken.


conclusion

After just a few hours, we have a working CI/CD solution with a very small footprint.

At first it might seem a bit overwhelming to describe every step in YAML, but it's quite easy once you get the hang of it.

I was also surprised by how easy it is to build new plugins, and I just love it!

I hope I’ve been able to show you how amazing drone is. Simple, fast and easy to learn!
