Deploying a multi-environment Ruby on Rails site to DigitalOcean Kubernetes

This guide is focused on a small, single-instance server. Initialisation logic in particular is probably not safe against horizontal scaling. Beware! 🙂

This guide will go through:

  • Setting up your development environment for Kubernetes deployment.
  • Dockerising your Rails application.
  • Setting up a container registry for deploying images to.
  • Setting up Kubernetes for your Rails project, with a focus on supporting multiple environments deployed in production (e.g. a staging and production instance).
  • Setting up Postgres instances for each of your environments.
  • Setting up a single Nginx ingress for your cluster which allows ingress to all your environments (which are otherwise isolated).
  • Setting up TLS with cert-manager, as well as a current caveat with the Nginx ingress and how you can avoid it.

Running Ruby on Rails in production can be one of the most hair-pulling steps to getting your new application up and running, especially in contrast to how elegant most of the process of writing a Rails application is.

One of Kubernetes’ biggest benefits is how it allows you to scale applications and leverage the power of the cloud, but similarly nice is how it lets you write declarative (as opposed to imperative) configuration for your services, rather than managing a VPS yourself, with all the trouble that entails. You free yourself from manual iptables / ufw management, worry less about what starts your service and restarts it if it crashes, and develop skills that come in useful in modern cloud-based businesses.

All that said, it presents its own difficulties. I ran into quite a few hold-ups, ranging from certificate issuance to serving static files from Rails through Nginx.


First, you’ll want to make sure you have a local Kubernetes development environment with Kustomize installed. If you’re on macOS that’s as easy as running:

brew install kubectl   # Kubernetes' CLI
brew install kustomize # Fantastic templating engine
brew install doctl     # DigitalOcean CLI

You’ll also need to install Docker, which you can download from Docker’s website.

You’ll then want to set up your Kubernetes cluster in DigitalOcean. I went with a simple two-$10-node setup. Keep in mind you’ll also need a load balancer (currently $10/month), a container registry, as well as a bunch of persistent volumes. The latter aren’t hugely expensive, but will likely add up to a couple of dollars a month.

Dockerising Rails

If you don’t know much about Docker, it’s worth having a quick read up on it. But in short, Docker allows you to generate portable images of your application with batteries included, which can then be pushed to a container registry, which allows you to run them inside Kubernetes pods.

Docker containers are specified by a Dockerfile. Most commands generate a new layer, and layers are composed together to create the final image. Docker has intelligent caching, which means it’s best to put things that don’t change much (like system library installs for Nokogiri) first, and things that change often further down (such as your application’s files).

Here’s my Dockerfile, which may help you dockerise your application. Full disclaimer – there may be better ways to do it – but I’ve found this to work quite well. You’ll also need to substitute your application’s ‘name’ where I’ve written <APPLICATION NAME>, in a format that works as a folder name. If you choose to use this, you’ll need to save it as a file called Dockerfile in your Rails root folder.

FROM ruby:2.7.0

# Yarn repository (needed for webpacker)
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -qq && apt-get install -y build-essential nodejs yarn

# Postgres
RUN apt-get install -y libpq-dev

# Nokogiri
RUN apt-get install -y libxml2-dev libxslt1-dev

# Capybara-webkit
RUN apt-get install -y libqt4-dev xvfb

ENV APP_HOME /<APPLICATION NAME>
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME

ADD Gemfile* $APP_HOME/
RUN bundle config build.nokogiri --use-system-libraries
RUN bundle install

ADD . $APP_HOME

# Dummy value to satisfy Rails
# You can still run non-production environments from
# this Dockerfile, but this makes sure assets are compiled
# targeting production.
ARG RAILS_ENV=production
ARG SECRET_KEY_BASE=DUMMY
RUN yarn install --check-files
RUN bundle exec rake assets:precompile RAILS_ENV=production

Notice that we set SECRET_KEY_BASE=DUMMY. We will be deploying our Rails master key as a Kubernetes secret later, but sadly rake assets:precompile currently expects it to be around due to a dependency within that command, even though it doesn’t use it for anything. As a result, setting it to a dummy value allows everything to run smoothly.

One more thing – notice that we specifically add Gemfile* (i.e. both Gemfile and Gemfile.lock) separately to everything else. This is because our application as a whole probably changes a lot more often than our Gemfile does. By ordering our Dockerfile like this, Docker can cache the layers involved in installing and setting up gems and avoid doing it every time something in your application changes.

Once your Dockerfile is set up, running docker build . should work. If so, you’re ready to continue (you may also want to run the image to check everything actually works, though that’s out of scope for this guide).

Setting up a container registry

Just like source code is best pushed to a source control repository, containers are best served col–.. er, I mean, in a container registry. This allows Kubernetes to pull them down and centralises your application’s runnable images.

DigitalOcean has a private container registry system in beta right now. You can set one up under Images -> Container Registry. Once that’s done, you’ll need to run doctl registry login in a terminal, which will set your Docker CLI up to be able to push to your container registry.

Once done, try it out. Your previous docker build (or just run docker build . now if you haven’t already) should have given you a hash at the end, for example it might look like:

Successfully built b952cefba0ac

You can use that hash to tag and push an image, as follows:

DOCKER_IMAGE_ID=b952cefba0ac # The hash from your build output
IMAGE="registry.digitalocean.com/<REGISTRY>/<IMAGE>:0.0.0"

docker tag "$DOCKER_IMAGE_ID" "$IMAGE"
docker push "$IMAGE"

Setting up your Kubernetes cluster

You can set up your Kubernetes cluster using Terraform, but for this guide I suggest doing it in the UI. Note that currently during the early access, DigitalOcean seems to limit container registries to Amsterdam (AMS3). If so, it’s probably worth colocating your Kubernetes cluster in the same region if you don’t have a good reason not to. Use the latest Kubernetes version, and customise your Kubernetes cluster however you like. Personally, I went with two small ($10) nodes.

Then you’ll want to set up your kubectl CLI to be able to access the cluster. That’s pretty easy:

doctl kubernetes cluster kubeconfig save <CLUSTER NAME>

Deploying your Rails application

Now we’ll deploy our Rails application. While setting up, you’ll probably want to hard-code your application controller to show a maintenance page, and perhaps even use a subdomain for the time being.

First, you’ll need to set up your Kubernetes configuration. Make a folder structure as follows (with empty files for now):

k8s/
├── certificate_issuer.yaml
├── base/
│   ├── application.yaml
│   ├── database.yaml
│   └── kustomization.yaml
└── overlays/
    └── prod/
        ├── application.yaml
        ├── ingress.yaml
        ├── namespace.yaml
        └── kustomization.yaml
Recall the application name you used earlier for your Docker folder name. You needn’t use the same name for your Kubernetes labels, but it’s probably best to be consistent, so I’ll be assuming you are doing so.

Within the base folder, you set up the basics shared between all of your deployed environments, so we’ll start with application.yaml:

apiVersion: v1
kind: Service
metadata:
  name: <APPLICATION_NAME>
spec:
  type: ClusterIP
  selector:
    app: <APPLICATION_NAME>
  ports:
  - name: rails
    port: 80
    targetPort: 8080
  - name: assets
    port: 81
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <APPLICATION_NAME>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <APPLICATION_NAME>
  template:
    metadata:
      labels:
        app: <APPLICATION_NAME>
    spec:
      volumes:
      - name: public-assets
        emptyDir: {}
      initContainers:
      - name: init-static-files
        image: registry.digitalocean.com/<REGISTRY>/<IMAGE>
        command: ["sh", "-c", "cp -r /<APPLICATION_NAME>/public /public/"]
        volumeMounts:
        - name: public-assets
          mountPath: /public
      - name: db-migrate
        image: registry.digitalocean.com/<REGISTRY>/<IMAGE>
        command: ["bin/rails"]
        args: ["db:migrate"]
        env: &rails-env # YAML anchor, reused by the containers below
        - name: RAILS_MASTER_KEY
          valueFrom:
            secretKeyRef:
              name: rails-master-key
              key: key
        - name: DATABASE_USERNAME
          value: postgres
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: rails-db-key
              key: key
      - name: db-seed
        image: registry.digitalocean.com/<REGISTRY>/<IMAGE>
        command: ["bin/rails"]
        args: ["db:seed"]
        env: *rails-env
      containers:
      - name: <APPLICATION_NAME>
        image: registry.digitalocean.com/<REGISTRY>/<IMAGE>
        command: ["bin/rails"]
        args: ["server"]
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: public-assets
          mountPath: /<APPLICATION_NAME>/public
          subPath: public
        env: *rails-env
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: public-assets
          mountPath: /usr/share/nginx/html
          subPath: public

Replace <APPLICATION_NAME> with your application name throughout, and <REGISTRY> and <IMAGE> with your DigitalOcean registry name and image name throughout. Do not include a version on your images – Kustomize will handle that for us later on.

There’s a lot to unpack here. We’ve included both a Service and a Deployment in the same file, although you can split it into two files if you so wish. The triple hyphen in YAML separating the two definitions is essentially a “file break”.
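As a small illustration of that “file break”, Ruby’s own YAML library (Psych) treats the triple hyphen as a document boundary, parsing this string as two independent documents:

```ruby
require "yaml"

# The `---` separator in action: one string, two YAML documents.
docs = YAML.load_stream(<<~MANIFEST)
  kind: Service
  ---
  kind: Deployment
MANIFEST

docs.map { |d| d["kind"] }  # => ["Service", "Deployment"]
```

kubectl does the same thing when you apply a multi-document file.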

First off, the deployment. We’re running our Rails server on port 8080, and an Nginx server on port 80. These are pod-specific ports and won’t be exposed to the internet, don’t worry. They’ll be used in our networking within the cluster.

The most confusing thing going on here is how we’re managing public asset serving. There are certainly better ways to do this than what I’ve done here, like pushing your static assets to a CDN, S3 bucket, or DigitalOcean Space; however, this is a fairly simple approach that works pretty well. What we do is make use of the fact that our built image has all our public assets sitting nicely in the public/ folder. We create a volume called public-assets, which is mounted to both our Nginx container (which actually serves the static assets) and our application container. We then abuse Kubernetes’ support for init containers, which run sequentially before your application’s containers start, and add one that runs your application’s image and copies all the public files onto the public volume mount.

This trick actually works slightly better in docker-compose instead of Kubernetes, as you can mount a shared volume onto an existing folder to automatically include the files in that folder. Sadly, it doesn’t appear to be possible in Kubernetes, but this gets around that limitation, albeit not incredibly elegantly.

We also run two other init containers, one to migrate our database and another to seed it. I’m assuming your db:seed operation is coded to be idempotent, that is to say, running it multiple times has no additional effect. This is generally good practice because it means new seed data (such as for a table you add in a migration) can be applied as it appears. If your seeding is not idempotent, you will want to remove the relevant init container and seed manually, the same way we do the first-time database setup below.
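To make the idempotency idea concrete, here’s a minimal, self-contained Ruby sketch. The in-memory SEED_STORE is a stand-in I’ve invented for illustration; in a real seeds.rb you would reach for something like Model.find_or_create_by! instead:

```ruby
# In-memory stand-in for the database (illustrative only).
SEED_STORE = {}

def seed!
  %w[admin editor viewer].each do |role|
    # Create the record only if it doesn't already exist.
    SEED_STORE[role] ||= { name: role }
  end
  SEED_STORE.size
end

seed!  # => 3
seed!  # => 3 -- running it again changes nothing
```

Because re-running it is a no-op, it’s safe to execute on every pod start.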

Note that we do not set up arguments to our Rails command to set the environment and port; don’t worry, that will be in the environment-specific configuration to come.

Next we set up database.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
---
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  selector:
    app: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12.4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: rails-db-key
                  key: key
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
              subPath: postgres
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: pvc
This sets up a persistent volume claim which will automatically set up a 5Gi DigitalOcean volume for you, and attaches it to the Postgres database which it also sets up. Nice and simple.

This is a good time to note that this guide does not cover exporting metrics and logs – you won’t get any warning when your database is getting full, or when it’s erroring. That’s something you’ll want to set up afterwards as part of productionising.

Our ingress configuration (coming up shortly) will refer to a cluster issuer, which we’ll also set up soon, but first let’s fill in the kustomization.yaml within base/:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - application.yaml
  - database.yaml

We’re getting close, now, but there’s still a few more pieces to slide into place. Next, we set up a ClusterIssuer, which is one of the resources provided by cert-manager (which we’ll install into our cluster shortly) inside certificate_issuer.yaml:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <YOUR EMAIL>
    privateKeySecretRef:
      # Name of a secret used to store the ACME account private key
      name: tls-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx

Make sure you replace the email placeholder. Cert-manager automatically manages our TLS certificate renewal for us. The ingress we’ll write shortly references the cluster issuer above in its annotations, which will automatically cause certificates to be issued for it.

Notice that this file is not contained within the base folder. This is because you only need a single ClusterIssuer in a cluster, and it will work across all Kubernetes namespaces. If you prefer to have an issuer per environment, you can instead move it into base/, add it to the Kustomization file, and change it from a ClusterIssuer to an Issuer (the rest of the file can remain the same).

Next, we set up our individual environments.

First, ingress.yaml:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - <DOMAIN NAME>
    secretName: tls-key
  rules:
  - host: <DOMAIN NAME>
    http:
      paths:
      - path: /assets
        backend:
          serviceName: <APPLICATION_NAME>
          servicePort: 81
      - path: /packs
        backend:
          serviceName: <APPLICATION_NAME>
          servicePort: 81
      - path: /
        backend:
          serviceName: <APPLICATION_NAME>
          servicePort: 80

Notice that our ingress rules set up the public folders to forward to port 81 (the Nginx file server) on our application service, and everything else to our Rails backend on port 80.
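Conceptually, the controller picks the most specific matching path prefix for each request. Here’s a tiny Ruby sketch of the routing table above (illustrative, not how Nginx is actually implemented):

```ruby
# The ingress rules above, as a longest-prefix-match lookup.
ROUTES = { "/assets" => 81, "/packs" => 81, "/" => 80 }

def backend_port(path)
  prefix, port = ROUTES.select { |p, _| path.start_with?(p) }
                       .max_by { |matched, _| matched.length }
  port
end

backend_port("/assets/application.css")  # => 81 (static file server)
backend_port("/users/sign_in")           # => 80 (Rails)
```

Every path matches "/", so Rails is the fallback; only asset paths get diverted to Nginx.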

Next, the environment-specific application.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: <APPLICATION_NAME>
spec:
  template:
    spec:
      containers:
        - name: <APPLICATION_NAME>
          args: ["server", "--environment", "production", "--port", "8080"]

Kustomize will merge this with our top-level base deployment; all we’re doing here is adding the argument list to set the environment. You may prefer to do this through an environment variable instead.
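If you’re wondering what “merge” means here, you can picture it as a recursive dictionary merge where the overlay’s keys win. This Ruby sketch captures the idea (real strategic merge patches are cleverer, e.g. merging container lists by name, but the principle is the same):

```ruby
# Rough sketch of a strategic merge: patch keys override base keys,
# recursing into nested hashes.
def deep_merge(base, patch)
  base.merge(patch) do |_key, old, new|
    old.is_a?(Hash) && new.is_a?(Hash) ? deep_merge(old, new) : new
  end
end

base  = { "image" => "myapp", "args" => nil }
patch = { "args" => ["server", "--environment", "production"] }
deep_merge(base, patch)
# => {"image"=>"myapp", "args"=>["server", "--environment", "production"]}
```

The base keeps everything it declared; the overlay only has to state what differs.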

Next, namespace.yaml – which is pretty simple, it just sets the namespace up for this environment of our application:

apiVersion: v1
kind: Namespace
metadata:
  name: <APPLICATION_SHORT_NAME>-prod

You’ll want to switch out -prod accordingly. You’ll probably want APPLICATION_SHORT_NAME to be something quick and easy to type, like initials of your website.

And, finally, kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: <APPLICATION_SHORT_NAME>-prod

resources:
- namespace.yaml
- ingress.yaml
- ../../base

patchesStrategicMerge:
- application.yaml

Make sure your namespace matches what you previously created.

Now we’re done setting up our configuration! Onto preparing our cluster…

Preparing your cluster for deployment

There are two things you’ll need set up in your cluster: an Nginx ingress controller, and cert-manager. These commands should get them both set up nicely:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add jetstack https://charts.jetstack.io
helm repo update

helm install ingress-nginx ingress-nginx/ingress-nginx \
           --set controller.publishService.enabled=true \
           --set-string controller.config.use-forward-headers=true,controller.config.compute-full-forward-for=true,controller.config.use-proxy-protocol=true \
           --set "controller.service.annotations.service\.beta\.kubernetes\.io/do-loadbalancer-enable-proxy-protocol=true"

helm install cert-manager jetstack/cert-manager \
           --namespace cert-manager \
           --create-namespace \
           --version v1.0.1 \
           --set installCRDs=true

This will set up a LoadBalancer on DigitalOcean which you will be automatically billed for. There is no good way around this that gives you a reliable static IP that I am aware of, even if you don’t think you need the full power of a load balancer. That said, it’s reasonably affordable – currently $10/month – and should allow you to scale quite a bit before causing any problems.

Go to your load balancer in DigitalOcean, and turn on PROXY protocol in its settings. We’ve set up the Nginx ingress above to use PROXY protocol, which means your Rails app will be able to see your users’ real IP addresses. Otherwise, all of your connections will appear to be coming from your load balancer… Not ideal! And it might make for some very interesting demographic conclusions: all of our users seem to live in the same house in Amsterdam!

Finally, you also need to deploy your certificate_issuer.yaml file, which is shared between your environments (unless you decided not to use a ClusterIssuer). You can do that as follows, from your Rails root:

kubectl apply -f k8s/certificate_issuer.yaml

The first deploy

Now you’re ready to deploy your application for the first time.

First, you’ll want to set your image version. We discussed earlier how to tag and push an image, and I gave commands for pushing version 0.0.0. If you didn’t do that, go back and do it now. Then, you can run the following commands within the overlays/prod directory – make sure you fill in the three variables at the top first:



kustomize edit set image "${DOCKER_IMAGE}=${VERSIONED_DOCKER_IMAGE}"

The interesting thing here is kustomize edit set image. What it does is add an entry to your kustomization.yaml so that the image version is set to 0.0.0 everywhere your image is referenced, which makes it super easy to change version later – just this one kustomize command. You can also add or edit the relevant kustomize configuration by hand, but this command is super useful for building more reliable automation flows.
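For reference, the change it writes into the kustomization file is roughly an images stanza like this (assuming the registry image path used earlier):

```yaml
images:
- name: registry.digitalocean.com/<REGISTRY>/<IMAGE>
  newTag: 0.0.0
```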

Once you’ve done that to set it to version 0.0.0, or whatever version you’ve chosen to deploy first, you’re finally ready to deploy your application to Kubernetes.

Run this from your Rails root (or anywhere else and adjust the path accordingly):

kustomize build k8s/overlays/prod/ | kubectl apply -f -

And, boom! Your application is deployed. But you won’t be able to access it right now. First things first, run kubectl get services to find your load balancer’s external IP. If you visit that IP, you should get an Nginx error: it doesn’t know what to do with you, because all it knows to do is route your domain name. So we’ll set that up next. You may have noticed your application’s service is not visible in the results of that command. That’s because it’s deployed to a separate namespace, don’t worry.

Take that external IP, and point your DNS A record at it. It might take a little while to propagate. If you use your ISP’s default DNS (if you don’t know what that means, you probably are using it), then consider setting up Cloudflare’s or Google’s. They’re free, easy to set up, and will likely make your browsing faster and more reliable, as well as stopping your ISP hijacking your DNS. In this case, it also means you should see your domain update almost instantly!

There’s still some things left to do: your database isn’t set up yet, so your initialisation containers will be failing to run migrate and seed, and your TLS certificate won’t be working yet, but more on that soon…

Secret setup

You need to set up two secrets: a database secret, and your Rails master key.

The files above assume these are stored in rails-db-key and rails-master-key. You need to push these to the right namespace, which I recommended calling <APPLICATION_SHORT_NAME>-prod, but you may have called it something else. Run the commands as follows, using a random password for your DB key:

kubectl create secret generic rails-db-key --namespace <NAMESPACE> --from-literal="key=<RANDOM PASSWORD>"
kubectl create secret generic rails-master-key --namespace <NAMESPACE> --from-literal="key=<YOUR RAILS MASTER KEY>"

And your database needs a first-time setup. That’s an easy fix:

kubectl run -it --rm db-setup --namespace <NAMESPACE> --image=<YOUR RAILS IMAGE PATH WITH VERSION> -- bash

This will give you a bash terminal into your rails app. Just run the usual:

RAILS_ENV=production bin/rails db:setup

Quit the container with exit, and it will automatically get recycled (since we passed the --rm flag). Now your application service should automatically boot up, connect to your database fine, and be working… In HTTP at least…

About those certificate errors…

Now, cert-manager automatically sets up TLS certificates, however it won’t be working right now. For reasons that seem to be under active discussion among the Kubernetes folks and the various cloud providers, cert-manager cannot complete its self-check on ACME challenges while the PROXY protocol is in place. As I understand it, the self-check traffic never leaves the cluster, so it doesn’t go through the load balancer, doesn’t get the PROXY protocol header added, and is then rejected by the ingress (I may be misunderstanding, but I think this is the gist of it).

It’s a pretty easy fix, but it’s potentially disruptive: disable the PROXY protocol, delete the certificate to prompt cert-manager to try again (it will do so in due course, but it’s faster to just force it to), and then re-enable once TLS is working. This means for a small period of time once every 90 days (the default renewal length) you will need to either have scheduled downtime or accept the loss of client IP address resolution in your Rails app.

If you truly don’t care about client IP resolution, you can avoid using the PROXY protocol altogether, but I don’t recommend this: IP addresses can be very useful for all sorts of things, not least of all post-incident security analysis.

Anyway, you can do that as follows:

NAMESPACE="<APPLICATION_SHORT_NAME>-prod" # Or whatever you named your namespace

echo "Disabling proxy protocol, must also be disabled on DigitalOcean load balancer"
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
           --set controller.publishService.enabled=true \
           --set-string controller.config.use-forward-headers=true,controller.config.compute-full-forward-for=true

echo "Deleting existing certificate"
kubectl delete certificate tls-key --namespace "${NAMESPACE}"
echo "Sleeping while certificates refresh..."
sleep 15

echo "Re-enabling proxy protocol"
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
           --set controller.publishService.enabled=true \
           --set-string controller.config.use-forward-headers=true,controller.config.compute-full-forward-for=true,controller.config.use-proxy-protocol=true \
           --set "controller.service.annotations.service\.beta\.kubernetes\.io/do-loadbalancer-enable-proxy-protocol=true"

Make sure you disable PROXY protocol on your DigitalOcean load balancer settings (on the DigitalOcean website) beforehand and re-enable it afterwards. The sleep 15 is likely to be far more time than is actually necessary; you can refresh your website in HTTPS and run the final helm command and adjust the load balancer to re-enable PROXY protocol after the certificate has been issued if you like.


Adding more environments

You can add new environments super simply – just copy the overlays/prod folder to, for example, overlays/staging, then adjust the files within it to fix the namespace (both in namespace.yaml and in kustomization.yaml), the Rails environment flag, and the hostnames in the ingress settings. You’ll need to repeat everything from secret setup onwards for the new environment, but it should mostly be familiar by now.

Note that because database migration and seeding are done in init containers, they probably aren’t safe to run concurrently, so you can’t just increase the replica count as you’d normally want to with Kubernetes. You’ll need to configure something more sophisticated to be safe against this, sadly. You could have your deployment script shell into the cluster and run db:migrate on each deploy, for example, or have a CI/CD pipeline do it all for you.


I hope this guide has been of use to someone – it took a lot of trial and error to get this working properly and I thought it might be valuable to share. However, I’m very open to feedback to improve this! Please feel free to drop comments with any problems you ran into, constructive criticism, or even just a hello if it helped :).

Graphick – Simple(r) graphing

For one of my courses, I’ve been drawing a load of graphs of a program’s performance; something I also had to do in a course last year. To say the least, it’s a bit of a nightmare.

What stood out to me was that most of the time, the process I was following was almost mechanical — run an application a bunch of times with different inputs, save a portion of the output somewhere, and then either write a script to parse the CSV and generate a graph, or throw it into Excel and generate one by hand. Sometimes I’d throw together a quick shell script to generate the initial data too, but either way it was a lot of context switching between different languages and if I wanted to regenerate the data after a change I also had to mess around to make sure the graph was redrawn.

As we know, though, everything is improved with large configuration files! When in history has a project started with a configuration file and gradually become more and more complicated? Never, of course!

As a result, I thought it would be fun and helpful to develop a reasonably general utility for creating line graphs to analyse program data – whether it be temporal data for performance analysis, or just plotting the output with varying inputs.

Motivating spoiler: A graph which I generated for my coursework using Graphick

I set out with a few goals, and picked up a few more along the way:

  • The syntax should be easy to write; preferably nicer than gnuplot’s
  • It should be able to handle series of data — multiple lines
  • Any output data or input variable should be able to be on the X axis, the Y axis, or part of the series
  • The data series should be able to be varied by more than one variable – i.e. you might have, as depicted in the picture above, four lines which represent varying two different variables.
  • It should be extensible, so it can support new ways of data processing and rendering easily.
  • Data should be cached, so if the same graph is drawn it can be redrawn without re-running the program
    • Ideally, cache per-data-point, but the current implementation of Graphick just caches per-graph based on the variables and program. This can definitely be implemented in future though.

After a week of hacking, Graphick is the result. Graphick parses a simple file and generates graph(s) based on it.

Here’s a simple Graphick file:

command echo "6 * $num" | bc
title Multiples of six

varying envvar num sequence 1 to 10
data output

When you run Graphick with a file like this, it will proceed to generate the data to graph (or load it from a cache if it has previously generated it) by running the program for every combination of inputs.

Each line of a Graphick file, besides blank lines and comments (beginning with a %), represents some kind of directive to the Graphick engine. The most important directive is command, which begins a new graph based on the command following it.

The text after command is what is executed. In this case, it’s actually a short shell script which pipes a short mathematical expression into bc, which is just a built-in calculator program on Unix. Most of the time, you’ll probably write something more like command ./myApplication $variable.

There are a number of ‘aesthetic’ directives – title, x_label, y_label, series_label. The only complicated one is series_label, which I’ll go into later. For the rest, the text following the directive is simply put where you’d expect on the graph.

The varying and data directives are the most important. varying allows you to specify which variables to run the program with. If you have two variables, each with six values, then the program will be run with every combination of them: thirty-six times. Right now, only environment variables are supported. You write varying envvar <name> <values>. Values can be either a numeric sequence (as in the above example) or a set of values; for example, sequence 1 to 5 or vals 1 2 4 8 15.
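That combinatorial expansion is just a Cartesian product. In Ruby terms (the second variable k here is hypothetical, purely for illustration):

```ruby
# Two `varying` directives expand to the Cartesian product of their
# values: every combination is run once.
nums = (1..10).to_a   # varying envvar num sequence 1 to 10
ks   = [1, 16]        # varying envvar k vals 1 16

runs = nums.product(ks)
runs.length  # => 20 runs of the command
runs.first   # => [1, 1]
```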

Data is the other important one. Only output is supported, currently, which corresponds to lines of stdout. You can also filter for columns, by adding a selection after the directive – for example, data output column 2 separator ,. This would get the second comma-separated column.

Another type of directive, which isn’t featured in this example, is filtering. If you have a program which outputs lots of lines, and you only care about a certain subset of them, you can filter them. There is more detail on this in the repository README, but suffice to say you can filter for columns of output data to be either in or not in a set of data, which can be defined either as a sequence or a set of values. The columns you filter on need not be selected as data, which means you can filter on data which isn’t presented on the graph.
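In spirit, a filter is just a set-membership test on a column. A small Ruby sketch (column indexing here is illustrative, not Graphick’s exact internals):

```ruby
# Keep only rows whose second column is in the allowed set,
# mirroring a filter over one column of program output.
rows    = [[1, "fast"], [2, "slow"], [3, "fast"]]
allowed = ["fast"]

kept = rows.select { |row| allowed.include?(row[1]) }
kept  # => [[1, "fast"], [3, "fast"]]
```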

Graphick files can contain multiple graphs by just adding more command directives. Currently, there is no way to share directives between them, so properties like title need to be set for each graph. Here’s an example of two graphs in a single file:

command echo "6 * $num" | bc
title Multiples of six
output six.svg

varying envvar num sequence 1 to 10
data output

command echo "12 * $num" | bc
title Multiples of twelve
output twelve.svg

varying envvar num sequence 1 to 10
data output

As you can see, there’s no need to do anything except add a new command directive – Graphick automatically groups each line with the most recent command.
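For the curious, that grouping logic can be sketched in a few lines of Ruby (hypothetical, not Graphick’s actual parser):

```ruby
# Split a Graphick-style file into one directive list per graph:
# blank lines and % comments are skipped, and each `command`
# directive starts a new group.
def group_directives(text)
  graphs = []
  text.each_line do |raw|
    line = raw.strip
    next if line.empty? || line.start_with?("%")
    name, rest = line.split(" ", 2)
    graphs << [] if name == "command"
    graphs.last << [name, rest]
  end
  graphs
end

file = <<~GRAPHICK
  command echo "6 * $num" | bc
  title Multiples of six

  command echo "12 * $num" | bc
  title Multiples of twelve
GRAPHICK

group_directives(file).length  # => 2 graphs
```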

As an example of generating more complicated graphs, the graph I featured at the start of this post, which was for my coursework, was generated as follows:

command bin/time_fourier_transform "hpce.ad5615.fast_fourier_transform_combined"
title Comparison of varying both types of parallelism in FFT combined
output results/fast_fourier_recursion_versus_iteration.svg
x_label n
y_label Time (s)
series_label Loop K %d, Recursion K %d

varying series envvar HPCE_FFT_LOOP_K vals 1 16
varying series envvar HPCE_FFT_RECURSION_K vals 1 16

data output column 4 separator ,
data output column 6 separator ,

As you can see, adding the series modifier allows you to turn the variable into data which is used to plot lines, rather than as part of the X/Y axis. There must always be two non-series data sources (where a data source is either a data directive or a varying directive), and the first one always represents the X axis (the second the Y axis). You can have any number of series data sources, which combine in all combinations to create lines. In this graph, both variables take the values one and sixteen, to create four lines in total. The series_label directive takes a format string. The n-th formatting template (both %d in the string) indicates to put the value used for the n-th series variable at that position in the label.
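Those %d templates behave like an ordinary format string. In Ruby, filling them with the two series values from one of the four lines looks like:

```ruby
# Each %d is replaced by the value of the corresponding series
# variable for that line.
label = format("Loop K %d, Recursion K %d", 1, 16)
label  # => "Loop K 1, Recursion K 16"
```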

Finally, there is one more directive which is useful: postprocessing. Postprocessing directives allow you to run arbitrary Ruby code to process the resultant data before it is rendered on the graph. Currently, only postprocessing the y axis is supported, but it would be straightforward to add support for postprocessing the x axis and series data. The postprocessing directives are fed three variables – x, y, and s. x and y are the corresponding values for each data point, and s is an array of all series values at that point, ordered by the definition in the file. For example, if you wanted to normalise the y axis by a certain value, you might do this:

postprocess_y y / 2

Or, you might want to divide it by x and add a constant:

postprocess_y y / x + 5

Imagine the postprocess_y directive as implicitly reading y = , and this syntax should be reasonably intuitive.
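Since the directive body is plain Ruby, one plausible way to evaluate it is to wrap it in a lambda over x, y, and s. This is a hypothetical sketch of that idea, not Graphick’s actual implementation:

```ruby
# Hypothetical sketch: compile a postprocess_y body into a lambda
# taking x, y, and s, then apply it to every data point.
def compile_postprocess(expression)
  eval("->(x, y, s) { #{expression} }")
end

points = [[1, 10], [2, 20], [4, 40]]
postprocess = compile_postprocess("y / x + 5")
processed = points.map { |x, y| [x, postprocess.call(x, y, [])] }
# Each y becomes y / x + 5
```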

So, in summary, Graphick is a fairly powerful tool for generating graphs of program output. You can plot multiple columns of the output, or run the program multiple times to generate multiple outputs — or maybe even a combination of both! Graphick should handle what you throw at it roughly how you’d expect.

If you come across this, and have feature requests, drop a GitHub issue on the repository, or a comment on this post, and I’ll definitely consider implementing them – especially if it seems like something which would be widely useful.

Ruby, a few months on

Sidenote: I’ve pretty much dumped my Thing of the Month plans, because they proved to be too difficult to balance with university work and general life, where I’m trying to branch out more and also trying to be more active in game development. That said, I’m still always trying to learn new things in the software engineering field, as I always have; just in a less forced and artificial way, which I think works better for me. I’m looking into Kotlin right now, and may put up a post about it sometime soon.

Since I posted about learning Ruby, I think I’m getting rather good at it. My most recent project was a hundred-or-so-line integration test runner for a university compiler project written in Java. It executes the project with different test files, checking each output is as expected, all the while overwriting a line in the terminal to give an updating appearance without spamming it. It then proceeds to allow manual test verification, where you can see the source code and the error the compiler produced, to manually verify that the error looks sensible and understandable.

Soon, we realised that running such a complicated Java program over 250 times was slow, so I looked into multithreading the script. I was pleasantly surprised by just how easy it was to integrate concurrency into my test runner: it essentially consisted of wrapping the main logic in a thread’s block and storing each thread in a list, making up to 8 threads (though this is variable via a command line parameter) and waiting for the oldest one to finish before making a new one.
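The pattern was roughly this – a simplified sketch with a stub in place of actually shelling out to the Java compiler:

```ruby
# Simplified sketch of the threaded test runner pattern described above.
MAX_THREADS = 8  # in the real script this comes from a command line parameter

results = Queue.new  # thread-safe collection of test outcomes
threads = []

(1..20).each do |test_number|
  # Wait for the oldest thread to finish before starting a new one.
  threads.shift.join if threads.length >= MAX_THREADS

  threads << Thread.new(test_number) do |n|
    # The real runner shells out to the Java compiler here and
    # compares its output against the expected result.
    results << [n, n.even?]
  end
end

threads.each(&:join)  # wait for the stragglers
```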

I’ve also started coding a little game akin to how I remember The Hobbit, a lovely game I used to play on a ZX Spectrum emulator when I was pretty young. I emphasise how I remember it, because I recently watched a playthrough and my memories weren’t very reliable, but the main thing that my younger self found attractive was the method of input – you would type something, like “light the fire,” or “kick Gandalf,” and it was like magic – it seemed like it always had a well-written reaction to anything you could imagine, and I can remember being really interested in knowing how it worked. I think I’ve got a rather sensible approach to mimicking it, but I don’t think I could ever hope to achieve the same kind of magic I felt playing that game. Wikipedia is fairly complimentary about its approach:

The parser was very advanced for the time and used a subset of English called Inglish.[5][6] When it was released most adventure games used simple verb-noun parsers (allowing for simple phrases like ‘get lamp’), but Inglish allowed one to type advanced sentences such as “ask Gandalf about the curious map then take sword and kill troll with it”. The parser was complex and intuitive, introducing pronouns, adverbs (“viciously attack the goblin”), punctuation and prepositions and allowing the player to interact with the game world in ways not previously possible.

To anyone who’s interested in retro games, I’d heavily recommend it. It’s truly something I wouldn’t have thought could have existed at the time if I didn’t know about it.

All this is to say that, despite my initial doubts about how long it would last, I still really do love the language, and it’ll definitely be one of my first choices for future projects. I’d probably lean towards C#, C++ or Java for any game that requires better performance, but for most other things I think Ruby is going to indefinitely be one of my favourite choices.

Ruby, and why it quickly became my favourite language

I’d be taken by surprise if I were told by someone that their favourite programming language is one that they’d never written more than a few lines of code in, but that’s my situation right now. Due to a number of unrelated circumstances, I’ve been unable to install and use Ruby, but I’ve been reading a book I obtained recently – Eloquent Ruby, by Russ Olsen – which I’ve found to be a fantastic read. I’m currently about halfway through, and am almost certainly going to pick up some other books in the series – Design Patterns in Ruby, also by Russ, and Practical Object-Oriented Design in Ruby – as well as, when it’s released, Agile Web Development with Rails 5 (though this isn’t in the same series of books).

So far, I’ve learnt that Ruby seems to be exactly how I want a programming language to be – very consistent, intuitive, expressive, and clean. As a short history, I began programming in Lua. At the time, I was pretty young – either eight or nine – and didn’t quite grasp the fundamentals of how a programming language is written. I could write code, but it was a short while before I realised that essentially everything was an expression, which could be nested and used in funky ways – meaning I could write lines like (not that I’m advocating this style, of course):

tab[index + 3] = get_variable(get_function()({ ["a"] = 5 }));

Or to realise that the functions provided to me by my environment (at the time, Roblox), such as their event system, where you’d subscribe a listener in a manner similar to:

object.event:connect(function() … code … end);

were often something I could manufacture myself, by making a table (vaguely similar to a hash and an array butchered and stitched together) with a function called “connect” that accepts a function as its parameter. These kinds of complex nested expressions and the use of closures and anonymous functions hadn’t really made much of an impression on me, and the higher-level constructs I was using merely felt like a black magic that just worked. Once I realised this, I gradually drifted to feeling like Lua wasn’t ideal for many things – both in terms of speed, and a limited syntax (allowing for some incredible OO systems such as MiddleClass, but still falling short of true OO languages).
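That event pattern translates naturally into Ruby, which is part of what made the language click for me later. A minimal sketch (my own illustration, nothing from any particular library):

```ruby
# Minimal sketch of an event object like the Lua one described above:
# listeners subscribe via connect, and fire calls each one in turn.
class Event
  def initialize
    @listeners = []
  end

  def connect(&block)
    @listeners << block
  end

  def fire(*args)
    @listeners.each { |listener| listener.call(*args) }
  end
end

event = Event.new
received = []
event.connect { |message| received << message }
event.fire("hello")
# received now contains ["hello"]
```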

I then transitioned to writing object oriented code with C# and Java, languages that, of course, have methods coupled with data, so I could write code that kept functionality with its associated data; something that felt sensible and correct. I still wasn’t completely satisfied, though. While “everything” (for the most part) was an object, there were still things that I felt I should be able to do that I couldn’t. Primitives, for example, are essentially special cases, and although autoboxing is nice, it’s a bit clunky. While C# hides the detail better than Java (List&lt;Integer&gt;, anyone?), it’s still got its own problems.

Another two languages I’ve used widely are PHP (boo!) and Python. With these, I loved how they were object oriented, but you could flexibly pass objects around. I still prefer static typing, and I do think it’s often more optimal for larger projects (if only so your editor can be a lot more intelligent; there are sometimes type annotations, but they’ve always seemed like a poor man’s static typing to me), but I think dynamic typing can, when used well, be a great convenience.

I had a placement this summer at Netcraft, an internet services company in Bath. It introduced me to Perl, a language which I’d heard about but never really been interested in – my first year of university was my first serious venture into the Unix world, and I’d spent most of that trying to hide from calculus and trigonometry, while trying to improve my ability with Haskell, a language we used in our first term.

At first, I really didn’t like Perl. I’m still not overly fussed, but it managed to persuade me, and I ended up writing a few scripts in it at home. I find a few things about it rather annoying – it’s inconsistent, too many things have unexpected side effects or set special variables, there are too many ways of expressing the same idea, and the object system is not just unintuitive, but feels completely hacked on (though, in fairness, Moose fixes this, I disagree with the principle that you should have to use a library for something like this). I find it ridiculous that it took decades to add method signatures, and even now they’re considered experimental! I don’t like how I have to think for ages before I can even begin to get the length of an array in a data structure, and when I do, I end up producing code that looks a bit like this:

scalar @{$structure->[{[@@{$}->{a}->$@{ } ]]] ) }->{key}->[3]->{3}}

I jest, but this is certainly how it feels. Even if the speed of thinking comes with practice, it’s still a bit gross how many extra symbols I need to access children of arrayrefs and hashrefs, compared to other languages where you can just nest these kinds of structures effortlessly without thought. Even some of the dedicated Perl community seems to agree here – Perl 6 eliminates a lot of the variation in the symbols to make them at the very least more consistent.

But there are also a lot of things I love about it, and wish more of the other languages I use had: statement modifiers are a big one. For context, a statement modifier lets you suffix any statement with a little expression like:

print "hello" if should_print_hello;

For some reason, this seems infinitely more elegant than a faux statement modifier in C#, which would be:

if (should_print_hello) print "hello";

Realistically, they’re similar, but the latter feels more bulky, doesn’t read as well, and I’m not particularly keen on if statements with omitted curly braces.

All that said, whenever I have a basic task to do, my first thought is “Hey, I could write a 10 line Perl script to do this!”. The Perl community isn’t lying when it says it makes the “easy things easy” – it really does. This is something I’ve heard almost unanimously from all of my intern colleagues; most of us seem to harbour some level of disdain for Perl, but still want to use it a lot, because it’s just that damn easy. It’s like an infection that grows on you, presumably eventually turning you into a fully fledged Perl monk before you go to live in a monastery and dedicate your life to answering questions on

Ruby is a language I’ve wanted to learn ever since I got pretty good at Lua and decided to move to greener pastures. It was for a completely superficial reason: I thought its website was really well designed. Looking at and comparing it to, you can probably see why I thought this.

For some reason, I slowly came under the impression that I didn’t like the look of Ruby, despite never taking a decent look, and avoided it. Until I decided to learn a new thing, and made it Ruby, after seeing a colleague writing some Rails code.

Soon after looking into it, I realised something: Ruby seemed to be just as good a language for writing quick scripts to solve problems as Perl, a trivially superior one for web development thanks to Rails and Sinatra, and seemed to take all the nice features, like statement modifiers, but wrap them in a very consistent object oriented approach, where literally everything behaves like an object. “hello”.upcase? Well, “HELLO”, of course – no syntax errors to be found here!

I love the loop structures, and how enumeration is handled so elegantly. I love how there’s a culture of writing DSLs (though the term is used very loosely) to do all sorts of things, from testing to build tools. Everything I’ve read about the language makes me itch to rewrite all of my code-bases in it, but I think I’ll just settle for using it for personal systems administration and future website development.
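To illustrate what I mean – nothing here beyond core Ruby, just the features praised above:

```ruby
# Statement modifiers and enumeration without explicit loop counters.
words = ["hello", "world"]

# A statement modifier, much like Perl's:
upcased = words.map(&:upcase) if words.any?

# Elegant enumeration: map over the collection, then fold it up.
lengths = words.map { |word| word.length }
total = lengths.sum
```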

No doubt this is just some kind of initial language infatuation, and it may pass, but right now, Ruby is my favourite language, and I’ve yet to even use it properly.

Thing of the Month 1: Ruby on Rails

I’m trying to learn a new thing (language or framework) every month. Each time, I’d like to begin by answering What, Why, Prior Experience, What, Why, and Compromises. Respectively, those are: what language and why, previous experience I have that I think is relevant, what I hope to build in the process and why, and any compromises I expect I may have to make to succeed. I’m open to varying what I plan to build through the month if I decide what I chose was too optimistic (or even not optimistic enough), or if I have a particularly busy month and don’t find enough time to learn my Thing of the Month.

Month 1: September 2016

What?
I’m going to try to learn two things: Ruby and Rails. I’m cheating a little, because it’s technically still August, but I think I can forgive myself.

Why?
I’ve seen a lot of Ruby and always wanted to give it a try, and I think that it’s important for me to vary my server-side technologies more, as I’ve not used ASP.NET for a long time, and so am mostly limited to PHP, which is something I would like to change going forward.

Prior Experience?

I’ve written MVC code on top of ASP.NET and CakePHP before; in fact, my current main web project, Gamer-Island, is written on top of CakePHP. This should make learning the Rails aspect much simpler. A strong background in scripting languages should assist in learning Ruby. Overall, I think prior experience will make it easier, but certainly not easy, to gain a degree of fluency in Ruby on Rails.

What?
I’d like to remake an old project of mine, which was a Minecraft server administration panel. Servers would get their own subdomain, and it’d monitor users, chat, and logs, which could then be accessed by staff of the server. With this, they could then, for example, issue time bans on users and associate the bans with chat messages, meaning server owners and admins can keep track of bans to ensure they are all fair.

Why?
I used to run a Minecraft server (running the Tekkit modpack). I stopped (due to a change in the EULA not allowing donations in exchange for in-game items on servers, which previously made the server self-sustaining), but I still think there is potential in this idea. I found, during my time running it, that it was difficult to find ‘staff’ who could be trusted to be cool-headed and fair in all situations. Initially, the panel was to ensure my own server’s staff had to provide evidence with their actions, but soon I realised other servers would likely be suffering the same issues. Additionally, Minecraft server plugins all tend to log in their own funky ways. If you don’t capture their messages at run-time, and parse them into a standard form, then the information is dumped in a log file full of a jumble of all different logging formats.

Additionally, I think this provides an ample challenge, as it will require well-configured routing, lots of AJAX while maintaining a secure front against CSRF attacks, and configurable levels of access.

Compromises?
I suspect I will have to compromise on the core of the application: I do not believe I will have time to write a Java plugin to hook into servers and securely communicate with the admin panel, uploading user chat, logged information, and other data. Instead, I will focus on writing the Ruby end, which would be the web front to the data, and the API for uploading data.

Then, in a future Thing of the Month, I have the option of writing a Minecraft server plugin in Java to upload this data, and create a fully-functioning product.