Concourse CI Basics - Part 1

Recently, I began working with Docker, which opened my eyes to its extremely versatile nature and to everything that can be done by building Docker containers for applications. As an extension of this, I dug into how to achieve CI/CD with Docker, and that introduced me to Concourse.

If you haven’t heard of Concourse, you can find their home page here:

Concourse, to me, makes a lot of sense in a world with Docker Swarm, where one works with microservice architectures and is pushing towards a CI/CD environment.

The first lesson (one that took me a while to grasp myself) is that Concourse is very heavily tied to Docker, and that tie is a big part of how it helps you achieve CI/CD (dreams?). If you plan it correctly, nearly everything you do in a Concourse pipeline happens inside a Docker container. Concourse can leverage containers to do work that would otherwise need to be done manually, such as moving files around, modifying files, setting environment variables, and much more.

Let’s begin with the basics: what is a pipeline in Concourse? A pipeline is essentially a flow of tasks that work together to achieve an end goal. The pipeline has a series of steps that are carried out (either automatically, or by one of many triggers). In Concourse, these steps or tasks are called Jobs, and the pipeline has a series of Resources that you declare, which are utilised by those Jobs.

One of the most basic pipelines that can be created simply builds a Docker image and pushes it to Docker Hub. The source code (along with the Dockerfile one wants to build) could be obtained from a Git repository. This may sound trivial; however, let’s outline what the Resources and Jobs would look like before we dive into what the pipeline itself would actually look like:

Resources:

- Docker Image Repo - the repo that the image will eventually be pushed up to, for example: devinsmith911/test-image.
- Git Repo - the repo where the code (and Dockerfile) for your image is obtained from.
- Slack Notification Source - this one is a bit specific, but I have it included in every pipeline; it does as the name suggests.

Jobs:

- Build image job - luckily, in this pipeline we can get everything we need done in one simple job: it grabs the code from our Git resource and uses the Dockerfile to build a Docker image.
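To make this concrete, here is a minimal sketch of what such a pipeline could look like in YAML. The repo URI, branch, and the `((dockerhub-username))`/`((dockerhub-password))` credential names are placeholders you would fill in for your own setup, and the Slack resource is left out to keep the sketch short:

```yaml
resources:
# the Git repo holding the code and the Dockerfile
- name: source-repo
  type: git
  source:
    uri: https://github.com/devinsmith911/test-image.git  # placeholder
    branch: master

# the Docker Hub repo the built image gets pushed to
- name: test-image
  type: docker-image
  source:
    repository: devinsmith911/test-image
    username: ((dockerhub-username))
    password: ((dockerhub-password))

jobs:
- name: build-image
  plan:
  - get: source-repo
    trigger: true        # run the job whenever the repo changes
  - put: test-image
    params:
      build: source-repo # build from the Dockerfile in the fetched repo
```

The nice part is that the `put` step on the docker-image resource does both the build and the push for us, so the whole thing really is one job.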

The pipeline itself is written in YAML and is uploaded to a running Concourse deployment via fly commands. I will be going over the fly CLI in the next post.
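As a quick preview, uploading a pipeline looks roughly like this (the target alias `ci`, the Concourse URL, and the pipeline name are all placeholders for your own setup):

```shell
# log in and save a target alias for this Concourse instance
fly -t ci login -c https://concourse.example.com

# upload (set) the pipeline from a local YAML file
fly -t ci set-pipeline -p build-image -c pipeline.yml

# newly set pipelines start paused, so unpause it to let it run
fly -t ci unpause-pipeline -p build-image
```

More on all of this in the next post.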

Now that we understand which Jobs and Resources we are going to need, and how to declare them, we can dive into what the first actual pipeline will look like and how to get it working to your liking. I will be detailing this in the next post.


EDIT: Still trying to find my rhythm writing these posts, so feel free to leave comments or suggestions on how I can improve!
