January 02, 2017

How to integrate Docker into your build process

In this blog post I want to show you how to install Docker on your system and integrate it into your build process. To make it easy to run your tests against a running Docker container, we will add a Postgres database to the container. This enables you to test against a fresh and clean database every time you run your tests.

Docker Installation

For general installation instructions you can visit the Docker homepage and follow the steps there. In my example I will install it on a computer running Linux Mint, which is a derivative of Ubuntu, so you can follow the Ubuntu installation steps.

There are two prerequisites before you can install Docker on your machine [1]:
  1. a 64-bit Linux installation and
  2. a Linux kernel version 3.10 or higher
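A quick way to verify both prerequisites from a terminal (a generic check, not specific to any distribution):

```shell
# Print the CPU architecture; it should be x86_64 for a 64-bit installation
uname -m
# Print the kernel version; it must be 3.10 or higher
uname -r
```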
I will only show you the basic steps of the installation; you can find the full instructions on the Docker page. Here are the basic steps:
  1. Update your APT package index:
    1. sudo apt-get update
  2. Install Docker:
    1. sudo apt-get install docker-engine
If something doesn't work after the installation, a good thing to try is to log off and log on again. I had an error which was not easy to find, and I fixed it by rebooting my PC.

Maven Integration

To start Docker in your build process you have to add a plug-in to your pom.xml file. There are many Docker Maven plugins, but the plugin from io.fabric8 provides the most options for defining your build instructions and your runtime configuration. You can find it on GitHub at https://github.com/fabric8io/docker-maven-plugin .

A particularly useful option in the run configuration is a wait section with a log pattern such as "database system is ready to accept connections", which makes the build wait until Postgres has finished starting before your integration tests begin.

In the image build configuration, you have to specify where the build can find your Dockerfile via the dockerFileDir property. The plugin will search for a file named "Dockerfile" in this directory.

The ports section in the build configuration lets you decide which ports you want to expose.
While defining the run section you also have to configure the ports, because you need to define which host ports map to which exposed container ports.
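Taken together, the plugin declaration might look like the following sketch. The image name, plugin version, Dockerfile directory and wait timeout are placeholders I chose for illustration, not values from the example project; adjust them to your setup:

```xml
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.21.0</version>
  <configuration>
    <images>
      <image>
        <name>willhaben/postgres-test</name>
        <build>
          <!-- Directory containing the Dockerfile -->
          <dockerFileDir>${project.basedir}/src/main/docker</dockerFileDir>
          <!-- Port the container exposes -->
          <ports>
            <port>5432</port>
          </ports>
        </build>
        <run>
          <!-- Map host port 5432 to the exposed container port 5432 -->
          <ports>
            <port>5432:5432</port>
          </ports>
          <!-- Wait until Postgres reports that it is ready -->
          <wait>
            <log>database system is ready to accept connections</log>
            <time>20000</time>
          </wait>
        </run>
      </image>
    </images>
  </configuration>
</plugin>
```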

In the execution section of the plugin you need to bind the phases to the corresponding goals that you want to start. In the pre-integration-test phase we will build the image and start the Docker container. In the post-integration-test phase we will stop and destroy the container. It is necessary that you add the build goal before the start goal, because just using start doesn't imply that build must be done first. 
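An executions section implementing this binding could look like the following sketch (the execution ids are arbitrary names I made up):

```xml
<executions>
  <execution>
    <id>start-it-database</id>
    <phase>pre-integration-test</phase>
    <goals>
      <!-- build must come before start -->
      <goal>build</goal>
      <goal>start</goal>
    </goals>
  </execution>
  <execution>
    <id>stop-it-database</id>
    <phase>post-integration-test</phase>
    <goals>
      <goal>stop</goal>
    </goals>
  </execution>
</executions>
```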

The build goal will build each image which has a build configuration section or if the global configuration variable image is set [2]. Another great idea is to add a healthCheck section to your build goal, which allows you to periodically check if your container is still working correctly. To do this, you need to define a command, which will return 0 for a healthy container and 1 for a container which is not working correctly. If you are running a webapp in the container you could perform a curl request against the web service, and if it fails you would return 1 to indicate that the container is no longer healthy. The healthCheck section would look like the following code snippet, which is from the documentation of the plug-in [2]:

  <healthCheck>
    <!-- Check every 5 minutes -->
    <interval>5m</interval>
    <!-- Fail if no response after 3 seconds -->
    <timeout>3s</timeout>
    <!-- Fail 3 times until the container is considered unhealthy -->
    <retries>3</retries>
    <!-- Command to execute in shell form -->
    <cmd>curl -f http://localhost/ || exit 1</cmd>
  </healthCheck>

The start goal creates and starts Docker containers; it evaluates the run section of all enabled images which are provided [3]. It is possible to define a restart policy if you want the container to try to restart when it exits. There are two options available: always restart and restart on-failure. You should consider defining a retry count if you are using the on-failure option to define how often it should try to restart before giving up.
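Inside the run section, such a restart policy might be declared as follows (the retry count of 3 is just an example value):

```xml
<run>
  <!-- Try to restart the container on failure, but give up after 3 attempts -->
  <restartPolicy>
    <name>on-failure</name>
    <retry>3</retry>
  </restartPolicy>
</run>
```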

The stop goal stops and removes every container that was started with docker:start during the same build, and containers created by a previous call to docker:start [4].

Hence, you don't have to worry about stopping containers manually if you don't need them anymore. If you'd like to keep your container you need to add a configuration section to your stop goal, and set the keepContainer property to true.
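For example, a stop execution that keeps the container around for later inspection might look like this sketch:

```xml
<execution>
  <id>stop-it-database</id>
  <phase>post-integration-test</phase>
  <goals>
    <goal>stop</goal>
  </goals>
  <configuration>
    <!-- Stop the container but do not remove it -->
    <keepContainer>true</keepContainer>
  </configuration>
</execution>
```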


The following snippet shows an example Dockerfile that installs Postgres test and development databases you can use. Note that you have to expose the port here again, just as in your pom file.

 # example Dockerfile for https://docs.docker.com/examples/postgresql_service/  
 FROM ubuntu  
 MAINTAINER daniel.zimmermann@willhaben.at  
 # Add the PostgreSQL PGP key to verify their Debian packages.  
 # It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc  
 RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8  
 # Add PostgreSQL's repository. It contains the most recent stable releases
 #   of PostgreSQL, including ``9.5``.
 RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" > /etc/apt/sources.list.d/pgdg.list  
 # Install ``python-software-properties``, ``software-properties-common`` and PostgreSQL 9.5
 # There are some warnings (in red) that show up during the build. You can hide  
 # them by prefixing each apt-get statement with DEBIAN_FRONTEND=noninteractive  
 RUN apt-get update && apt-get install -y python-software-properties software-properties-common postgresql-9.5 postgresql-client-9.5 postgresql-contrib-9.5  
 # Note: The official Debian and Ubuntu images automatically ``apt-get clean``  
 # after each ``apt-get``  
 # Run the rest of the commands as the ``postgres`` user created by the ``postgresql-9.5`` package when it was ``apt-get installed``
 USER postgres  
 # Create a PostgreSQL role named ``testDb`` with ``test1234`` as the password,
 # a role ``willhaben`` to own the databases, and then create the ``dev`` and
 # ``test`` databases owned by ``willhaben``.
 # Note: here we use ``&&\`` to run commands one after the other - the ``\``
 #    allows the RUN command to span multiple lines.
 RUN  /etc/init.d/postgresql start &&\
   psql --command "CREATE USER testDb WITH SUPERUSER PASSWORD 'test1234';" &&\
   psql --command "CREATE ROLE willhaben;" &&\
   createdb -O willhaben dev &&\
   createdb -O willhaben test
 # Adjust PostgreSQL configuration so that remote connections to the  
 # database are possible.  
 RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.5/main/pg_hba.conf
 # And add ``listen_addresses`` to ``/etc/postgresql/9.5/main/postgresql.conf``  
 RUN echo "listen_addresses='*'" >> /etc/postgresql/9.5/main/postgresql.conf  
 # Expose the PostgreSQL port  
 EXPOSE 5432  
 # Add VOLUMEs to allow backup of config, logs and databases  
 VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]  
 # Set the default command to run when starting the container  
 CMD ["/usr/lib/postgresql/9.5/bin/postgres", "-D", "/var/lib/postgresql/9.5/main", "-c", "config_file=/etc/postgresql/9.5/main/postgresql.conf"]  

Example Project

I created a small example project in which all of these steps are connected together; hence, you can simply begin to develop against one of the Postgres databases which we previously defined in the Docker container. You can find the sample project on GitHub under the following link: https://github.com/willhaben/docker_tests .
To clone the repository to your local machine, use: git clone git@github.com:willhaben/docker_tests.git


Docker really helps you keep your test environment clean. As a result, there is no need for a database shared with developers which may have corrupted data at the end of the day. You can start from scratch with an empty database or even fill your database with test data every time you create the Docker container so that each developer has a base dataset with which they can work. In our environment, this approach helps us a lot because we don't have to worry about test data or clean it regularly.
The healthCheck in the build goal is great for your operations team to monitor the Docker container status.

[1]: Installation Prerequisites - https://docs.docker.com/engine/installation/linux/ubuntulinux/#/prerequisites
[2]: docker:build - https://dmp.fabric8.io/#maven-goals 5.1
[3]: docker:start - https://dmp.fabric8.io/#maven-goals 5.2
[4]: docker:stop - https://dmp.fabric8.io/#maven-goals 5.3

