Dynamic Environments for Everyone

Christian Platzer
willhaben Tech Blog
5 min read · Nov 4, 2019


Photo of gray building by Ian Battaglia on Unsplash

Recently, the demand for more versatile test environments has grown within various teams. While full-fledged development environments undeniably have their advantages, coordinating the deployment and testing of dependent applications becomes more difficult, especially with the tendency towards the use of micro-service architectures. Consequently, an implementation that facilitates on-demand dynamic environments was sorely needed. Since our team, which is responsible for the web implementation here at willhaben, practically started from scratch with the underlying technology stack, we were an ideal candidate for a dynamic environment prototype. Two major points, which are key enablers, are worth mentioning here:

  1. No legacy technologies. Since we were essentially able to choose our own technology stack, we had the enormous advantage of moving forward without any legacy technologies that had to be incorporated into our project. As a result, all of our applications are deployed within our Kubernetes cluster. What seems like a minor point at first often turns out to be a showstopper for applications running on virtual machines or even directly on hardware.
  2. No persistence. Another issue that is hard to overcome concerns persistent data storage. Since we are interfacing with the rest of the platform via an API, we have no need to persist data directly; however, this also means that we are stuck with a single “traditional” environment when interfacing with the rest of our platform. In our case we chose staging (ST) for that purpose.

With these cornerstones in mind, we wanted to achieve the following.

  1. On each pull request to our repository, a build should be triggered automatically.
  2. If the build is successful, it should be deployed to a dynamic environment automatically.
  3. After completion, a URL that directs to the environment should be shown in both the pull request and a dedicated dashboard. This URL is then used to trigger automated tests.
  4. If a pull request is altered, the same environment should be redeployed.
  5. If a pull request is finished (merged, deleted, etc.), the environment must be torn down, freeing all occupied resources.

With all these points fulfilled, we effectively enable a test-before-merge policy which, in turn, acts as a requirement for continuous delivery, our main objective.

Now it is time to get more technical and explain each of the previous steps in more detail.

The first point is the easiest to solve, because it is essentially already covered by our policy of allowing a merge only after a successful build. The second point, however - deploying this build to a dynamic environment - is where most of the effort went.

Our Kubernetes cluster separates environments by namespace. As a result, we currently run a dedicated namespace for each environment, such as dev, uat or st. Our first idea was simply to deploy various versions of the same application under the same namespace and decide in our load balancer which one to use. This was not possible, however, because the ingress controller for HAProxy, our load balancer, is limited in its functionality and did not support multiple Kubernetes service descriptions. Therefore, we decided to create a separate namespace for each dynamic environment directly in the deployment step. As a result, our development cluster looks like this:

Apart from the dynamic prefix barbarix-dynamic, the namespace also includes the unique ID of the corresponding pull request to distinguish among environments.

After that, the application can be deployed to this fresh namespace as usual. Here, the major advantage of using Kubernetes comes into play. Since every regular deployment also triggers a re-configuration of the load balancer through the ingress controller, the same happens for the deployed dynamic application. After this step (where ingress, service, configmap and deployment are created), the backend shows up in HAProxy:
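The article lists ingress, service, configmap and deployment as the objects created per dynamic namespace. A minimal, hypothetical sketch of the service and ingress for one pull-request environment could look roughly like this (all names, ports and the app label are illustrative, and the manifests use the current `networking.k8s.io/v1` API rather than whatever version was in use at the time):

```yaml
# Hypothetical minimal manifests for one dynamic environment;
# names, ports and labels are illustrative only.
apiVersion: v1
kind: Service
metadata:
  name: barbarix
  namespace: barbarix-dynamic-1234
spec:
  selector:
    app: barbarix
  ports:
    - port: 80
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: barbarix
  namespace: barbarix-dynamic-1234
spec:
  rules:
    # The host matches the wildcard DNS entry described below.
    - host: barbarix-dynamic-1234.barbarix-st.willhaben.at
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: barbarix
                port:
                  number: 80
```

Because the ingress controller watches all namespaces, creating these objects in the fresh namespace is enough to make HAProxy pick up the new backend.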

Achieving the same result with conventional virtual machines would require at least an automated, IP-independent provisioning and deployment of the application. This could be done, but only with a lot of effort.

Now, the application is running and has its own backend in our load balancer. To really access it, however, we need an IP address and a DNS entry for it. We solved this problem by creating a wildcard DNS entry for the “parent” environment:

*.barbarix-st IN CNAME barbarix-st.willhaben.at.

With this configuration in bind, all subdomains of barbarix-st.willhaben.at are resolved to the same IP address. The only thing left is to direct requests for this IP to the correct backend in our load balancer. That is achieved by the following line, which takes the host header and parses the backend from it, if it is directed towards a dynamic environment:

use_backend barbarix_dynamic_%[req.hdr(host),lower,regsub(\-,\.,g),field(3,'.')]_barbarix-k8s if hdr_beg(host) -i barbarix-dynamic
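The converter chain in that line can be illustrated step by step: lower-case the Host header, replace every "-" with ".", then take the third dot-separated field, which is the pull-request ID, and wrap it into the backend name. The following shell sketch (the example host name is hypothetical) mirrors that transformation:

```shell
# Mimic the HAProxy expression:
#   req.hdr(host),lower,regsub(\-,\.,g),field(3,'.')
host="barbarix-dynamic-1234.barbarix-st.willhaben.at"   # example request host

# Lower-case, turn every "-" into ".", then take the 3rd dot-separated field.
pr_id=$(printf '%s' "$host" | tr 'A-Z' 'a-z' | tr '-' '.' | cut -d. -f3)

# HAProxy substitutes this into the backend name:
backend="barbarix_dynamic_${pr_id}_barbarix-k8s"
echo "$backend"   # barbarix_dynamic_1234_barbarix-k8s
```

This is why the dash-to-dot substitution is needed at all: without it, the PR ID could not be isolated with a single `field()` call.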

With that, the basic functionality to provide a usable system is complete. To fulfill the rest of the above requirements, we configured our build server and Bitbucket so that the correct actions (redeploy, create, destroy namespace) are triggered when required.
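The event-to-action mapping can be sketched as a small hook script. Everything here is a hypothetical illustration: the function name, the `k8s/` manifest directory and the event names are assumptions, and the function only echoes the kubectl commands it would run, so it can be read without access to a cluster. Only the namespace naming scheme (`barbarix-dynamic-<PR id>`) is taken from the article.

```shell
#!/bin/sh
# Hypothetical sketch of the build-server/Bitbucket hook logic:
# map a pull-request event to the matching namespace action.
handle_pr_event() {
  event="$1"    # e.g. opened, updated, merged, declined
  pr_id="$2"    # unique pull-request ID
  ns="barbarix-dynamic-${pr_id}"
  case "$event" in
    opened|updated)
      # Create (or re-create) the environment and deploy into it.
      echo "kubectl create namespace ${ns}"
      echo "kubectl apply -n ${ns} -f k8s/"   # ingress, service, configmap, deployment
      echo "https://${ns}.barbarix-st.willhaben.at"
      ;;
    merged|declined)
      # Deleting the namespace tears down all contained resources at once.
      echo "kubectl delete namespace ${ns}"
      ;;
  esac
}

handle_pr_event opened 1234
handle_pr_event merged 1234
```

Deleting the whole namespace is what makes teardown trivial: every object created for the environment lives inside it, so a single command frees all occupied resources.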

To provide a usable overview, we also created a Grafana dashboard that shows all currently available environments, a clickable link to each one, and an option to destroy it by hand:

Grafana Dashboard

Apart from the limitations mentioned in the introduction, we encountered various challenges during the implementation phase.

  • Configuration: We usually configure our applications according to the environment they run on. To enable dynamic environments, we use our normal ST environment as a fallback. However, that also means that, at the moment, we are not able to run dynamic environments against different regular environments.
  • Static resources: For better performance, static resources are usually delivered by our caching infrastructure. To avoid deployments overwriting each other, we serve this content from the application without caching, which is completely fine for development.
  • SSL certificates: Because of the wildcard DNS entry, our application runs on a deeper subdomain level of willhaben.at and is therefore not covered by our wildcard certificate for *.willhaben.at, which only matches a single label.
  • Integration into the rest of our applications: Since we are already using HSTS on our main site, subdomains without a proper certificate cannot be loaded. This is a problem that can only be fixed once we have the correct certificate.
  • Tests: Only a restricted set of our automated test runs can be executed on dynamic environments due to the restrictions explained above. Still, they provide additional value.

Summing up, the presented approach for dynamic environments is far from perfect, but it still enables us to use a better, more efficient and more agile development pipeline. We will continue to improve the implementation and tailor it to our needs. Hopefully, the approach can also be used for other applications and increase our agility in the company as a whole.
