
Our journey to the cloud

How we migrated our infrastructure from self-managed Linux hosts to a cloud setup with Kubernetes

In the last few months, we migrated our infrastructure from self-managed Linux hosts to a cloud setup with Kubernetes. The reasons for the migration were diverse: we needed better scalability due to multi-tenancy, and on the operations side more modern tooling and CI/CD (configuration as code) integration for faster deployments. In this blog post we write about our journey, the problems we encountered and the current state of our infrastructure.

Matthias Baldi
1.10.2021

Why did we do this

Our original setup

For a long time we used Ansible to provision the servers hosting the staging and production versions of our projects and websites, such as our own website, the portal of our partners Bergloufcup and GLUEX, and many other projects, for example fidentity.

Problems we had

Adding new instances required a lot of effort, and we wasted time on difficult deployments and updates of some setups. Another difficulty with Ansible was that update, migration and removal configurations had to be implemented separately for each part of a setup. In addition, it was difficult or impossible to run several different versions of Node or MongoDB on the same host.

One positive aspect was that our apps and backends were already more or less 12-factor ready, so we did not have to make any major adjustments to the codebases.

New Requirements

But even with a more or less efficient CI/CD infrastructure and applications that were mostly 12-factor compliant, new requirements arose. A new infrastructure was needed to reduce the maintenance effort and improve maintainability, so we defined the following requirements:

  • It must be more scalable
  • It must reduce the effort needed to provision and maintain new instances
  • The pricing should be the same or lower
  • The tooling should be stable and modern
  • The operators must be located in Switzerland and able to offer us "as a service" products

The solution in the cloud

Containers and Kubernetes

We had already tested a POC with Docker container setups some years ago, but at that time the advantages were not big enough to justify changing our CI/CD processes. The development stack and the provider partners in Switzerland also did not yet seem ready to support the cloud-based platforms we required.

But we wanted to give containers another try and evaluated many of the currently available container-based setups. We looked at products like Rancher, OpenShift, Docker Swarm and plain Kubernetes, and at the cloud providers which support them, such as Azure, AWS and Exoscale.

In the end we decided to use Kubernetes and Rancher for our future setups. These products covered our requirements, and it was even possible to get everything as a service from Swiss partners and providers.

It was important for us to be able to use Kubernetes as a service because we are a small team and not able to maintain multiple clusters 24/7.

Deployment? Pulumi!

Soon after we had created and deployed our first POCs on the new container setup, we began to struggle with the rather overwhelming YAML files and wondered how we could simplify the configuration process.

There are many possibilities out there, from Helm and Kustomize to Skaffold. But during the evaluation of these tools, we stumbled upon Pulumi, which can work together with YAML CRDs but also lets us describe the complete deployment in our main programming language, TypeScript. This allowed us to use our well-known tools and libraries, and because almost our entire team consists of web developers, we were able to configure Kubernetes setups in a convenient way.

Note: Pulumi has been a handy configuration-as-code tool for us so far, and we gladly recommend it to everyone with a programming background. One of the best features of using this intermediate layer to generate YAML is that it is almost fully type-safe, which helps to create shared code (in our case an npm library), complex but still comprehensible configurations and dynamic setups.
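
To give an idea of what this looks like, here is a minimal sketch of a reusable helper that defines a Deployment and a Service with Pulumi's TypeScript SDK. The helper name, image and ports are made up for illustration and are not taken from our actual setup:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Hypothetical helper: given a name and an image, create a Deployment and a
// matching Service. In a real setup this kind of function can live in a
// shared npm library and be reused across projects.
function createWebApp(name: string, image: string, port = 3000) {
    const labels = { app: name };

    const deployment = new k8s.apps.v1.Deployment(name, {
        spec: {
            selector: { matchLabels: labels },
            replicas: 2,
            template: {
                metadata: { labels },
                spec: {
                    containers: [{ name, image, ports: [{ containerPort: port }] }],
                },
            },
        },
    });

    const service = new k8s.core.v1.Service(name, {
        spec: {
            selector: labels,
            ports: [{ port: 80, targetPort: port }],
        },
    });

    return { deployment, service };
}

// Typos in field names are caught by the TypeScript compiler instead of
// failing at deploy time.
createWebApp("example-frontend", "ghcr.io/example-org/frontend:1.0.0");
```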

Container Builds and Registry

We needed to update the CI/CD build agents so that they could build Docker images and upload them to a GitHub Packages registry, which in turn required additional account and token management for these registries.
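
Boiled down, such a build step looks roughly like the following sketch, written here as a small TypeScript script. The registry, organisation, image name and environment variables are placeholders and not our actual pipeline, which is driven by the CI system's own configuration:

```typescript
import { execSync } from "child_process";

// Placeholder values: registry, organisation and variable names are
// illustrative, not our real pipeline configuration.
const registry = "ghcr.io";
const image = `${registry}/example-org/example-app:${process.env.GIT_SHA ?? "latest"}`;

// Authenticate against the registry with a token provided by the CI system.
execSync(
    `echo "${process.env.REGISTRY_TOKEN}" | docker login ${registry} -u ${process.env.REGISTRY_USER} --password-stdin`,
    { stdio: "inherit" }
);

// Build the image from the project's Dockerfile and push it to the registry.
execSync(`docker build -t ${image} .`, { stdio: "inherit" });
execSync(`docker push ${image}`, { stdio: "inherit" });
```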

Writing the Dockerfiles was less painful than expected, and due to the nature of containers the results were comfortably testable.

In the end a lot of these changes were straightforward, and parts of the processes are simple to automate via templates and scripts. But it still requires a lot of time and effort, which should not be underestimated.

The transfer from a self-hosted system to a cloud-based one should not be taken lightly; it needs to be planned carefully ahead of time.

Benefits

We did not only encounter new problems and challenges. On the contrary: we discovered many interesting advantages of hosting projects in a Kubernetes-based cloud environment.

The new hosting method improved the scalability of our maintenance capabilities and also helped us to better organise the configuration settings for the growing number of projects.

Another interesting new capability is that reusable parts of different projects can be plugged together effortlessly, because containers encourage the single responsibility pattern.

Conclusion

The change from a self-hosted setup to a cloud-based one can be hard, and migrating all the existing projects can be difficult. But if an organisation needs to run multiple types of setups at the same time, a Kubernetes-based environment will have its benefits.

In our case the process is still partially ongoing, but we have arrived at a point where we can thoroughly recommend taking a look at the currently available toolset. Even with strict requirements, such as the need to be hosted in a Swiss location, it is possible to create a setup whose benefits far outweigh the drawbacks.

To summarise our journey so far:

  • Previously, the maintenance effort of our old system had reached an overwhelming level
  • A cloud-based solution looked very promising, and the CNCF landscape is so vast that there is bound to be a solution for almost every use case
  • One of the more tedious tasks is updating or cloning the existing CI/CD systems so that they can build containers and upload them to custom registries
  • Existing and new projects have to be wrapped into containers, and each responsibility of a project should be split into its own container to comply with the 12-factor app pattern
  • Automation is the best friend of programmers and system engineers: there are a lot of tools and scripts to optimise and simplify the setup and maintenance of a cloud environment

We hope that this blog entry inspires you to also look into cloud-based setups, even though we did not provide specific tutorials on how to implement particular tasks in Kubernetes or other tools.

Author


Matthias Baldi

Enjoys hobby photography in the Bernese Oberland and tries his hand at growing vegetables in his garden at home.