On a recent client project, our team was tasked with setting up local development environments for a new Node.js-based microservices system that would eventually be deployed on Red Hat’s OpenShift platform.
We found a good approach built around the MiniShift project, and we have put together a demo with accompanying documentation covering what we’ve learnt.
You can jump straight into the code and docs, or you can stick around for more on the journey that led us to this point.
OpenShift is a computer software product from Red Hat for container-based software deployment and management.
In concrete terms, it is a supported distribution of Kubernetes using Docker containers and DevOps tools for accelerated application development. — Wikipedia
So OpenShift is Kubernetes plus some very impressive enterprise and quality-of-life improvements that together form a compelling offering.
Some of OpenShift’s additions are built upon other upstream projects, such as the integrated Jenkins instance for an out-of-the-box CI/CD pipeline and the integrated Docker registry for your build artefacts. Others are completely custom but very sensible, such as the router that directs traffic between your services; if it weren’t provided, you’d probably end up writing your own version of this component.
One of the more pleasant surprises from our time with OpenShift has been the very clean user interface, which ties all of these concepts together and gives you good insight into how your system is working. You can view logs for all your services, scale the number of instances and even track the progress of builds in the integrated CI/CD pipeline.
Once you become accustomed to OpenShift, you’ll find yourself missing this functionality if you end up back on plain Kubernetes.
It should also be noted that some of the non-technical reasons why OpenShift is an attractive platform are almost equally important.
When evaluating adoption of a platform, it is wise to consider the amount of risk that it introduces into a project, and here Red Hat has a solid reputation for acting as a filter of what can sometimes be a chaotic development of the upstream projects it packages.
This process is Red Hat’s bread-and-butter and is one of the reasons they have built strong relationships with many enterprise customers who trust them to reduce the risk for critical infrastructure components such as this. More and more clients are requesting OpenShift from us because of this.
There are many aspects you can optimise for when building a development environment, and these may change over the life of even a single project. The following are a few of the things that were important to us when evaluating our MiniShift-based solution.
We wanted a solution that would allow us to start building our services as quickly as possible, without requiring so much custom tooling that on-boarding new developers becomes time-consuming.
It might seem counterproductive to optimise for this. However, making it easy for a developer to bring up a completely fresh environment has a significant long tail of improved productivity that benefits you throughout the lifetime of your project.
We find that MiniShift’s use of Docker’s libmachine, in combination with VirtualBox or xhyve (depending on your platform), allows us to wipe and reinstall the environment with minimal fuss.
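As a sketch of what that reset looks like in practice (both commands are from the MiniShift CLI, though flag support varies between versions, so treat this as a sketch rather than gospel):

```shell
# Throw away the whole VM, including all cluster state and images.
minishift delete --force

# Provision a brand-new VM; libmachine drives VirtualBox or xhyve
# under the hood depending on your platform.
minishift start
```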
Now that we’ve conquered the initial learning curve and documented our findings clearly, we feel confident in our ability to get new developers up and running. But this approach hasn’t been without its gotchas. For one, we have found that starting up MiniShift is sometimes unreliable, although retrying the command usually does the trick.
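Since a failed start usually just needs another attempt, a small retry wrapper (our own sketch, not part of MiniShift) takes the manual step out of it:

```shell
# retry: run a command up to N times, pausing between failed attempts.
retry() {
  attempts=$1
  shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    echo "Attempt $i/$attempts failed" >&2
    i=$((i + 1))
    [ "$i" -le "$attempts" ] && sleep 5
  done
  return 1
}

# Hypothetical usage:
#   retry 3 minishift start
```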
The holy grail of DevOps is a development environment that mirrors the production environment as closely as possible. In theory, this reduces the bugs that arise from slight environmental differences, which you would otherwise only catch once your code has been deployed to production.
It goes without saying that there are situations where getting as close as possible to production suffers from diminishing returns. While what we have running now feels fairly solid, we can imagine scaling issues in the future as the application grows to a larger number of more complex services.
Running a full Kubernetes/OpenShift stack in a single virtual machine on your own hardware introduces a lot of complexity, and with it many opportunities for things to break. While we acknowledge this risk, our hope is that things will at least break in the same way they would in production, allowing you to catch problems earlier.
We have yet to explore and document the process of taking a system developed locally using MiniShift to an OpenShift environment deployed in either the public or private clouds. It is likely that we will need to make more compromises as we map out that path.
It is critical in a local development environment that you be able to execute your code changes as quickly as possible, lest you lose a few minutes on every change, which adds up to many hours of waiting around each week.
This was the trickiest part of the process to figure out, because for the most part Kubernetes and MiniShift weren’t designed for this use case.
Because waiting for a full Jenkins build and deployment cycle on every code change takes far too long, we decided to build the container once and then mount the code into place from the host machine. This makes changes available immediately, where nodemon can pick them up and restart the service.
However, we ran into some issues: we needed to change some security settings in Kubernetes to allow host mounts of folders, and even once they were mounted there were incompatibilities with the default way that nodemon monitors files. We had to drop back to “legacy mode”, which has its own limitations.
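As a rough sketch of the two tweaks involved (the project and service-account names here are assumptions; adapt them to your setup):

```shell
# Grant pods running under the default service account permission to
# mount host directories (assumed project name "myproject").
oc adm policy add-scc-to-user hostmount-anyuid \
    system:serviceaccount:myproject:default

# Run nodemon in legacy (polling) mode, since inotify-style watching
# may not pick up changes on the mounted volume.
nodemon --legacy-watch server.js
```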
A further wrinkle was introduced when we realised that it’s not practical to mount the node_modules folder from the host, because native modules need to be compiled for the operating system the code actually runs on. We had to get creative with Dockerfiles and mount points to work around this, but it’s still really simple to kick off a Jenkins build job from the shell, and you only need to do so when your package.json changes, so it doesn’t slow down day-to-day development much.
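The usual trick here (shown with plain `docker run` for clarity, though the same idea applies to a pod’s volume definitions; the paths and image name are illustrative) is to mount the source tree but layer an anonymous volume over node_modules, so the modules compiled inside the image win out over the host’s copy:

```shell
# Mount the project into the container, then shadow node_modules with
# an anonymous volume so the image's Linux-compiled modules are used.
docker run \
    -v "$(pwd):/usr/src/app" \
    -v /usr/src/app/node_modules \
    my-node-service   # hypothetical image name
```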
Kubernetes will only start new pods (instances) up to the level your hardware allows; this is determined by the memory limits set for each service and (in MiniShift) the number of CPU cores and amount of memory handed to the virtual machine on startup. We were able to tweak these quite easily to get up to 50 pods of our little test server running. Interestingly, there is a recommended upper limit of 110 pods across all services when using OpenShift Origin (the upstream project for MiniShift).
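Both knobs are easy to turn; the values and the DeploymentConfig name below are illustrative, not the exact ones from our project:

```shell
# Give the MiniShift VM more headroom at creation time (memory in MB).
minishift start --cpus 4 --memory 8192

# Lower the per-pod memory limit so more replicas fit on the node
# (assumed DeploymentConfig name "test-server").
oc set resources dc/test-server --limits=memory=128Mi

# Scale out and see how far the hardware lets you go.
oc scale dc/test-server --replicas=50
```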
The last thing we needed to tweak was the deployment strategy, which we changed to “Recreate” so that old pods are killed before the new ones are deployed, instead of running both in parallel and slowly moving traffic over to the new version. The latter behaviour is preferable in production but a hindrance in a development setup.
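Switching the strategy is a one-liner against the DeploymentConfig (the name "test-server" is an assumption):

```shell
# Switch from the default Rolling strategy to Recreate, so old pods
# are torn down before the new ones come up.
oc patch dc/test-server \
    -p '{"spec": {"strategy": {"type": "Recreate"}}}'
```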
If you are interested in more of the technical details of this implementation, please check out the extensive README we have created for the demo project.
We also recommend reading the material on Kubernetes By Example by the OpenShift team.