As Adrian previously mentioned, we like to use nodemon in development to sync and restart services with the latest local changes, reducing the developer's feedback loop and the time spent waiting on deployments.
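In an npm project this is typically wired into the scripts block of package.json (the entry-point name here is illustrative; our actual scripts may differ):

```json
{
  "scripts": {
    "start": "node server.js",
    "dev": "nodemon --watch . server.js"
  }
}
```

Running `npm run dev` then restarts the service whenever a watched file changes.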
Originally, we had to jump through a few hoops to get our local files into the pods using host mounts and volumes. With this update, we build, deploy, and sync a bit differently.
The create-project.sh script now sets up configuration, builds the image, and then deploys. Once the deployment is complete, you can rebuild using build.sh.
It's safe to run create-project.sh again when your Dockerfile changes; it just performs a few extra steps that are unnecessary. Our deployment is now only triggered when the image changes.
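Image-triggered deployment is standard OpenShift behaviour, configured via a trigger in the deploymentConfig. A fragment along these lines (the container and image stream names are illustrative) enables it:

```yaml
triggers:
  - type: ConfigChange
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
        - app
      from:
        kind: ImageStreamTag
        name: app:latest
```

With `automatic: true`, pushing a new image to the tag rolls out a new deployment without any manual step.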
To get around the native modules issue we ran into originally, I changed our Dockerfile to run npm install in /tmp. We then copy the result into /opt/app-root/src, where our application now lives.
This does two things:
First, it allows native dependencies to be built in the container rather than in your local development directory.
Second, when rebuilding your image, npm install only runs when the package.json actually changes.
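A sketch of what this Dockerfile looks like (the /tmp and /opt/app-root/src paths are from the post; the base image and file layout are assumptions):

```dockerfile
FROM node:8

# Install dependencies in /tmp so native modules are compiled
# inside the container, not in your local working directory.
COPY package.json /tmp/
RUN cd /tmp && npm install

# Copy the built node_modules into the application directory.
# Because the COPY of package.json above is its own layer, the
# npm install layer is only rebuilt when package.json changes.
RUN mkdir -p /opt/app-root/src && cp -a /tmp/node_modules /opt/app-root/src/

WORKDIR /opt/app-root/src
COPY . /opt/app-root/src

CMD ["npm", "start"]
```

Copying package.json on its own before the rest of the source is what lets Docker's layer cache skip the expensive npm install step on ordinary code changes.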
While researching how to mount our local files into the pod I came across a familiar command in the OpenShift documentation that made life very simple.
So with this command and the --watch flag appended, we end up with a fully synchronized local-to-pod development environment. If you look at our deploymentConfig, you will notice that we don't mount any volumes in the pod.
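The command in question is most likely `oc rsync` (an assumption on our part, since the post doesn't name it), which copies local files into a running pod and, with `--watch`, keeps re-syncing them as they change:

```shell
# One-off sync of the current directory into the pod
oc rsync ./ <pod-name>:/opt/app-root/src

# Keep watching for local changes and re-sync automatically
oc rsync ./ <pod-name>:/opt/app-root/src --watch
```

Paired with nodemon watching the target directory inside the pod, every local save propagates and restarts the service without a rebuild.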
It appears the OpenShift team has been tackling pain points and pushing to make the developer experience much more enjoyable over the past year. We're looking forward to what the next year brings.
On a recent client project, our team was tasked with setting up local development environments for a new Node.js-based microservices system that would eventually be deployed on Red Hat's OpenShift platform.
We found a good approach using the MiniShift project, and we have put together a demo with accompanying documentation about what we've learnt.
You can jump straight into the code and docs, or you can stick around for more on the journey that led us to this point.
Why target OpenShift?
OpenShift is a computer software product from Red Hat for container-based software deployment and management. In concrete terms, it is a supported distribution of Kubernetes using Docker containers and DevOps tools for accelerated application development. — Wikipedia
So OpenShift is Kubernetes plus some very impressive Enterprise and quality-of-life improvements that form a compelling bigger offering.
Some of OpenShift's additions have been built upon other upstream projects, such as the integrated Jenkins instance for an out-of-the-box CI/CD pipeline and the integrated Docker registry for your build artefacts.
Others are completely custom but very sensible, such as the router that directs traffic between your services. If this weren't provided, you'd probably end up having to write your own version of this component.
One of the more pleasant surprises from our time with OpenShift has been the very clean user interface that ties all of the concepts together and provides you with good insight into how your system is working. You can view logs for all your services, scale the number of instances, and even track the progress of builds in the integrated CI/CD pipeline.
Once you become accustomed to OpenShift, you will find yourself missing this functionality if you end up back on plain Kubernetes.
It should also be noted that some of the non-technical reasons why OpenShift is an attractive platform are almost equally important.
When evaluating adoption of a platform, it is wise to consider the amount of risk that it introduces into a project, and here Red Hat has a solid reputation for acting as a filter of what can sometimes be chaotic development of the upstream projects it packages.
This process is Red Hat’s bread-and-butter and is one of the reasons they have built strong relationships with many enterprise customers who trust them to reduce the risk for critical infrastructure components such as this.
More and more clients are requesting OpenShift from us because of this.
What makes a good development environment?
There are many aspects you can optimise for when building a development environment, but these may change through the life of even a single project. The following are a few of the things that were important to us when evaluating our MiniShift-based solution.
Time to first code
We wanted a solution that would allow us to start building our services as quickly as possible, and not require us to build too much custom tooling that would make it more time-consuming to onboard our developers.
It might seem counterproductive to optimise for this. However, increasing the ease with which a developer can bring up a completely fresh environment has a significant long tail of improved productivity that can benefit you throughout the lifetime of your project.
We find that the way MiniShift uses Docker's libmachine in combination with VirtualBox or xhyve (depending on your platform) allows us to wipe and reinstall the environment with minimal fuss.
Now that we’ve conquered the initial learning curve and documented our findings clearly, we feel confident in our ability to get new developers up and running.
But this approach hasn't been without its gotchas. For one, we have found that starting up MiniShift is sometimes unreliable, although retrying the command usually does the trick.
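Since retrying usually works, a small wrapper can automate it (a sketch; the commented-out `minishift start` call is the only assumption about your setup):

```shell
# retry: run a command up to N times before giving up
retry() {
  local attempts=$1; shift
  local n=1
  until "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      echo "command failed after $attempts attempts" >&2
      return 1
    fi
    n=$((n + 1))
    sleep 2
  done
}

# Example usage for the flaky startup:
# retry 5 minishift start
```
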
The holy grail of DevOps is a development environment that mirrors the production environment as closely as possible. This should theoretically reduce the possible bugs that could occur from slight differences, which you would only catch once you’ve deployed your code into production.
It goes without saying that there are situations where getting as close as possible to production suffers from diminishing returns. While we have something running now that feels fairly solid, we can imagine there will be scaling issues in the future as an application grows to a larger number of more complex services.
Running a full Kubernetes / OpenShift in a single virtual machine on your own hardware introduces a lot of complexity, which could also provide many opportunities for things to break.
While we acknowledge this risk, it is our hope that things would at least break in the same way that they would once you head into production, allowing you to catch problems earlier.
We have yet to explore and document the process of taking a system developed locally using MiniShift to an OpenShift environment deployed in either the public or private clouds. It is likely that we will need to make more compromises as we map out that path.
Quick feedback loops
In a local development environment it is critical that you be able to execute your code changes as quickly as possible; losing a few minutes on every change adds up to many hours of waiting around each week.
This was the trickiest part of the process to figure out, because for the most part Kubernetes and MiniShift weren't designed for this use case.
Kubernetes will only start new pods (instances) up to the level your hardware allows; this decision is based on the memory limits set for each service and (in MiniShift) the number of CPU cores and amount of memory handed to the virtual machine on startup. We were able to tweak these quite easily to get up to 50 pods of our little test server running. Interestingly, there is a recommended upper limit of 110 instances across all services when using OpenShift Origin (the upstream project for MiniShift).
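Assuming the standard minishift CLI flags, the VM's resources are set at startup (the values here are illustrative, not recommendations):

```shell
# Hand the MiniShift VM more CPU cores and memory before it boots
minishift start --cpus 4 --memory 8GB
```

These flags only take effect on a fresh VM, so bumping them on an existing instance generally means deleting and recreating it.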
If you are interested in more of the technical details of this implementation, please check out the extensive Readme we have created for the demo project.