
Develop in Minishift, Deploy to OpenShift

Quickly deploy your local Minishift app to OpenShift on AWS

We have previously shown the benefits of using Minishift as a local development environment. In this post, we'll walk you through installing OpenShift Origin on AWS and then using it to deploy our Minishift demo project. If you haven't run through our Minishift demo yet, do that now before proceeding: it is quick to get up and running, and it is required for deploying the demo to an OpenShift cluster on AWS.

Provisioning the OpenShift Cluster on AWS

We will provide you with a quick and easy setup of OpenShift Origin; for more details, see Red Hat's Deploying OpenShift Container Platform on AWS reference architecture. Before starting, you will need a registered domain added to Route 53 as a Hosted Zone. You may purchase a registered domain through AWS. You will also need an AWS account with admin rights to EC2 and S3. From the Red Hat documentation:

The deployment of OpenShift requires a user that has the proper permissions by the AWS IAM administrator. The user must be able to create accounts, S3 buckets, roles, policies, Route53 entries, and deploy ELBs and EC2 instances. It is helpful to have delete permissions in order to be able to redeploy the environment while testing.
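Before kicking off the provisioning, it can be worth confirming these prerequisites from the terminal. The commands below are a minimal sketch using the standard AWS CLI, assuming it is configured with the same account and credentials you intend to deploy with; they are not part of the nearform scripts.

# Confirm which AWS account/identity your credentials resolve to
aws sts get-caller-identity

# Confirm your registered domain is present in Route 53 as a Hosted Zone
aws route53 list-hosted-zones --query "HostedZones[].Name"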

Quick Installation

To quickly provision an OpenShift Origin cluster, we have included a Vagrantfile. Vagrant will spin up a virtual machine running CentOS 7 with all of the dependencies required to run the Ansible playbooks already installed. A greenfield installation of OpenShift Origin will then start in the AWS account defined by the exported variables.

GitHub App for OAuth

We use GitHub authentication for logging into OpenShift. You will need to create an OAuth App in GitHub using your Hosted Zone from the Route 53 setup.

To set up GitHub OAuth, go to OAuth applications -> Register a new application, then:
  • Enter a Name
  • The Homepage URL needs to match https://openshift-master.HostedZone
  • The Authorization callback URL needs to match https://openshift-master.HostedZone/oauth2callback/github

Registering the application generates the client ID and secret; you will need to use these in the environment variables in the next step.

See GitHub's documentation for more details on setting up an OAuth app.

Quick OpenShift Origin Setup
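The installation is kicked off from the Vagrantfile. Roughly, the workflow looks like the sketch below. The AWS credential variables are the standard ones read by AWS tooling; the remaining values (your hosted zone and the GitHub client ID and secret) are expected by the nearform Vagrantfile, so treat the names as illustrative and check the nearform/openshift-ansible README for the exact ones.

# Standard AWS credential variables picked up by the provisioning tooling
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key

# Also export the deployment-specific values (variable names vary; see the
# nearform/openshift-ansible README): your Route 53 Hosted Zone, and the
# GitHub OAuth client ID and secret created above.

# Start the CentOS 7 VM and kick off the Ansible-driven installation
vagrant up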

The provisioning of OpenShift Origin takes about 50 minutes to run and should end with a list of snapshots:

[INFO] Getting list of all snapshots
==> default: 2017-07-17 23:06:55 [INFO] Found 4 snapshots
==> default: 2017-07-17 23:06:55 [INFO] Processing snapshot 1 of 4 total snapshots
==> default: 2017-07-17 23:06:55 [INFO] Processing snapshot 2 of 4 total snapshots
==> default: 2017-07-17 23:06:56 [INFO] Processing snapshot 3 of 4 total snapshots
==> default: 2017-07-17 23:06:56 [INFO] Processing snapshot 4 of 4 total snapshots
==> default: 2017-07-17 23:06:56 [INFO] Completed processing all snapshots
==> default: 2017-07-17 23:06:56 [INFO] Graffiti Monkey completed successfully!

OpenShift-Ansible Version

The openshift/openshift-ansible repository is a development branch, so we have targeted the stable branch release-1.5. You can change the release version in nearform/openshift-ansible: on line 97 of reference-architecture/aws-ansible/Vagrantfile, change the release to the desired version. Note: the OpenShift-Ansible release must match the version used in OpenShift Origin.

cd openshift-ansible && git checkout release-1.5

EC2 Instances

When provisioning is successful, you will have nine instances running on EC2: three masters, three infrastructure nodes, two application nodes, and one bastion server.
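If you want to verify this from the terminal rather than the EC2 console, something like the query below works. It is a generic AWS CLI call rather than anything specific to the reference architecture, so adjust the region and filters to your setup.

# List running instances with their Name tag, instance type and private IP
aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].[Tags[?Key=='Name']|[0].Value,InstanceType,PrivateIpAddress]" \
  --output table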

Below is the purpose of each node type, as described in Red Hat's Deploying OpenShift Container Platform on AWS reference architecture:

  • Bastion Server: the bastion server in this reference architecture provides a secure way to limit SSH access to the AWS environment. The master and node security groups only allow for SSH connectivity between nodes inside of the Security Group while the bastion allows SSH access from everywhere. The bastion host is the only ingress point for SSH in the cluster from external entities. When connecting to the OpenShift Container Platform infrastructure, the bastion forwards the request to the appropriate server. Connecting through the bastion server requires specific SSH configuration. The .ssh/config is outlined in the deployment section of the reference architecture guide.
  • Master Nodes: the master maintains the cluster's configuration and manages the nodes in its OpenShift cluster. The master assigns pods to nodes and synchronizes pod information with service configuration. The master is used to define routes, services, and volume claims for pods deployed within the OpenShift environment.
  • Infrastructure Nodes: The infrastructure nodes are used for the router and registry pods. These nodes could be used if the optional components Kibana and Hawkular metrics are required. The storage for the Docker registry that is deployed on the infrastructure nodes is S3 which allows for multiple pods to use the same storage. AWS S3 storage is used because it is synchronized between the availability zones, providing data redundancy.
  • Application Nodes: The Application nodes are the instances where non-infrastructure based containers run. Depending on the application, AWS specific storage can be applied such as an Elastic Block Storage which can be assigned using a Persistent Volume Claim for application data that needs to persist between container restarts. A configuration parameter is set on the master which ensures that OpenShift Container Platform user containers will be placed on the application nodes by default.

OpenShift Web Console

The OpenShift web console can now be accessed at https://openshift-master.PUBLIC_HOSTED_ZONE (the URL that was set up in your GitHub OAuth application).

Browser Warning

Since we are using a self-signed certificate, the browser will show a security warning. You can bypass this for testing by clicking Advanced and continuing. The Ansible playbooks also let you supply your own certificates; see the openshift-ansible documentation for details.
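If you would rather sanity-check the endpoint from the terminal before dealing with the browser warning, a plain curl call is enough; the hostname below is just the example used elsewhere in this post.

# -k skips TLS verification (self-signed certificate), -I sends a HEAD request
curl -k -I https://openshift-master.example.com/console/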

Continue by logging into OpenShift using your GitHub credentials.

Oops, something went wrong.

The deployment scripts may start but then fail during the OpenShift provisioning. In this case, go to CloudFormation, delete the stack labelled openshift-infra, and try again.
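If you prefer the CLI over the CloudFormation console, the standard AWS CLI equivalent looks like this; the stack name openshift-infra is the one mentioned above.

# Delete the failed stack and wait until the deletion has finished
aws cloudformation delete-stack --stack-name openshift-infra
aws cloudformation wait stack-delete-complete --stack-name openshift-infra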

Deploying the Minishift demo

We now have an OpenShift cluster running on AWS, and we used Minishift for our local development, so we are ready to deploy our hello-server app to the OpenShift cluster.

Set Up Git Deploy Keys

The Git repo will need to be accessible using your deploy keys. Detailed instructions on setting up deploy keys can be found at https://developer.github.com/v3/guides/managing-deploy-keys
  • Run the command below to generate a key pair matching the SSH key filename used in the deploy script: ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -f "deploy"
  • Edit scripts/create-openshift-project.sh and replace username with your GitHub account name: GIT_REPO=git@github.com:username/minishift-demo.git
  • Copy your public key (deploy.pub) into the Key text box under Deploy Keys for the minishift-demo repo in GitHub, then click Add key.

Install and Log In with the CLI

The Minishift installation downloaded the OpenShift Origin CLI, oc, and logged it into your local Minishift environment. We now need to point oc at your AWS-hosted OpenShift cluster:

  • Go to your OpenShift web console at https://openshift-master.example.com/console/
  • Once there, click Command Line Tools next to your login
  • Copy your login command with the token to the clipboard
  • Run the login command in your local terminal. It will look like the following: oc login https://openshift-master.your_hostname --token=your-token
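To confirm that oc is now pointed at the AWS cluster rather than your local Minishift, you can check the current user and server; these are standard oc commands, shown here purely as a sanity check.

# Show the logged-in user and the API server oc is currently talking to
oc whoami
oc whoami --show-server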

Create OpenShift Project

cd into the scripts directory and run ./create-openshift-project.sh to create the demo project on the OpenShift cluster in AWS.

This script takes openshift/openshift-demo.yaml, a template file that defines all the resources needed to run the application, and passes it to the oc CLI tool (pointing at the AWS OpenShift cluster) along with some additional parameters. The values in the YAML file are passed to the OpenShift cluster, where the resources are created.
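Under the hood, this is the usual OpenShift template workflow. The sketch below shows the general shape of it, not the exact contents of the script; the GIT_REPO value is the one edited earlier, and the project name and any other parameter names are illustrative.

# Create a project, expand the template with parameters, and create the
# resulting resources. Parameter and project names are illustrative; see
# scripts/create-openshift-project.sh for what actually runs.
oc new-project demo
oc process -f openshift/openshift-demo.yaml \
  -p GIT_REPO=git@github.com:username/minishift-demo.git \
  | oc create -f -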

You should see a Success message when the script finishes. To deploy the demo app, execute oc start-build hello-server, which will build and deploy the hello-server in OpenShift.
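If you want to watch the build as it runs rather than waiting for the console to update, following the build logs is the usual approach; hello-server is the build config name used above.

# Stream the logs of the most recent build for the hello-server build config
oc logs -f bc/hello-server

# List builds and their status once it has finished
oc get builds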

When the demo project has successfully deployed, visit the OpenShift web console again and you will see the demo project.

Pipeline Build

OpenShift uses the Jenkins pipeline plugin to execute builds and deployments.

In openshift/openshift-demo.yaml, we defined a Jenkins pipeline strategy that can build and deploy your project on a push to master.

We will need to copy the GitHub webhook URL from OpenShift to GitHub.

Copy the GitHub Webhook URL from the Pipeline configuration to the clipboard. In GitHub, in your minishift-demo repo, go to Settings and click Webhooks -> Add webhook, then:
  • Paste the Webhook URL into the Payload URL text box
  • Select application/json as the content type and click Add webhook.
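If you cannot find the webhook URL in the web console, describing the pipeline build config from the CLI also prints it; the build config name below is illustrative, so list the build configs first and pick the pipeline one.

# Find the pipeline build config, then read its GitHub webhook URL from the output
oc get bc
oc describe bc hello-server-pipeline   # name is illustrative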

Now, any code change pushed to your repo's master branch will trigger a build and deploy on OpenShift using the Jenkins pipeline.
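As a quick end-to-end check, push a small change and watch a new build start; the commands below assume you are on the master branch of your minishift-demo clone and have modified a tracked file.

# Commit and push a change to master, then watch OpenShift start a new build
git commit -am "Trigger pipeline build"
git push origin master
oc get builds -w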

Thanks for going through the OpenShift deployment. There are more details in the nearform/openshift-ansible README, and we look forward to your comments and feedback below.


All the source code and another copy of this article can be found at:
  • https://github.com/nearform/openshift-ansible
  • https://github.com/nearform/minishift-demo
