Announcing nscale 0.15 – Now with straightforward AWS configuration

By: Matteo Collina

The latest release of nscale, version 0.15.0, officially goes live today. It is an incremental release following version 0.14, which went out earlier this month; in fact, it includes some changes that did not make it into 0.14, because we preferred to ship rather than wait. 0.15 is a safe upgrade for everyone running v0.14, and we encourage everybody to upgrade!

If you are not yet familiar with nscale, you should check it out. The GitHub repo and quickstart guide can be found here.


In this post I’ll walk you through all the improvements we crafted for this release. If you have not read it already, I suggest Peter Elger’s article, available here, which describes all the changes we introduced in v0.14.

Local Development Support

How do you run a distributed application locally? Imagine that you need to spin up five or six processes just to get your application running. That is a very fragile development environment: you need to be sure all of those processes are up to date, or you may run into unpredictable errors.

nscale simplifies all of this by using what we call ‘process containers’: nscale runs your services on your machine and orchestrates them for you. You can still use your standard Docker images for databases and core services, so provisioning a new development environment is greatly simplified. Check out the related tutorial.

This feature has been around for quite some time, but before v0.15 it was not ‘production ready’.

Automatic Detection of Analyzers

As a recap, here is the deployment workflow in nscale (see the diagram below). As you can see, the analyzer is a critical step: nscale uses it to determine what to do.

The currently available analyzers are:

  • Amazon Web Services (AWS), which supports Amazon Elastic Compute Cloud (EC2) instances, Elastic Load Balancers (ELBs) and Security Groups;
  • local, which is used for single-instance deployments and boot2docker;
  • direct, which is used for deployments where machines are not provisioned via nscale, such as DigitalOcean or bare hardware.
[Diagram: nscale deployment process]

A problematic step in cloud deployments of nscale used to be configuring the right analyzer. This step is no more, as nscale can now auto-detect which analyzer to use. Just go ahead and use AWS, direct (Secure Shell (SSH)) or local, and nscale will figure out the appropriate analyzer and use it! However, the AWS analyzer needs the access keys for AWS and the SSH identityFile – how do we tell nscale what to use? Check out the next section on local system configuration.
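To make auto-detection concrete, here is a deliberately simplified sketch (an illustration only, not nscale’s actual detection code) of how an analyzer could be inferred purely from the settings you provide:

// Illustration only: not nscale's actual detection logic.
// The idea is that the settings you supply are enough to pick an analyzer.
function pickAnalyzer(config) {
  if (config.accessKeyId && config.secretAccessKey) {
    return 'aws';    // AWS access keys present: use the AWS analyzer
  }
  if (config.identityFile) {
    return 'direct'; // only an SSH identity file: use the direct (SSH) analyzer
  }
  return 'local';    // otherwise fall back to the local analyzer
}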

Local System Configuration

v0.15 introduces a new way of configuring nscale to deploy your application: just drop a config.js file into your repository. There you can put all the configuration you would otherwise put in nscale’s main config file, but in a simplified syntax, which makes AWS configuration much easier:

module.exports = {
  region: 'us-west-2',
  identityFile: 'key.pem',
  accessKeyId: 'xxxx',
  secretAccessKey: 'xxxx',
  user: 'ubuntu',
  defaultSubnetId: 'subnet-xxxxxxxx',
  defaultVpcId: 'vpc-xxxxxxxx',
  defaultImageId: 'ami-xxxxxxx',
  defaultInstanceType: 't2.micro'
};

You should not add this file, or any other file containing sensitive information, to version control; in a newly provisioned system the config.js file is listed in .gitignore by default.

We also exclude all *.pem files for the same security reason. Keeping credentials and keys out of your repository is a cloud-deployment best practice that nscale now fully supports.
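As an example, the generated .gitignore in a freshly provisioned system can be expected to contain entries along these lines (the exact file may differ):

# keep credentials and private keys out of version control
config.js
*.pem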

Environment-specific configuration

You can also specify configuration options that are specific to each environment, like so:

module.exports = {
  region: 'us-west-2',
  identityFile: 'key.pem',
  accessKeyId: 'xxxx',
  secretAccessKey: 'xxxx',
  user: 'ubuntu',
  defaultImageId: 'ami-xxxxxxx',
  defaultInstanceType: 'm3.medium',
  staging: {
    defaultSubnetId: 'subnet-xxxxxxxx',
    defaultVpcId: 'vpc-xxxxxxxx',
    defaultInstanceType: 't2.micro'
  }
};
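The values in the staging block presumably take precedence over the top-level defaults when you deploy to staging. Conceptually, the effective configuration is a shallow merge, as in this illustrative sketch (not nscale’s internal code):

// Illustrative only: approximates how an environment block could override
// the flat top-level defaults to produce the effective configuration.
function effectiveConfig(config, envName) {
  var result = {};
  Object.keys(config).forEach(function (key) {
    // copy the top-level defaults, skipping nested environment blocks
    if (typeof config[key] !== 'object') {
      result[key] = config[key];
    }
  });
  var overrides = config[envName] || {};
  Object.keys(overrides).forEach(function (key) {
    result[key] = overrides[key];
  });
  return result;
}

// e.g. effectiveConfig(require('./config.js'), 'staging').defaultInstanceType
// evaluates to 't2.micro', while the other defaults stay as declared above.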

Checkout via HTTP

If you are checking out from Git over HTTP, we now support specifying the credentials: just add them to the config.js file:

module.exports = {
  repositories: {
    user: 'myuser',
    password: 'mypassword'
  }
};

You can also specify credentials for each individual repository, like so:

module.exports = {
  repositories: {
    'http://myorg.com/my/repo.git': {
      user: 'myuser',
      password: 'mypassword'
    }
  }
};

Better management of AWS credentials

In the previous release of nscale all AWS credentials were global; in the new release they are local to your application. Each application can have its own set of credentials, so one application cannot use another’s.

Get excited: AWS Auto Scaling group support is coming

The next release, v0.16, will feature automatic Auto Scaling group provisioning, and it is due in a few weeks (when it is ready!).

Last but not least

Lastly, a big shout out to all of the awesome people who have contributed to this nscale release, in particular: Peter Elger, Darragh Hayes and Dean McDonnell.
That’s all for now, folks – check it out here – and deploy awesome.

About nscale

nscale is a toolkit that makes deployment and management of distributed software systems easy. Try it on your software system today and let us know how you get on. Visit nscale.nearform.com to download and install. Follow nscale on Twitter here.

nscale represents nearForm’s ongoing commitment to helping Node.js become a mainstream technology, and open source is one of the ways that we support that.
If you’d like to know more about nearForm, get to know us here and the team here.

Want to work for nearForm? We’re hiring.


Email hello@nearform.com

Twitter @nearform

Phone +353-1-514 3545

Check out nearForm at www.nearform.com.


 

By: Matteo Collina

Matteo is a software engineer and Internet of Things (IoT) expert with a passion for coding, distributed architectures and agile methodologies. He has worked with a wide range of technologies (Java, Ruby, JavaScript, Node.js, Objective-C) in a variety of fields. Matteo is the author of the Node.js MQTT broker Mosca and the LevelGraph database, and co-author of the book "Javascript: Best Practices" (FAG, Milan). In 2014, Matteo completed his PhD with a thesis entitled 'Application Platforms for the Internet of Things: Theory, Architecture, Protocols, Data Formats and Privacy'. Matteo is also an experienced conference speaker on the above topics. In his spare time, he builds and contributes to open source software (see his GitHub profile at https://github.com/mcollina) and sails the Sirocco.