How to drive your Ghost blog using Continuous Delivery - Part Two

Before we begin, this blog post is part of a three-part series. You can read all three blog posts here:

Part 1: How to drive a Ghost Blog using Continuous Delivery

Part 2: How to drive a Ghost Blog using Continuous Delivery

Part 3: How to drive a Ghost Blog using Continuous Delivery

In the previous post we learnt how to get started with Amazon EC2 instances and hook them up to a domain name we've registered.

That was, in hindsight, the relatively straightforward part. In this next part of the tutorial I'll teach you how to set Ghost up locally and give you the keys to the kingdom for getting it working via Continuous Delivery. You're welcome to take any of the example files and amend them as you please -- they're yours!

Recap

Before I begin the next part of the exercise, let's recap what we want to achieve. The whole point of this tutorial is to create a Ghost blog that can be deployed using Continuous Delivery. What do I mean by that? Basically, all we need to do is commit our work, merge it into our master branch and push it up to GitHub -- the automation does the rest. It'll automatically pick up the new code, download it and do any restarts and backups we need. Scary stuff, but hopefully you'll agree it's neat!
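To make that concrete, here's a rough sketch of the kind of deploy script the automation will end up running. Every path, process name and backup location here is a placeholder I've made up for illustration -- we'll build the real thing later in the series:

```shell
# Sketch of the eventual deploy script. Paths and names are placeholders.
cat > deploy.sh <<'EOF'
#!/bin/bash
set -e

# Back up the SQLite database before touching anything
mkdir -p ~/ghost-backups
cp /var/www/ghost/content/data/ghost.db ~/ghost-backups/ghost-$(date +%F).db

# Pull the latest code that was pushed to GitHub
cd /var/www/ghost
git pull origin master

# Install any new dependencies and restart the blog process
npm install --production
pm2 restart ghost
EOF
chmod +x deploy.sh
```

Commit, push, and a script like this does the rest -- that's the whole idea.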

First things first, let's jump back on to the EC2 instance:

    ssh -i your-key.pem ubuntu@ec2-XX-XXX-XXX-XX.eu-west-1.compute.amazonaws.com

Installing Node.js

We need to install npm (Node Package Manager), which will essentially allow us to build the blog whenever we download a fresh copy of the HEAD revision from GitHub.

This was not as straightforward as I first thought. Apparently on Ubuntu there is a namespace clash on the node name inside the official packages - a bit shocking, but unavoidable now.

We can get around this by using Chris Lea's Personal Package Archive (PPA) - this is the issue I came up against. To install Node.js on Ubuntu, do this:

    sudo add-apt-repository ppa:chris-lea/node.js
    sudo apt-get update
    sudo apt-get install nodejs -y
    sudo chown -R ubuntu:ubuntu /var/www

When you install global packages with npm you'll also want to do it without resorting to sudo all the time, as that's in all realms BAAAADDD! Point npm at a prefix inside your home directory instead:

    npm config set prefix ~/npm

You'll then want to make sure the npm command is on your shell's PATH. Type:

    which npm

If that command fails then npm isn't on the PATH and you'll need to add it manually. Edit your profile and add the npm binary directory to the PATH, e.g.

    vi ~/.profile

Adding the following line at the bottom:

    export PATH="$PATH:$HOME/npm/bin"

To refresh the profile either log back in via ssh or source it:

    source ~/.profile
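One small gotcha: if you re-run your setup steps you can end up with the same export line stacked up in your profile several times. A guarded append avoids that. Here's a sketch using a stand-in file (`./profile-demo`) so you can try it safely -- swap in `~/.profile` for real use:

```shell
# Append the npm bin dir to PATH via the profile, but only if the exact
# line isn't already present, so repeated runs don't add duplicates.
PROFILE="./profile-demo"   # stand-in for ~/.profile
touch "$PROFILE"

LINE='export PATH="$PATH:$HOME/npm/bin"'
grep -qxF "$LINE" "$PROFILE" || echo "$LINE" >> "$PROFILE"

# Running it a second time is a no-op
grep -qxF "$LINE" "$PROFILE" || echo "$LINE" >> "$PROFILE"
```

`grep -qxF` matches the whole line as a fixed string, which is what makes the second run harmless.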

Managing your processes

Next we want to install pm2. PM2 is a neat little process manager written in Node.js that will manage all manner of JavaScript processes. Essentially it means we can easily stop, start and keep processes running continuously without having to do anything - pretty handy when you want a blog to be resilient and survive server restarts.

    npm install -g pm2
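As a taste of what pm2 gives us, it can start apps from a declarative process file rather than a pile of command-line flags. Something along these lines -- the name and path are placeholders for where the blog will eventually live, and the exact schema depends on your pm2 version, so check pm2's own docs:

```json
{
  "apps": [{
    "name": "ghost",
    "script": "index.js",
    "cwd": "/var/www/ghost",
    "env": {
      "NODE_ENV": "production"
    }
  }]
}
```

You'd save that as, say, `ghost-pm2.json` and run `pm2 start ghost-pm2.json` -- we'll come back to wiring this up when the blog is actually deployed.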

Setup a Web Container

What is a web container you might ask?

Basically it's an application that handles all HTTP traffic for you, and we need it to route traffic to our Ghost blog. In this tutorial we're using Nginx, a popular open source web server built for high-traffic websites - which suits our purposes, since we're creating a popular, high-traffic blog, right?

A web container can also serve multiple sub-domains for a single application and act as an HTTP cache, so it's pretty essential for us.

A big thanks to Kenneth from Developer24Hours for his blog post on how to get started with Nginx on Ubuntu.

First let's ssh back into our EC2 instance then run the following command to install nginx:

    sudo apt-get install -y nginx

Next we want to make sure we have the latest version of nginx:

    sudo vi /etc/apt/sources.list

Add the following to the end of that file:

    # Latest repositories for nginx
    deb http://nginx.org/packages/ubuntu/ precise nginx
    deb-src http://nginx.org/packages/ubuntu/ precise nginx

Exit and save (press Esc, then type :x)

    sudo apt-get update

If you get the following error:

    W: GPG error: http://nginx.org precise Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY ABF5BD827BD9BF62

Add the nginx signing public key

    wget http://nginx.org/packages/keys/nginx_signing.key
    cat nginx_signing.key | sudo apt-key add -

Then run apt-get update again and reinstall nginx:

    sudo apt-get update

    sudo apt-get install nginx

You may then get a further error:

    dpkg: error processing /var/cache/apt/archives/nginx_1.2.6-1~precise_amd64.deb (--unpack):
     trying to overwrite '/etc/logrotate.d/nginx', which is also in package nginx-common 1.1.19-1ubuntu0.1
    dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
    Errors were encountered while processing:
     /var/cache/apt/archives/nginx_1.2.6-1~precise_amd64.deb

If that's the case you'll need to remove the conflicting nginx-common package:

    sudo apt-get remove nginx-common
    sudo apt-get install nginx

Finally, check you have the latest version installed using:

    nginx -v

You then want to ensure nginx always starts when your EC2 instance starts:

    sudo update-rc.d nginx defaults

Setup domain rules

Copy the default configuration as a starting point (substitute {domain} for your actual domain name):

    sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/{domain}

Then edit it and ensure the upstream and server blocks match the following for your blog:

    upstream domain {
        ip_hash;
        # Ghost listens on localhost, not the public domain
        server 127.0.0.1:2368;
    }

    server {
        listen 80;
        server_name your-domain.com;
        access_log /var/log/nginx/web_portal.access.log;

        location / {
            proxy_pass http://127.0.0.1:2368/;
            proxy_next_upstream error timeout invalid_header http_500;
            proxy_connect_timeout 2;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_intercept_errors on;
        }
    }

Remove the old default configuration (we don't need it):

    sudo rm /etc/nginx/sites-enabled/default

Enable the new blog site in nginx:

    sudo ln -s /etc/nginx/sites-available/{domain} /etc/nginx/sites-enabled/{domain}

If Nginx doesn't seem to pick up on the configuration, make sure /etc/nginx/nginx.conf has the following within the http block:

    include /etc/nginx/sites-enabled/*;

Great - you should now have a fully functioning web server.

Simply hit your domain in a browser and nginx should serve an error page (most likely a 502, since the Ghost files aren't up there yet).

Setup your Ghost blog repo on GitHub

For the next step we need to get all the Ghost blog code up on GitHub. Create a folder structure as follows in your blog project directory on your Mac (I mainly use a Mac, but I'm happy to help Windows users if you ask):

    your-domain/
             |--> ghost

Next download a copy of Ghost from the official download site. Unzip the contents of the zip into the above ghost directory. In the root of your-domain we're going to be creating a bunch of files to help with the continuous delivery piece.
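The steps above boil down to something like the following. The download URL is what Ghost's site pointed at when I wrote this -- check the official download page for the current link:

```shell
# Recreate the folder structure for the blog project
mkdir -p your-domain/ghost

# Grab and unpack Ghost into the ghost directory (needs network access,
# so shown commented for reference -- the URL may have changed):
# curl -L -o ghost.zip https://ghost.org/zip/ghost-latest.zip
# unzip ghost.zip -d your-domain/ghost
```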

Now push a vanilla copy of Ghost and this repository up to GitHub.
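If you haven't set the repository up yet, it's the usual dance. The remote URL and author details below are placeholders -- substitute your own:

```shell
mkdir -p your-domain/ghost
(
  cd your-domain
  git init -q
  echo "node_modules/" > .gitignore
  git add .
  # Placeholder author details -- use your own git config in practice
  git -c user.name="you" -c user.email="you@example.com" \
      commit -q -m "Vanilla copy of Ghost"
)

# Then wire it up to GitHub and push (placeholder repository URL):
# git -C your-domain remote add origin git@github.com:you/your-domain.git
# git -C your-domain push -u origin master
```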

Change the config.js production block so it includes your domain e.g.

    production: {
        url: 'http://your-domain.com',
        mail: {},
        database: {
            client: 'sqlite3',
            connection: {
                filename: path.join(__dirname, '/content/data/ghost.db')
            },
            debug: false
        },

        server: {
            // Host to be passed to node's 'net.Server#listen()'
            host: '127.0.0.1',
            // Port to be passed to node's 'net.Server#listen()', for iisnode set this to 'process.env.PORT'
            port: '2368'
        }
    }

Push that change up to GitHub too. I use private repositories because obviously I don't want people seeing what version of Ghost I'm running. Technically you could exclude config.js from being committed, but the problem then is it won't be pulled in via Continuous Delivery and you'll have to manage config.js on the server separately. You could always work out how to get it working using Bitbucket, I suppose!
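If you did want to go down the route of keeping the production config out of the repository, it's a one-line ignore rule (remembering you'd then have to manage config.js on the server by hand):

```
# .gitignore -- keep the production config out of version control
config.js
```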

Okay, I think that's enough for today. We've made some great progress. We have a functioning webserver configured for our blog domain and we're nearly ready to put together the Continuous Delivery piece that will be the ultimate cherry on the cake!

See you for the last part of the tutorial. Hope you've enjoyed it so far and it's opened your eyes a little to the possibilities we have with deploying our own copy of Ghost as opposed to using the Bitnami AMI we discussed in the last post.

Please feel free to leave comments and I'll get back to you, and of course you can follow me on Twitter or the DevAngst Twitter for my latest posts.

Next in series: Part 3

James Murphy

Java dev by day, entrepreneur by night. James has 10+ years' experience working with some of the largest UK businesses, ranging from the BBC to The Hut Group, before finally finding a home at Rentalcars.

Manchester, UK