An Iterative Approach to Building an API – Part 4: Fabric Scripts for Code Deployment

Over the past three days, we’ve stubbed out an API in YAML on Tornado, upgraded it to an ElasticSearch backend, and finally built and deployed a server to Amazon Web Services using Chef scripts.

Today, we’ll write scripts in Fabric to make deploying code changes to the production server simple.

An Introduction to the Fabfile

Writing Fabric scripts begins with a fabfile.py in our source code directory. Fabfiles are a straightforward way to run commands on a remote server programmatically.

Here’s an example of what a fabfile.py looks like. Save this file in your elasticAPI directory:

from fabric.api import *
from fabric.contrib import files

# For virtualenv and Fabric
env.hosts = ['ec2-XX-XX-XX-XX.compute-1.amazonaws.com'] # Replace with your server name
env.user = 'ubuntu'

def ls():
    run('ls -al')

Now we can install Fabric (this series uses the Fabric 1.x API, fabric.api) and run the fabfile to make sure we can connect to the remote server:

$ pip install Fabric
$ fab ls
[ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com] Executing task 'ls'
[ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com] run: ls -al

We’ve successfully confirmed that we can connect to the server and run remote commands on it. Now, let’s stub out the pieces necessary for building and deploying code remotely.

Preparing Code for Deployment

Each time we decide to deploy code to our server, we first need to commit it and save the changes. This is straightforward enough, thanks to the local() function provided by Fabric:

from fabric.api import *
from fabric.contrib import files

# For virtualenv and Fabric
env.hosts = ['ec2-XX-XX-XX-XX.compute-1.amazonaws.com'] # Replace with your server name
env.user = 'ubuntu'

def test():
    local("nosetests")

def commit():
    local("pip freeze > requirements.txt")
    local("git add -p && git commit")

def push():
    local("git push")

def prepare_deploy():
    test()
    commit()
    push()

Now, if our code has a remote repository, prepare_deploy will run our nosetests, commit the changes, and push them to the repository. In my case, I used GitHub as my remote repository.

Adding the Code to Run the Server Deployment

We’re getting close now. We can pull our remote code from our server, and then stop and restart the server if necessary:

from __future__ import with_statement
from fabric.api import *
from fabric.contrib import files

from contextlib import contextmanager as _contextmanager

# For virtualenv and Fabric
env.hosts = ['ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com']
env.user = 'ubuntu'
env.directory = '/home/ubuntu/elasticEnv'
env.activate = 'source /home/ubuntu/elasticEnv/bin/activate'

# For gunicorn process
env.remote_workdir = '/home/ubuntu/elasticAPI'
env.gunicorn_wsgi_app = 'webapp:app'

@_contextmanager
def virtualenv():
    with cd(env.directory):
        with prefix(env.activate):
            yield

def test():
    local("nosetests")

def commit():
    local("pip freeze > requirements.txt")
    local("git add -p && git commit")

def push():
    local("git push")

def prepare_deploy():
    test()
    commit()
    push()

def stop_gunicorn():
    with settings(warn_only=True):
        if files.exists("%s/gunicorn.pid" % env.remote_workdir):
            run("kill -9 `cat %s/gunicorn.pid`" % env.remote_workdir)

def restart_gunicorn():
    stop_gunicorn()

    with virtualenv():
        run("cd %s && gunicorn -k egg:gunicorn#tornado webapp:app --pid=%s/gunicorn.pid" % (env.remote_workdir, env.remote_workdir))

def deploy_server():
    with settings(warn_only=True):
        if run("nosetests %s" % env.remote_workdir).failed:
            run("git clone git://github.com/burningion/elasticAPI.git %s" % env.remote_workdir)
    with cd(env.remote_workdir):
        run("git pull")
    with virtualenv():
        run("pip install -U -r %s/requirements.txt" % env.remote_workdir)
    restart_gunicorn()
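The virtualenv() helper is an instance of Python’s contextmanager pattern: each nested with block layers another piece of setup, and the yield marks where the task body executes. Here’s a minimal, Fabric-free sketch of the same mechanism — layer() and virtualenv_sketch() are stand-ins for illustration, not Fabric’s API:

```python
# A Fabric-free sketch of the nested-contextmanager pattern behind
# virtualenv(): each layer records setup on entry and teardown on exit,
# and the task body runs at the innermost yield.
from contextlib import contextmanager

log = []

@contextmanager
def layer(cmd):
    log.append("enter: " + cmd)   # stands in for Fabric prefixing a command
    yield
    log.append("exit: " + cmd)

@contextmanager
def virtualenv_sketch():
    with layer("cd /home/ubuntu/elasticEnv"):
        with layer("source bin/activate"):
            yield

with virtualenv_sketch():
    log.append("run task body")
```

With Fabric’s real cd() and prefix(), the same nesting is what wraps each run() call so it executes inside the virtualenv.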

That’s it! Now we can run fab deploy_server and see our server come up under gunicorn!

$ fab deploy_server

You might get stuck at the end, because gunicorn is running in the foreground on the remote server. It is safe to CTRL-C out of the Fabric process.
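If you’d rather not have to CTRL-C, gunicorn’s --daemon flag detaches the process so the Fabric task returns on its own. A sketch of the adjusted command — gunicorn_command() is a hypothetical helper, not part of the fabfile above:

```python
# Build a daemonized variant of the gunicorn command from restart_gunicorn.
# The --daemon flag detaches gunicorn from the terminal; the pid file still
# lets stop_gunicorn find and kill the process later.
def gunicorn_command(workdir, app="webapp:app"):
    return ("cd %s && gunicorn -k egg:gunicorn#tornado %s "
            "--daemon --pid=%s/gunicorn.pid" % (workdir, app, workdir))
```

In restart_gunicorn(), you would then call run(gunicorn_command(env.remote_workdir)) in place of the inline command string.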

You can then ssh to the remote Amazon server and test that things are working:

$ ssh ubuntu@ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com
$ curl http://localhost:8000/

But we’re still not accessible from port 80 remotely. Let’s fix that now, by adjusting our nginx settings.

Adding an Nginx Proxy to Gunicorn

Now we’ll open up the default nginx web server configuration file, and add forwarding to our gunicorn process running on port 8000.

Open up /etc/nginx/sites-available/default with sudo. (Ex: sudo emacs /etc/nginx/sites-available/default)

upstream app_server {
    server 127.0.0.1:8000 fail_timeout=0;
}

server {
    #listen   80; ## listen for ipv4; this line is default and implied
    #listen   [::]:80 default ipv6only=on; ## listen for ipv6

    root /usr/share/nginx/www;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name localhost;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        proxy_pass http://app_server;
    }
}

Now we can restart nginx and, hopefully, see our API server from the outside world. (sudo nginx -t will validate the configuration first, if you want to check it.) On the remote Amazon server:

$ sudo /etc/init.d/nginx restart

Verify the API Works

We can now verify the API works by running curl from our local computer.

$ curl http://ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com
Hello, world

Success! We now have a live base API we can iterate on: add code, commit it, and then deploy it using our fabfile.

If you’ve got any questions, or got stuck anywhere, please let me know!

Part One – Building the API skeleton with YAML
Part Two – From YAML to ElasticSearch
Part Three – Writing Chef Scripts for Amazon Deployment
Part Four – Writing Fabric Scripts for Code Deployment

As before, the code for everything is already finished and available on GitHub, and so are the Chef scripts.
