Getting Started with Mixing Fabric Deployments and VirtualEnv

by Buddy Lindsey on August 14, 2012

Automating your deployment is one of those things that is tricky to get right at first, but pays off in spades once done. As you develop, you continually evolve your process until your deploys are fairly automated. I have been using Fabric for a while now, and it is one of the best tools I have used in a long time.

The problem is there aren’t a lot of great tutorials on how to get going with Fabric. Most posts I see just drop in a script and, if you’re lucky, explain it line by line. In this blog post I want to walk you through as if you have never automated a deploy before, and evolve a script before your eyes.

Prerequisites

I am not going to walk you through the actual hosting with Apache, Gunicorn, or uWSGI. This guide assumes you have that worked out on your own. We will only discuss deploying code to a location on a server over SSH.

Server

You will need a few things on your server:

  • SSH access. Fabric pushes changes over SSH
  • VirtualEnv, and a location where your virtual environments are stored
  • Fabric installed in the global site-packages

Local Machine (or deployment machine)

You can set a Fabric script to be executed automatically, or you can run it yourself. The minimum you need is Fabric installed locally to run the script.

  • Python
  • Fabric
  • Fabric Deploy Script

Fabric Basics

Fabric can be used for more than just deploys. In fact it can be used to do all sorts of remote work automagically. At its core, all Fabric does is execute commands locally and remotely; it just provides a nice Pythonic way of doing it.
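To make that concrete, here is a sketch (assuming Fabric 1.x, which this post uses; the task names and commands are made up) mixing `local`, which runs a command on your machine, with `run`, which runs it on the server:

```python
from fabric.api import local, run

def test_first():
    # Executes on YOUR machine, before anything touches the server.
    local("python manage.py test")

def disk_usage():
    # Executes on the remote server over SSH.
    run("df -h")
```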

Installation

Fabric is easy to install via pip or easy_install:

easy_install fabric

Or

pip install fabric

fabfile.py

All of your Fabric tasks need to be in a file called fabfile.py. That is the convention Fabric uses, so it assumes the file is there when you call the `fab` command.
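For example, putting a minimal fabfile.py like this in your project directory is enough for the `fab` command to find it (the task name here is made up):

```python
# fabfile.py - Fabric looks for this file by name in the current directory
from fabric.api import run

def hello():
    run("echo hello from the server")
```

Running `fab --list` from the same directory will then show `hello` as an available task, and `fab hello` will run it.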

run()

The only other thing you “need to know” to get started is the `run` function. It executes whatever command you pass it as a parameter on the server. Here is an example:

run("mkdir buddysite")

This would create a directory called buddysite in the home directory of whatever user you SSH in as. So if you are using your root account it creates the folder at /root/buddysite, and if your username is buddy it creates /home/buddy/buddysite.

Set Servers

The last thing I like to set up in my deployment file is a default server, or set of servers. That is very simple to do using env.hosts at the top. Something like this:

env.hosts = ['user@server.com']

That setting tells the deployment to use the user ‘user’ and the server address ‘server.com’. It also might look like:

env.hosts = ['buddy@192.168.42.178']

Closing Basics

So now we have determined we need Fabric installed on the server and on your local machine. You need a file named fabfile.py, and you will use the `run` function to run commands on the remote server you set in env.hosts. So far this is simple stuff, so let’s actually do something a bit more and deploy some code.

Deploying Code

First we are going to look at a super basic script that gets our code from our git repository onto our server, and that is it. Then we will refine the script a bit more to deal with VirtualEnv as well.

The Deploy Script

from fabric.api import *

env.hosts = ['webuser@192.168.42.137']

def deploy():
    run("git clone https://github.com/user/repo.git")

That is really the simplest deployment you can get with Fabric. There are problems with it, but it shows how basic things can be.

Now to actually deploy just do:

fab deploy

Walking Through the Code

from fabric.api import *

As you have probably guessed, this imports all the common Fabric pieces we need when running our deployment.

env.hosts = ['webuser@192.168.42.137']

As discussed before, this sets the server and user we are deploying to.

def deploy():

This is the function we call when we do our deploy. The `fab deploy` command above just calls the `deploy` function in fabfile.py. Deploy is generally what people use as a starting point; the great thing is it’s just a function.

run("git clone https://github.com/user/repo.git")

This will clone your code onto your server, and since it just uses the SSH user’s home directory the code will end up in /home/webuser/repo. Later we need to do a better job of telling it where to go.

Adding Specific Locations

We don’t really want to run things out of our user’s home directory. We will have conventions on our server for where to deploy things, so here is a sample of the code specifying a location to put our code when cloning it.

from fabric.api import *

env.hosts = ['webuser@192.168.42.137']

def deploy():
    with cd("/some/path/www/"):
        run("git clone https://github.com/user/repo.git")

We only made one change to the code:

with cd("/some/path/www/"):

Everything indented below this line will be executed in the directory you set. So instead of checking out code to /home/webuser/, it will be checked out to /some/path/www/. Just make sure the path begins with a / so it starts at the root of the file system.
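One gotcha worth knowing: every `run()` call gets a fresh shell on the server, so a plain `run("cd ...")` has no effect on the next call. That is exactly why the `cd()` context manager exists. A small sketch (the task name is made up):

```python
from fabric.api import run, cd

def demo():
    run("cd /some/path/www")  # the shell exits right afterwards, so this does nothing lasting
    run("pwd")                # still the SSH user's home directory

    with cd("/some/path/www"):
        run("pwd")            # now /some/path/www
```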

VirtualEnv and Package Requirements

In order to do virtual environments with our script we are going to make a couple of assumptions.

  1. The virtual environment is in /home/webuser/venv/mysite/
  2. The virtual environment exists already

I usually have more complex things going on than this, but it is the easiest way to start off. The next thing we need to do is install our requirements inside of the virtual environment.

from fabric.api import *

env.hosts = ['webuser@192.168.42.137']

def deploy():
    with cd("/some/path/www/"):
        run("git clone https://github.com/user/repo.git repo")

    run("source /home/webuser/venv/mysite/bin/activate && pip install -r /some/path/www/repo/requirements.txt")

Note the code change:

run("source /home/webuser/venv/mysite/bin/activate && pip install -r /some/path/www/repo/requirements.txt")

That activates our virtual environment and then installs all of our packages, to make sure things will work when our server uses the virtual environment to execute our site code.
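Fabric 1.x also has a `prefix()` context manager that prepends a command to every `run()` inside its block. When several commands need the virtualenv active, it can read a little cleaner than chaining with `&&` by hand. A sketch using the same paths as above (the task name is made up):

```python
from fabric.api import run, prefix

def install_requirements():
    # Each run() inside this block is executed as
    # "source /home/webuser/venv/mysite/bin/activate && <command>"
    with prefix("source /home/webuser/venv/mysite/bin/activate"):
        run("pip install -r /some/path/www/repo/requirements.txt")
```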

Closing the Deploy Script

So now we have a working deploy which puts our code where it needs to go, and even installs our requirements into the virtual environment. It is still a very basic deployment; any more code really just makes the deployment easier to maintain or handles more complex scenarios.

I hope you have noticed that to this point nothing here has been Django specific, so you can use this for just about any deployment. I even use this on Rails projects on occasion, and you could use it for PHP projects as well.

Refactoring the Deploy Script

I don’t like hardcoded values, so I like to add variables and move code around. Here is the same script with things moved around a bit.

from fabric.api import *

env.hosts = ['webuser@192.168.42.137']

SITE_ROOT = "/some/path/www"
VENV_DIR = "/home/webuser/venv/mysite"

def _install_dependencies(site_dir):
    run("source %s/bin/activate && pip install -r %s/%s/requirements.txt" % (VENV_DIR, SITE_ROOT, site_dir))

def deploy():
    site_dir = "repo"

    with cd(SITE_ROOT):
        run("git clone https://github.com/user/repo.git")

    _install_dependencies(site_dir)

The script still does the same thing; everything is just moved around a bit. You will probably see a few things that look odd, but I put them there to set up the next section.

Adding Very Basic Rollback Ability

This isn’t necessarily the best way to do this, but it is a way to do it.

import datetime
from fabric.api import *

env.hosts = ['webuser@192.168.42.137']

SITE_ROOT = "/some/path/www"
VENV_DIR = "/home/webuser/venv/mysite"

def _install_dependencies(code_dir):
    run("source %s/bin/activate && pip install -r %s/requirements.txt" % (VENV_DIR, code_dir))

def _link_to_current(code_dir):
    # assumes site_root/current exists
    run("rm %s/current" % SITE_ROOT)
    run("ln -s %s %s/current" % (code_dir, SITE_ROOT))

def deploy():
    temp_dir = datetime.datetime.now().strftime("%Y%m%d%H%M")
    code_dir = "%s/%s" % (SITE_ROOT, temp_dir)
    run("mkdir %s" % code_dir)

    run("git clone https://github.com/user/repo.git %s" % code_dir)

    _install_dependencies(code_dir)

    _link_to_current(code_dir)

I am not going to go line by line, but what I have done is modify the script so that each deploy goes into its own timestamped folder, which is then symlinked to the “current” folder. Your web server needs to run the Django (or any WSGI-based) application out of the current folder. Now you have a history of deploys, so if the latest one breaks you can just symlink current to the previous one and things should work again.
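That “symlink to the previous one” step can itself be a Fabric task. Here is a sketch; it assumes the only entries in SITE_ROOT besides current are the timestamped release folders, which sort chronologically by name:

```python
from fabric.api import run, env

env.hosts = ['webuser@192.168.42.137']

SITE_ROOT = "/some/path/www"

def rollback():
    # Newest-first listing of the timestamped dirs; the second entry is the
    # release before the one "current" points at now. In Fabric 1.x, run()
    # returns the command's output.
    previous = run("ls -1d %s/2* | sort -r | sed -n '2p'" % SITE_ROOT)
    run("rm %s/current" % SITE_ROOT)
    run("ln -s %s %s/current" % (previous, SITE_ROOT))
```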

Conclusion

There are a couple of things I didn’t go over: running syncdb or South migrations, and restarting the WSGI application. I will leave both to you, based on your server configuration and application specifics. What I do is put them in separate functions and have a “big_deploy” function that calls the database functionality as well. Restarting the WSGI process depends on your server configuration.
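As an illustration of that structure, in a fabfile that also contains the deploy() task from above, big_deploy just chains the smaller tasks. The function bodies here are placeholders; swap in whatever your stack actually needs:

```python
from fabric.api import run, prefix

VENV_DIR = "/home/webuser/venv/mysite"

def migrate():
    # Placeholder: syncdb / South migrations inside the virtualenv.
    with prefix("source %s/bin/activate" % VENV_DIR):
        run("python /some/path/www/current/manage.py migrate")

def restart_wsgi():
    # Placeholder: however your server reloads the WSGI process,
    # e.g. touching the wsgi file for mod_wsgi.
    run("touch /some/path/www/current/wsgi.py")

def big_deploy():
    deploy()  # the deploy() task defined earlier in this fabfile
    migrate()
    restart_wsgi()
```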

Hopefully this will get you started using Fabric for deployment so that you don’t have to move things around manually. I am also sure you can figure out other uses for it. Have fun deploying code now so you can do it more often. As an extra step, look at git hooks and play with continuous deployment.


Sean Vieira August 16, 2012 at 10:36 am

Great article!

Just FYI, you can avoid *any* downtime by changing your `_link_to_current_site` calls to this one call:

run(r"ln -s %(code_dir)s temp_link && mv -Tf temp_link %(SITE_NAME)s/current" % {"code_dir": code_dir, "SITE_NAME": SITE_NAME})

I got this technique from http://blog.moertel.com/articles/2005/08/22/how-to-change-symlinks-atomically and I have been using it actively for a while.


Buddy Lindsey August 16, 2012 at 10:39 am

I like that. Thanks for letting me know.


geeknam August 16, 2012 at 7:24 pm

Thanks for the article. I’m just wondering why you pip installed requirements before cloning the repo? Shouldn’t you clone the repo first (to get the latest requirements.txt) and then pip install the latest requirements? I’m confused.


Buddy Lindsey August 16, 2012 at 7:31 pm

You were right. It was run out of order. That is what I get for pulling the basics out of a larger deploy file. I fixed it. Thanks for pointing it out.
