Getting Started with Mixing Fabric Deployments and VirtualEnv
Automating your deployment is one of those things that is tricky to get right at first, but pays off in spades once it is done. As you develop you continually evolve your process until you finally figure out how to get your deploys fairly automated. I have been using Fabric for a while now, and it is one of the best tools I have used in a long time.
The problem is there aren’t a lot of great tutorials on how to get going with Fabric. Most things I see just drop a script on you and explain it line by line, if you’re lucky. In this blog post I want to walk you through as if you have never automated a deploy before and evolve a script before your eyes.
Prerequisites
I am not going to walk you through the actual hosting with Apache, Gunicorn, or uWSGI. This guide assumes you have that worked out on your own. We will only discuss deploying code to a location on a server over SSH.
Server
You will need a few things on your server:
- SSH Access. Fabric uses ssh to push changes
- VirtualEnv and a location where you have your Virtual Environments stored
- Fabric installed in the global site-packages
Local Machine (or deployment machine)
You can set a Fabric script to execute automatically or you can run it yourself. At a minimum you need Fabric installed locally to run the script.
- Python
- Fabric
- Fabric Deploy Script
Fabric Basics
Fabric can be used to do more than just a deploy. In fact it can be used to do all sorts of remote stuff automagically. All fabric does at its core is execute commands locally and remotely. Fabric just provides a nice pythonic way of doing it.
Installation
Fabric is easy to install via pip or easy_install:
easy_install fabric
# or
pip install fabric
fabfile.py
All of your fabric scripts need to start in a file called fabfile.py. It is the convention Fabric uses, so it assumes the file is there when you call the fab command.
run()
The only other thing you “need to know” to get started is about the run function. It executes whatever command you pass in as a parameter on the server. Here is an example:
run("mkdir buddysite")
This would create a directory called buddysite in the home directory of whatever user was used with SSH. So if you are using your root account it creates a folder at /root/buddysite, and if your username is buddy it creates /home/buddy/buddysite.
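Putting those two pieces together, here is what a complete, if tiny, fabfile.py might look like at this point. The task name setup_site is just made up for illustration, and we have not told Fabric which server to connect to yet, which is what the next section covers.

from fabric.api import run

def setup_site():
    # runs on the remote server, in the SSH user's home directory
    run("mkdir buddysite")

Running fab setup_site will prompt you for a host to connect to since we have not set one yet.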
Set Servers
The last thing I like to set up in my deployment file is a default server, or set of servers. That is very simple to do using env.hosts at the top of the file. Something like this:
env.hosts = ['user@server.com']
That setting tells the deployment to use the user user and the server address server.com. It might also look like:
env.hosts = ['buddy@191.168.42.178']
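If you deploy to more than one machine, env.hosts can hold several host strings and Fabric will run each task against every one of them in turn. The addresses below are made up for illustration:

env.hosts = ['buddy@191.168.42.178', 'buddy@191.168.42.179']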
Closing Basics
So now we have determined we need Fabric installed on the server and your local machine. You need a file named fabfile.py and will use the run function to run commands on the remote server you set in the env.hosts setting. So far it is simple stuff, so let’s actually do something a bit more and deploy some code.
Deploying Code
First we are going to look at a super basic script that gets our code from our git repository onto our server, and that is it. Then we will refine the script a bit more to deal with VirtualEnv as well.
The Deploy Script
from fabric.api import *
env.hosts = ['webuser@192.168.42.137']
def deploy():
    run("git clone https://github.com/user/repo.git")
That is really the simplest deployment you can get with Fabric. There are problems with it, but it shows how basic things can be.
Now to actually deploy just do:
fab deploy
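If you ever forget what tasks a fabfile defines, Fabric can list them for you:

fab --list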
Walking Through the Code
from fabric.api import *
As you have probably guessed, this imports all the common Fabric things we need when running our deployment. We don't generally like using a * import, but we are in this case since, as of this writing, the official docs say to.
env.hosts = ['webuser@192.168.42.137']
As discussed before this sets our server and user we are deploying to.
def deploy():
This is the function that we call when we do our deploy. The fab deploy command above just calls the function of that name in the fabfile.py file. Deploy is generally what people use as a starting point; the great thing is it’s just a function.
run("git clone https://github.com/user/repo.git")
This will check out your code onto your server, and since it is just using the SSH user it will end up in /home/webuser/repo. Later we need to do better at telling it where to go.
Adding Specific Locations
We don’t really want to run stuff from the root of our user’s home directory. We will have conventions on our server for where to deploy things, so this is a sample of the code specifying a location to put our code when cloning it.
from fabric.api import *
env.hosts = ['webuser@192.168.42.137']
def deploy():
    with cd("/some/path/www/"):
        run("git clone https://github.com/user/repo.git")
We only made one change to the code:
with cd("/some/path/www/"):
Everything in this context manager will be executed in the folder you set. So instead of checking out code to /home/webuser/ it will be checked out to /some/path/www/. Just make sure you have a / at the beginning so the path starts at the root of the file system.
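To see how the context manager behaves, here is a small sketch. The task name and the use of pwd are just for illustration; only the commands inside the with block get the directory change:

from fabric.api import cd, run

def where_am_i():
    with cd("/some/path/www/"):
        run("pwd")  # prints /some/path/www
    run("pwd")      # back in the SSH user's home directory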
VirtualEnv and Package Requirements
In order to do virtual environments with our script we are going to make a couple of assumptions.
- The virtual environment is in /home/webuser/venv/mysite/
- The virtual environment exists already
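If you want to drop the second assumption, Fabric’s contrib helpers can check for the directory and create the environment when it is missing. This is just a sketch; the _ensure_venv name is made up and it assumes virtualenv is installed on the server:

from fabric.api import run
from fabric.contrib.files import exists

VENV_DIR = "/home/webuser/venv/mysite"

def _ensure_venv():
    # create the virtual environment only if it does not exist yet
    if not exists(VENV_DIR):
        run("virtualenv {}".format(VENV_DIR))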
I usually have a bit more complex things going on than this, but it is the easiest way to start off. The next thing we need to do is install our requirements inside of the virtual environment.
from fabric.api import *
env.hosts = ['webuser@192.168.42.137']
def deploy():
    with cd("/some/path/www/"):
        run("git clone https://github.com/user/repo.git repo")
        run("source /home/webuser/venv/mysite/bin/activate && pip install -r /some/path/www/repo/requirements.txt")
Note the code change:
run("source /home/webuser/venv/mysite/bin/activate && pip install -r /some/path/www/repo/requirements.txt")
That activates our virtual environment and then installs all of our packages, to make sure things are going to work when our server uses the virtual environment to execute our site code.
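If chaining the activate and the install with && feels awkward, Fabric also ships a prefix context manager that prepends a command to every run inside its block. A sketch under the same path assumptions, with a made-up helper name:

from fabric.api import prefix, run

def _install_requirements():
    # every run() in this block is executed as "source .../activate && <command>"
    with prefix("source /home/webuser/venv/mysite/bin/activate"):
        run("pip install -r /some/path/www/repo/requirements.txt")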
Closing the Deploy Script
So now we have a working deploy which puts our code where it needs to go, and even installs our requirements in a virtual environment. We are still running a very basic deployment, and really any more code just makes the deployment easier to maintain or lets more complex scenarios get played out.
I hope you have noticed that to this point nothing in this has been Django specific, so you can use it to do just about anything deployment-wise. I even use this on Rails projects on occasion, and you could use it for PHP projects as well.
Refactoring the Deploy Script
I don’t like such hardcoded stuff; I like to add variables and move code around, so here is the same script with a bit of stuff moved around.
from fabric.api import *
env.hosts = ['webuser@192.168.42.137']
SITE_ROOT = "/some/path/www"
VENV_DIR = "/home/webuser/venv/mysite"
def _install_dependencies(site_dir):
    run("source {}/bin/activate && pip install -r {}/{}/requirements.txt".format(
        VENV_DIR, SITE_ROOT, site_dir))

def deploy():
    site_dir = "repo"
    with cd(SITE_ROOT):
        run("git clone https://github.com/user/repo.git")
        _install_dependencies(site_dir)
If you look, the script still does the same thing; everything is just moved around a bit. You will probably see a few things that seem confusing, but I put them there to have fun with in the next section.
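One of the things you can play with is that Fabric lets you pass arguments to tasks from the command line, so site_dir could become a parameter with a default instead of a local variable. A sketch of what that might look like:

def deploy(site_dir="repo"):
    with cd(SITE_ROOT):
        run("git clone https://github.com/user/repo.git {}".format(site_dir))
        _install_dependencies(site_dir)

You would then call it as fab deploy or fab deploy:site_dir=otherdir.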
Adding Very Basic Rollback Ability
This isn’t necessarily going to be the best way to do this, but it is a way to do it.
import datetime
from fabric.api import *
env.hosts = ['webuser@192.168.42.137']
SITE_ROOT = "/some/path/www"
VENV_DIR = "/home/webuser/venv/mysite"
def _install_dependencies(code_dir):
    # code_dir is already a full path, so point pip at the requirements file inside it
    run("source {}/bin/activate && pip install -r {}/requirements.txt".format(
        VENV_DIR, code_dir))

def _link_to_current(code_dir):
    # assumes site_root/current exists
    run("rm {}/current".format(SITE_ROOT))
    run("ln -s {} {}/current".format(code_dir, SITE_ROOT))

def deploy():
    temp_dir = datetime.datetime.now().strftime("%Y%m%d%H%M")
    code_dir = "{}/{}".format(SITE_ROOT, temp_dir)
    run("mkdir {}".format(code_dir))
    run("git clone https://github.com/user/repo.git {}".format(code_dir))
    _install_dependencies(code_dir)
    _link_to_current(code_dir)
I am not going to go line by line, but what I have done is modify it so that each deploy goes into its own timestamped folder. From there it symlinks that folder to current. Your web server needs to run the Django, or any WSGI based, application out of the current folder. Now you have a history of deploys, so if the latest one breaks you can just symlink to the previous one and things should work again.
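Re-pointing the symlink by hand works fine, but you can also wrap the rollback in its own task. Here is a minimal sketch, assuming you pass in the name of the timestamped folder you want to go back to:

def rollback(previous_dir):
    # previous_dir is one of the timestamped folders from an earlier deploy
    _link_to_current("{}/{}".format(SITE_ROOT, previous_dir))

You would run it as something like fab rollback:201601151230.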
Conclusion
There are a couple of things I didn’t go over: doing migrations, and resetting the WSGI application execution. Both I will leave to you based on your server configuration and application specifics. What I do is set them in separate functions and have a big_deploy function that calls the database functionality as well. As for resetting the WSGI execution, that depends on your server configuration.
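As a sketch of what that might look like, with migrate and restart_wsgi as hypothetical stand-ins for whatever your application and server actually need:

def migrate():
    # hypothetical: run your framework's migrations inside the virtual environment
    run("source {}/bin/activate && python {}/current/manage.py migrate".format(
        VENV_DIR, SITE_ROOT))

def restart_wsgi():
    # hypothetical: touching the WSGI file is one common way to reload mod_wsgi
    run("touch {}/current/wsgi.py".format(SITE_ROOT))

def big_deploy():
    deploy()
    migrate()
    restart_wsgi()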
Hopefully this will get you started with using Fabric to do a deployment so that you don’t have to manually move things around. I am also sure you can figure out other things to do with this as well. Have fun deploying code now so you can do it more often. As an extra step, look at git hooks and play with continuous deployment.