Release automation and project management are complicated. Definitions of terms may differ, but I think of release automation as more than just slapping a version number on a tarball, and at least for the purposes of this write-up I want to widen the definition to cover tests and test coverage, documentation generation, and linters. Static asset generation and simple uploading of artifacts are also on the table, but general deployment and full-on CI are out of scope. I'll probably add more topics here as I think of them.

There are a lot of different ways to handle these things; much depends on the project, and ultimately choosing your tool chain is largely a matter of taste. I'm documenting some of my go-to approaches and boilerplate here (all Python-specific) mostly for my own reference, but perhaps it will be useful to someone else as well.


Tox

The tox tool is pretty much the gold standard for Python these days because it makes it easy to manage multiple virtual environments and test against different Python versions. Tox itself is a very general tool and is completely agnostic about your choice of underlying testing framework. Use whatever you want; tox simply helps you invoke the test runner. It's possible to add other automation for code quality (linters) to your tox file (I used to do this), but now I'd suggest using commit hooks instead. Tox is well documented with lots of examples, but for whatever it's worth, here's my standard starter copy-pasta below.

[tox]
envlist =
  py27, py34

[pytest]
# Turn off capture so that embedded debuggers can run inside tests
addopts = --capture=no

[testenv]
whitelist_externals = rm
# override $HOME so tests don't change the user-dir
setenv =
    HOME = {envtmpdir}
commands =
    pip install -e {toxinidir}
    rm -rf {toxinidir}/htmlcov
    py.test --cov-config {toxinidir}/.coveragerc \
            --cov=MY_PYTHON_PACKAGE --cov-report=html -vvv \
            --pyargs {toxinidir}/tests

As you can see, I typically prefer py.test as my testing framework, I do not like to support Python 2.6, and I put tests inside a "tests" directory under the source root (NOT inside my package).

Note that in addition to what you see above, you can also declare sections for various linters like flake8, pep8, and pycodestyle. Each tool will parse its configuration from its own section of tox.ini and respect those settings regardless of whether it was invoked from tox.
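For instance, a flake8 section in tox.ini might look like the fragment below (the specific values are illustrative, not recommendations):

```ini
[flake8]
max-line-length = 100
exclude = .tox,build,docs
```

Running flake8 by hand, from an editor, or from a tox environment will pick up these settings either way.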


Linters

Speaking of linters (the last section mentions three standard ones), there are lots of other, more specialized options. There's pyroma, essentially a linter for your setup.py and packaging metadata. There's a linter for ansible, a linter for django, and a linter for flask. Go nuts: lint all the things you can, as specifically as you can for the project or file type, and remember to throw that stuff into your pre-commit hooks if possible.

Commit hooks

Ideally linting should be done strictly, constantly, and consistently across all contributing project members. But where best to drive it from? Lately I advocate moving linters out of your text editor (or whatever) and into pre-commit hooks.

One big benefit of the hooks approach is that it works better with code bases that are large, old, and still under construction. Hooks force developers to fix lint on files they are modifying anyway, whereas running lint elsewhere -- say from tox via a buildbot -- might force developers to de-lint the entire code base at once, which is often not practical.

I like to use yelp's pre-commit framework for managing my hooks. The setup is simple. Just make a .pre-commit-config.yaml file in your source root. For python code my hooks normally look something like what you see below (to see my elixir hooks go here).

-   repo: git://github.com/pre-commit/pre-commit-hooks  # assuming the standard shared hooks repo
    sha: master
    hooks:
    -   id: trailing-whitespace
    -   id: check-ast
    -   id: check-merge-conflict
    -   id: fix-encoding-pragma
    -   id: autopep8-wrapper
    -   id: flake8
    -   id: check-yaml
    -   id: check-json

A list of available hooks is here, and you can always make your own. Checking YAML and JSON will probably save you from botching a deploy eventually, and using autopep8 to avoid something like 70% of the useless bikeshedding in code reviews will probably make you smile.

Anyway, after you've gotten the hooks written down, you'll probably want to actually use this file, so run the commands below:

$ pip install pre-commit
$ pre-commit install
$ git add .pre-commit-config.yaml
$ git commit -a -m "add pre-commit config"
$ git push

Now you just have to browbeat your coworkers until they use this thing, and afterwards everything is wonderful. :thumbsup:

Generic automation

There are a lot of pythonic approaches[1] to generic automation. Lots of people use paver, for instance, but my weapon of choice is fabric.

Fabric simplifies command-line option parsing, and between the shell-invocation and ssh helpers it is very well suited to automating both local and remote work[2]. Fabric definitely has some problems, most notably a lack of Python 3 compatibility, and the paramiko requirement means fairly long installations into fresh virtualenvs, but it still saves me far more time than it costs me.

In this section you'll often find a few descriptions of little automation problems and the corresponding solutions will often be presented as fabric commands.

Detect dead code

Dead code paths are tricky to find. A strong integration test suite is really the best tool for this, but many projects are unlikely to have that luxury (especially projects that are libraries rather than applications). Meanwhile, your unit test suite can either help you or hide the truth from you; it just depends. Realistically, the best approach one can hope for is often only approximate.

I use vulture as a heuristic aid. Vulture is a static analysis tool, so unlike your test suites it won't actually execute code. However, since you are almost certain to get false positives, vulture can't be used as a commit hook or as an abort-job in your buildbot.

So why a fabric task; why not just use the vulture command-line tool directly? Well, the problem is that since vulture will generate false positives, you're quite likely to eschew the default invocation because the output is cluttered. You'll end up with some customized command like this:

$ vulture py_pkg | grep -v KnownFalsePositive1 | grep -v KnownFalsePositive2

Eventually, tracking your project's known false positives mentally and typing them out becomes work, and you'll just want a script. Another reason to fabric'ize this work is that the fab tool works the way you'd expect even when you're deep inside subdirectories of your project[3].

Without further ado, you can find a simple fabric task below. Change PKG_NAME and the VULTURE_* variables to match your project, and invoke the task using fab vulture.

Pypi releases

I've already written elsewhere about a version-bump command that uses fabric.

If you want a do-everything release helper, consider using zest.releaser. I don't use zest.releaser much because project-to-project releases tend to be a ticklish subject, what with idiosyncratic pre/post actions and conditions. My approach below is quite limited in that it's designed for git and github, but for easy customization later I nevertheless prefer to start with a fabric task.

I had just a few goals for my release automation. Most importantly, I don't want to accidentally run a release from any place except master. Obviously the sdist commands should be automated, and I wanted consistent tagging to happen without doing it manually. It looks like a lot of code, but it's mostly sanity-checking and user feedback. Check it out:

  1. It's $current_year! Use rake or gulp if you must, but shell scripts and Makefiles are starting to seem kind of dumb.
  2. Automating remote stuff via fabric's ssh tools might be a bit controversial. Nevertheless, fabric is lightweight and very friendly to the less devops-inclined because the learning curve is small compared to ansible.
  3. Like git looking for .git folders, fabric ascends parent directories until it finds a fabfile.