Requirements for Python’s pip

Posted on Sun 21 February 2016 in Code • Tagged with Python, pip, packages, dependencies

In this post I’ll describe all (hopefully all!) the various ways you can specify a single dependency for a Python package.

This assumes pip is used for installation. The list of dependencies then goes either in the install_requires= parameter of the setup function within setup.py, or in a separate requirements.txt file. Commonly, it will actually go in both places, with the latter being the canonical source of truth:

from setuptools import setup

with open('requirements.txt') as rf:
    setup(
        # ...
        install_requires=rf.readlines(),
    )
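
One caveat: readlines() passes through blank lines and pip-style comments, which setuptools will not accept as requirement specifiers. A slightly more defensive sketch of the same idea:

from setuptools import setup

with open('requirements.txt') as rf:
    # skip blank lines and comments, which pip tolerates in
    # requirements.txt but which aren't valid requirement specifiers
    requirements = [line.strip() for line in rf
                    if line.strip() and not line.lstrip().startswith('#')]

setup(
    # ...
    install_requires=requirements,
)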

More details about this approach can be found in one of my previous posts.

Here, I will concentrate on the format of a single line in requirements.txt that defines a dependency. There are numerous variants that pip supports; the version specifiers among them are described in excruciating detail in PEP 440. This post shall serve as a short reference on the most useful ones.

Package name (and version)

The simplest and most common option is to identify a dependency by its package name:

SQLAlchemy

This will locate it in a global index of packages, which is sometimes called a “cheese shop”. Currently, by far the most popular package registry for Python is PyPI, and pip uses it by default[1].

Without any further modifiers, pip will download and install the “current” version of the package — either the newest, or the one designated explicitly by a maintainer. This obviously makes the dependency somewhat unpredictable, for it can mean unintended upgrades that introduce breaking changes to your code.

To prevent this, you’d normally pin the dependency to an exact version[2]:

SQLAlchemy==0.9.10

Other comparison operators are also available:

SQLAlchemy>=0.9.10
SQLAlchemy<1.0.0

and can even be combined:

SQLAlchemy>=0.9,<1.0.0

Specs like that will make pip find the newest version within the given range. Assuming your dependency follows the semantic versioning scheme, this allows you to stay on top of minor bugfixes and improvements to an older release (0.9.x here), without the risk of accidentally upgrading to a new one (1.x) that your code is not compatible with yet.

Repository URL

Sometimes you want to live on the bleeding edge, though, and depend not just on the latest release, but the head commit to the package’s repository. This makes sense especially in large systems that are distributed among multiple repos, and where development happens in lockstep.

For those occasions, and a few others, pip can recognize direct repository URLs. They are in the format:

$VCS+$PROTOCOL://$URL@$LABEL#egg=$PACKAGE

where the $PROTOCOL part can be optional if the version control system has a sensible default. That’s the case for git, for example, which is of course the most important VCS you’d be interested in[3]:

git://git.example.com/somepackage#egg=somepackage

Note that the #egg=$PACKAGE part is not part of the $URL; it’s only there to give a local name to the package distribution. This is what makes it possible to refer to the package later via pip, if only to remove it with pip uninstall $PACKAGE. Of course, the sanest practice is to use the PyPI moniker if possible.

When no $LABEL is given, pip will use the HEAD, trunk, tip, or the equivalent default/current revision from the repo. Often though (at least in the case of Git), you would also pick a branch, tag, or even a particular commit hash:

git+https://github.com/Xion/unmatcher.git@0.1.3.1#egg=unmatcher
git+ssh://github.com/You/yourpackage.git@master#egg=yourpackage
git+https://github.com/mitsuhiko/jinja2.git@5b498453b5898257b2287f14ef6c363799f1405a#egg=Jinja2

The last two options could be a good choice even with third party packages, when you don’t want to wait for a new PyPI release to get a necessary feature or an urgent bug fix.

Local filesystem

Lastly, you can ask pip to install a package from a local directory or archive. The former option is often used with the -e (--editable) flag for pip install. This installs the package in the so-called development mode, allowing you to edit its source code in-place:

$ pip install -e /home/me/Code/myotherpackage

You almost certainly don’t want to put this line in requirements.txt: third-party packages should still be pulled from PyPI. But if it’s your own package — maybe a self-contained utility library used by your main program — this setup will be very helpful for making changes to it, informed by your own usage of the package.
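
If the local package does belong in a requirements file (say, a development-only requirements-dev.txt), the editable syntax works there as well; the paths below are hypothetical:

# requirements-dev.txt
-e /home/me/Code/myotherpackage
-e ./vendor/myutils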


  1. This can be changed with the --index-url flag to pip install. Running local indexes is a good practice for Python shops, especially those that rely on pip install as part of their deployment process. 

  2. If the package uses semantic versioning, a possible alternative to == is ~=, the “compatible release” operator. Roughly, SQLAlchemy~=0.9.10 permits upgrades to any newer 0.9.x patch release (it is equivalent to >=0.9.10,==0.9.*), but not to 0.10 or 1.0. 

  3. Other options include hg (Mercurial), svn, and bzr (Bazaar). 

Continue reading

Moving out of a container in Rust

Posted on Fri 05 February 2016 in Code • Tagged with Rust, vector, borrow checker, references

To prevent the kind of memory errors that plague many C programs, the borrow checker in Rust tracks how data is moved between variables, or accessed via references. This is all done at compile time, with zero runtime overhead, and is a sizeable part of Rust’s value proposition.

Like all rigid and automated systems, however, it is necessarily constrained and cannot handle all situations perfectly. One of its limitations is treating all objects as atomic: it’s impossible for a variable to own just a part of some bigger structure, nor is it possible to maintain mutable references to two or more elements of a collection.

If we nonetheless try:

fn get_name() -> String {
    let names = vec!["John".to_owned(), "Smith".to_owned()];
    let fullname = join(names[0], names[1]);
    fullname
}

fn join(a: String, b: String) -> String {
    a + " " + &b
}

we’ll be served with a classic borrow checker error:

<anon>:3:25: 3:33 error: cannot move out of indexed content [E0507]
<anon>:3     let fullname = join(names[0], names[1]);
                                 ^~~~~~~~

Behind its rather cryptic verbiage, it informs us that we tried to move a part of the names vector — its first element — into a new variable (here, a function parameter). This isn’t allowed, because in principle it would render the vector invalid from the standpoint of strict memory safety. Rust would no longer guarantee names[0] to be a legal String: its internal pointer could’ve been invalidated by the code the element was moved to (the join function)[1].

But while commendable, this guarantee isn’t exactly useful here. Even though names[0] would technically be invalid, there isn’t anyone around to actually notice. The names vector is inaccessible outside of the function it’s defined in, and even the function itself doesn’t look at it after the move. In its present form, the program is inarguably correct[2], and could’ve been accepted if partial moves out of a Vec were allowed by the borrow checker.

Pointers to the rescue?

Vectors wouldn’t be very useful or efficient, though, if we could only obtain copies or clones of their elements. Since this is an inherent limitation of Rust’s memory model, applying to all compound types (structs, hashmaps, etc.), it has been recognized, and countermeasures are available.

However, the idiomatic practice is to actually leave the elements be and access them solely through references:

fn get_name() -> String {
    let names = vec!["John".to_owned(), "Smith".to_owned()];
    join(&names[0], &names[1])
}

fn join(a: &String, b: &String) -> String {
    a.clone() + " " + b
}

The obvious downside of this approach is that it requires an interface change to join: it now has to accept references instead of actual objects[3]. And since the result is a completely new String, we have to either bite the bullet and clone, or write a more awkward join_into(a: &mut String, b: &String) function.
In general, switching an API from actual objects to references has an annoying tendency to percolate up the call stacks and abstraction layers.

Vector solution

If we still insist on moving the elements out, at least in the case of vectors we aren’t completely out of luck. The Vec type offers several specialized methods that can slice, dice, and splice the collection in various ways. Those include:

  • split_first (and split_first_mut) for cutting right after the first element
  • split_last (and split_last_mut) for a similar cut right before the last element
  • split_at (and split_at_mut), generalized versions of the above methods
  • split_off, a partially-in-place version of split_at_mut
  • drain for moving all elements from a specified range

Other types may offer different methods, depending on their particular data layout, though drain should be available on any data structure that can be iterated over.
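
As an illustration, here is a sketch of get_name rewritten with drain, moving both Strings out of the vector with no clones and no borrow checker complaints:

fn get_name() -> String {
    let mut names = vec!["John".to_owned(), "Smith".to_owned()];
    // drain(..) yields owned elements, moving them out of the vector
    let mut moved = names.drain(..);
    let first = moved.next().unwrap();
    let last = moved.next().unwrap();
    join(first, last)
}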

Structural advantage

What about user-defined types, such as structs?

Fortunately, these are covered by the compiler itself. Since accessing struct fields is a fully compile-time operation, it is possible to track the ownership of each individual object that makes up the structure. Thus there are no obstacles to simply moving all the fields:

struct Person {
    first_name: String,
    last_name: String,
}

fn get_name() -> String {
    let p = Person{first_name: "John".to_owned(),
                   last_name: "Smith".to_owned()};
    join(p.first_name, p.last_name)
}

If all else fails…

This leaves us with some rare cases when the container’s interface doesn’t quite support the exact subset of elements we want to move out. If we don’t want to drain them all and inspect every item for potential preservation, it may be time to skirt around the more dangerous areas of the language.

But I don’t necessarily mean going all out with unsafe blocks, pointers, and (let’s be honest) segfaults. Instead, we can look at the gray zone between them and the regular, borrow-checked Rust code.

Some of the functions inside the std::mem module can be said to fall into this category. Most notably, mem::swap and mem::replace allow us to operate directly on the memory blocks that back every Rust object, albeit without the dangerous ability to freely modify them.

What those functions enable is a small sleight of hand — a quick exchange of two variables or objects while the borrow checker “isn’t looking”. Possessing such an ability, we can smuggle any item out of a container as long as we’re able to provide a suitable replacement:

use std::mem;

/// Pick only the items under indices that are powers of two.
fn pick_powers_of_2<T: Default>(mut v: Vec<T>) -> Vec<T> {
    let mut result: Vec<T> = Vec::new();
    let mut i = 1;
    while i < v.len() {
        let elem = mem::replace(&mut v[i], T::default());
        result.push(elem);
        i *= 2;
    }
    result
}

(Image: “Swap!”, captioned “Pictured: implementation of mem::replace.”)
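
The caption is only half a joke: mem::replace really can be thought of as a swap that hands back the old value. A sketch (not the actual standard library implementation):

use std::mem;

fn replace<T>(dest: &mut T, mut src: T) -> T {
    mem::swap(dest, &mut src);
    src  // now holds the value that previously lived behind `dest`
}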

The Default value, if available, is usually a great choice here. Alternately, a Copy or Clone of some other element can also work if it’s cheap to obtain.


  1. In Rust jargon, it is sometimes said that the object has been “consumed” there. 

  2. As /u/Gankro points out on /r/rust, since Vec isn’t a part of the language itself, it doesn’t get to bend the borrow checking rules. Therefore speaking of counterfactual correctness is a bit too far-fetched in this case. 

  3. For Strings specifically, the usual practice is to require the more generic &str type (string slice) instead of &String. 

Continue reading

Retry idiom for Python

Posted on Wed 27 January 2016 in Code • Tagged with Python, exceptions, else

A relatively little known feature of Python is the else block for control flow statements other than if.

If you haven’t heard about it before: you can provide such a block for both while and for loops, as well as for any variant of the try statement. Its functionality is roughly analogous in both cases:

  • in loops, the else block is executed if the loop didn’t exit abnormally (i.e. with break)
  • in try constructs, the else block runs if no exception happened

Likely because of the unique semantics that don’t exist in other languages, neither of those constructs has been seen much in real world code. Recently, however, I’ve found they can be combined into a very pythonic pattern that’s also quite useful.

The trick

Say you have a task that won’t always succeed. Perhaps it’s a request made to a janky server, or some other network operation that’s prone to timeouts. Since failures are likely to be transient, you’d like to retry it several more times before giving up permanently.

With a try/except block, you can detect those half-expected failures. With a simple loop, you can repeat the attempt as many times as you deem feasible. Combined, they solve the problem rather neatly:

for _ in range(MAX_RETRIES):
    try:
        # ... do stuff ...
    except SomeTransientError:
        # ... log it, sleep, etc. ...
        continue
    else:
        break
else:
    raise PermanentError()
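
To make it concrete, here is the idiom instantiated for a flaky HTTP call; a hypothetical example using the requests library:

import time

import requests

MAX_RETRIES = 5

def fetch_data(url):
    for attempt in range(MAX_RETRIES):
        try:
            response = requests.get(url, timeout=5)
            response.raise_for_status()
        except requests.RequestException:
            time.sleep(2 ** attempt)  # back off before the next attempt
            continue
        else:
            break  # success; skip the loop's else: block
    else:
        raise RuntimeError("%s failed after %d attempts" % (url, MAX_RETRIES))
    return response.json()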

But why?

What’s the deal with the elses here, though? Are they both necessary?

The simple answer is of course no. else after either a loop or a try/except block is always syntactic sugar: any code that contains it can be transformed into an equivalent snippet that utilizes different techniques to achieve the same effect.

But this view isn’t very useful, for many of the essential features of any programming language could be dismissed as superfluous using this reasoning. The real question is whether the above idiom is more readable and understandable than the alternatives.

To that, I posit, the answer is: absolutely.

Desugaring

Without the double else, this example would have to be written in a considerably more convoluted way:

retries = MAX_RETRIES
while retries > 0:
    try:
        # ... do stuff ...
        break
    except SomeTransientError:
        # ... log it, sleep, etc. ...
        retries -= 1
if retries == 0:
    raise PermanentError()

Although at first glance the difference may seem minuscule, this version adds significant extra busywork that the programmer has to pay careful attention to:

  • The retries variable now has to be explicit, because the final conditional statement must look at its value.
  • We can’t use a for loop anymore (e.g. for retries in range(MAX_RETRIES)), because we couldn’t distinguish between the “success at last try” and “retry limit exceeded” cases: they’d both leave retries equal to MAX_RETRIES - 1 after the loop[1].
  • As a result, we have to remember to decrement the counter ourselves upon an error.

Additionally, the break is easy to miss amidst the actual logic within the try block, both for the developer writing the code and for any subsequent readers. An alternative is to move it outside of the try/except clause, but that in turn reintroduces continue into the except branch and further complicates the whole flow.

In short, the desugared version is more error-prone (all those off-by-ones!) and also quite inscrutable.


  1. Or to 1, if we count from MAX_RETRIES down to zero. 

Continue reading

Rust: first impressions

Posted on Thu 10 December 2015 in Code • Tagged with Rust, pointers, types, FP, OOP, traits

Having recently been writing some C++ code at work, I once again experienced the kind of exasperation that this cumbersome language evokes on a regular basis. When I was working in it less sporadically, I would shrug it off and tell myself it was all because of the low level it operates on. Superior performance was the other side of the deal, and it was supposed to make all the trade-offs worthwhile.

Now, however, I realized that running close to the metal by no means excuses the sort of clunkiness that C++ permits. For example, there really is no reason why the archaically asinine separation of header & source files — with its inevitable redundancy of declarations and definitions, worked around with Java-esque contraptions such as pimpl — is still the bread and butter of C++ programs.
Same goes for the lack of sane dependency management, or a universal, portable build system. None of those would be at odds with native compilation to machine code, or runtime speeds that are adequate for real-time programs.

Rather than dwelling on those gripes, I thought it’d be more productive to look around and see what the modern offering is in the domain of lower-level, really fast languages. The search wasn’t long at all, because right now there seems to be just one viable contender: Rust[1].

Rusty systems

Rust introduces itself as a “systems programming language”, which is quite a bold claim. What followed the last time this phrase was applied to an emerging language — Go — was a kind of word twisting that’s more indicative of politics than of computer science.

But Rust’s pretense to the system level is well justified. It clearly provides the requisite toolkit for working directly with the hardware, be it embedded controllers or fully featured computers. It offers compilation to native machine code; direct memory access; running-time guarantees thanks to the lack of GC-induced pauses; and great interoperability through static and dynamic linkage.

In short, with Rust you can wreak havoc against the RAM and twiddle bits to your heart’s content.

Safe and sound

To be fair, though, the “havoc” part is not entirely accurate. Despite its focus on the low level, efficient computing, Rust aims to be a very safe language. Unlike C, it actively tries to prevent the programmer from shooting themselves in the foot — though it will hand you the gun if you but ask for it.

The safety guarantees provided by Rust apply to resource management, with specific emphasis on memory and the pointers into it. The way most contemporary languages deal with memory is by introducing a garbage collector, which mostly (though not wholly) relieves the programmer from thinking about allocations and deallocations. However, the kind of global, stop-the-world garbage collection they employ (e.g. mark-and-sweep) is costly and unpredictable, ruling it out as a mechanism for real-time systems.

For this reason, Rust doesn’t mandate a GC of this kind[2]. And although it offers mechanisms similar to smart pointers from C++ (e.g. std::shared_ptr), it is actually preferable and safer to use regular, “naked” pointers: &Foo versus Cell<Foo> or RefCell<Foo> (which are some of Rust’s “smart pointer” types).

The trick is in the clever compiler. As long as we use regular pointers, it is capable of detecting potential memory bugs at compilation time. They are referred to as “data races” in Rust’s terminology, and include perennial problems that will segfault any C code which wasn’t written with utmost care.

Part of those safety guarantees is also the default immutability of references (pointers). The simplest reference of type &Foo in Rust translates to something like const Foo * const in C[3]. You have to explicitly request mutability with the mut keyword, and Rust ensures there is always at most one mutable reference to any value, thus preventing problems caused by pointer aliasing.
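
A tiny sketch of that rule in action:

fn main() {
    let mut x = 42;
    let a = &mut x;      // fine: the one and only mutable borrow
    // let b = &mut x;   // error[E0499]: cannot borrow `x` as mutable more than once
    *a += 1;
}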

But what if you really must sling raw pointers, and access arbitrary memory locations? Maybe you are programming a microcontroller where I/O is done through a special memory region. For those occasions, Rust has got you covered with the unsafe keyword:

// Read the state of a diode in some imaginary uC.
fn get_led_state(i: isize) -> bool {
    assert!(i >= 0 && i < 4, "There are FOUR lights!");
    let p: *const u8 = 0x1234 as *const u8;  // known memory location
    unsafe { *p.offset(i) != 0 }
}

Its usage, like in the above example, can be very localized, limited only to those places where it’s truly necessary and guarded by the appropriate checks. As a result, the interface exposed by the above function can be considered safe. The unrestricted memory access can be contained to where it’s really inevitable.

Typing counts

Ensuring memory safety is not the only way in which Rust differentiates itself from C. What separates those two languages is also a few decades of practice and research into programming semantics. It’s only natural to expect Rust to take advantage of this progress.

And advantage it takes. Although Rust’s type system isn’t nearly as advanced and complex as — say — Scala’s, it exhibits several interesting properties that are indicative of its relatively modern origin.

First, it mixes the two most popular programming paradigms — functional and object-oriented — in roughly equal concentrations, as opposed to being biased towards the latter. Rust doesn’t have interfaces or classes: it has traits and their implementations. Even though they often fulfill similar purposes of abstraction and encapsulation, these constructs are closer to the concepts of type classes and their instances, which are found for example in Haskell.

Still, the more familiar notions of OOP aren’t too far off. Most of the key functionality of classes, for example, can be simulated by implementing “default” traits for user-defined types:

struct Person {
    first_name: String,
    last_name: String,
}

impl Person {
    fn new(first_name: &str, last_name: &str) -> Person {
        Person {
            first_name: first_name.to_string(),
            last_name: last_name.to_string(),
        }
    }

    fn greet(&self) {
        println!("Hello, {}!", self.first_name);
    }
}

// usage
let p = Person::new("John", "Doe");
p.greet();

The second aspect of Rust’s type system that we would come to expect from a new language is its expressive power. Type inference is nowadays a staple, and above we can observe the simplest form of it. But it extends further, to generic parameters, closure arguments, and closure return values.

Generics, by the way, are quite nice as well. Besides their applicability to structs, type aliases, functions, traits, trait implementations, etc., they allow for constraining their arguments with traits. This is similar to the abandoned-and-not-quite-revived-yet idea of concepts in C++, or to an analogous mechanism from C#.

The third common trend in contemporary language design is using the type system to solve common tasks. Rust doesn’t go full Haskell and opt for monads for everything, but its Option and Result types are evidently the functional approach to error handling[4]. To facilitate their use, a powerful pattern matching facility is also present in Rust.
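
For a taste, here is a hedged little example (the function and its names are made up for illustration) combining Result with pattern matching:

fn parse_port(s: &str) -> Result<u16, String> {
    match s.parse::<u16>() {
        Ok(0) => Err("port must be non-zero".to_owned()),
        Ok(port) => Ok(port),
        Err(e) => Err(e.to_string()),
    }
}

// usage
match parse_port("8080") {
    Ok(port) => println!("listening on :{}", port),
    Err(msg) => println!("invalid port: {}", msg),
}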

Unexpectedly pythonic

If your general go-to language is Python, you will find Rust a very nice complement and possibly a valuable instrument in your coding arsenal. Interoperability between Python and Rust is stupidly easy, thanks to both the ctypes module and the extreme simplicity of creating portable, shared libraries in Rust. Offloading some expensive, GIL-bypassing computation to fast, native code written in Rust can thus be a relatively painless way of speeding up crucial parts of a Python program.

But somewhat more surprisingly, Rust has quite a few bits that seem to be directly inspired by Python semantics. Granted, those two languages are conceptually pretty far apart in general, but the analogies are there:

  • The concept of iterators in Rust is very similar to iterables in Python. Even the for loop is basically identical: rather than manually increment a counter, both in Rust and Python you iterate over a range of numbers.
    Oh, and both languages have an enumerate method/function that yields pairs of (index, element).

  • Syntax for method definition in Rust uses the self keyword as the first argument to distinguish between instance methods and “class”/“static” methods (or associated functions, in Rust’s parlance). This is even more pythonic than actual Python, where self is technically just a convention, albeit an extremely strong one.

  • In either language, overloading operators doesn’t use any new keywords or special syntax, like it does in C++, C#, and others. Python accomplishes it through __magic__ methods, whereas Rust has very similarly named operator traits.

  • Rust basically has doctest. If you don’t know it, the doctest module is a standard Python testing utility that can run usage examples found in documentation comments and verify their correctness. The Rust version (rustdoc) is even more powerful and flexible, allowing you, for example, to mark additional boilerplate lines that should be run when testing examples, but not included in the generated documentation.

I’m sure the list doesn’t end here and will grow over time. As of this writing, for example, nightly builds of Rust already offer advanced slice pattern matching, which is very similar to the extended iterable unpacking from Python 3.

Is it worth it?

Depending on your background and the programming domain you are working in, you may be wondering if Rust is a language that’s worth looking into now, or in the near future.

Firstly, let me emphasize that it’s still in its early stages. Although the stable version 1.0 was released a good couple of months ago, the ecosystem isn’t nearly as diverse and abundant as in some of the other new languages.

If you are specifically looking to deploy Rust-written API servers, backends, and other — shall I use the word — microservices, then right now you’ll probably be better served by more established solutions, like Java with fibers, asynchronous Python on PyPy, Erlang, Go, node.js, or similar. I predict Rust catching up here in the coming months, though, because the prospect of writing native-speed JSON slingers with relative ease is just too compelling to pass up.

The other interesting area for Rust is game programming, because it’s one of the few languages capable of supporting even the most demanding AAA+ productions. The good news is that portable, open source game engines are already here. The bad news is that most of the existing knowledge about designing and coding high performance games is geared towards writing (stripped down) C++. The community is also rather stubbornly reluctant to adopt anything that may carry even a hint of potentially unknown performance implications. Although some inroads have been made (here’s, for example, an entity component system written in Rust), and I wouldn’t be surprised to see indie games written in Rust, it probably won’t take over the industry anytime soon.

When it comes to hardware, though, Rust may already have the upper hand. It is obviously a much easier language to program in than pure C. Along with its toolchain’s ability to produce minimal executables, this makes it a compelling language for programming microcontrollers and other embedded devices.

So in short, Rust is pretty nice. And if you have read that far, I think you should just go ahead and have a look for yourself :)


  1. Because as much as we’d like for D to finally get somewhere, at this point we may have better luck waiting for the Year of Linux on Desktop to dawn… 

  2. Of course, nobody has stopped the community from implementing it. 

  3. Strictly speaking, it’s a binding such as let x = &foo; that translates to it. The unadorned C pointer type Foo* would correspond to a mutable binding to a mutable reference in Rust, i.e. let mut x = &mut foo;. 

  4. Their Haskell equivalents are Maybe and Either type classes, respectively. 

Continue reading

URL library for Python

Posted on Fri 27 November 2015 in Code • Tagged with Python, URL, furl

Python has many batteries included, but a few things are still conspicuously missing.

One of them is a standardized and convenient approach to URL manipulation, akin to the URI class in Java. There are some functions in urllib, of course (or urllib.parse in Python 3), but much like their HTTP-related comrades, they prove rather verbose and somewhat clunky.

HTTP, however, is solved by the Requests package, so you may wonder if there is some analogous package for URL operations. The answer is affirmative, and the library in question is, quite whimsically, called furl.

URL in a wrap

The sole interesting part of the furl interface is the furl class. It represents a single URL, broken down into its constituents, with properties and methods for both reading them out and replacing them with new values.

Thanks to this handy (and quite obvious) abstraction, common URL operations become quite simple and self-documenting:

from furl import furl


def to_absolute(url, base):
    """If given ``url`` is a relative path,
    make it relative to the ``base``.
    """
    furled = furl(url)
    if not furled.scheme:
        return furl(base).join(url).url
    return url


def is_same_origin(*urls):
    """Check whether URLs are from the same origin (host:port)."""
    origins = set(url.netloc for url in map(furl, urls))
    return len(origins) <= 1


def get_facebook_username(profile_url):
    """Get Facebook user ID from their profile URL."""
    furled = furl(profile_url)
    if not (furled.host == 'facebook.com' or
            furled.host.endswith('.facebook.com')):
        raise ValueError("not a Facebook URL: %s" % (profile_url,))
    return furled.path.segments[-1]

# etc.

This extends to the extremely prevalent, yet very harmful, pattern of building URLs through string interpolation:

url = '%s?%s' % (BASE_URL, urlencode(query_params))

Besides looking unpythonically ugly, it’s also inflexible and error-prone. If BASE_URL gains some innate query string params ('http://example.com/?a=b'), this method will start producing completely invalid URLs (with two question marks, e.g. 'http://example.com/?a=b?foo=bar').

The equivalent in furl has none of these flaws:

url = furl(BASE_URL).add(query_params).url
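
A quick illustration (hypothetical values, behavior as documented by furl):

from furl import furl

BASE_URL = 'http://example.com/?a=b'

url = furl(BASE_URL).add({'foo': 'bar'}).url
print(url)  # http://example.com/?a=b&foo=bar -- existing params preserved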

The full package

To see the full power of furl, I recommend having a look at its API documentation. It’s quite clear and should be very easy to use.

Continue reading

let: binding for Knockout

Posted on Wed 18 November 2015 in Code • Tagged with Knockout, JavaScript, web frontend, data binding

Knockout is a JavaScript “framework” that has always lurked in the shadows of other, more flashy ones. Even in the days of its relative novelty, the likes of Backbone or Ember seemed to garner more widespread interest among frontend developers. Today, this hasn’t changed much; the difference is mainly that the spotlight is now taken by new actors (*cough* React *cough*).

But Knockout has a loyal following, and for good reasons. Possibly the most important one is why I’ve put the word “framework” in quotes. Knockout is, first and foremost, a data binding library: it doesn’t do much else besides tying DOM nodes to JavaScript objects.

This quality makes it both easy to learn and simple to integrate. In fact, it can very well live in just some small compartments of your web application, mingling easily with any server-side templating mechanism you might be using. It also interplays effortlessly with other JS libraries, and sticks very well to whatever duct tape you use to hold your frontend stack together.

Lastly, it’s also quite extensible. We can, for example, create our own bindings rather easily, extending the declarative language used to describe relationship between the data and UI.

In this post, I’m going to demonstrate this by implementing a very simple let: binding — a kind of “assignment” of an expression to a name.

From Knockout with: bluff

Out of the box, Knockout proffers the with: binding, a quite similar mechanism. The way it can be problematic is analogous to the widely discouraged with statement in JavaScript itself: it blends several namespaces together, making it harder to determine which object is being referred to. As a result, the code is more prone to errors.

On the other hand, freeing the developer from repeating long and complicated expressions is obviously valuable. Perhaps reducing them to nil is not the right approach, though, so how about we just shorten them to a more manageable length? Well, that’s exactly what the let: binding is meant to do:

<div data-bind="let: { addr: currentUser.personalInfo.address }">
  <p data-bind="text: addr.line1"></p>
  <!-- ko if: addr.line2 -->
    <p data-bind="text: addr.line2"></p>
  <!-- /ko -->
  <p>
    <span data-bind="text: add.city"></span>,
    <span data-bind="text: add.region"></span>
  </p>
  <p data-bind="text: add.country"></p>
</div>

Making it happen turns out to be pretty easy.

Binding contract

To define a Knockout binding, up to two things are needed. We have to specify what the library should do:

  • when the binding is first applied to a DOM node (the init method)
  • when any of the observed values changes (the update method)

Not every binding has to implement both methods. In our case, only init is necessary, because all we have to do is modify the binding context.

What’s that? Shortly speaking, a binding context is an object holding all the data you can potentially bind to your DOM nodes. Think of it as a namespace, or local scope: whatever’s in there can be used directly inside data-bind attributes.

let: it go

Therefore, all that the let: binding has to do is to extend the context with a mapping passed to it as an argument. Well, almost all:

ko.bindingHandlers['let'] = {
    init: function(element, valueAccessor, allBindings, viewModel, bindingContext) {
        var innerContext = bindingContext.extend(valueAccessor);
        ko.applyBindingsToDescendants(innerContext, element);
        return { controlsDescendantBindings: true };
    }
};

The resulting innerContext is a copy of the original bindingContext, augmented with the additional properties passed as the argument of let: (those are available through valueAccessor). Once we have it, though, we need to handle it in a slightly special way.

Normally, Knockout processes all bindings recursively, passing down the same bindingContext (which ultimately comes from the root viewModel). But since we want to locally alter the context, we also need to interrupt this regular descent and take care of the lower-level DOM nodes ourselves.

This is exactly what the overly-long ko.applyBindingsToDescendants function is doing. The only caveat is that Knockout has to be told explicitly about our intentions through the return value from init. Otherwise, it would try to apply the original bindingContext recursively, which in our case would amount to applying it twice. { controlsDescendantBindings: true } prevents Knockout from doing so erroneously.
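
With the handler registered, usage is plain Knockout bootstrapping. A hypothetical view model matching the earlier markup:

var viewModel = {
    currentUser: {
        personalInfo: {
            address: {
                line1: '221B Baker Street',
                line2: '',
                city: 'London',
                region: 'Greater London',
                country: 'United Kingdom'
            }
        }
    }
};
ko.applyBindings(viewModel);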

Continue reading

Turn SQLAlchemy queries into literal SQL

Posted on Thu 12 November 2015 in Code • Tagged with SQLAlchemy, SQL, databases

Like I mentioned in the post about adding regular expression support, the nice thing about SQLAlchemy is the clean, layered structure of various abstractions it utilizes.

Regrettably, though, we know that software abstractions tend to leak. In particular, ORMs seem to be a poster child of this rule, especially in the popular opinion among developer crowds. By design, they’re intended to hide at least some details of the operations on a database, but those very details can be quite critical at times. There are situations when we simply want to know what exactly is going on, and how all those model classes, mappers, and relationships translate to the actual SQL code.

To make it a little more concrete, let’s focus on the SQLAlchemy Query class. Given such a query, we’d like to get the final SQL representation of it: the one that’s ultimately sent to the database. It could be useful for any number of things, from logging[1] to profiling, or just displaying in the web page’s footer, or even solely for prototyping in the Python REPL.

In other words, we want to turn this:

db_session.query(User.id).filter(User.email == some_email)

into something like this:

SELECT users.id FROM users WHERE users.email = :1

regardless of the complexity of the query, the number of model classes it spans, or the number of relationship-related JOINs it involves.

It’s a dialect

There is one aspect we cannot really universalize, though: the specific database backend that SQLAlchemy should compile our query for. Shrugging off the syntactic and semantic differences between database engines is one thing that using an ORM can potentially buy us, but if we want to get down to the SQL level, we need to be specific about it.

In SQLAlchemy’s parlance, any specific variant of the SQL language is called a dialect. Most of the time, you’ll be interested in the particular dialect your database of choice is using. This is easily obtainable from the database Session:

dialect = db_session.bind.dialect

The resulting Dialect object is little more than a container for small tidbits of information, used by SQLAlchemy to handle various quirks of the database backends it supports. For our purposes, though, it can be treated as a completely opaque token.

Compile & unwrap

With the Dialect in hand, we can invoke the query compiler to get the textual representation of our Query. Or, to be more precise, the compiled version of the query’s Select statement:

>>> query = db_session.query(User.id).filter(User.email == 'foo@example.com')
>>> print(query.statement.compile(dialect=db_session.bind.dialect))
SELECT users.id
FROM users
WHERE users.email = %(email_1)s

Perhaps unsurprisingly, even after compilation the result is still just another object: the Compiled one. As you can see, however, the actual SQL text is just its __str__ing representation, which we can print directly or obtain with str() or unicode().

Query.to_sql

But obviously, we don’t want to type the above incantations every time we need to take a peek at the generated SQL for an ORM query. Probably the best solution is to extend the Query class so that it offers this additional functionality under a new method:

from sqlalchemy.orm.query import Query as _Query


class Query(_Query):
    """Custom, enhanced SQLALchemy Query class."""

    def to_sql(self):
        """Return a literal SQL representation of the query."""
        dialect = self.session.bind.dialect
        return str(self.statement.compile(dialect=dialect))

This new Query class then needs to be passed as the query_cls argument to the constructor of Session. Details may vary a little bit depending on how exactly your application is set up, but in most cases it should be easy enough to figure out.
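
For a typical sessionmaker-based setup, the wiring could look more or less like this (a sketch; the engine URL and the User model are placeholders):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite:///app.db')
Session = sessionmaker(bind=engine, query_cls=Query)
db_session = Session()

query = db_session.query(User.id).filter(User.email == 'foo@example.com')
print(query.to_sql())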


  1. If you’re only interested in indiscriminate logging of all queries, setting the echo parameter in create_engine may be sufficient. Another alternative is to look directly at the logging configuration options for various parts of SQLAlchemy. 

Continue reading

Celery task in a Flask request context

Posted on Tue 03 November 2015 in Code • Tagged with Celery, Flask, Python, queue, request context

Celery is an asynchronous task worker that’s frequently used for background processing in Python web apps. Rather than performing a time-consuming task within the request loop, we delegate it to a queue so that a worker process can pick it up when ready. The immediate benefit is much better latency for the end user. Pros also include easier scalability, since you can adjust the number of workers to your task load.

Examples of tasks that are usually best done in the background vary from fetching data through a third-party API to sending emails, and from pushing mobile notifications to pruning the database of stale records. Anything that may take more than a couple hundred milliseconds to complete — and isn’t absolutely essential to the current HTTP request — is typically a good candidate for asynchronous processing.

In Celery, workers are just regular Python processes, often running the exact same code that powers the app’s web frontend[1]. But unlike most of that code, they aren’t servicing any HTTP requests: they simply run some function with given arguments, both specified by whoever sent the task for execution. Indeed, those functions don’t even know who or what asked for them to be executed.

Neat, right? It is what we usually compliment as decoupling, or separation of concerns. They are valuable qualities even regardless of the UI and scaling benefits mentioned earlier.

Not quite web

But those qualities come with a trade-off. Task code is no longer web frontend code: it doesn’t run within the comfy environment of our web framework of choice. Losing that may be quite unnerving, actually, because in a typical web application there will be many things tied directly to the HTTP request pipeline. After all, this is what web applications do — respond to HTTP requests — so it often makes perfect sense, e.g., to marry the request flow with database transactions, committing or rolling them back according to the HTTP status code that the app produced.

Tasks may also require a database, though, if only to assert the expected state of the world. Similar goes for memcache, a Redis instance, or basically any resource used by the frontend code. Alas, it’s quite possible the very reason we delegate work to a task is to shift lengthy interactions with those external systems away from the UI. Obviously, we’re going to need them for that!

Fake a request

So one way or another, our tasks will most likely need some initialization and/or cleanup code. And since it’s probably the same code that most HTTP request handlers require and use already, why not just pretend we’re handling a request after all?

In Flask, we can pull that off rather easily. The test_request_context method is conveniently provided to allow for faking the request context — that is, an execution environment for HTTP handlers. Like the name suggests, it is used mostly for testing, but there is nothing stopping us from using it in tasks run by Celery.

We probably don’t want to call it directly, though. What would be better is to have Celery prepare the context first, and then run the task code as if it were an HTTP handler. For even better results, the context would preserve information extracted from the actual HTTP request: the one that sent the task for execution. Moving some work to the background would then be a trivial matter, for both the task and the original handler would operate within the same environment.

Convenient? I believe so. And as I’ll demonstrate next, it isn’t very complicated to implement either.

Wrap the decorator?

At least one piece of the solution should stand out as pretty obvious. Since our intention is to wrap the task’s code in some additional packaging — the request context — it seems fairly natural to write our own @task decorator:

import functools

from myapp import app, celery


def task(**kwargs):
    """Decorator function to apply to Celery tasks.
    Executes the actual task inside Flask's test request context.
    """
    def decorator(func):
        """Actual decorator."""
        @celery.task(**kwargs)
        @functools.wraps(func)
        def wrapped(*args, **kwargs):
            with app.test_request_context():
                return func(*args, **kwargs)

        return wrapped

    return decorator

Here, app is the Flask application, and celery is the Celery object that’s often configured alongside it.
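
Usage is then no different from Celery’s own decorator; a hypothetical task:

@task()
def sync_user_data(user_id):
    # runs inside a (fake) Flask request context, so code that depends
    # on one -- e.g. flask.request or a request-scoped DB session -- works
    print("syncing user %s" % user_id)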

The Task class

While this technique will give us a request context inside the task code, it won’t be the request context from which the task has been sent for execution. To replicate that context correctly, we need additional support on the sender side.

Internally, Celery converts every function we annotate with the @task decorator into a subclass of celery.app.task.Task. This process can be customized, for example by providing an explicit base= parameter to @task, which specifies a custom Task subclass to use in place of the default one. Within that subclass, we’re free to override any functionality that we need to.

In our case, we’ll scrape the current request context from Flask for all relevant information, and later use it to recreate the context in the task code.

But before we get to that, let’s just create the Task subclass and move the above execution logic to it. This way, users won’t have to use a completely new @task decorator which would needlessly couple them to a specific Celery app instance:

from celery import Task

from myapp import app

class RequestContextTask(Task):
    """Base class for tasks that run inside a Flask request context."""
    abstract = True

    def __call__(self, *args, **kwargs):
        with app.test_request_context():
            return super(RequestContextTask, self).__call__(*args, **kwargs)

Instead, they can either set this class as base= for a specific task:

@celery.task(base=RequestContextTask)
def sync_user_data(user_id):
    # ...

or make it into the new default for all their tasks:

celery = Celery(...)
celery.Task = RequestContextTask

Invocation patterns

When the frontend asks for a task to be executed, it most often uses the Task.delay method. It will package a payload containing the task arguments, and send it off through a broker — usually an AMQP-based queue, such as RabbitMQ — so that a Celery worker can pick it up and actually execute it.

But there are other means of task invocation. We can even run it “in place”, locally and synchronously, which is especially useful for various testing scenarios. Lastly, a task can also be retried from within its own code, terminating its current run and scheduling another attempt for some future date.

Obviously, for the RequestContextTask to be useful, it needs to behave correctly in every situation. Therefore we need to cover all the entry points I’ve mentioned — the asynchronous call, a synchronous invocation, and a task retry:

class RequestContextTask(Task):
    # ...

    def apply_async(self, args=None, kwargs=None, **rest):
        kwargs = {} if kwargs is None else kwargs  # ensure we can attach context
        self._include_request_context(kwargs)
        return super(RequestContextTask, self) \
            .apply_async(args, kwargs, **rest)

    def apply(self, args=None, kwargs=None, **rest):
        kwargs = {} if kwargs is None else kwargs
        self._include_request_context(kwargs)
        return super(RequestContextTask, self) \
            .apply(args, kwargs, **rest)

    def retry(self, args=None, kwargs=None, **rest):
        # here, kwargs=None means "reuse the original arguments",
        # which already carry the context
        self._include_request_context(kwargs)
        return super(RequestContextTask, self) \
            .retry(args, kwargs, **rest)

Note that Task.apply_async is being called internally by Task.delay, so it’s only that first method that we have to override.

Context in a box

As you can deduce right away, the Flask-related magic goes into the _include_request_context method. The idea is to prepare arguments for the eventual invocation of Flask.test_request_context, and pass them through an extra task parameter. Those arguments are relatively uncomplicated: they are just a medley of various pieces of information that we can easily obtain from Flask’s request object:

from flask import has_request_context, request


class RequestContextTask(Task):
    CONTEXT_ARG_NAME = '_flask_request_context'

    # ...

    def _include_request_context(self, kwargs):
        """Includes all the information about current HTTP request context
        as an additional argument to the task.
        """
        if kwargs is None or not has_request_context():
            return

        context = {
            'path': request.path,
            'base_url': request.url_root,
            'method': request.method,
            'headers': dict(request.headers),
        }
        if '?' in request.url:
            context['query_string'] = request.url[(request.url.find('?') + 1):]

        kwargs[self.CONTEXT_ARG_NAME] = context

On the worker side, we simply unpack them and recreate the context:

from flask import make_response

class RequestContextTask(Task):
    # ...
    def __call__(self, *args, **kwargs):
        call = lambda: super(RequestContextTask, self).__call__(*args, **kwargs)

        context = kwargs.pop(self.CONTEXT_ARG_NAME, None)
        if context is None or has_request_context():
            return call()

        with app.test_request_context(**context):
            result = call()
            app.process_response(make_response(result or ''))

        return result

The only tricky part is calling Flask.process_response at the end: we need it for the @after_request hooks to execute correctly. This is quite crucial, because those hooks are where you’d normally put important cleanup code, like a commit/rollback of the database transaction.
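
For illustration, here is a typical hook of that kind; this sketch assumes a Flask-SQLAlchemy-style db object, which is not part of the post’s code:

@app.after_request
def commit_session(response):
    # commit on success, roll back on errors; thanks to the
    # process_response() call above, this now runs for tasks too
    if response.status_code < 400:
        db.session.commit()
    else:
        db.session.rollback()
    return response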

Complete solution

To see how all those code snippets fit together, see this gist. You may also want to have a look at the article on Celery integration in Flask docs for some tips on how to integrate it with your own project.


  1. This isn’t strictly necessary, as Celery supports sending tasks for execution by explicit name. For that request to reach the worker, however, the task broker configuration must be correctly shared between sender and the worker. 

Continue reading