Small Rust crates I (almost) always use

Posted on Tue 31 October 2017 in Code • Tagged with Rust, libraries

Alternative clickbait title: My Little Crates: Rust is Magic :-)


Due to its relatively scant standard library, programming in Rust inevitably involves pulling in a good number of third-party dependencies.

Some of them deal with problems that are solved with built-ins in languages that take a more “batteries included” approach. A good example would be Python’s re module, whose moral equivalent in the Rust ecosystem is the regex crate.

Things like regular expressions, however, represent comparatively large problems. It isn’t very surprising that dedicated libraries exist to address them. It is less common for a language to offer small packages that target very specialized applications.

As in, one function/type/macro-kind of specialized, or perhaps only a little larger than that.

In this post, we’ll take a whirlwind tour through a bunch of such essential “micropackages”.

either

Rust has the built-in Result type, which is a sum1 of an Ok outcome or an Error. It forms the basis of a general error handling mechanism in the language.

Structurally, however, Result<T, E> is just an alternative between the types T and E. You may want to use such an enum for other purposes than representing results of fallible operations. Unfortunately, because of the strong inherent meaning of Result, such usage would be unidiomatic and highly confusing.

This is why the either crate exists. It contains the following Either type:

enum Either<L, R> {
    Left(L),
    Right(R),
}

While it is isomorphic to Result, it carries no connotations of the entrenched error handling practices2. Additionally, it offers symmetric combinator methods such as map_left or right_and_then for chaining computations involving Either values.
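
As a quick illustration, here’s a hypothetical (made-up) use of Either for something that has nothing to do with errors, along with one of those combinators:

extern crate either;
use either::Either;

// A value that is either a numeric count or a human-readable label;
// neither variant means "failure".
fn classify(x: i32) -> Either<i32, &'static str> {
    if x >= 0 { Either::Left(x) } else { Either::Right("negative") }
}

fn main() {
    // map_left only touches the Left variant, passing Right through untouched.
    let doubled = classify(7).map_left(|n| n * 2);
    assert_eq!(doubled, Either::Left(14));
}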

lazy_static

As a design choice, Rust doesn’t allow for safe access to global mutable variables. The semi-standard way of introducing those into your code is therefore the lazy_static crate.
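
For example, here’s a minimal sketch of such a “global mutable variable”: lazy_static provides the one-time initialization, while the Mutex inside is what actually makes the mutation safe (the REGISTRY name is made up for illustration):

#[macro_use]
extern crate lazy_static;

use std::collections::HashMap;
use std::sync::Mutex;

lazy_static! {
    // Initialized lazily, on first access.
    static ref REGISTRY: Mutex<HashMap<String, u32>> = Mutex::new(HashMap::new());
}

fn main() {
    REGISTRY.lock().unwrap().insert("answer".into(), 42);
    assert_eq!(REGISTRY.lock().unwrap().get("answer"), Some(&42));
}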

However, the most important usage for it is to declare lazily initialized constants of more complex types:

lazy_static! {
    static ref TICK_INTERVAL: Duration = Duration::from_secs(7 * 24 * 60 * 60);
}

The trick isn’t entirely transparent3, but it’s the best you can do until we get proper support for compile-time expressions in the language.

maplit

To go nicely with the crate above — and to act as a natural syntactic follow-up to the standard vec![] macro — we’ve got the maplit crate.

What it does is add HashMap and HashSet “literals” by defining some very simple hashmap! and hashset! macros:

lazy_static! {
    static ref IMAGE_EXTENSIONS: HashMap<&'static str, ImageFormat> = hashmap!{
        "gif" => ImageFormat::GIF,
        "jpeg" => ImageFormat::JPEG,
        "jpg" => ImageFormat::JPG,
        "png" => ImageFormat::PNG,
    };
}

Internally, hashmap! expands to the appropriate number of HashMap::insert calls, returning the finished hash map with all the keys and values given.
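
In other words, the IMAGE_EXTENSIONS literal above expands to something roughly like this (a simplified sketch, not the macro’s verbatim output):

{
    let mut map = HashMap::new();
    map.insert("gif", ImageFormat::GIF);
    map.insert("jpeg", ImageFormat::JPEG);
    map.insert("jpg", ImageFormat::JPG);
    map.insert("png", ImageFormat::PNG);
    map
}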

try_opt

Before the ? operator was introduced to Rust, the idiomatic way of propagating erroneous Results was the try! macro.

A similar macro can also be implemented for Option types so that it propagates the Nones upstream. The try_opt crate does precisely that, and the macro can be used in a straightforward manner:

fn parse_ipv4(s: &str) -> Option<(u8, u8, u8, u8)> {
    lazy_static! {
        static ref RE: Regex = Regex::new(
            r"^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$"
        ).unwrap();
    }
    let caps = try_opt!(RE.captures(s));
    let a = try_opt!(caps.get(1)).as_str();
    let b = try_opt!(caps.get(2)).as_str();
    let c = try_opt!(caps.get(3)).as_str();
    let d = try_opt!(caps.get(4)).as_str();
    Some((
        try_opt!(a.parse().ok()),
        try_opt!(b.parse().ok()),
        try_opt!(c.parse().ok()),
        try_opt!(d.parse().ok()),
    ))
}

Until Rust supports ? for Options (which is planned), this try_opt! macro can serve as an acceptable workaround.
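
For the curious, the macro itself boils down to a single match. Here is a sketch of the idea (not necessarily the crate’s verbatim source):

macro_rules! try_opt {
    ($expr:expr) => {
        match $expr {
            Some(val) => val,
            None => return None,
        }
    }
}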

exitcode

It is a common convention in basically every mainstream OS that a process has finished with an error if it exits with a code different than 0 (zero). Linux divides the space of error codes further, and — along with BSD — it also includes the sysexits.h header with some more specialized codes.

These have been adopted by a great many programs and languages. In Rust, those semi-standard names for common errors can be used, too. All you need to do is add the exitcode crate to your project:

fn main() {
    let options = args::parse().unwrap_or_else(|e| {
        print_args_error(e).unwrap();
        std::process::exit(exitcode::USAGE);
    });

In addition to constants like USAGE or TEMPFAIL, the exitcode crate also defines an ExitCode alias for the integer type holding the exit codes. You can use it, among other things, as a return type of your top-level functions:

    let code = do_stuff(options);
    std::process::exit(code);
}

fn do_stuff(options: Options) -> exitcode::ExitCode {
    // ...
}

enum-set

In Java, there is a specialization of the general Set interface that works for enum types: the EnumSet class. Its members are represented very compactly as bits rather than hashed elements.

A similar (albeit slightly less powerful4) structure has been implemented in the enum-set crate. Given a #[repr(u32)] enum type:

#[repr(u32)]
#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)]
enum Weekday {
    Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday,
}

you can create an EnumSet of its variants:

let mut weekend: EnumSet<Weekday> = EnumSet::new();
weekend.insert(Weekday::Saturday);
weekend.insert(Weekday::Sunday);

as long as you provide a simple trait impl that specifies how to convert those enum values to and from u32:

impl enum_set::CLike for Weekday {
    fn to_u32(&self) -> u32            { *self as u32 }
    unsafe fn from_u32(v: u32) -> Self { std::mem::transmute(v) }
}

The advantage is having a set structure represented by a single, unsigned 32-bit integer, leading to O(1) complexity of all common set operations. This includes membership checks, the union of two sets, their intersection, difference, and so on.
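
Continuing the weekend example, the set algebra looks just like you would expect. This is only a sketch (the method names follow the crate’s documentation, but treat the exact signatures as assumptions):

let mut workdays: EnumSet<Weekday> = EnumSet::new();
workdays.insert(Weekday::Monday);
workdays.insert(Weekday::Friday);

// Each of these is just a bitwise operation on the underlying u32:
let any_day = workdays.union(weekend);
let overlap = workdays.intersection(weekend);
assert!(overlap.is_empty());
assert_eq!(any_day.len(), 4);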

antidote

As part of fulfilling the promise of Fearless Concurrency™, Rust offers multiple synchronization primitives that are all defined in the std::sync module. One thing that Mutex, RwLock, and similar mechanisms there have in common is that their locks can become “poisoned” if a thread panics while holding them. As a result, acquiring a lock requires handling the potential PoisonError.

For many programs, however, lock poisoning is not merely a remote possibility but a straight-up impossible situation. If you follow the best practices of concurrent resource sharing, you won’t be holding locks for more than a few instructions, devoid of unwraps or any other opportunity to panic!(). Unfortunately, you cannot prove this to the Rust compiler statically, so it will still require you to handle a PoisonError that cannot happen.

This is where the aptly named antidote crate offers help. In it, you can find all the same locks & guards API that is offered by std::sync, just without the PoisonError. In many cases, this removal radically simplifies the interface, for example by turning Result<Guard, Error> return types into just Guard.

The caveat, of course, is that you need to ensure all threads holding these “immunized” locks either:

  • don’t panic at all; or
  • don’t leave guarded resources in an inconsistent state if they do panic.

As mentioned earlier, the best way to make that happen is to keep lock-guarded critical sections minimal and infallible.
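
Here’s a small sketch of what this buys you in practice, assuming antidote’s API mirrors std::sync minus the Results (which is its whole point):

extern crate antidote;

use std::sync::Arc;
use std::thread;
use antidote::Mutex;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4).map(|_| {
        let counter = counter.clone();
        thread::spawn(move || {
            // lock() hands back the guard directly; there is no PoisonError.
            *counter.lock() += 1;
        })
    }).collect();
    for handle in handles {
        handle.join().unwrap();
    }
    assert_eq!(*counter.lock(), 4);
}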

matches

Pattern matching is one of the most important features of Rust, but some of the relevant language constructs have awkward shortcomings. The if let conditional, for example, cannot be combined with boolean tests:

if let Foo(_) = x && y.is_good() {  // error: does not compile

and thus requires additional nesting, or a different approach altogether.

Thankfully, to help with situations like this, there is the matches crate with a bunch of convenient macros. Besides its namesake, matches!:

if matches!(x, Foo(_)) && y.is_good() {

it also exposes assertion macros (assert_matches! and debug_assert_matches!) that can be used in both production and test code.
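
A quick sketch of both flavors in action:

#[macro_use]
extern crate matches;

fn main() {
    let answer: Result<u32, &str> = Ok(42);
    // matches! evaluates to a plain bool, so it composes with && and friends...
    assert!(matches!(answer, Ok(_)));
    // ...while the assertion macro panics outright on a mismatch.
    assert_matches!(answer, Ok(_));
}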


This concludes the overview of small Rust crates, at least for now.

To be sure, these crates are far from the only ones that are small in size and yet almost indispensable. Many more great libraries can be found e.g. in the Awesome Rust registry, though you could obviously argue whether all of them are truly “micro” ;-)

If you know more crates in the similar vein, make sure to mention them in the comments!


  1. A sum type consists of several alternatives, out of which only one has been picked for a particular instance. The other common name for it is a tagged union.

  2. Unless you come from Haskell, that is, where Either is the equivalent of Rust’s Result :) 

  3. You will occasionally need an explicit * to trigger the Deref coercion it uses. 

  4. It only supports unitary enums of up to 32 variants. 


O(n log n) isn’t bad

Posted on Thu 19 October 2017 in Programming • Tagged with algorithms, complexity, Big O

Most programmers should be familiar with the Big O notation of computational complexity. This is how, in very theoretical terms, we are describing the relative differences in the performance of algorithms.

Excluding the case of constant time complexity (O(1)), the vast majority of practical algorithms falls into one of the following classes:

  • O(log n)
  • O(n)
  • O(n log n)
  • O(n²)

The further down a class is on this list, the worse (less efficient) it gets. What may not be completely obvious, however, is the magnitude of differences.

Let’s have a closer look.

The best and the worst

First, it’s pretty easy when it comes to the extreme points. A logarithmic complexity is clearly great, because the number of operations barely even grows as the size of input increases. For N of one million, the (natural1) logarithm is equal to about 14. For one trillion — million times more — log n is only 27!

Such amazing scalability is one of the reasons why databases, for example, can execute queries extremely efficiently even for millions or billions of records.

On the other end, an algorithm that has quadratic complexity will only do well for very small datasets. It can still be useful in practice, especially as a small-input optimization of some larger procedure2, or because of some other desirable properties (like good parallelizability).

Outside of those carefully selected cases, however, the computational requirements of O(n²) for any large dataset are usually too great.

Middle ground

As for the remaining two classes, the linear one (O(n)) is probably the easiest to reason about.

In a linear algorithm, the number of operations increases steadily along with the size of input.
For thousand elements, you need roughly a thousand steps (times a constant factor).
For a million, there will be a million operations necessary.

Thus, by itself, the linear scaling doesn’t get any better or worse when data gets bigger3. In many cases, it means there is nothing to be exploited in the structure of the input set that could make the running time any better (compared to e.g. the reliance of logarithmic searches on sorted order). Typically, all the data must be traversed at least once in its entirety.

All in all, it can be a decent time complexity, but it’s nothing to write home about.

A function has no name

What about O(n log n), then? It falls between the linear and the quadratic, which suggests that it’s somewhere half-way between mediocre and awful. We don’t even have a widely used word for it, meaning it is probably not even that important.

Both of those suppositions are wrong.

First, O(n log n) isn’t even remotely close to the “median” (whatever that means) of O(n) and O(n²). In reality, its asymptotic rate of growth places it very close to the former. You can see this pretty clearly by looking at the following plot:

[Plot: comparative growth of O(log n), O(n), O(n log n), and O(n²)]

The gap between O(n) and O(n log n) barely even widens, even as the values on the vertical axis increase to the limits of practicality.

Indeed, the log n part of the function grows slowly enough that, for many practical purposes, it can be considered a large “constant” in the complexity formula. Some complicated algorithm that’s technically linear may therefore be a worse choice than a simpler solution with O(n log n) scaling.
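
A quick back-of-the-envelope check makes this concrete (base-2 logarithm, n of one million):

fn main() {
    let n = 1_000_000_f64;
    println!("n       = {:.0e}", n);            // 1e6
    println!("n log n = {:.0e}", n * n.log2()); // ~2e7: only ~20x the linear cost
    println!("n^2     = {:.0e}", n * n);        // 1e12: five orders of magnitude more
}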

Sorting it out

What are the typical situations where O(n log n) arises in practice? Very often, it has to do with establishing some kind of ordering of the input, which involves at least one of the following:

  • a wholesale sorting of it (using pairwise comparison)
  • repeated queries for the current maximum or minimum (via a priority queue)

Considering that many practical algorithms — from pathfinding to compression — utilize some form of sorting or sorted data structures, it makes O(n log n) quite an important complexity class.


  1. Natural logarithm has a base of e = 2.71828183… The exact choice of logarithm base doesn’t matter for asymptotic complexity, because it changes only the constant coefficient in the O(f(n)) function. 

  2. A widely used example is Timsort which switches from merge sort (O(n log n)) to insertion sort (O(n²)) when the array slice is short enough to warrant it. 

  3. In reality, practical factors like memory/cache size, OS scheduling behavior, and a myriad of other things can make the actual running time scale worse than linearly beyond a certain point. 


The Printer Monad in Haskell

Posted on Fri 04 August 2017 in Code • Tagged with Haskell, Writer, monads, monad transformers, WriterT

Quite recently, I have encountered an interesting case of monad-based refactoring in Haskell.

Suppose you have a ComplicatedRecord that holds the results of some lengthy and important process in your program. You want to present that data to the user in a nicely formatted way, so you write a function which begins somewhat like this:

{-# LANGUAGE RecordWildcards #-}

-- | Pretty-print the content of the record.
ppRecord :: ComplicatedRecord -> IO ()
ppRecord ComplicatedRecord{..} = do
    -- ...

Inside, there are plenty of putStrLn calls, likely hidden inside more specific subfunctions that format all the numerous parts of ComplicatedRecord. But the IO monad isn’t there just for printing: because the code went through multiple iterations, some of this logic actually takes advantage of it by making additional system & network calls.

So yeah, it’s not particularly pretty.

Now, however, we find out that the output we’re printing here shouldn’t always go directly to stdout. In some cases, unsurprisingly, we actually want it back as a single string, without having it sent to the standard output at all.

Just $ return . it

Your first instinct here may be to simply give back the final string (well, Text) as the function result1:

ppRecord :: ComplicatedRecord -> IO Text

However, this turns out to be rather awkward. While in most other languages we would simply accumulate output by progressively adding more data to a mutable result, this would be much more inconvenient (and somewhat weird) to do in Haskell.

This is where the stdout-based approach seems cleaner; instead of straightforward, sequential code like this:

{-# LANGUAGE OverloadedStrings #-}

import Prelude hiding (putStrLn)

import Control.Monad (forM_)
import Control.Monad.Extra (whenJust)
import Data.Text.IO (putStrLn)
import TextShow

ppOrder Order{..} = do
    putStrLn $ "Order #" <> ordNumber
    ppAddress ordDeliveryAddress
    forM_ (zip [1..] ordItems) $ \(i, Item{..}) -> do
        putStrLn $ showt (i::Int) <> ". " <> itName <> " x" <> showt itQuantity
    whenJust ordBillingAddress ppAddress

ppAddress Address{..} = do
    putStrLn $ addrFirstName <> " " <> addrLastName
    putStrLn addrLine1
    whenJust addrLine2 putStrLn
    putStrLn $ addrCity <> ", " <> addrPostalCode

we have to overhaul each function and turn it into a much less pleasant “mappendage”:

ppOrder Order{..} = unlines $ mconcat
    [ ["Order #" <> ordNumber]
    , ppAddress ordDeliveryAddress  -- a pure variant returning [Text]
    , ppItems ordItems
    ] ++ maybe [] ppAddress ordBillingAddress
    where
    ppItems = map (uncurry ppItem) . zip [1..]
    ppItem i Item{..} = showt (i::Int) <> ". " <> itName <> " x" <> showt itQuantity

One may argue that this is, in fact, the more idiomatic approach, but I’m not very fond of all those commas. Plus, it shows rather clearly that any conditional logic (like with ordBillingAddress here) is going to get pretty cumbersome.

Along comes the Writer

What I’m saying here is that even in pure code, it is sometimes very desirable to have a do notation. For that, however, we need a suitable Monad2 to provide the meaning of “invisible semicolon” in a do block.

And Text, obviously, isn’t one. Neither is [Text] (lines of text), nor any other type we’d use to represent the final output of formatting & printing. They are unsuitable, because they cannot encode the computation that eventually produces said output — either the top-level one (ppRecord) or any of its building blocks (like the ppOrder or ppAddress), down to a most elementary putStrLn. The only thing they can stand for is the result itself.

Fortunately, the pattern of executing code and occasionally producing some “additional” output has been abstracted over in the Haskell standard library. This is exactly the use case for the Writer monad!

The definition of Writer is roughly equivalent to the following:

newtype Writer w a = ... -- omitted

Of the two type parameters it takes, the w one signifies what output it can produce “on the side”. This is contrasted with a which is the regular result of a monadic expression or function. In our case, a will basically always be () (unit/”empty” type), but it is nonetheless necessary for the Writer to behave as a monad.

To complement the above definition, Writer comes with several useful functions. Among those, the most interesting one is tell:

tell :: w -> Writer w ()

write would’ve probably been a better name for it, as it’s definitely the main and defining operation of Writer.

Looking at its signature, we can see it takes a bit of the Writer’s output (w) and results in a Writer action. Internally, it will simply add the argument to the already accumulated output of the writer3.

To make everything more concrete, here’s a literal “Hello world” example coded very verbosely as a Writer action:

{-# LANGUAGE OverloadedStrings #-}

import Control.Monad.Writer
import Data.Text (Text)
import qualified Data.Text.IO as Text

hello :: Writer Text ()
hello = do
    tell "Hello"
    tell " "
    tell "world"

main :: IO ()
main = do
    let (_, greeting) = runWriter hello
    Text.putStrLn greeting

It also contains the last element of the Writer puzzle:

runWriter :: Writer w a -> (a, w)

Like its name suggests, this function will “run” any Writer action that we give it, returning both the “regular” result (a) plus any output passed in tells (w).4

My little monad: transformers are magic

The last example may be very simple, but it contains all the building blocks for many of the printing functions we need. If we define a convenience wrapper for tell:

putLn :: Text -> Writer Text ()
putLn line = tell $ line <> "\n"

then both ppAddress and ppOrder can be translated through a mere mechanical substitution of putStrLn with putLn:

ppAddress Address{..} = do
    putLn $ addrFirstName <> " " <> addrLastName
    putLn addrLine1
    whenJust addrLine2 putLn
    putLn $ addrCity <> ", " <> addrPostalCode

-- ppOrder omitted

Unfortunately, a bare Writer like this can only work for pure code, which isn’t a luxury we can expect in every situation. In my case, some of the printing logic was tied pretty strongly to IO, and it would be difficult and time consuming to decouple it.

Thankfully, the reliance on IO isn’t a complete deal breaker. While we cannot ensure that nothing calls putStrLn anymore, we can provide the tell/putLn capabilities alongside whatever other IO calls our code has to make (for now).

To achieve that, we need to create a monad stack with WriterT:

newtype WriterT w m a = ... -- omitted

WriterT is a monad transformer, one of those scary Haskell concepts that are actually simpler than they appear on the surface. This is because transformers like WriterT are mere wrappers. The only difference between it and a regular Writer is the additional m parameter, which is the inner monad we’re packaging inside a new Writer.

Here (and in many other cases), m will be substituted with IO:

type Printer a = WriterT Text IO a  -- w == Text, m == IO

thus creating the titular Printer monad. This hybrid beast can both output Text through the Writer API and perform any additional IO operations that the code may (still) require.

Below is an example; the User record requires an I/O call to get the size of its $HOME directory:

import Control.Monad.IO.Class (liftIO)
import System.Directory (getFileSize)

-- To print this data type nicely, we sadly require I/O :(
data User = User { usrName :: Text
                 , usrHomeDir :: FilePath
                 }

ppUser :: User -> IO Text
ppUser User{..} = fmap snd $ runWriterT $ do
    putLn $ "Name: " <> usrName
    homeSize <- liftIO $ getFileSize usrHomeDir
    putLn $ "$HOME: " <> showt usrHomeDir <> "(" <> showt homeSize <> " bytes)"

As a bit of necessary cruft, we have to use liftIO to “lift” (wrap) IO actions such as getFileSize in a full Printer monad before executing them. Besides everything else you can think of, this is yet another argument for eventually getting rid of the IO :)

Making the monads coexist

But our job isn’t done yet. Despite looking very reasonable, this version of ppUser doesn’t actually compile! The actual type error may vary a little, but it all boils down to a difference between WriterT Text IO () (i.e. Printer ()) and Writer Text () at each call site of putLn.

GHC is obviously correct. However, the problem lies not in how we’re calling putLn, but rather the way it’s been defined:

putLn :: Text -> Writer Text ()

This type can only produce a specific, pure Writer action. But to fit inside the do block of our compound monad, we need the Writer + IO combo from WriterT Text IO (i.e. Printer).

We can try to address the mismatch by changing the signature to:

putLn :: Text -> WriterT Text IO ()  -- or just: Printer ()

but this will only result in the opposite problem. Now, all the pure printers like ppAddress are facing the fact that putLn is a (wrapped) IO action, despite not actually doing any I/O whatsoever.

The obvious question is, can we have something that fits both?

Earlier on, I’ve said that both vanilla Writer and the IO-spruced Printer support the “Writer API”, most notably the tell function. This notion of a “monadic interface” isn’t just hand-waving, though, and Haskell (obviously!) provides a way to express it programmatically.

Meet the MonadWriter typeclass:

class (Monad m, Monoid w) => MonadWriter w m

Any monad that can work as a Writer will be an instance of it, regardless of whether it wraps over IO or anything else. Functions like tell are defined to be polymorphic over it, enabling us to leverage the same technique they use when we define putLn:

{-# LANGUAGE FlexibleContexts #-}

import Control.Monad.Writer.Class

putLn :: MonadWriter Text m => Text -> m ()
putLn line = tell $ line <> "\n"

If you aren’t very familiar with this syntax, the part before => is a typeclass constraint, or context. It defines the requirements to be satisfied by types which are later used in the function signature.

Here, we request a MonadWriter instance — one where Text is the output but anything can be the inner monad. We refer to that unknown monad only as m, a type variable. The compiler will figure out what to substitute for it at every call site of putLn.

As a result, both a pure Writer and the IO-bound Printer can now use it. In the second case, the relevant instance of MonadWriter will, naturally, have IO fill in the m position.

But curiously, the “pure” Writer also has an inner monad. It just literally does nothing but wrap some other value:

newtype Identity a = Identity { runIdentity :: a }

In most cases, this fact is hidden behind the real definition of Writer, though runIdentity may sometimes come in handy for some on-the-spot type hacks5.

The wrap

The many things we’ve talked about here could of course be a starting point for even more advanced stuff, but we have to stop somewhere! Don’t worry, though: knowing about MonadWriter and other monad typeclasses like this is enough to write quite idiomatic code…

…at least until you learn about free monads, effects, and the like ;-)

In any case, you can check this gist for the complete code from this post.


  1. IO is still necessary due to ad-hoc network fetches and syscalls mentioned earlier. 

  2. Or at least an Applicative, via the ApplicativeDo GHC extension. 

  3. The adding is done via mappend, requiring w to be a Monoid.

  4. There is also the execWriter variant which is actually more practical here as it only returns the accumulated output. 

  5. We could, for example, use it alongside mapWriterT to “fix” the calls to putLn if we didn’t have control over its definition. 


Extension traits in Rust

Posted on Tue 20 June 2017 in Code • Tagged with Rust, C#, methods, extension methods, traits

In a few object-oriented languages, it is possible to add methods to a class after it’s already been defined.

This feature arises quite naturally if the language has a dynamic type system that’s modifiable at runtime. In those cases, even replacing existing methods is perfectly possible1.

In addition to that, some statically typed languages — most notably C# — offer extension methods as a dedicated feature of their type systems. The premise is that you would write standalone functions whose first argument is specially designated (usually by the this keyword) as a receiver of the resulting method call:

// (In C#, an extension method must be defined inside a static class.)
public static int WordCount(this String str) {
    return str.Split(new char[] { ' ', '.', '?' },
                     StringSplitOptions.RemoveEmptyEntries).Length;
}

At the call site, the new method is indistinguishable from any of the existing ones:

string s = "Alice has a cat.";
int n = s.WordCount();

That’s assuming you have imported both the original class (or it’s a built-in like String), as well as the module in which the extension method is defined.

Rewrite it in Rust

The curious thing about Rust‘s type system is that it permits extension methods solely as a side effect of its core building block: traits.

In this post, I’m going to describe a certain design pattern in Rust which involves third-party types and user-defined traits. Several popular crates — like itertools or unicode-normalization — utilize it very successfully to add new, useful methods to the language standard types.

I’m not sure if this pattern has an official or widely accepted name. Personally, I’ve taken to calling it extension traits.

Let’s have a look at how they are commonly implemented.

Ingredients

We can use the extension trait pattern if we want to have additional methods in a type that we don’t otherwise control (or don’t want to modify).

Common cases include:

  • Rust standard library types, like Result, String, or anything else inside the std namespace
  • types imported from third-party libraries
  • types from the current crate if additional methods only make sense in certain scenarios (e.g. conditional compilation / testing)2

The crux of this technique is really simple. Like with most design patterns, however, it involves a certain degree of boilerplate and duplication.

So without further ado… In order to “patch” some new method(s) into an external type you will need to:

  1. Define a trait with signatures of all the methods you want to add.
  2. Implement it for the external type.
  3. There is no step three.

As an important note on the usage side, the calling code needs to import your new trait in addition to the external type. Once that’s done, it can proceed to use the new methods as if they were there to begin with.

I’m sure you are keen on seeing some examples!

Broadening your Options

We’re going to add two new methods to Rust’s standard Option type. The goal is to make it more convenient to operate on mutable Options by making it easy to replace an existing value with another one3.

Here’s the appropriate extension trait4:

/// Additional mutation methods for `Option`.
pub trait OptionMutExt<T> {
    /// Replace the existing `Some` value with a new one.
    ///
    /// Returns the previous value if it was present, or `None` if no replacement was made.
    fn replace(&mut self, val: T) -> Option<T>;

    /// Replace the existing `Some` value with the result of given closure.
    ///
    /// Returns the previous value if it was present, or `None` if no replacement was made.
    fn replace_with<F: FnOnce() -> T>(&mut self, f: F) -> Option<T>;
}

It may feel a little bit weird to implement it.
You will basically have to pretend you are inside the Option type itself:

impl<T> OptionMutExt<T> for Option<T> {
    fn replace(&mut self, val: T) -> Option<T> {
        self.replace_with(move || val)
    }

    fn replace_with<F: FnOnce() -> T>(&mut self, f: F) -> Option<T> {
        if self.is_some() {
            let result = self.take();
            *self = Some(f());
            result
        } else {
            None
        }
    }
}

Unfortunately, this is just an illusion. Extension traits grant no special powers that’d allow you to bypass any of the regular visibility rules. All you can use inside the new methods is still just the public interface of the type you’re augmenting (here, Option).

In our case, however, this is good enough, mostly thanks to the recently introduced Option::take.

To use our shiny new methods in other places, all we have to do is import the extension trait:

use ext::rust::OptionMutExt;  // assuming you put it in ext/rust.rs

// ...somewhere...
let mut opt: Option<u32> = ...;
match opt.replace(42) {
    Some(x) => debug!("Option had a value of {} before replacement", x),
    None => assert_eq!(None, opt),
}

It doesn’t matter where it was defined either, meaning we can ship it away to crates.io and let it accrue as many happy users as Itertools has ;-)

Are you hyper::Body ready?

Our second example will demonstrate attaching more methods to a third-party type.

Last week, there was a new release of Hyper, a popular Rust framework for HTTP servers & clients. It was notable because it marked a switch from a synchronous, straightforward API to a more complex, asynchronous one (which I incidentally wrote about a few weeks ago).

Predictably, there has been some confusion among its new and existing users.

We’re going to help by pinning a more convenient interface on hyper’s Body type. Body here is a struct representing the content of an HTTP request or response. After the ‘asyncatastrophe’, it no longer allows access to the raw incoming bytes as easily as it did before.

Thanks to extension traits, we can fix this rather quickly:

use std::error::Error;

use futures::{BoxFuture, future, Future, Stream};
use hyper::{self, Body};

pub trait BodyExt {
    /// Collect all the bytes from all the `Chunk`s from `Body`
    /// and return it as `Vec<u8>`.
    fn into_bytes(self) -> BoxFuture<Vec<u8>, hyper::Error>;

    /// Collect all the bytes from all the `Chunk`s from `Body`,
    /// decode them as UTF8, and return the resulting `String`.
    fn into_string(self) -> BoxFuture<String, Box<Error + Send>>;
}

impl BodyExt for Body {
    fn into_bytes(self) -> BoxFuture<Vec<u8>, hyper::Error> {
        self.concat()
            .and_then(|bytes| future::ok::<_, hyper::Error>(bytes.to_vec()))
            .boxed()
    }

    fn into_string(self) -> BoxFuture<String, Box<Error + Send>> {
        self.into_bytes()
            .map_err(|e| Box::new(e) as Box<Error + Send>)
            .and_then(|bytes| String::from_utf8(bytes)
                .map_err(|e| Box::new(e) as Box<Error + Send>))
            .boxed()
    }
}

With these new methods in hand, it is relatively straightforward to implement, say, a simple character-counting service:

use std::error::Error;

use futures::{BoxFuture, future, Future};
use hyper::server::{Service, Request, Response};

use ext::hyper::BodyExt;  // assuming the above is in ext/hyper.rs

pub struct Length;
impl Service for Length {
    type Request = Request;
    type Response = Response;
    type Error = Box<Error + Send>;
    type Future = BoxFuture<Self::Response, Self::Error>;

    fn call(&self, request: Request) -> Self::Future {
        let (_, _, _, _, body) = request.deconstruct();
        body.into_string().and_then(|s| future::ok(
            Response::new().with_body(s.len().to_string())
        )).boxed()
    }
}

Replacing Box<Error + Send> with an idiomatic error enum is left as an exercise for the reader :)

Extra credit bonus explanation

Reading this section is not necessary to use extension traits.

So far, we have seen what extension traits are capable of. It is only right to mention what they cannot do.

Indeed, this technique has some limitations. They are a conscious choice on the part of Rust authors, and they were decided upon in an effort to keep the type system coherent.

Coherence isn’t an everyday topic in Rust, but it becomes important when working with traits and types that cross package boundaries. Rules of trait coherence (described briefly towards the end of this section of the Rust book) state that the following combinations of “local” (this crate) and “external” (other crates5) traits and types are legal:

  • implement a local trait for a local type.
    This is common in larger programs that use polymorphic abstractions.
  • implement an external trait for a local type.
    We do this often to integrate with third-party libraries and frameworks, just like with hyper above.
  • implement a local trait for an external type.
    That’s extension traits for you!

What is not possible, however, is to:

  • implement an external trait for an external type

This case is prohibited in order to make the choice of trait implementations more predictable, both for the compiler and for the programmer. Without this rule in place, you could introduce many instances of impl Trait for Type (same Trait and same Type), each one with different functionality, leaving the compiler to “guess” the right impl for any given situation6.

The decision was thus made to disallow the impl ExternalTrait for ExternalType case altogether. If you like, you can read some more extensive backstory behind it.

Bear in mind, however, that this isn’t the unequivocally “correct” solution. Some languages choose to allow this so-called orphan case, and try to resolve the potential ambiguities in various different ways. It is a genuinely useful feature, too, as it makes it easier to glue together two unrelated libraries7.

Thankfully for extension traits, the coherence restriction doesn’t apply as long as you keep those traits and their impls in the same crate.
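
To make the newtype workaround from footnote 7 concrete, here’s a self-contained sketch, with a module standing in for somebody else’s crate:

// Pretend this module is a third-party crate.
mod external {
    pub struct Celsius(pub f64);
    pub trait Pretty {
        fn pretty(&self) -> String;
    }
}

// An impl of external::Pretty directly for external::Celsius would be
// an "orphan" across real crate boundaries; wrapping the type locally is fine.
struct Temperature(external::Celsius);

impl external::Pretty for Temperature {
    fn pretty(&self) -> String {
        format!("{:.1} degrees", (self.0).0)
    }
}

fn main() {
    use external::Pretty;
    println!("{}", Temperature(external::Celsius(21.5)).pretty());
}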


  1. This practice is often referred to as monkeypatching, especially in Python and Ruby. 

  2. In this case, a more common solution is to just open another impl Foo block, annotated with #[cfg(test)] or similar. An extension trait, however, makes it easier to extract Foo into a separate crate along with some handy, test-only API.

  3. Note that this is not the same as the unstable (as of 1.18) Option methods guarded behind the options_entry feature gate.

  4. My own convention is to call those traits FooExt if they are meant to enhance the interface of type Foo. The other practice is to mirror the name of the crate that the trait is packaged in; both Itertools and UnicodeNormalization are examples of this style. 

  5. Standard library (std or core namespaces) counts as external crate for this purpose. 

  6. Or throw an error. However, trait impls are always imported implicitly, so this could essentially prevent some combination of different modules/libraries in the ecosystem from being used together, and generally create an unfathomable mess. 

  7. The usual workaround for coherence/orphan rules in Rust involves creating a wrapper around the external type in order to make it “local”, and therefore allow external trait impls for it. This is called the newtype pattern and there are some crates to support it. 


Rust as a gateway drug to Haskell

Posted on Tue 13 June 2017 in Programming • Tagged with Rust, Haskell, traits, typeclasses, monads, ADTs, FP

For work-related reasons, I had to recently get up to speed on programming in Haskell.

Before that, I had very little actual experience with the language, clocking probably at less than a thousand lines of working code over a couple of years. Nothing impressive either: some wrapper script here, some experimental rewrite there…

These days, I hear, there are a few resources for learning Haskell1 that don’t require having a PhD in category theory2. They may be quite helpful when your exposure to functional programming is limited. In my case, however, the one thing that really enabled me to become (somewhat) productive was not even related to Haskell at all.

It was Rust.

In theory, this shouldn’t really make much sense. If you compare both languages by putting checkmarks in a feature chart, you won’t find them to have much in common.

Some of the obvious differences include:

  • predominantly functional vs. mostly imperative
  • garbage collection vs. explicit memory management
  • lazy vs. eager evaluation
  • rich runtime3 vs. almost no runtime
  • global vs. localized type inference
  • indentation vs. braces
  • two decades (!) vs. barely two years since release

Setting aside syntax, most of those differences are pretty significant.

You probably wouldn’t use Haskell for embedded programming, for instance, both for performance (GC) and memory usage reasons (laziness). Similarly, Rust’s ownership system can be too much of a hassle for high level code that isn’t subject to real time requirements.

But if you look a little deeper, beyond just the surface descriptions of both languages, you can find plenty of concepts they share.

Traits: they are typeclasses, essentially

Take Haskell’s typeclasses, for example — the cornerstone of its rich and expressive type system.

A typeclass is, simply speaking, a list of capabilities: it defines what a type can do. There exist analogs of typeclasses in most programming languages, but they are normally called interfaces or protocols, and remain closely tied to the object-oriented paradigm.

Not so in Haskell.

Or in Rust for that matter, where the equivalent concept exists under the name of traits. What typeclasses and traits have in common is that they’re used for all kinds of polymorphism in their respective languages.

Generics

For example, let’s consider parametrized types, sometimes also referred to as templates (C++) or generics (C#).

In many cases, a generic function or type requires its type arguments to exhibit certain characteristics. In some languages (like the legacy C++), this is checked only implicitly: as long as the template type-checks after its expansion, everything is okay:

template <typename T> T min(T a, T b) {
    return a > b ? b : a;
}

struct Foo {};

int main() {
    min(1, 2);  // OK
    min(Foo(), Foo());  // ERROR, no operator `>`
}

More advanced type systems, however, allow specifying the generic constraints explicitly. This is the case in Rust:

fn min<T: Ord>(a: T, b: T) -> T {
    if a > b { b } else { a }
}

as well as in Haskell:

min :: (Ord a) => a -> a -> a
min a b = if a > b then b else a

In both languages, the notion of a type supporting certain operations (like comparison/ordering) is represented as its own, first-class concept: a trait (Rust) or a typeclass (Haskell). Since the compiler is aware of those constraints, it can verify that the min function is used correctly even before it tries to generate code for a specific substitution of T.

Dynamic dispatch

On the other hand, let’s look at runtime polymorphism: the one that OO languages implement through abstract base classes and virtual methods. It’s the tool of choice if you need a container of objects of different types, which nevertheless all expose the same interface.

To offer it, Rust has trait objects, and they work pretty much exactly like base class pointers/references from Java, C++, or C#.

// Trait definition
trait Draw {
    fn draw(&self);
}

// Data type implementing the trait
struct Circle { radius: i32 }
impl Draw for Circle {
    fn draw(&self) { /* omitted */ }
}

// Usage
fn draw_all(objects: &Vec<Box<Draw>>) {
    for obj in objects {
        obj.draw();
    }
}

The Haskell analogue is, in turn, based on typeclasses, though the specifics can be a little bit trickier:

{-# LANGUAGE ExistentialQuantification #-}

-- Typeclass definition
class Draw a where
    draw :: a -> IO ()

-- Polymorphic wrapper type
data Draw' = forall a. Draw a => Draw' a
instance Draw Draw' where
    draw (Draw' d) = draw d

-- Data types instantiating ("implementing") the typeclass
data Circle = Circle ()
instance Draw Circle where draw = undefined -- omitted
data Square = Square ()
instance Draw Square where draw = undefined -- omitted

-- Usage
drawAll :: (Draw a) => [a] -> IO ()
drawAll ds = mapM_ draw ds

main = do
    let shapes = [Draw' (Circle ()), Draw' (Square ())]
    drawAll shapes

Here, the generic function can use typeclass constraints directly ((Draw a) => ...), but creating a container of different object types requires a polymorphic wrapper4.

Differences

All those similarities do not mean that Rust traits and Haskell typeclasses are one and the same. There are, in fact, quite a few differences, owing mostly to the fact that Haskell’s type system is more expressive:

  • Rust lacks higher kinded types, making certain abstractions impossible to encode as traits. It is possible, however, to implement a trait for infinitely many types at once if the implementation itself is generic (like here).

  • When defining a trait in Rust, you can ask implementors to provide some auxiliary, associated types in addition to just methods5. A similar mechanism in Haskell is expanded into type families, and requires enabling a GHC extension.

  • While typeclasses in Haskell can be implemented for multiple types simultaneously via a GHC extension, Rust’s take on this feature is to make traits themselves generic (e.g. trait Foo<T>). The end result is roughly similar; however, the “main implementing type” (one after for in impl ... for ...) is still a method receiver (self), just like in OO languages.

  • Rust enforces coherence rules on trait implementations. The topic is actually rather complicated, but the gist is about local (current package) vs. remote (other packages / standard library) traits and types.
    Without too much detail, coherence demands that there be a local type or trait somewhere in the impl ... for ... construct. Haskell doesn’t have this limitation, although it is recommended not to take advantage of this.

The M-word

Another area of overlap between Haskell and Rust exists in the data model utilized by those languages. Both are taking heavy advantage of algebraic data types (ADT), including the ability to define both product types (“regular” structs and records) as well as sum types (tagged unions).

Maybe you’d like Some(T)?

Even more interestingly, code in both languages makes extensive use of the two most basic ADTs:

  • Option (Rust) or Maybe (Haskell) — for denoting a presence or absence of a value
  • Result (Rust) or Either (Haskell) — for representing the alternative of “correct” and “erroneous” value

These aren’t just simple datatypes. They are deeply interwoven into the basic semantics of both languages, not to mention their standard libraries and community-provided packages.

The Option/Maybe type, for example, is the alternative to nullable references: something that’s been heavily criticized for making programs prone to unexpected NullReferenceExceptions. The idea behind both of those types is to make actual values impossible to confuse with nulls by encoding the potential nullability into the type system:

enum Option<T> { Some(T), None }
data Maybe a = Just a | Nothing

Result and Either, on the other hand, can be thought as an extension of this idea. They also represent two possibilities, but the “wrong” one isn’t just None or Nothing — it has some more information associated with it:

enum Result<T, E> { Ok(T), Err(E) }
data Either e a = Left e | Right a

This dichotomy between the Ok (or Right) value and the Error value (or the Left one) makes it a great vehicle for carrying results of functions that can fail.

In Rust, this replaces the traditional error handling mechanisms based on exceptions. In Haskell, the exceptions are present and sometimes necessary, but Either is nevertheless the preferred approach to dealing with errors.

What to do?

One thing that Haskell does better is composing those fallible functions into bigger chunks of logic.

Relatively recently, Rust has added the ? operator as a replacement for the try! macro. This is now the preferred way of error propagation, allowing for a more concise composition of functions that return Results:

/// Read an integer from given file.
fn int_from_file(path: &Path) -> io::Result<i32> {
    let mut file = fs::File::open(path)?;
    let mut s = String::new();
    file.read_to_string(&mut s)?;
    let result = s.trim().parse()  // trim the trailing newline before parsing
        .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))?;
    Ok(result)
}

But Haskell had it for much longer, and it’s something of a hallmark of the language and functional programming in general — even though it looks thoroughly imperative:

intFromFile :: FilePath -> IO Int
intFromFile path = do
    s <- readFile path
    i <- readIO s
    return i

If you haven’t seen it before, this is of course a monad — the IO monad, to be precise. While discussing monads in detail is way outside of the scope of this article, we can definitely notice some analogies with Rust. The do notation with <- arrows is evidently similar to how in Rust you’d assign the result of a fallible operation after “unpacking” it with ?.

But of course, there’s plenty of different monads in Haskell: not just IO, but also Either, Maybe, Reader, Writer, Cont, STM, and many others. In Rust (at least as of 1.19), the ? operator only works for Result types, although there is some talk about extending it to Option as well6.
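
As footnote 6 notes, both types already expose a form of monadic bind as the and_then method, so explicit chaining is available today. A small sketch:

// A fallible step: halves even numbers, "fails" on odd ones.
fn half(x: i32) -> Option<i32> {
    if x % 2 == 0 { Some(x / 2) } else { None }
}

fn main() {
    assert_eq!(Some(8).and_then(half).and_then(half), Some(2));
    assert_eq!(Some(6).and_then(half).and_then(half), None); // 3 is odd
}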

Eventually, we may see the language adopt some variant of the do notation, though the motivation for this will most likely come from asynchronous programming with Futures rather than plain Results. General monads, however, require support for higher kinded types which isn’t coming anytime soon.

A path through Rust?

Now that we’ve discussed those similarities, the obvious question arises.

Is learning Rust worthwhile if your ultimate goal is getting proficient at functional programming in general, or Haskell in particular?

My answer to that is actually pretty straightforward.

If “getting to FP” is your main goal, then Rust will not help you very much. The functional paradigm isn’t the main idea behind the language — its shtick is mostly memory safety and zero-cost abstractions. While it succeeds somewhat at being “Haskell Lite”, it really strives to be a safer C++7.

But if, on the other hand, you regard FP mostly as a curiosity that seems to be seeping into your favorite imperative language at an increasing rate, Rust can be a good way to gain familiarity with this peculiar beast.

At the very least, you will learn the functional way of modeling programs, with lots of smart enums/unions and structs but without inheritance.

And the best part is: you will be so busy fighting the borrow checker you won’t even notice when it happens ;-)


  1. Just ask in #haskell-beginners on Freenode if you’re interested. 

  2. Though ironically, I found the CT lectures by Bartosz Milewski very helpful in developing the right intuitions, even though they’re very abstract. 

  3. For example, Haskell has green threads (created with forkIO) which are somewhat similar to goroutines from Go. To get anything remotely similar in Rust, you need to use external libraries.

  4. Note that such containers aren’t very idiomatic Haskell. A more typical solution would be to just curry the draw function, implicitly putting the Draw object inside its closure. 

  5. This mechanisms expands to associated constants in Rust 1.20. 

  6. Those two types also have a form of monadic bind (>>= in Haskell) exposed as the and_then method.

  7. If you want another language for easing into the concept of functional programming, I’ve heard that Scala fills that niche quite well. 


Long Live Dynamic Languages!

Posted on Wed 24 May 2017 in Programming • Tagged with Python, Rust, dynamic languages, dynamic typing, static typing

If you followed the few (or a dozen) of my recent posts, you’ve probably noticed a sizable bias in the choice of topics. The vast majority were about Rust — a native, bare metal, statically typed language with powerful compile time semantics but little in the way of runtime flexibility.

Needless to say, Rust is radically different than (almost the exact opposite of) Python, the other language that I’m covering sometimes. Considering this topical shift, it would be fair to assume that I, too, have subscribed to the whole Static Typing™ trend.

But that wouldn’t be very accurate.

Don’t get me wrong. As far as fashion cycles in the software industry go, the current trend towards static/compiled languages is difficult to disparage. Strong in both hype and merit, it has given us some really innovative & promising solutions (as well as some not-so-innovative ones) that are poised to shape the future of programming for years, if not decades to come. In many ways, it is also correcting mistakes of the previous generation: excessive boilerplate, byzantine abstractions, and software bloat.

What about dynamic languages, then? Are they slowly going the way of the dodo?

Trigger warning: TypeError

Some programmers would certainly wish so.

Indeed, it’s not hard at all to find articles and opinions about dynamic languages that are, well, less than flattering.

The common argument echoed in those accounts points to supposed unsuitability of Python et al. for any large, multi-person project. The reasoning can be summed up as “good for small scripts and not much else”. Without statically checked types, the argument goes, anything bigger than a quick hack or a prototype shall inevitably become hairy and dangerous monstrosity.

And when that happens, a single typo can go unchecked and bring down the entire system!…

At the very end of this spectrum of beliefs, some pundits may eventually make the leap from languages to people. If dynamically typed languages (or “untyped” ones, as they’re often mislabeled) are letting even trivial bugs through, then obviously anyone who wants to use them is dangerously irresponsible. It must follow that all they really want is to hack up some shoddy code, yolo it over to production, and let others worry about the consequences.

Mind the gap

It’s likely unproductive to engage with someone who’s that extreme. If the rhetoric is dialed down, however, we can definitely find the edge of reason.

In my opinion, this fine line goes right through the “good in small quantities” argument. I can certainly understand the apprehension towards large projects that utilize dynamically typed languages throughout their codebases. The prospect of such a project is scary, because it contains an additional element of uncertainty. More so than with many other technologies, you ought to know what you’re doing.

Some people (and teams) do. Others, not so much.

I would therefore refine the argument so that it better reflects the strengths and weaknesses of dynamic languages. They are perfectly suited for at least the following cases:

  • anyone writing small, standalone applications or scripts
  • any project (large or small) with a well-functioning team of talented individuals

The sad reality of the software industry is the vast, gaping chasm of calamity and despair that stretches between those two scenarios.

Within lies the bulk of commercial software projects, consistently hamstrung by the usual suspects: incompetent management, unclear and shifting requirements, under- or overstaffing, ancient development practices, lack of coding standards, rampant bureaucracy, inexperienced developers, and so on.

In such an environment, it becomes nigh impossible to capitalize on the strengths of dynamic languages. Instead, the main priority is to protect from even further productivity losses, which is what bog-standard languages like Java, C#, or Go tend to be pretty good at. Rather than to move fast, the objective is to remain moving at all.

Freedom of choice

“But that’s backwards”, the usual retort goes. “Static typing and compilation checks are what enables me to be productive!”

I have no doubt that most people saying this do indeed believe they’re better off programming in static languages. Regardless of what they think, however, there exists no conclusive evidence to back up such claims as a universal rule.

This is of course the perennial problem with software engineering in general, and the project management aspect of it in particular. There is very little proper research on optimal and effective approaches to it, which is why any of the so-called “best practices” are quite likely to stem from unsubstantiated hearsay.

We can lament this state of affairs, of course. But on the other hand, we can also find it liberating. In the absence of rigid prescriptions and judgments about productivity, we are free to explore, within technical limitations, what language works best for us, our team, and our projects.

Sometimes it’ll be Go, Java, Rust, or even Haskell.
A different situation may be best handled by Python, Ruby, or even JavaScript.

As the old adage goes, there is no silver bullet. We should not try to polish static typing into one.


Asynchronous Rust for fun & profit

Posted on Fri 28 April 2017 in Programming • Tagged with Rust, async, Tokio, futures, HTTP

…or: Is Rust webscale?

In this day and age, no language can really make an impact anymore unless it enables its programmers to harness the power of the Internet. Rust is no different here. Despite posing as a true systems language (as opposed to those only marketed as such), it includes highly scalable servers as a prominent objective in its 2017 agenda.

Presumably to satisfy this very objective, the Rust ecosystem has recently seen some major developments in the space of asynchronous I/O. Given the pace of those improvements, it may seem that production quality async services are quite possible already.

But is that so? How exactly do you write async Rust servers in the early to mid 2017?

To find out, I set out to code up a toy application. Said program was a small intermediary/API server (a “microservice”, if you will) that tries to hit many of the typical requirements that arise in such projects. The main objective was to test the limits of asynchronous Rust, and see how easily (or how painfully) they can be pushed.

This post is a summary of all the lessons I’ve learned from that.

It is necessarily quite long, so if you look for some TL;DR, scroll down straight to Conclusions.

Asynchro-what?

Before we dive in, I have to clarify what “asynchronous” means in this context. Those familiar with async concepts can freely skip this section.

Pulling some threads

Asynchronous processing (or async for short) is brought up most often in the context of I/O operations: disk reads, network calls, database queries, and so on.

Relatively speaking, all those tasks tend to be slow: they take orders of magnitude longer than just executing code or even accessing RAM. The “traditional” (synchronous) approach to dealing with them is to relegate those tasks to separate threads.

When one thread has to wait for a lengthy I/O operation to complete, the operating system (its scheduler, to be precise) can suspend that thread. This lets others execute their code in the mean time and not waste CPU cycles.

This is the essence of concurrency1.

Schedule yourself

But threads are not the only option when dealing with many things (i.e. requests) at once.

The alternative approach is one where no threads are automatically suspended or resumed by the OS. Instead, a special version of I/O subroutines allows the program to continue execution immediately after issuing an I/O call. While the operation happens in the background2, the code is given an opaque handle — usually called a promise, a future, or an async result — that will eventually resolve to the actual return value.

The program can wait for the handle synchronously, but it would typically hand it over to an event loop, an abstraction provided by a dedicated async framework. Such a framework (of which node.js is probably the best-known example) maintains a list of all I/O “descriptors” (fds in Unix) that are associated with pending I/O operations.

Then, in the loop, it simply waits on all of them, usually via the epoll system call. Whenever an I/O task completes, the loop executes a callback associated with its result (or promise, or future). Through this callback, the application can process that result.

In a sense, we can treat the event loop as a dedicated scheduler for its program.

But why?

So, what exactly is the benefit of asynchronous I/O? If anything, it definitely sounds more complicated for the programmer. (Spoiler alert: it is.)

The impetus for the development of async techniques most likely came from the C10K problem. The short version of it is that computers are nowadays very fast, and should therefore be able to serve thousands of requests simultaneously, especially when those requests are mostly I/O, which translates to waiting time for the CPU.

And if “serving” queries is indeed almost all waiting, then handling thousands of clients should be very possible.

In some cases, however, it was found that having the OS schedule the threads introduces too much overhead through the frequent pause/resume state changes (context switching). Like I mentioned above, the asynchronous alternative does away with all that, and indeed lets the CPU just wait (on epoll) until something interesting happens. Once it does, the application can deal with it quickly, issue another I/O call, and let the server go back to waiting.

With today’s processing power we can theoretically handle a lot of concurrent clients this way: up to hundreds of thousands or even millions.

Reality check

Well, ain’t that grand? No wonder everyone is writing everything in node.js now!

Jokes aside, the actual benefits of asynchronous I/O (especially when weighed against its inconvenience for developers) are a bit harder to quantify. For one, they rely heavily on the assumption of fast code & slow I/O being valid in all situations.

But this isn’t really self-evident, and becomes increasingly dubious as time goes on and code complexity grows. It should be obvious, for example, that a Python web frontend talking mostly to in-memory caches in the same datacenter will have radically different performance characteristics than a C++ proxy server calling HTTP APIs over public Internet. Those nuances are often lost in translation between simplistic benchmarks and exaggerated blog posts3.

Upon a closer look, however, these details point quite clearly in favor of asynchronous Rust. Being a language that compiles to native code, it should usually run faster than interpreted (Python, Ruby) or even JITed (JVM & .NET) languages, very close to what is typically referred to as “bare metal” speed. For async I/O, it means the event loop won’t be disturbed for a (relatively) long time to do some trivial processing, leading to higher potential throughput of requests.

All in all, it would seem that Rust is one of the few languages where async actually makes sense.

Rust: the story so far

Obviously, this means it’s been built into the language right from the start… right?

Well, not really. It was always possible to use native epoll through FFI, of course, but that’s not exactly the level of abstraction we’d like to work with. Still, the upper layers of the async I/O stack have been steadily growing at least since Rust 1.0.

The major milestones here include mio, a comparatively basic building block that provides an asynchronous version of TCP/IP. It also offers idiomatic wrappers over epoll, allowing us to write our own event loop.
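For a taste of that layer, here is a minimal sketch of such a hand-rolled event loop, written against the mio 0.6-era API (a rough illustration, not production code):

extern crate mio;

use mio::{Events, Poll, PollOpt, Ready, Token};
use mio::tcp::TcpListener;

fn main() {
    let addr = "127.0.0.1:8080".parse().unwrap();
    let listener = TcpListener::bind(&addr).unwrap();

    // Tell the poller we're interested in "readable" events
    // on the listening socket (i.e. incoming connections).
    let poll = Poll::new().unwrap();
    poll.register(&listener, Token(0), Ready::readable(), PollOpt::edge())
        .unwrap();

    let mut events = Events::with_capacity(1024);
    loop {
        // Block (via epoll on Linux) until some I/O is ready.
        poll.poll(&mut events, None).unwrap();
        for event in events.iter() {
            if event.token() == Token(0) {
                let _conn = listener.accept(); // handle the new client here
            }
        }
    }
}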

On the application side, the futures crate abstracts the notion of a potentially incomplete operation into, well, a future. Manipulating those futures is how one can now write asynchronous code in Rust.
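In its 0.1 incarnation, this mostly means chaining combinators; a trivial (and admittedly contrived) sketch:

extern crate futures;

use futures::{future, Future};

fn main() {
    // Build the computation lazily out of combinators...
    let f = future::ok::<i32, ()>(21).map(|x| x * 2);
    // ...and drive it to completion (wait() blocks the current thread,
    // which a real async program would avoid).
    assert_eq!(f.wait(), Ok(42));
}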

More recently, Tokio has been emerging as the de facto framework for async I/O in Rust. It essentially combines the two previously mentioned crates, and provides additional abstractions specifically for network clients and servers.
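At its heart sits the reactor Core, i.e. the event loop itself: you hand it a future and it drives it to completion. A minimal sketch, assuming the tokio-core 0.1-style API:

extern crate futures;
extern crate tokio_core;

use futures::future;
use tokio_core::reactor::Core;

fn main() {
    // The Core is the event loop; run() polls the given future
    // (and any I/O behind it) until it resolves.
    let mut core = Core::new().unwrap();
    let answer = core.run(future::ok::<i32, ()>(42)).unwrap();
    assert_eq!(answer, 42);
}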

And finally, the popular HTTP framework Hyper is now also supporting asynchronous request handling via Tokio. What this means is that the bread-and-butter of the Internet’s application layer — API servers talking JSON over HTTP — should now be fully supported by the ecosystem of asynchronous Rust.
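For reference, a Tokio-based Hyper server boils down to implementing the Service trait; here is a hello-world sketch along the lines of the hyper 0.11-era API:

extern crate futures;
extern crate hyper;

use futures::{future, Future};
use hyper::server::{Http, Request, Response, Service};

struct Hello;

impl Service for Hello {
    type Request = Request;
    type Response = Response;
    type Error = hyper::Error;
    // Every handler returns a future, even a trivial one.
    type Future = Box<Future<Item = Self::Response, Error = Self::Error>>;

    fn call(&self, _req: Request) -> Self::Future {
        Box::new(future::ok(Response::new().with_body("Hello, async world!")))
    }
}

fn main() {
    let addr = "127.0.0.1:3000".parse().unwrap();
    let server = Http::new().bind(&addr, || Ok(Hello)).unwrap();
    server.run().unwrap();
}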

Let’s take it for a spin then, shall we?

The Grand Project

Earlier on, we established that the main use case for asynchronous I/O is intermediate microservices. They often sit somewhere between a standard web frontend and a storage server or a database. Because of their typical role within a bigger system, these kinds of projects don’t tend to be particularly exciting on their own.

But perhaps we can liven them up a little.

In the end, it’s the Internet we’re talking about here, and everything on the Internet can usually be improved by one simple addition.



…Okay, two possible additions — the other one being:

Memes!

If you’re really pedantic, you may call them image macros. But regardless of the name, the important part is putting text on pictures, preferably in a funny way.

The microservice I wrote is doing just that. Though it won’t ensure your memes are sufficiently hilarious, it will try to deliver them exactly to your specifications. You may thus think of it as a possible backend for an image site like this one.

Flimsy excuses & post-hoc justifications

It is, of course, a complete coincidence, lacking any premeditation on my part, that when it comes to evaluating an async platform, a service like this fits the bill very well.

And especially when said platform is async Rust.

Why, though, is it such a happy, er, accident?

  • It’s a simple, well-defined application. There is basically a single endpoint, accepting simple input (JSON or a query string) and producing a straightforward result (an image). Having no state to persist made creating an MVP significantly easier.

  • Caching can be used for meme templates and fonts. Besides being an inherent part of most network services, a cache also represents a point of contention for Rust programs. The language is widely known for its allergy to global mutable state, which is exactly what programmatic caches boil down to.

  • Image captioning is a CPU-intensive operation. While the “async” part of async I/O may sometimes go all the way down, many practical services either evolve some important CPU-bound code, or require it right from the start. For this reason, I wanted to check if & how async Rust can mix with threaded concurrency.

  • Configuration knobs can be added. Unlike trivial experiments in the vein of an echo or “Hello world” server, this kind of service warrants some flags that the user could tweak, like the number of image captioning threads, or the size of the template cache. We can see how easy (or how hard) it is to make them applicable across all future-based requests.

All in all, and despite its frivolous subject matter, a meme server is actually hitting quite a few notable spots in the microservice domain.

Learnings

As you may glean from its GitHub repo, the experiment was largely successful. Sure, a few more features could be implemented in the captioning department (supporting animated GIFs comes to mind), but none of them are pertinent to the async mechanics of the server.

And since it’s the async (I/O) in Rust that we’re interested in, let me now present you with an assorted collection of practical experiences with it.

>0-cost futures

If you read the docs’ preamble to the futures crate, you will see it mentioning the “zero-cost” aspect of the library. Consistent with the philosophy behind Rust, it proclaims to deliver its abstractions without any overhead.

Thing is, I’m not sure how this promise can be delivered on in practice.

Flip through the introductory tutorial to Tokio, for example, and you will already find plenty of compromises. Without the crucial (but nightly-only) impl Trait feature, you are basically required to put all your futures in a Box4. They even encourage it themselves, offering a convenient Future::boxed method exactly for this purpose, as well as the matching BoxFuture typedef right in the crate.
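In practice, a typical boxed handler looks something along these lines (a sketch with a made-up function, using the futures 0.1 API):

extern crate futures;

use futures::{future, Future, BoxFuture};

// A hypothetical async step: boxing erases the concrete combinator
// type, at the price of a heap allocation and a Send bound.
fn double_async(x: i32) -> BoxFuture<i32, ()> {
    future::ok(x).map(|n| n * 2).boxed()
}

fn main() {
    assert_eq!(double_async(21).wait(), Ok(42));
}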

But hey, you can always just use nightly Rust, right? impl Trait will stabilize eventually, so your code should be, ahem, future-proof either way.

Unfortunately, this assumes that none of the futures you’re building your request handlers from will ever cross a thread boundary (whereas BoxFuture, for example, automatically constrains them to be Send). As you’ve likely guessed, this doesn’t jibe very well with computationally intensive tasks, which are best relegated to a separate thread.

To deal with them properly, you’re going to need a thread pool-based executor, which is currently implemented in the futures_cpupool crate. Using it requires a lot of care, though, and a deep understanding of both types of concurrency involved.

Evidently, this was something that I lacked at the time, which is why I ran into problems ensuring that my futures are properly Send. In the end, I settled on making them Send in the most straightforward (and completely unnecessary) manner: by wrapping them in Arc/Mutex. That in itself wasn’t without its perils, but at least it allowed me to move forward.

Ironically, this also shows an important, pragmatic property of the futures’ system: sub-par hacks around it are possible — a fact you’ll be glad to know about on the proverbial day before a deadline.
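For the record, the basic usage of the pool itself is rather pleasant; here’s a sketch against the futures_cpupool 0.1-era API:

extern crate futures;
extern crate futures_cpupool;

use futures::Future;
use futures_cpupool::CpuPool;

fn main() {
    // Offload a CPU-bound task (a stand-in for image captioning)
    // to a pool of worker threads; the closure must be Send + 'static.
    let pool = CpuPool::new(4);
    let work = pool.spawn_fn(|| -> Result<u64, ()> {
        Ok((1..1_000_001u64).sum())
    });
    // The result comes back as a regular future we can compose or wait on.
    assert_eq!(work.wait(), Ok(500_000_500_000));
}

The trouble starts when such pool-backed futures have to be woven into the event loop’s chains of combinators, with every intermediate step upholding the Send guarantee.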

Templates-worthy error messages

Other significant properties of the futures’ abstraction shall include telling the programmer what’s wrong with his code in the simplest, most straightforward, and concise manner possible.

Here, let me show you an example:

[screenshot of a towering rustc error message, dominated by one enormous nested type]

…which you can also behold in its gist form.

The reason you will encounter such incomprehensible messages stems from the very building blocks of async code.

Right now, each chained operation on a future — map, and_then, or_else, and so on — produces a nested type. Every subsequent application of those methods “contains” (in terms of the type system) all the previous ones. Keep going, and it will eventually balloon into one big onion of Chain<Map<OrElse<Chain<Map<...etc...>>>>>.
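You can watch the onion grow with just a few lines (a sketch; the exact nesting of type names may differ):

extern crate futures;

use futures::{future, Future};

fn main() {
    // Three combinators in, and the inferred type of `f` is already
    // something like OrElse<AndThen<Map<FutureResult<i32, ()>, ...>, ...>, ...>.
    let f = future::ok::<i32, ()>(1)
        .map(|x| x + 1)
        .and_then(|x| future::ok(x * 2))
        .or_else(|_| future::ok::<i32, ()>(0));
    assert_eq!(f.wait(), Ok(4));
}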


Futures are like ogres.

I haven’t personally hit any compiler limits in this regard, but I’m sure it is plausible for a complicated, real-world program.

It also gets worse if you use nightly Rust with impl Trait. In this case, function boundaries no longer “break” the type stacking by boxing results into trait objects. Indeed, you can very well end up with some truly gigantic constructs as the compiler tries to represent the return types of your most complex handlers.

But even if rustc is up to snuff and can deal with those fractals just fine, it doesn’t necessarily mean the programmer can. Looking at those error messages, I had vivid flashbacks from hacking on C++ templates with ancient compilers like VS2005. The difference is, of course, that we’re not trying any arcane metaprogramming here; we just want to do some relatively mundane I/O.

I have no doubt the messaging will eventually improve, and the mile-long types will at least get pretty-printed. At the moment, however, prepare for some serious squinting and bracket-counting.

Where is my (language) support?

Sadly, those long, cryptic error messages are not the only way in which the Rust compiler disappoints us.

I keep mentioning impl Trait as a generally desirable language feature for writers of asynchronous code. This feature is still a relatively long way from getting precisely defined, much less stabilized. And even then, it will only be a somewhat minor improvement to the async ergonomics.

The wishlist is vastly longer and even more inchoate.

To put it bluntly, right now Rust doesn’t really support the async style at all. The combined API surface of futures/Tokio/Hyper/etc. is a clever but ultimately contrived design, with no intentional backing in the Rust language itself.

This is a stark contrast with numerous other languages, which often support asynchronous I/O as a first-class feature. The list includes at least C#, Python 3.5+, Hack/PHP, and ES8 / JavaScript, as well as basically all the functional languages. They all have dedicated async, await, or equivalent constructs that make the callback-based nature of asynchronous code essentially transparent.

The absence of similar support puts Rust in the same bucket as frontend JavaScript circa 2010, where .then-chaining of promises reigned supreme. This is of course better than the callback hell of early Node, but I wouldn’t think that’s a particularly high bar. In this regard, Rust leaves plenty to be desired.
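To illustrate the gap, compare today’s combinator chains with what a dedicated construct could offer (a sketch with stubbed-out, hypothetical steps):

extern crate futures;

use futures::{future, Future};

// A stub async step, so that the sketch is self-contained.
fn fetch(n: i32) -> Box<Future<Item = i32, Error = ()> + Send> {
    Box::new(future::ok(n))
}

fn main() {
    // The .then-chaining style: every sequential step nests
    // inside another combinator closure.
    let pipeline = fetch(1)
        .and_then(|a| fetch(a + 1))
        .and_then(|b| fetch(b * 2));
    assert_eq!(pipeline.wait(), Ok(4));

    // A hypothetical async/await version would read as straight-line code:
    //     let a = await fetch(1);
    //     let b = await fetch(a + 1);
    //     let result = await fetch(b * 2);
}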

There are proposals, obviously, to bring async coroutines into Rust. There is an even broader wish to make the language cross the OOP/FP fence already and commit to the functional way; this would mean adding an equivalent of Haskell’s do notation.

Either development could be sufficient. Both, however, require a significant amount of design and implementation work. Whichever is solved, it would easily be the most significant addition to the language since its 1.0 release — but the solution is currently in the RFC stages at best.

Future<Ecosystem>

While the core language support is lacking, the Rust community (great as usual) has been picking up some of the slack by establishing and cultivating a steadily growing ecosystem.

The constellation of async-related crates clusters mostly around the two core libraries: the futures crate itself and Tokio. Any functionality you may need while writing asynchronous code can likely be found by searching for one of those two keywords (plus Rust, of course). Another way to find what you need is to look directly at the list of Tokio-related crates.

To be fair, I can’t really say much about the completeness of this ecosystem. The project didn’t really require too many external dependencies — the only relevant ones were:

  • futures_cpupool mentioned before
  • tokio-timer for imposing a timeout on caption requests
  • tokio-signal which handles SIGINT/Ctrl+C and allows for a graceful shutdown

Normally, you’d also want to research the async database drivers for your storage system of choice. I would not expect anything resembling the Diesel ORM crate, though, nor a web framework comparable to Iron, Pencil, or Rocket.

Conclusions

Alright, so what can we get from this overall analysis?

Given the rapid development of the async Rust ecosystem so far, it is clear the technology is very promising. If the community maintains its usual enthusiasm and keeps funneling it into Tokio et al., it won’t be long before it matures into something remarkable.

Right now, however, it exposes way too many rough edges to fully bet on it. Still, there may be some applications where you could get away with an async Rust backend even in production. But personally, I wouldn’t recommend it outside of non-essential services, or tools internal to your organization.

If you do use async Rust for microservices, I’d also advise to take steps to ensure they remain “micro”. Like I’ve elaborated in the earlier sections, there are several issues that make future-based Rust code scale poorly with respect to maintainability. Keeping it simple is therefore essential.

To sum up, async Rust is currently an option only for the adventurous and/or small. Others should stick to a tried & tested solution: something like Java (with Quasar), .NET, Go, or perhaps node.js at the very least.


  1. It is also the crux of parallelism, but that’s different and is not the focus here. 

  2. “Background” here refers to the low-level, innate concurrency of the OS kernel (mediated with hardware interrupts), not the epoll-based event loops on the application side. 

  3. There is a great parallel to be drawn between a trivial echo/Hello world server and a 3D graphics program that only redraws an empty screen. Both may start at some very high performance numbers (requests/frames per second), but once you start adding practical stuff, those metrics must drop hyperbolically.

  4. Technically, you are not, but the alternative is extremely cumbersome.
    In short, you’d have to follow an approach similar to custom Iterators: define a new struct for each individual case (possibly just newtype‘ing an existing one), and then implement the necessary trait for it.
    For iterators, this works reasonably well, and you don’t need custom ones that often anyway. But futures, by their very nature, are meant to encapsulate any computation. For them, “each individual case” is literally every asynchronous function in your code. 
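    For illustration, such a hand-rolled future could look like this (a sketch built from the futures 0.1 primitives):

    extern crate futures;

    use futures::{Async, Future, Poll};

    // A newtype future that doubles the wrapped future's result,
    // trading .map(...).boxed() for a hand-written poll().
    struct Doubled<F>(F);

    impl<F: Future<Item = i32>> Future for Doubled<F> {
        type Item = i32;
        type Error = F::Error;

        fn poll(&mut self) -> Poll<i32, F::Error> {
            match self.0.poll() {
                Ok(Async::Ready(x)) => Ok(Async::Ready(x * 2)),
                other => other, // NotReady and errors pass through unchanged
            }
        }
    }

    fn main() {
        let f = Doubled(futures::future::ok::<i32, ()>(21));
        assert_eq!(f.wait(), Ok(42));
    }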


Iteration patterns for Result & Option

Posted on Mon 10 April 2017 in Code • Tagged with Rust, iterators

Towards the end of my previous post about for loops in Rust, I mentioned how those loops can often be expressed in a more declarative way. This alternative approach involves chaining methods of the Iterator trait to create specialized transformation pipelines:

let odds_squared: Vec<_> = (1..100)
    .filter(|x| x % 2 != 0)
    .map(|x| x * x)
    .collect();

Playground link

Code like this isn’t unique to Rust, of course. Similar patterns are prevalent in functional languages such as F#, and can also be found in Java (Streams), imperative .NET (LINQ), JavaScript (LoDash) and elsewhere.

That said, Rust also has its fair share of unique iteration idioms. In this post, we’re going to explore those arising at the intersection of iterators and the most common Rust enums: Result and Option.

filter_map()

When working with iterators, we’re almost always interested in selecting elements that match some criterion or passing them through a transformation function. It’s not even uncommon to want both of those things, as demonstrated by the initial example in this post.

You can, of course, accomplish those two tasks independently: Rust’s filter and map methods work just fine for this purpose. But there exists an alternative, and in some cases it fits the problem amazingly well.

Meet filter_map. Here’s what the official docs have to say about it:

Creates an iterator that both filters and maps.

Well, duh.

On a more serious note, the common pattern that filter_map simplifies is unwrapping a series of Options. If you have a sequence of maybe-values, and you want to retain only those that are actually there, filter_map can do it in a single step:

// Get all the files matching a glob pattern, via the glob crate.
let some_files: Vec<_> = glob::glob("foo.*").unwrap().map(|x| x.unwrap()).collect();
// Retain only their extensions, e.g. "txt" or "md".
let file_extensions = some_files.iter().filter_map(|p| p.extension());

The equivalent that doesn’t use filter_map would have to split the checking & unwrapping of Options into separate steps:

let file_extensions = some_files.iter().map(|p| p.extension())
    .filter(|e| e.is_some()).map(|e| e.unwrap());

Because of this check & unwrap logic, filter_map can be useful even with a no-op predicate (.filter_map(|x| x)) if we already have the Option objects handy. Otherwise, it’s often very easy to obtain them, which is exactly the case for the Result type:

// Read all text lines from a file:
let lines: Vec<_> = BufReader::new(fs::File::open("file.ext")?)
    .lines().filter_map(Result::ok).collect();

With a simple .filter_map(Result::ok), like above, we can pass through a sequence of Results and yield only the “successful” values. I find this particular idiom to be extremely useful in practice, as long as you remember that Errors will be discarded by it1.

As a final note on filter_map, you need to keep in mind that regardless of how great it often is, not all combinations of filter and map should be replaced by it. When deciding whether it’s appropriate in your case, it is helpful to consider the equivalence of these two expressions:

iter.filter(f).map(m)
iter.filter_map(|x| if f(x) { Some(m(x)) } else { None })

Simply put, if you find yourself writing conditions like this inside filter_map, you’re probably better off with two separate processing steps.

collect()

Let’s go back to the last example with a sequence of Results. Since the final sequence won’t include any Erroneous values, you may be wondering if there is a way to preserve them.

In more formal terms, the question is about turning a vector of results (Vec<Result<T, E>>) into a result with a vector (Result<Vec<T>, E>). We’d like for this aggregated result to only be Ok if all original results were Ok. Otherwise, we should just get the first Error.

Believe it or not, this is probably the most common Rust problem!2

Of course, that doesn’t necessarily mean the problem is particularly hard. Possible solutions exist in both an iterator version:

let result = results.into_iter().fold(Ok(vec![]), |acc, r| match (acc, r) {
    (Ok(mut v), Ok(x)) => { v.push(x); Ok(v) },
    (Ok(_), Err(e)) => Err(e),
    (err, _) => err,  // an earlier element already failed: keep the first error
});

and in a loop form:

let mut result = Ok(vec![]);
for r in results {
    match r {
        Ok(x) => result.as_mut().map(|v| v.push(x)),
        Err(e) => { result = Err(e); break; },
    };
}

but I suspect not many people would call them clear and readable, let alone pretty3.

Fortunately, you don’t need to pollute your codebase with any of those workarounds. Rust offers an out-of-the-box solution which solves this particular problem, and its only flaw is one that I hope to address through this very post.

So, here it goes:

let result: Result<Vec<_>, _> = results.collect();

Yep, that’s all of it.

The background story is that Result<Vec<T>, E> simply “knows” how to construct itself from a sequence of Results. Unfortunately, this API is hidden behind Rust’s iterator abstraction, and specifically the fact that Result implements FromIterator in this particular manner. The way the documentation page for Result is structured, however — with trait implementations at the very end — ensures this useful fact remains virtually undiscoverable.

Because let’s be honest: no one scrolls that far.

Incidentally, Option offers analogous functionality: a sequence of Option<T> can be collected into Option<Vec<T>>, which will be None if any of the input elements were. As you may suspect, this fact is equally hard to find in the relevant docs.
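A quick demonstration (note the explicit type annotations, which tell collect what to build):

// Any single None makes the whole collected Option a None.
let all: Option<Vec<i32>> = vec![Some(1), Some(2)].into_iter().collect();
let none: Option<Vec<i32>> = vec![Some(1), None].into_iter().collect();
assert_eq!(all, Some(vec![1, 2]));
assert_eq!(none, None);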

But the good news is: you know about all this now! :) And perhaps thanks to this post, these handy tricks will become a little better known in the wider Rust community.

partition()

The last technique I wanted to present here follows naturally from the other idioms that apply to Results. Instead of extracting just the Ok values with filter_map, or keeping only the first error through collect, we will now learn how to retain all the errors and all the values, both neatly separated.

The partition method, as this is what the section is about, is essentially a more powerful variant of filter. While the latter only returns items that do match a predicate, partition will also give us the ones which don’t.

Using it to slice an iterable of Results is straightforward:

let (oks, fails): (Vec<_>, Vec<_>) = results.into_iter().partition(Result::is_ok);

The only thing that remains cumbersome is the fact that both parts of the resulting tuple still contain just Results. Ideally, we would like them to be already unwrapped into values and errors, but unfortunately we need to do this ourselves:

let values: Vec<_> = oks.into_iter().map(Result::unwrap).collect();
let errors: Vec<_> = fails.into_iter().map(Result::unwrap_err).collect();

As an alternative, the partition_map method from the itertools crate can accomplish the same thing in a single step, albeit a more verbose one.
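A sketch of that approach, assuming an itertools 0.6-era API:

extern crate itertools;

use itertools::{Either, Itertools};

fn main() {
    let results: Vec<Result<i32, String>> = vec![Ok(1), Err("nope".into()), Ok(3)];
    // partition_map unwraps as it partitions: Lefts land in the
    // first collection, Rights in the second.
    let (values, errors): (Vec<i32>, Vec<String>) =
        results.into_iter().partition_map(|r| match r {
            Ok(v) => Either::Left(v),
            Err(e) => Either::Right(e),
        });
    assert_eq!(values, vec![1, 3]);
    assert_eq!(errors, vec!["nope".to_string()]);
}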


  1. A symmetrical technique is to use .filter_map(Result::err) to get just the Error objects, but that’s probably much less useful as it drops all the successful values. 

  2. Based on my completely unsystematic and anecdotal observations, someone asks about this on the #rust-beginners IRC approximately every other day.

  3. The fold variant is also rife with type inference traps, often requiring explicit type annotations, a “no-op” Err arm in match, or both.