Long Live Dynamic Languages!

Posted on Wed 24 May 2017 in Programming • Tagged with Python, Rust, dynamic languages, dynamic typing, static typing

If you’ve followed a few (or a dozen) of my recent posts, you’ve probably noticed a sizable bias in the choice of topics. The vast majority were about Rust — a native, bare metal, statically typed language with powerful compile time semantics but little in the way of runtime flexibility.

Needless to say, Rust is radically different than (almost the exact opposite of) Python, the other language that I’m covering sometimes. Considering this topical shift, it would be fair to assume that I, too, have subscribed to the whole Static Typing™ trend.

But that wouldn’t be very accurate.

Don’t get me wrong. As far as fashion cycles in the software industry go, the current trend towards static/compiled languages is difficult to disparage. Strong in both hype and merit, it has given us some really innovative & promising solutions (as well as some not-so-innovative ones) that are poised to shape the future of programming for years, if not decades to come. In many ways, it is also correcting mistakes of the previous generation: excessive boilerplate, byzantine abstractions, and software bloat.

What about dynamic languages, then? Are they slowly going the way of the dodo?

Trigger warning: TypeError

Some programmers would certainly wish so.

Indeed, it’s not hard at all to find articles and opinions about dynamic languages that are, well, less than flattering.

The common argument echoed in those accounts points to the supposed unsuitability of Python et al. for any large, multi-person project. The reasoning can be summed up as “good for small scripts and not much else”. Without statically checked types, the argument goes, anything bigger than a quick hack or a prototype shall inevitably become a hairy and dangerous monstrosity.

And when that happens, a single typo can go unchecked and bring down the entire system!…

At the very end of this spectrum of beliefs, some pundits may eventually make the leap from languages to people. If dynamically typed languages (or “untyped” ones, as they’re often mislabeled) are letting even trivial bugs through, then obviously anyone who wants to use them is dangerously irresponsible. It must follow that all they really want is to hack up some shoddy code, yolo it over to production, and let others worry about the consequences.

Mind the gap

It’s likely unproductive to engage with someone who’s that extreme. If the rhetoric is dialed down, however, we can definitely find the edge of reason.

In my opinion, this fine line goes right through the “good in small quantities” argument. I can certainly understand the apprehension towards large projects that utilize dynamically typed languages throughout their codebases. The prospect of such a project is scary, because it contains an additional element of uncertainty. More so than with many other technologies, you ought to know what you’re doing.

Some people (and teams) do. Others, not so much.

I would therefore refine the argument so that it better reflects the strengths and weaknesses of dynamic languages. They are perfectly suited for at least the following cases:

  • anyone writing small, standalone applications or scripts
  • any project (large or small) with a well-functioning team of talented individuals

The sad reality of the software industry is the vast, gaping chasm of calamity and despair that stretches between those two scenarios.

Within lies the bulk of commercial software projects, consistently hamstrung by the usual suspects: incompetent management, unclear and shifting requirements, under- or overstaffing, ancient development practices, lack of coding standards, rampant bureaucracy, inexperienced developers, and so on.

In such an environment, it becomes nigh impossible to capitalize on the strengths of dynamic languages. Instead, the main priority is to protect from even further productivity losses, which is what bog-standard languages like Java, C#, or Go tend to be pretty good at. Rather than to move fast, the objective is to remain moving at all.

Freedom of choice

“But that’s backwards”, the usual retort goes. “Static typing and compilation checks are what enables me to be productive!”

I have no doubt that most people saying this do indeed believe they’re better off programming in static languages. Regardless of what they think, however, there exists no conclusive evidence to back up such claims as a universal rule.

This is of course the perennial problem with software engineering in general, and the project management aspect of it in particular. There is very little proper research on optimal and effective approaches to it, which is why any of the so-called “best practices” are quite likely to stem from unsubstantiated hearsay.

We can lament this state of affairs, of course. But on the other hand, we can also find it liberating. In the absence of rigid prescriptions and judgments about productivity, we are free to explore, within technical limitations, what language works best for us, our team, and our projects.

Sometimes it’ll be Go, Java, Rust, or even Haskell.
A different situation may be best handled by Python, Ruby, or even JavaScript.

As the old adage goes, there is no silver bullet. We should not try to polish static typing into one.

Continue reading

Asynchronous Rust for fun & profit

Posted on Fri 28 April 2017 in Programming • Tagged with Rust, async, Tokio, futures, HTTP

…or: Is Rust webscale?

In this day and age, no language can really make an impact anymore unless it enables its programmers to harness the power of the Internet. Rust is no different here. Despite posing as a true systems language (as opposed to those only marketed as such), it includes highly scalable servers as a prominent objective in its 2017 agenda.

Presumably to satisfy this very objective, the Rust ecosystem has recently seen some major developments in the space of asynchronous I/O. Given the pace of those improvements, it may seem that production quality async services are quite possible already.

But is that so? How exactly do you write async Rust servers in early-to-mid 2017?

To find out, I set out to code up a toy application. Said program was a small intermediary/API server (a “microservice”, if you will) that tries to hit many of the typical requirements that arise in such projects. The main objective was to test the limits of asynchronous Rust, and see how easily (or how hard) they can be pushed.

This post is a summary of all the lessons I’ve learned from that.

It is necessarily quite long, so if you’re looking for a TL;DR, scroll down straight to Conclusions.

Asynchro-what?

Before we dive in, I have to clarify what “asynchronous” means in this context. Those familiar with async concepts can freely skip this section.

Pulling some threads

Asynchronous processing (or async for short) is brought up most often in the context of I/O operations: disk reads, network calls, database queries, and so on.

Relatively speaking, all those tasks tend to be slow: they take orders of magnitude longer than just executing code or even accessing RAM. The “traditional” (synchronous) approach to dealing with them is to relegate those tasks to separate threads.

When one thread has to wait for a lengthy I/O operation to complete, the operating system (its scheduler, to be precise) can suspend that thread. This lets others execute their code in the mean time and not waste CPU cycles.

This is the essence of concurrency1.

Schedule yourself

But threads are not the only option when dealing with many things (i.e. requests) at once.

The alternative approach is one where no threads are automatically suspended or resumed by the OS. Instead, a special version of I/O subroutines allows the program to continue execution immediately after issuing an I/O call. While the operation happens in the background2, the code is given an opaque handle — usually called a promise, a future, or an async result — that will eventually resolve to the actual return value.

The program can wait for the handle synchronously, but it would typically hand it over to an event loop, an abstraction provided by a dedicated async framework. Such a framework (among which node.js is probably the best known example) maintains a list of all I/O “descriptors” (fds in Unix) that are associated with pending I/O operations.

Then, in the loop, it simply waits on all of them, usually via the epoll system call. Whenever an I/O task completes, the loop would execute a callback associated with its result (or promise, or future). Through this callback, the application is able to process it.

In a sense, we can treat the event loop as a dedicated scheduler for its program.

But why?

So, what exactly is the benefit of asynchronous I/O? If anything, it definitely sounds more complicated for the programmer. (Spoiler alert: it is).

The impetus for the development of async techniques most likely came from the C10K problem. The short version of it is that computers are nowadays very fast and should therefore be able to serve thousands of requests simultaneously (especially when those requests are mostly I/O, which translate to waiting time for the CPU).

And if “serving” queries is indeed almost all waiting, then handling thousands of clients should be very possible.

In some cases, however, it was found that when the OS is scheduling the threads, it introduces too much overhead on the frequent pause/resume state changes (context switching). Like I mentioned above, the asynchronous alternative does away with all that, and indeed lets the CPU just wait (on epoll) until something interesting happens. Once it does, the application can deal with it quickly, issue another I/O call, and let the server go back to waiting.

With today’s processing power we can theoretically handle a lot of concurrent clients this way: up to hundreds of thousands or even millions.

Reality check

Well, ain’t that grand? No wonder everyone is writing everything in node.js now!

Jokes aside, the actual benefits of asynchronous I/O (especially when weighed against its inconvenience for developers) are a bit harder to quantify. For one, they rely heavily on the assumption of fast code & slow I/O being valid in all situations.

But this isn’t really self-evident, and becomes increasingly dubious as time goes on and code complexity grows. It should be obvious, for example, that a Python web frontend talking mostly to in-memory caches in the same datacenter will have radically different performance characteristics than a C++ proxy server calling HTTP APIs over public Internet. Those nuances are often lost in translation between simplistic benchmarks and exaggerated blog posts3.

Upon a closer look, however, these details point quite clearly in favor of asynchronous Rust. Being a language that compiles to native code, it should usually run faster than interpreted (Python, Ruby) or even JITed (JVM & .NET) languages, very close to what is typically referred to as “bare metal” speed. For async I/O, it means the event loop won’t be disturbed for a (relatively) long time to do some trivial processing, leading to higher potential throughput of requests.

All in all, it would seem that Rust is one of the few languages where async actually makes sense.

Rust: the story so far

Obviously, this means it’s been built into the language right from the start… right?

Well, not really. It was always possible to use native epoll through FFI, of course, but that’s not exactly the level of abstraction we’d like to work with. Still, the upper layers of the async I/O stack have been steadily growing at least since Rust 1.0.

The major milestones here include mio, a comparatively basic building block that provides an asynchronous version of TCP/IP. It also offers idiomatic wrappers over epoll, allowing us to write our own event loop.

On the application side, the futures crate abstracts the notion of a potentially incomplete operation into, well, a future. Manipulating those futures is how one can now write asynchronous code in Rust.
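As a taste of the combinator style this enables, here is a minimal sketch against the futures 0.1 API of the time (no event loop involved; we simply block on the result at the end):

extern crate futures;

use futures::Future;
use futures::future;

fn main() {
    // A future that is immediately ready with a value...
    let answer = future::ok::<u32, ()>(6)
        // ...transformed through the usual combinators.
        .map(|x| x * 7)
        .and_then(|x| if x == 42 { future::ok(x) } else { future::err(()) });

    // Outside of an event loop, we can block on the future directly.
    assert_eq!(answer.wait(), Ok(42));
}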

More recently, Tokio has been emerging as the de facto framework for async I/O in Rust. It essentially combines the two previously mentioned crates, and provides additional abstractions specifically for network clients and servers.

And finally, the popular HTTP framework Hyper is now also supporting asynchronous request handling via Tokio. What this means is that the bread-and-butter of the Internet’s application layer — API servers talking JSON over HTTP — should now be fully supported by the ecosystem of asynchronous Rust.
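To give a concrete flavor of that stack, here is a minimal sketch of a Hyper service in the style of its 0.11-era API (imports and builder methods are written from memory of that API and may not match other versions):

extern crate futures;
extern crate hyper;

use futures::future::{self, Future};
use hyper::server::{Http, Request, Response, Service};

struct MemeApi;

impl Service for MemeApi {
    type Request = Request;
    type Response = Response;
    type Error = hyper::Error;
    // Without impl Trait, the future type has to be boxed (more on that below).
    type Future = Box<Future<Item = Self::Response, Error = Self::Error>>;

    fn call(&self, _req: Request) -> Self::Future {
        // A real handler would parse the request and caption an image here.
        Box::new(future::ok(Response::new().with_body("Hello, memes!")))
    }
}

fn main() {
    let addr = "127.0.0.1:3000".parse().unwrap();
    let server = Http::new().bind(&addr, || Ok(MemeApi)).unwrap();
    server.run().unwrap();
}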

Let’s take it for a spin then, shall we?

The Grand Project

Earlier on, we have established that the main use case for asynchronous I/O is intermediate microservices. They often sit somewhere between a standard web frontend and a storage server or a database. Because of their typical role within a bigger system, these kinds of projects don’t tend to be particularly exciting on their own.

But perhaps we can liven them up a little.

In the end, it is the Internet we’re talking about here, and everything on the Internet can usually be improved by one simple addition.


[image]

…Okay, two possible additions — the other one being:

Memes!

If you’re really pedantic, you may call them image macros. But regardless of the name, the important part is putting text on pictures, preferably in a funny way.

The microservice I wrote is doing just that. Though it won’t ensure your memes are sufficiently hilarious, it will try to deliver them exactly to your specifications. You may thus think of it as a possible backend for an image site like this one.

Flimsy excuses & post-hoc justifications

It is, of course, a complete coincidence, lacking any premeditation on my part, that when it comes to evaluating an async platform, a service like this fits the bill very well.

And especially when said platform is async Rust.

Why, though, is it such a happy, er, accident?

  • It’s a simple, well-defined application. There is basically a single endpoint, accepting simple input (JSON or query string) and producing a straightforward result (an image). Not having to persist any state made creating an MVP significantly easier.

  • Caching can be used for meme templates and fonts. Besides being an inherent part of most network services, a cache also represents a point of contention for Rust programs. The language is widely known for its allergy to global mutable state, which is exactly what programmatic caches boil down to.

  • Image captioning is a CPU-intensive operation. While the “async” part of async I/O may sometimes go all the way down, many practical services either evolve some important CPU-bound code, or require it right from the start. For this reason, I wanted to check if & how async Rust can mix with threaded concurrency.

  • Configuration knobs can be added. Unlike trivial experiments in the vein of an echo or “Hello world” server, this kind of service warrants some flags that the user could tweak, like the number of image captioning threads, or the size of the template cache. We can see how easy (or how hard) it is to make them applicable across all future-based requests.

All in all, and despite its frivolous subject matter, a meme server is actually hitting quite a few notable spots in the microservice domain.

Learnings

As you may glean from its GitHub repo, it would seem that the experiment was successful. Sure, you could implement some features in the captioning department (supporting animated GIFs comes to mind), but none of these are pertinent to the async mechanics of the server.

And since it’s the async (I/O) in Rust that we’re interested in, let me now present you with an assorted collection of practical experiences with it.

>0-cost futures

If you read the docs’ preamble to the futures crate, you will see it mentioning the “zero-cost” aspect of the library. Consistent with the philosophy behind Rust, it proclaims to deliver its abstractions without any overhead.

Thing is, I’m not sure how this promise can be delivered on in practice.

Flip through the introductory tutorial to Tokio, for example, and you will already find plenty of compromises. Without the crucial (but nightly-only) impl Trait feature, you are basically required to put all your futures in a Box4. They even encourage it themselves, offering a convenient Future::boxed method exactly for this purpose, as well as the matching BoxFuture typedef right in the crate.
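To make the contrast concrete, here is a sketch (the function names are made up; the second variant compiles only on nightly with the relevant feature gate enabled at the crate root):

extern crate futures;

use futures::Future;
use futures::future;

// Stable Rust circa 2017: the concrete future type is erased behind a Box.
fn double_boxed(x: u32) -> Box<Future<Item = u32, Error = ()>> {
    Box::new(future::ok::<u32, ()>(x).map(|x| x * 2))
}

// Nightly Rust with impl Trait: the concrete type stays known to the compiler.
fn double_impl(x: u32) -> impl Future<Item = u32, Error = ()> {
    future::ok::<u32, ()>(x).map(|x| x * 2)
}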

But hey, you can always just use nightly Rust, right? impl Trait will stabilize eventually, so your code should be, ahem, future-proof either way.

Unfortunately, this assumes all the futures that you’re building your request handlers from shall never cross any thread boundaries. (BoxFuture, for example, automatically constrains them to be Send). As you’ve likely guessed, this doesn’t jive very well with computationally intensive tasks which are best relegated to a separate thread.

To deal with them properly, you’re going to need a thread pool-based executor, which is currently implemented in the futures_cpupool crate. Using it requires a lot of care, though, and a deep understanding of both types of concurrency involved.
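A minimal sketch of that setup, based on the futures_cpupool 0.1 API (the caption_image function is a made-up stand-in for the actual CPU-bound work):

extern crate futures;
extern crate futures_cpupool;

use futures::Future;
use futures_cpupool::CpuPool;

// Hypothetical CPU-heavy work standing in for real image processing.
fn caption_image(text: &str) -> Vec<u8> {
    text.bytes().rev().collect()
}

fn main() {
    let pool = CpuPool::new(4);  // four worker threads

    // spawn_fn() runs the closure on the pool and returns a future
    // that resolves once the work is done.
    let captioned = pool.spawn_fn(|| {
        let image = caption_image("such async, very wow");
        Ok::<_, ()>(image)
    });

    // In the real service this future would be chained into the request
    // handler; here we simply block on it.
    let image = captioned.wait().unwrap();
    assert!(!image.is_empty());
}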

Evidently, this was something that I lacked at the time, which is why I encountered problems ensuring that my futures are properly Send. In the end, I settled on making them Send in the most straightforward (and completely unnecessary) manner: by wrapping them in Arc/Mutex. That in itself wasn’t without its perils, but at least allowed me to move forward.

Ironically, this also shows an important, pragmatic property of the futures’ system: sub-par hacks around it are possible — a fact you’ll be glad to know about on the proverbial day before a deadline.

Templates-worthy error messages

Other significant properties of the futures’ abstraction shall include telling the programmer what’s wrong with his code in the simplest, most straightforward, and concise manner possible.

Here, let me show you an example:


[error message screenshot]

…which you can also behold in its gist form.

The reason you will encounter such incomprehensible messages stems from the very building blocks of async code.

Right now, each chained operation on a future — map, and_then, or_else, and so on — produces a nested type. Every subsequent application of those methods “contains” (in terms of the type system) all the previous ones. Keep going, and it will eventually balloon into one big onion of Chain<Map<OrElse<Chain<Map<...etc...>>>>>.


Futures are like ogres.
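Here is a toy illustration of that layering (the type spelled out in the comment is abbreviated, but it is roughly what the compiler will throw back at you):

extern crate futures;

use futures::Future;
use futures::future;

fn main() {
    let f = future::ok::<u32, ()>(1)
        .map(|x| x + 1)
        .and_then(|x| future::ok(x * 2))
        .or_else(|_: ()| future::ok::<u32, ()>(0));
    // The type of `f` is already something like:
    //   OrElse<AndThen<Map<FutureResult<u32, ()>, {closure}>, ...>, ...>
    // and every further combinator wraps yet another layer around it.
    assert_eq!(f.wait(), Ok(4));
}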

I haven’t personally hit any compiler limits in this regard, but I’m sure it is plausible for a complicated, real-world program.

It also gets worse if you use nightly Rust with impl Trait. In this case, function boundaries no longer “break” type stacking via Boxing the results into trait objects. Indeed, you can very well end up with some truly gigantic constructs as the compiler tries to represent the return types of your most complex handlers.

But even if rustc is up to snuff and can deal with those fractals just fine, it doesn’t necessarily mean the programmer can. Looking at those error messages, I had vivid flashbacks from hacking on C++ templates with ancient compilers like VS2005. The difference is, of course, that we’re not trying any arcane metaprogramming here; we just want to do some relatively mundane I/O.

I have no doubt the messaging will eventually improve, and the mile-long types will at least get pretty-printed. At the moment, however, prepare for some serious squinting and bracket-counting.

Where is my (language) support?

Sadly, those long, cryptic error messages are not the only way in which the Rust compiler disappoints us.

I keep mentioning impl Trait as a generally desirable language feature for writers of asynchronous code. This improvement is still a relatively long way from getting precisely defined, much less stabilized. And it is only a somewhat minor improvement in the async ergonomics.

The wishlist is vastly longer and even more inchoate.

To put it bluntly, right now Rust doesn’t really support the async style at all. All the combined API surface of futures/Tokio/Hyper/etc. is a clever, but ultimately contrived design, and it has no intentional backing in the Rust language itself.

This is a stark contrast with numerous other languages. They often support asynchronous I/O as something of a first class feature. The list includes at least C#, Python 3.5+, Hack/PHP, ES8 / JavaScript, and basically all the functional languages. They all have dedicated async, await, or equivalent constructs that make the callback-based nature of asynchronous code essentially transparent.

The absence of similar support puts Rust in the same bucket as frontend JavaScript circa 2010, where .then-chaining of promises reigned supreme. This is of course better than the callback hell of early Node, but I wouldn’t think that’s a particularly high bar. In this regard, Rust leaves plenty to be desired.

There are proposals, obviously, to bring async coroutines into Rust. There is an even broader wish to make the language cross the OOP/FP fence already and commit to the functional way; this would mean adding an equivalent of Haskell’s do notation.

Either development could be sufficient. Both, however, require a significant amount of design and implementation work. If solved now, this would easily be the most significant addition to the language since its 1.0 release — but the solution is currently in the RFC stages at best.

Future<Ecosystem>

While the core language support is lacking, the Rust community (great as usual) has been picking up some of the slack by establishing and cultivating a steadily growing ecosystem.

The constellation of async-related crates clusters mostly around the two core libraries: the futures crate itself and Tokio. Any functionality you may need while writing asynchronous code should likely be found quite easily by searching for one of those two keywords (plus Rust, of course). Another way of finding what you need is to look at the list of Tokio-related crates directly.

To be fair, I can’t really say much about the completeness of this ecosystem. The project didn’t really require too many external dependencies — the only relevant ones were:

  • futures_cpupool mentioned before
  • tokio-timer for imposing a timeout on caption requests
  • tokio-signal which handles SIGINT/Ctrl+C and allows for a graceful shutdown

Normally, you’d also want to research the async database drivers for your storage system of choice. I would not expect anything resembling the Diesel ORM crate, though, nor a web framework comparable to Iron, Pencil, or Rocket.

Conclusions

Alright, so what can we get from this overall analysis?

Given the rapid development of the async Rust ecosystem so far, it is clear the technology is very promising. If the community maintains its usual enthusiasm and keeps funneling it into Tokio et al., it won’t be long before it matures into something remarkable.

Right now, however, it exposes way too many rough edges to fully bet on it. Still, there may be some applications where you could get away with an async Rust backend even in production. But personally, I wouldn’t recommend it outside of non-essential services, or tools internal to your organization.

If you do use async Rust for microservices, I’d also advise to take steps to ensure they remain “micro”. Like I’ve elaborated in the earlier sections, there are several issues that make future-based Rust code scale poorly with respect to maintainability. Keeping it simple is therefore essential.

To sum up, async Rust is currently an option only for the adventurous and/or small. Others should stick to a tried & tested solution: something like Java (with Quasar), .NET, Go, or perhaps node.js at the very least.


  1. It is also the crux of parallelism, but that’s different and is not the focus here. 

  2. “Background” here refers to the low level, innate concurrency of the OS kernel (mediated with hardware interrupts), not the epoll-based event loops on the application side. 

  3. There is a great parallel to be drawn between a trivial echo/Hello world server, and a 3D graphics program that only redraws an empty screen. Both may start at some very high performance numbers (requests/frames per second) but once you start adding practical stuff, those metrics must drop hyperbolically. 

  4. Technically, you are not, but the alternative is extremely cumbersome.
    In short, you’d have to follow an approach similar to custom Iterators: define a new struct for each individual case (possibly just newtype‘ing an existing one), and then implement the necessary trait for it.
    For iterators, this works reasonably well, and you don’t need custom ones that often anyway. But futures, by their very nature, are meant to encapsulate any computation. For them, “each individual case” is literally every asynchronous function in your code. 

Continue reading

Iteration patterns for Result & Option

Posted on Mon 10 April 2017 in Code • Tagged with Rust, iterators

Towards the end of my previous post about for loops in Rust, I mentioned how those loops can often be expressed in a more declarative way. This alternative approach involves chaining methods of the Iterator trait to create specialized transformation pipelines:

let odds_squared: Vec<_> = (1..100)
    .filter(|x| x % 2 != 0)
    .map(|x| x * x)
    .collect();

Playground link

Code like this isn’t unique to Rust, of course. Similar patterns are prevalent in functional languages such as F#, and can also be found in Java (Streams), imperative .NET (LINQ), JavaScript (LoDash) and elsewhere.

That said, Rust also has its fair share of unique iteration idioms. In this post, we’re going to explore those arising at the intersection of iterators and the most common Rust enums: Result and Option.

filter_map()

When working with iterators, we’re almost always interested in selecting elements that match some criterion or passing them through a transformation function. It’s not even uncommon to want both of those things, as demonstrated by the initial example in this post.

You can, of course, accomplish those two tasks independently: Rust’s filter and map methods work just fine for this purpose. But there exists an alternative, and in some cases it fits the problem amazingly well.

Meet filter_map. Here’s what the official docs have to say about it:

Creates an iterator that both filters and maps.

Well, duh.

On a more serious note, the common pattern that filter_map simplifies is unwrapping a series of Options. If you have a sequence of maybe-values, and you want to retain only those that are actually there, filter_map can do it in a single step:

// Get the sequence of all files matching a glob pattern via the glob crate.
let some_files = glob::glob("foo.*").unwrap().map(|x| x.unwrap());
// Retain only their extensions, e.g. ".txt" or ".md".
let file_extensions = some_files.filter_map(|p| p.extension());

The equivalent that doesn’t use filter_map would have to split the checking & unwrapping of Options into separate steps:

let file_extensions = some_files.map(|p| p.extension())
    .filter(|e| e.is_some()).map(|e| e.unwrap());

Because of this check & unwrap logic, filter_map can be useful even with a no-op predicate (.filter_map(|x| x)) if we already have the Option objects handy. Otherwise, it’s often very easy to obtain them, which is exactly the case for the Result type:

// Read all text lines from a file:
let lines: Vec<_> = BufReader::new(fs::File::open("file.ext")?)
    .lines().filter_map(Result::ok).collect();

With a simple .filter_map(Result::ok), like above, we can pass through a sequence of Results and yield only the “successful” values. I find this particular idiom to be extremely useful in practice, as long as you remember that Errors will be discarded by it1.

As a final note on filter_map, you need to keep in mind that regardless of how great it often is, not all combinations of filter and map should be replaced by it. When deciding whether it’s appropriate in your case, it is helpful to consider the equivalence of these two expressions:

iter.filter(f).map(m)
iter.filter_map(|x| if f(x) { Some(m(x)) } else { None })

Simply put, if you find yourself writing conditions like this inside filter_map, you’re probably better off with two separate processing steps.

collect()

Let’s go back to the last example with a sequence of Results. Since the final sequence won’t include any Erroneous values, you may be wondering if there is a way to preserve them.

In more formal terms, the question is about turning a vector of results (Vec<Result<T, E>>) into a result with a vector (Result<Vec<T>, E>). We’d like for this aggregated result to only be Ok if all original results were Ok. Otherwise, we should just get the first Error.

Believe it or not, but this is probably the most common Rust problem!2

Of course, that doesn’t necessarily mean the problem is particularly hard. Possible solutions exist in both an iterator version:

let result = results.into_iter().fold(Ok(vec![]), |mut v, r| match r {
    Ok(x) => { v.as_mut().map(|v| v.push(x)); v },
    Err(e) => Err(e),
});

and in a loop form:

let mut result = Ok(vec![]);
for r in results {
    match r {
        Ok(x) => result.as_mut().map(|v| v.push(x)),
        Err(e) => { result = Err(e); break; },
    };
}

but I suspect not many people would call them clear and readable, let alone pretty3.

Fortunately, you don’t need to pollute your codebase with any of those workarounds. Rust offers an out-of-the-box solution which solves this particular problem, and its only flaw is one that I hope to address through this very post.

So, here it goes:

let result: Result<Vec<_>, _> = results.collect();

Yep, that’s all of it.

The background story is that Result<Vec<T>, E> simply “knows” how to construct itself from a sequence of Results. Unfortunately, this API is hidden behind Rust’s iterator abstraction, and specifically the fact that Result implements FromIterator in this particular manner. The way the documentation page for Result is structured, however — with trait implementations at the very end — ensures this useful fact remains virtually undiscoverable.

Because let’s be honest: no one scrolls that far.

Incidentally, Option offers analogous functionality: a sequence of Option<T> can be collected into Option<Vec<T>>, which will be None if any of the input elements were. As you may suspect, this fact is equally hard to find in the relevant docs.
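A quick illustration of both collects side by side:

fn main() {
    // All elements are Some, so we get Some of a vector.
    let all: Option<Vec<u32>> = vec![Some(1), Some(2), Some(3)].into_iter().collect();
    assert_eq!(all, Some(vec![1, 2, 3]));

    // A single None makes the whole result None.
    let none: Option<Vec<u32>> = vec![Some(1), None, Some(3)].into_iter().collect();
    assert_eq!(none, None);

    // The analogous behavior for Result, as described above.
    let failed: Result<Vec<u32>, &str> = vec![Ok(1), Err("oops"), Ok(3)].into_iter().collect();
    assert_eq!(failed, Err("oops"));
}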

But the good news is: you know about all this now! :) And perhaps thanks to this post, those handy tricks will become a little better known in the wider Rust community.

partition()

The last technique I wanted to present here follows naturally from the other idioms that apply to Results. Instead of extracting just the Ok values with filter_map, or keeping only the first error through collect, we will now learn how to retain all the errors and all the values, both neatly separated.

The partition method, as this is what the section is about, is essentially a more powerful variant of filter. While the latter only returns items that do match a predicate, partition will also give us the ones which don’t.

Using it to slice an iterable of Results is straightforward:

let (oks, fails): (Vec<_>, Vec<_>) = results.partition(Result::is_ok);

The only thing that remains cumbersome is the fact that both parts of the resulting tuple still contain just Results. Ideally, we would like them to be already unwrapped into values and errors, but unfortunately we need to do this ourselves:

let values: Vec<_> = oks.into_iter().map(Result::unwrap).collect();
let errors: Vec<_> = fails.into_iter().map(Result::unwrap_err).collect();

As an alternative, the partition_map method from the itertools crate can accomplish the same thing in a single step, albeit a more verbose one.
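For reference, a sketch of how that looks, based on partition_map and the Either type as exposed by itertools around that time:

extern crate itertools;

use itertools::{Either, Itertools};

fn main() {
    let results: Vec<Result<u32, String>> = vec![Ok(1), Err("two".into()), Ok(3)];

    // Each element is routed to the left or right collection,
    // already unwrapped along the way.
    let (values, errors): (Vec<u32>, Vec<String>) = results
        .into_iter()
        .partition_map(|r| match r {
            Ok(v) => Either::Left(v),
            Err(e) => Either::Right(e),
        });

    assert_eq!(values, vec![1, 3]);
    assert_eq!(errors, vec!["two".to_string()]);
}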


  1. A symmetrical technique is to use .filter_map(Result::err) to get just the Error objects, but that’s probably much less useful as it drops all the successful values. 

  2. Based on my completely unsystematic and anecdotal observations, someone asks about this on the #rust-beginners IRC approximately every other day. 

  3. The fold variant is also rife with type inference traps, often requiring explicit type annotations, a “no-op” Err arm in match, or both. 

Continue reading

The “let” type trick in Rust

Posted on Wed 01 February 2017 in Code • Tagged with Rust, types, pattern matching

Here’s a neat little trick that’s especially useful if you’re just starting out with Rust.

Because the language uses type inference all over the place (or at least within a single function), it can often be difficult to figure out the type of an expression by yourself. Such knowledge is very handy in resolving compiler errors, which may be rather complex when generics and traits are involved.

The formula itself is very simple. Its shortest, most common version — and arguably the cleverest one, too — is the following let binding:

let () = some_expression;

In virtually all cases, this binding will cause a type error on its own, so it’s not something you’d leave permanently in your regular code.

But the important part here is the exact error message you get:

error[E0308]: mismatched types
  --> <anon>:42:13
   |
42 |         let () = some_expression;
   |             ^^ expected f64, found ()
   |
   = note: expected type `f64`
   = note:    found type `()`

The type expected by Rust here (in this example, f64) is also the type of some_expression. No more, no less.

There is nothing particularly wrong with using this technique and not caring too much how it works under the hood. But if you do want to know a little more what exactly is going on here, the rest of this post covers it in some detail.

The unit

Firstly, you may be wondering about this curious () type that the compiler has apparently found in the statement above. The official name for it is the unit type, and it has several notable characteristics:

  1. There exists only one value1 of this type: () (same symbol as the type itself).
  2. It represents an empty tuple and has therefore the size of zero.
  3. It is the type of any expression that’s turned into a statement.

That last fact is particularly interesting, as it makes () appear in error messages that are more indicative of syntactic mishaps rather than mismatched types:

fn positive_signum(x: i32) -> i32 {
    if x > 0 { 1i32 }
    0i32
}

error[E0308]: mismatched types
 --> <anon>:2:17
  |
2 |     if x > 0 { 1i32 }
  |                ^^^^ expected (), found i32
  |
  = note: expected type `()`
  = note:    found type `i32`

If you think about it, however, it makes perfect sense. The last expression inside a function body is the return value. This also means that everything before it has to be a statement: an expression of type ().

Working its way backward, Rust will therefore expect only such expressions before the final 0i32. This, in turn, puts the same constraint on the body of the if statement. The expression 1i32 (with its type of i32) clearly violates it, causing the above error2.
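For completeness, here are the two fixes described in footnote 2 spelled out:

// Early return inside the if:
fn positive_signum_v1(x: i32) -> i32 {
    if x > 0 { return 1i32; }
    0i32
}

// Or the arguably more idiomatic else branch, which turns the entire
// if construct into the last (and only) expression in the function body:
fn positive_signum_v2(x: i32) -> i32 {
    if x > 0 { 1i32 } else { 0i32 }
}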

“Expanded” version

A natural question now arises: is () inside of the let () = ... formula a type () or a value ()?…

To answer that, it’s quite helpful to compare and contrast the original binding with its longer “equivalent”:

let _: () = some_expression;

This statement is conceptually very similar to our original one. The error message it causes can also be used to debug issues with type inference.

Despite some cryptic symbols, the syntax here should also be more familiar. It occurs in many typical, ordinary bindings you can see in everyday Rust code. Here’s an example:

let x: i32 = 42;

where it’s abundantly clear that i32 is the type of variable x.

Analogously above, you can see that an unnamed symbol (_, the underscore) is declared to be of type ().

So in this alternate phrasing, () denotes a type.

Let a pattern emerge

What about the original form, let () = ...? There is no explicit type declaration here (i.e. no colon), and a pair of empty parentheses isn’t a name that could be assigned a new value.

What exactly is happening there, then?…

Well, it isn’t really anything special. While it may look exceptional, and totally unlike common usages of let, it is in fact exactly the same thing as a mundane let x = 5. The potential misconception here is about the exact meaning of x.

The simple version is that it’s a name for the bound expression.
But the actual truth is that it’s a pattern which is matched against that expression.

The terms “pattern” and “matching” here refer to the same mechanism that occurs within the match statement. You could even imagine a peculiar form of desugaring, where a let statement is converted into a semantically equivalent match:

fn original() -> i32 {
    let x = 5;
    let y = 6;
    x + y
}

fn desugared() -> i32 {
    match 5 {
        x => match 6 {
            y => x + y
        }
    }
}

This analogy works perfectly3, because the patterns here are irrefutable: any value can match them, as all we’re doing is giving the value a name. Should the case be any different, Rust would reject our let statement — just like it rejects a match block that doesn’t include branches for all possible outcomes.

An empty pattern

But just because a pattern has to always match the expression, it doesn’t mean only simple identifiers like x or y are permitted in let. If Rust is able to statically ensure a match, it is perfectly OK to use a pattern with an internal structure4:

use std::num::Wrapping;
let Wrapping(x) = Wrapping(42);

Of course, something like this is just superfluous and silly. The same mechanism, however, is also behind the ability to “initialize multiple variables”:

let (x, y) = (0, 1);

What really happens is that we take a tuple expression (0, 1) and match it against a pattern (x, y). Because it is trivially satisfied, we have the symbols x and y bound to the tuple elements. For all intents and purposes, this is equivalent to having two separate let statements:

let x = 0;
let y = 1;

Of course, a 2-tuple is not the only pattern of this kind we can use in let. Other possible patterns include, for example, the 0-tuple.

Or, as we express it in Rust, ():

let () = ();

Now that’s a truly useless statement! But it also harkens straight to our debug binding. It should be pretty clear now how it works:

  • The () stanza on the left is neither a type nor a name, but a pattern.
  • The expression on the right is being matched against this pattern.
  • Because the types of both of those things differ, the compiler signals an appropriate error.

The curious thing is that there is nothing inherently magical about using () on the left hand side. It’s simply the shortest pattern we can put after let. It’s also one that’s extremely unlikely to actually match the right hand side, which ensures we get the desired error. But if you substituted something equally exotic and rare — say, (x, ((y, z), Wrapping(w))) — it would work equally well as a rudimentary type detector.

Except for one thing, of course: nobody wants to type this much! Borne out of this frugality (and/or laziness), a custom thus emerged to use ().

Short, sweet, and clever.


  1. A more formal, type-theoretic formulation of this fact is saying that () is inhabited by only one value. 

  2. In case you are wondering, one possible fix here is to return 1i32; inside the if. An (arguably more idiomatic) alternative is to put 0i32 in an else branch, turning the entire if construct into the last — and only — expression in the function body. 

  3. Note how each nested match is also introducing a new scope, exactly like the canonical desugaring of let which is often used to explain lifetimes and borrowing. 

  4. Unfortunately, Rust isn’t currently capable of proving that the pattern is irrefutable in all obvious cases. For example, let Some(x) = Some(42); will be rejected due to the existence of a None variant in Option, even though it isn’t actually used in the (constant) expression on the right. 

Continue reading

Better location for unit tests in Rust

Posted on Fri 06 January 2017 in Code • Tagged with Rust, unit tests, testing, modules

For a unit test to be comprehensive, it must often access some private symbols from the module it checks.

In Rust, this is permitted for submodules: they can freely refer to anything defined “upwards” in the module hierarchy. The only requirement is that they import it explicitly by name, using statements such as use super::foo.

To illustrate this, here’s an example of a ridiculously well-factored FizzBuzz along with its accompanying unit test:

use std::borrow::Cow;

pub fn fizzbuzz(n: u32) {
    for i in 1..n+1 {
        println!("{}", fizzbuzz_string(i));
    }
}

fn fizzbuzz_string(i: u32) -> Cow<'static, str> {
    let by3 = i % 3 == 0;
    let by5 = i % 5 == 0;
    if by3 && by5 { "FizzBuzz".into() }
    else if by3   { "Fizz".into() }
    else if by5   { "Buzz".into() }
    else          { format!("{}", i).into() }
}


#[cfg(test)]
mod tests {
    use super::fizzbuzz_string;

    #[test]
    fn single_numbers() {
        assert_eq!("1", fizzbuzz_string(1));
        assert_eq!("2", fizzbuzz_string(2));
        assert_eq!("Fizz", fizzbuzz_string(3));
        assert_eq!("Buzz", fizzbuzz_string(5));
        assert_eq!("7", fizzbuzz_string(7));
        assert_eq!("Fizz", fizzbuzz_string(9));
        assert_eq!("Buzz", fizzbuzz_string(10));
        assert_eq!("FizzBuzz", fizzbuzz_string(15));
        // etc.
    }
}

The internal function, as shown above, can be imported and verified independently of the public one. This is done through a #[test] procedure in an inline submodule.

Such factorization and granular testing is commonplace, especially when the public API may cause unwanted side effects, such as printing stuff to stdout here.

The issue of length

But if you are like me and prefer your modules to be short and sweet, you may feel justifiably concerned about this inline submodule business.

In the toy example above, tests have already taken at least as many lines as the actual code. The real world usually matches this ratio. A module with a couple hundred lines of regular code starts to be measured in KLOCs if we also include its tests.

While this could be taken as a strong hint to split things up, it can just as easily disincentivize testing instead.

The obvious solution is to move those tests somewhere else. What is not so evident is how to preserve this crucial module-submodule relation, enabling us to write comprehensive tests in the first place.

Looking for inspiration

I must quickly disappoint anyone who would like to round up all their unit tests and sequester them in some distant tests/ directory. Such layout is reserved for crate-level (“integration”) tests. Unit tests, on the other hand, are predestined to live among production code1.

So let’s at least relocate them to separate files.

To make this goal more concrete, we will try to emulate the project layout described in Google’s C++ style guide. By this convention, a conceptual “module” or “unit” consists of the following files:

  • foo.h
  • foo.cc
  • foo_test.cc

Translating this to Rust, we get:

  • foo.rs
  • foo_test.rs

The first one is obviously our production code. The second file, foo_test.rs, contains all the tests we would previously put in the mod tests { } construct.

Seems pretty clean and straightforward, right? Unfortunately, Rust will not accept this setup without some convincing.

Family problems

To understand why, recall that the mere presence of some .rs files is not enough for the Rust compiler to care. If we want them picked up and included in the project, we also need to add some module declarations first.

In other words, there must also be a mod.rs file in this directory, containing at the very least the following content:

// (mod.rs)

mod foo;
#[cfg(test)]
mod foo_test;

Now it should be clearer that something is wrong.

We got two modules here, but they are siblings. Both foo and foo_test are on the same level, children of whatever parent module contains them both. More to the point, it’s foo_test that’s not a child module of foo, meaning it can only see the public symbols of the latter.

This is not quite enough to write a proper unit test. It definitely isn’t for our initial FizzBuzz example, because the fizzbuzz_string function cannot even be imported!

Existential crises

Okay, so how about we move the mod foo_test; declaration to foo.rs? This should be enough to establish the proper hierarchy. After all, this is how the module tree is normally reconstructed: from the appropriate placement of the mod statements.

So, here we go:

// (foo.rs)

#[cfg(test)]
mod foo_test;

error: cannot declare a new module at this location
  --> src/parent/foo.rs:4:5
   |
 4 | mod foo_test;

…Really?

Well, yes. A declaration like this simply isn’t allowed. The reason for this is actually much less arbitrary than the error message would indicate.

To put it bluntly, foo_test simply cannot exist if it’s introduced there. To deliver on its declaration promise, the submodule would have to reside within foo itself. But of course, foo.rs is just a file, so this setup is evidently impossible.

All in all, Rust seems to be looking for our module in all the wrong places.

Perhaps we can just tell it where it should be going instead?…

The right path

Enter the #[path] attribute, which fulfills this exact purpose:

// (foo.rs)

#[cfg(test)]
#[path = "./foo_test.rs"]
mod foo_test;

#[path] tells the Rust compiler where to look for the module it is attached to. Its argument is relative to the location of the outer module (like foo here), and can be either a single file, or a directory with mod.rs.

Conceptually, this is similar to a custom ClassLoader in Java, or the common sys.path hacks in Python. Unlike those two languages, however, the #[path] attribute is only relevant at compile time.

Additionally, and somewhat confusingly, #[path] can also be applied retroactively to a module that the compiler has already located. In such case, it will affect the lookup of any child modules by making rustc search for them in the new location.
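A sketch of that retroactive usage (the layout is hypothetical; roughly, child modules declared inside the annotated module are then searched for under the given directory):

// (mod.rs)

// The attribute below does not load a file by itself (the module has an
// inline body), but it changes where the child modules are looked up:
// roughly, under a sibling unit_tests/ directory next to this file.
#[cfg(test)]
#[path = "unit_tests"]
mod unit_tests {
    mod foo_test;   // e.g. unit_tests/foo_test.rs
    mod bar_test;   // e.g. unit_tests/bar_test.rs
}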


With #[path] handy, it is therefore possible to implement custom layouts of regular source modules and test files.

But like with every tool that can be used to defy conventions, it should be used with the appropriate care. While a straightforward and self-documenting approach described here is unlikely to raise any eyebrows, rewriting module paths willy-nilly is most certainly a bad idea.


  1. Okay, technically it is possible to completely isolate them, essentially by abusing the approach I describe later in this post. 

Continue reading

A tale of two Rusts

Posted on Sat 24 December 2016 in Programming • Tagged with Rust, nightly Rust, stable Rust, Rocket.rs

The writing has been on the wall for many months now, but I think the time has come when we can officially declare it.

Stable Rust is dead. Nightly Rust is the only Rust.

Say what?

If you’re out of the loop, Rust is this newfangled system programming language. Rust is meant to fit in the niches normally occupied by C, so its domain includes performance-sensitive and safety-critical applications. Embedded programming, OS kernels, databases, servers, and similar low-level pieces of computing and networking infrastructure are all within its purview.

Of course, this “replacing C” thing is still an ambition that’s years or decades away. But in theory, there is nothing preventing it from happening. The main thing Rust would need here is time: time to earn the trust of developers by being used in real-world, production scenarios without issues.

To facilitate this (and for other reasons), Rust has been using three release channels with varying frequency of updates. There are the stable, beta, and nightly Rust. Of those, beta is pretty much an RC for a future stable release, so there aren’t many differences at all between the first two channels.

Nightly perks

This cannot be said about nightly.

In fact, nightly Rust is essentially its own language.

First, there are a number of exclusive language features that are only available on nightly. They are all guarded by numerous #![feature(...)] gates which are required to activate them. Because stable Rust doesn’t accept any such directive, trying to compile code that uses them will fail on a non-nightly compiler.
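For instance, enabling the then-unstable impl Trait support looked roughly like this (conservative_impl_trait was the gate’s name at the time):

// Nightly-only: a stable compiler rejects the gate below outright.
#![feature(conservative_impl_trait)]

fn evens() -> impl Iterator<Item = u32> {
    (0u32..).map(|x| x * 2)
}

fn main() {
    let first: Vec<u32> = evens().take(3).collect();
    assert_eq!(first, vec![0, 2, 4]);
}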

This has been justified as a necessary step for testing out new features in real scenarios, or at least those that resemble (stable) reality as close as possible. Indeed, many features did eventually land in stable Rust by going through this route — a recent example would be the ? operator, an error-handling measure analogous to the try! macro.

But some features take a lot of time to stabilize. And a few (like zero_one, which guards the numeric traits Zero and One) may even be deprecated without ever getting out of the nightly channel.

Unplugged

Secondly, and most importantly, there is at least one feature that won’t get stabilized ever:

#![feature(plugin)]

And it’s all by design.

This plugin switch is what’s necessary to include #![plugin(...)] directives. Those in turn activate compiler plugins: user-provided additions to the compiler itself. Plugins operate against the API provided directly by rustc and enhance its capabilities beyond what the language normally provides.

Although it sounds rather ominous, the vast majority of plugins in the wild serve a singular purpose: code generation. They are written with the sole purpose of combating Rust’s rigidity, including the (perfectly expected) lack of dynamic runtime capabilities and the (disappointingly) stiff limits of its wanting macro system.

This is how they are utilized by Diesel, for example, a popular ORM and SQL query interface; or Serde, a serialization framework.
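The wiring typically looked something like this (shown here with the pre-1.0 Serde plugin as I remember it; the exact gates and plugin names varied between crates and versions):

// Nightly-only: both attributes are rejected by a stable compiler.
#![feature(plugin, custom_derive)]
#![plugin(serde_macros)]

// The plugin then generates the (de)serialization code at compile time.
#[derive(Serialize, Deserialize)]
struct Meme {
    template: String,
    top_text: String,
    bottom_text: String,
}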

Why can compiler plugins never be stable, though? It’s because the internal API they are coded against goes too deep into the compiler bowels to ever get stabilized. If it were, it would severely limit the ability to further develop the language without significant breakage of the established plugins.

Pseudo-stable

“Wait,” you may ask, “how do we even talk about «established» compiler plugins? Shouldn’t they be, by their very definition, unstable?”

Well… yes. They definitely should. And therein lies the crux of the problem.

Turns out, plugins & nightly Rust are only mostly treated as unstable.

In reality, the comfort and convenience provided by nightly versions of many libraries — all of which rely on compiler plugins — is difficult to overstate. While their stable approximations are available, they at best require rather complicated setup.

What’s always involved is a custom build step, and usually a separate file for the relevant code symbols and declarations. In the end, we get a bunch of autogenerated modules whose prior non-existence during development may also confuse IDEs and autocompletion tools.

For all those reasons and more, an ecosystem has developed where several popular libraries are “nightly but pseudo-stable”. This includes some key components in many serious applications, like the aforementioned ORM & serialization crates.

The precedent

And so has been the state of affairs until very recently. The nightly Rust has been offering some extremely enticing features, but the stable channel was at least paid a lip service to. However, the mentality among library authors that “nightly-first” is an acceptable policy has been strong for a long time now.

No wonder it has finally shifted towards “nightly-only”.

Meet Rocket, the latest contestant in the already rich lineup of Rust web frameworks. Everything about it is really slick: a flashy designer website; approachable and comprehensive documentation; and concise, Flask-like API for routing and response handling. Predictably, it’s been making quite a buzz on Reddit and elsewhere.

There is just an itty bitty little problem: Rocket only works on nightly. No alternatives, no codegen shims… and no prospects of any change in the foreseeable future. Yet, there doesn’t seem to be many people concerned about this, so clearly this is (a new?) norm.

The Rusts split

In essence, Rust is now two separate languages.

The stable-nightly divide has essentially evolved into something that closely resembles the early stages of the 2.x vs. 3.x split in the Python world. The people still “stuck” on 2.7 (i.e. stable) were “holdouts”, and the future was with 3.x (nightly). Sure, there have been some pithy backports (feature stabilizations), but the interesting stuff has been happening on the other side.

It’s astonishing that Rust managed to replicate this phenomenon without any major version bumps, and with no backwards-incompatible releases. Technically, everything is still version 1.x. Not even Cargo, the Rust package manager, recognizes the stable-nightly distinction.

But that’s hardly any consolation when you try to install a nightly-only crate on stable Rust. You will download it just fine, and get all the way to compiling its code, only to have it error out due to unsupported #![feature(...)] declarations.

What now?

The natural question is, can this situation be effectively addressed?

I hope it’s obvious why stable Rust cannot suddenly start supporting compiler plugins. Given that they rely on rustc internals which aren’t standardized, doing so would be contrary to the very definition of a “stable” release channel.

The other option is to fully embrace nightly as the de facto recommended toolchain. This has been informally happening already, despite the contrary recommendations in the official docs.

The downsides are obvious here, though: nightly Rust is not a misnomer at all. The compiler is in active development and its build breaks often. Some of those breakages make it into nightly releases with unsatisfying regularity.

Of course, there was also another option: stick to the intended purpose of release channels and don’t build castles on the sand by publishing nightly-first or nightly-only crates. This ship seems to have sailed by now, as the community has collectively decided otherwise.

Oh well.

It’s just a little ironic that in a language that is so focused on safety, everyone is perfectly happy with an unstable compiler.

Continue reading

Simulating exceptions in Rust with IIFE

Posted on Sat 17 December 2016 in Code • Tagged with Rust, IIFE, error handling, exceptions, closures, lambdas

While many languages use exceptions for handling errors, Rust prefers a slightly different, yet very classical approach: return values.

Now, they aren’t exactly the same thing as in C, where the error is indicated by a special value within the same return type. In Rust, the Result enum can neatly separate the two, in a similar vein to how ad-hoc tuples in Go do1. But unlike Go, Rust also offers additional facilities for error propagation, including the try! macro and the recently stabilized ? operator. And finally, the Result wrappings can be straightforwardly unpacked, possibly by defaulting to a known safe value.
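As a quick refresher, here is a generic sketch of that style (unrelated to the gist example coming up):

use std::fs::File;
use std::io::{self, Read};

/// Error propagation with `?`: any io::Error from open() or read_to_string()
/// is returned to the caller immediately.
fn read_config(path: &str) -> Result<String, io::Error> {
    let mut file = File::open(path)?;
    let mut contents = String::new();
    file.read_to_string(&mut contents)?;
    Ok(contents)
}

fn main() {
    // The Result can then be unpacked by defaulting to a known safe value.
    let config = read_config("app.conf").unwrap_or_else(|_| String::new());
    println!("{} bytes of config", config.len());
}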

Some conveniences of exceptions may be hard to pass up, though. The try-catch construct is evidently one of them, and Rust might eventually get it in one form or another. Before that happens, however, there is a trick that can often work as an acceptable substitute.

Many lets

Here’s an example where it can be very useful.

Have a look at the following function. Its purpose is to retrieve a GitHub login of a user who owns a specific gist — a small sample of code posted to the gists.github.com website2.

Let’s assume we have already talked to GitHub API and received the following JSON response from its relevant endpoint:

{
    "id": "12345678",
    "owner": {
        "login": "Octocat",
        ...
    }
    ...
}

Parsing it is easy: we can do it with the rustc_serialize crate, among other options. What proves a little more involved is to dig through the JSON tree in order to reach the interesting value:

use rustc_serialize::json::Json;


/// Retrieve the gist owner from a JSON received from
/// the /gists/$ID endpoint of the GitHub API.
///
/// If the gist is anonymous, "anonymous" is returned.
fn gist_owner_from_info(info: &Json) -> String {
    if let Some(info) = info.as_object() {
        if let Some(owner) = info.get("owner").and_then(|o| o.as_object()) {
            if let Some(result) = owner.get("login").and_then(|l| l.as_string()) {
                return result.to_owned();
            }
        }
    }
    "anonymous".into()
}

Whew! I guess we’re lucky we don’t need to go too deep into that JSON. The code is clearly exhibiting a rightward slant, which some people refer to as “arrow code”. Unsurprisingly, it is generally considered bad for readability.

There are a few other ways of writing this, of course, including a style reminiscent of JavaScript promises — that is, relying completely on the and_then method.
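
Roughly, that fully chained variant could look like this (the same hypothetical function, just restructured):

fn gist_owner_from_info(info: &Json) -> String {
    info.as_object()
        .and_then(|info| info.get("owner"))
        .and_then(|owner| owner.as_object())
        .and_then(|owner| owner.get("login"))
        .and_then(|login| login.as_string())
        .map(|login| login.to_owned())
        .unwrap_or_else(|| "anonymous".into())
}

None of these approaches seem very satisfying, though, especially if you compare them with something like this: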

try:
    return str(info["owner"]["login"])
except (KeyError, TypeError):
    return "anonymous"

Yes, exceptions are quite useful sometimes.

So, how can we get something like this in Rust?

JavaScript to the rescue

Succor comes from an unexpected direction. To emulate exceptions — specifically, the try-catch exception blocks — we can utilize a technique that is most popular in… JavaScript.

At least until recently (i.e. before let and const), JavaScript did not have block-local scope. Since every var declaration within a function is hoisted to the top of that function, function scope is essentially the only usable one (besides global, of course).

As a result, a variety of JavaScript idioms rely on introducing “superfluous” functions, solely for the purpose of creating a nested scope. Many times, those functions are neither named nor stored in any variable; rather, they are immediately invoked.

This is what is commonly understood as an Immediately Invoked Function Expression, or IIFE for short.

An oft-cited example involves an IIFE which itself returns another function:

for (var i = 0; i < 10; ++i) {
    var $para = $("p#" + i);  // <p id="0">, <p id="1">, etc.
    var clickHandler = (function(i) {  // IIFE!
        return function() {
            alert("Clicked element no. " + (i + 1));
        };
    })(i);
    $para.on('click', clickHandler);
}

The function expression is necessary here, because it lets us control what exactly goes into the closure of the inner function. If the clickHandlers were assigned the function() { alert(...) } expression directly, they would all close over the same loop counter variable. All would then display the exact same message.

We don’t need to employ those workarounds in Rust. Thanks to local scoping, a simple pair of { braces } would work exactly the same; a bare block is enough to confine bindings to a nested scope, and it even has a value:
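
let x = {
    let a = 2;
    let b = 3;
    a + b   // the block evaluates to 5; `a` and `b` don't leak outside of it
};
assert_eq!(x, 5);

You can imagine a direct rewrite of the above example, though, where an anonymous closure is used to similar effect: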

// WARNING: Not idiomatic! (Also not a real DOM library).

for i in 0..10 {
    let para = dom.find_element_by_id("p", i.to_string()).unwrap();
    let click_handler = (|i| {  // parenthesized so the closure is invoked right away: an IIFE
        move |_: Event| { dom.exec_js(&format!(
            "alert('Clicked element no. #{}');", i + 1)); }
    })(i);
    para.add_event_listener(Event::Click, click_handler)
}

In other words, Rust supports IIFEs just fine.

Just put a function on it

Okay, this is quite amusing and probably pretty neat. But how exactly does it help us with the error handling story?…

Let’s take another stab at rewriting the gist_owner_from_info routine. This time, we’ll extract the meaty part into a separate function. We will also take advantage of the trivial but very useful try_opt crate, which is essentially an equivalent of the try! macro for Options:

#[macro_use] extern crate try_opt;

fn gist_owner_from_info(info: &Json) -> String {
    gist_owner_from_info_internal(info).unwrap_or("anonymous".into())
}

fn gist_owner_from_info_internal(info: &Json) -> Option<String> {
    let info = try_opt!(info.as_object());
    let owner = try_opt!(info.get("owner").and_then(|o| o.as_object()));
    let login = try_opt!(owner.get("login").and_then(|l| l.as_string()));
    Some(login.to_owned())
}

Now this should be a little easier on the eyes. (And if you want, you can eschew and_then completely in favor of more try_opt! calls; see the sketch below.)
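
For the record, that all-try_opt! variant could look more or less like this (the same hypothetical helper, just with nested macro calls instead of and_then):

fn gist_owner_from_info_internal(info: &Json) -> Option<String> {
    let info = try_opt!(info.as_object());
    let owner = try_opt!(try_opt!(info.get("owner")).as_object());
    let login = try_opt!(try_opt!(owner.get("login")).as_string());
    Some(login.to_owned())
}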

The downside is that we now have this _internal function that’s awkwardly sticking out. We could pull it in, and turn it into an inner function, but why stop half-way? Let’s just make it an IIFE already:

fn gist_owner_from_info(info: &Json) -> String {
    || -> Option<String> {
        let info = try_opt!(info.as_object());
        let owner = try_opt!(info.get("owner").and_then(|o| o.as_object()));
        let login = try_opt!(owner.get("login").and_then(|l| l.as_string()));
        Some(login.to_owned())
    }().unwrap_or("anonymous".into())
}

Not bad, eh? The analogies with exception handling should be pretty evident, too3:

  • The closure itself works as a try block, with the closure’s body containing the “guarded” code.
  • The unwrap family of methods (especially unwrap_or_else) doubles as a catch/except section; see the short sketch right after this list.

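For example, the “except” side doesn’t have to be a constant default; unwrap_or_else accepts arbitrary recovery logic (a small sketch reusing the _internal helper from before):

let owner = gist_owner_from_info_internal(info)
    .unwrap_or_else(|| {
        // the "catch" block: runs only if the guarded code bailed out with None
        println!("gist has no owner; assuming anonymous");
        "anonymous".into()
    });
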
Sure, we do need try! (or try_opt!) macros to mark instructions that may “throw an exception”, but with the ?-based syntax it shouldn’t be too big of a deal. And when the time comes, this code will be very easy to port to a trait-based exception handling solution that’s currently in the works.
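
For what it’s worth, newer versions of Rust extend ? to work on Option directly (via the Try trait). With that in place, the whole closure can shrink to something like this:

fn gist_owner_from_info(info: &Json) -> String {
    || -> Option<String> {
        let login = info.as_object()?
            .get("owner")?.as_object()?
            .get("login")?.as_string()?;
        Some(login.to_owned())
    }().unwrap_or_else(|| "anonymous".into())
}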

Oh, and the best part? Both Rust and the underlying LLVM are very adept at inlining closures, so everything here should compile to optimal code.

Bonus: a lifetime conundrum

Well, almost optimal. There is one more thing left to do before we can call this a truly zero-cost abstraction.

We need to stop allocating so damn much!

It should be pretty obvious that the function doesn’t need to create a brand new String every time it’s called. The text is in the input Json, and we take that Json by reference already. It’s only fair we stop creating Strings and simply return a &str reference instead.

In fact, this should be as easy as removing the to_owned/into calls, right?

fn gist_owner_from_info(info: &Json) -> &str {
    || -> Option<&str> {
        let info = try_opt!(info.as_object());
        let owner = try_opt!(info.get("owner").and_then(|o| o.as_object()));
        owner.get("login").and_then(|l| l.as_string())
    }().unwrap_or("anonymous")
}

Wrong, apparently. If you present this code to the compiler, it will serve you quite a mouthful of an error, including helpful tidbits in the vein of “expected A, found A”:

error[E0495]: cannot infer an appropriate lifetime for autoref due to conflicting requirements
   --> src/github.rs:3:34
    |
  3 |         let info = try_opt!(info.as_object());
    |                                  ^^^^^^^^^
    |
note: first, the lifetime cannot outlive the anonymous lifetime #1 defined on the block at 1:45...
   --> src/github.rs:1:46
    |
  1 | fn gist_owner_from_info(info: &Json) -> &str {
    |                                              ^
note: ...so that reference does not outlive borrowed content
   --> src/github.rs:3:29
    |
  3 |         let info = try_opt!(info.as_object());
    |                             ^^^^
note: but, the lifetime must be valid for the anonymous lifetime #1 defined on the block at 2:23...
   --> src/github.rs:2:24
    |
  2 |     || -> Option<&str> {
    |                        ^
note: ...so that expression is assignable (expected std::option::Option<&str>, found std::option::Option<&str>)
   --> src/github.rs:5:9
    |
  5 |         owner.get("login").and_then(|l| l.as_string())
    |

The crux of this verbiage is that the Rust compiler is unable to reconcile the lifetime of the closure’s return value, the input, and the final result of the function.

It shouldn’t really be trying very hard, though, for the lifetime is obvious. It’s the same as the one implicitly attached to the input &Json. Seems like in this case, we need to be a little more helpful and label it explicitly:

fn gist_owner_from_info<'i>(info: &'i Json) -> &'i str {
    || -> Option<&'i str> {
// (rest as before)

Voila, this should now compile without any issues.

Once again, “Keep calm and add more 'lifetimes” proves to be an effective approach ;)


  1. Technically, they aren’t called tuples there but “multiple return values”. 

  2. This is something I needed to do when rewriting this Python project of mine to Rust. 

  3. This is also the closest Rust can currently get to a do notation from Haskell, at least without any macro-based hacks. 

Continue reading

Optional arguments in Rust 1.12

Posted on Thu 29 September 2016 in Code • Tagged with Rust, arguments, parameters, functionsLeave a comment

Today’s announcement of Rust 1.12 contains, among other things, this innocuous little tidbit:

Option implements From for its contained type

If you’re not very familiar with it, From is a basic conversion trait which any Rust type can implement. By doing so, it defines how to create its values from some other type — hence its name.

Perhaps the most widespread application of this trait (and its from method) is allocating owned String objects from literal str values:

let hello = String::from("Hello, world!");

What the change above means is that we can do a similar thing with the Option type:

let maybe_int = Option::from(42);

At first glance, this doesn’t look like a big deal at all. For one, this syntax is much wordier than the traditional Some(42), so it’s not very clear what benefits it offers.

But this first impression is rather deceptive. In many cases, this change can actually reduce the number of times we have to type Some(x), allowing us to replace it with just x. That’s because this new impl brings Rust quite a bit closer to having optional function arguments as a first class feature in the language.

Until now, a function defined like this:

fn maybe_plus_5(x: Option<i32>) -> i32 {
    x.unwrap_or(0) + 5
}

was the closest Rust had to default argument values. While this works perfectly — and is bolstered by compile-time checks! — callers are unfortunately required to build the Option objects manually:

let _ = maybe_plus_5(Some(42));  // OK
let _ = maybe_plus_5(None);      // OK
let _ = maybe_plus_5(42);        // error!

After Option<T> implements From<T>, however, this can change for the better. Much better, in fact, for the last line above can be made valid. All that is necessary is to take advantage of this new impl in the function definition:

fn maybe_plus_5<T>(x: T) -> i32 where Option<i32>: From<T> {
    Option::from(x).unwrap_or(0) + 5
}

Unfortunately, this results in quite a bit of complexity, up to and including the where clause: a telltale sign of convoluted, generic code. Still, this trade-off may be well worth it, as a function defined once can be called many times throughout the code base, and possibly across multiple crates if it’s a part of the public API.

But we can do better than this. Indeed, using the From trait to constrain argument types is just complicating things for no good reason. What we should do instead is use the symmetrical trait, Into, and take advantage of its standard impl:

impl<T, U> Into<U> for T where U: From<T>

Once we translate it to the Option case (now that Option<T> implements From<T>), we can switch the trait bounds around and get rid of the where clause completely:

fn maybe_plus_5<T: Into<Option<i32>>>(x: T) -> i32 {
    x.into().unwrap_or(0) + 5
}

As a small bonus, the function body has also gotten a little simpler.


So, should you go wild and change all your functions taking Options to look like this?… Well, technically you can, although the benefits may not outweigh the downsides for small, private functions that are called infrequently.

On the other hand, if you can afford to only support Rust 1.12 and up, this technique can make it much more pleasant to use the external API of your crates.

What’s best is the full backward compatibility with any callers that still pass Some(x): for them, the old syntax will continue to work exactly like before. Also note that the Rust compiler is smart about eliding the no-op conversion calls like the Into::into above, so you shouldn’t observe any changes in the performance department either.
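
For instance, with the Into-based definition above, all of the following calls compile and do what you’d expect (a quick sanity check, assuming the maybe_plus_5 from the previous snippet):

let _ = maybe_plus_5(42);        // new: plain value, no Some() wrapping
let _ = maybe_plus_5(Some(42));  // old call sites keep working as-is
let _ = maybe_plus_5(None);      // explicit None is still accepted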

And who knows, maybe at some point Rust will make the final leap and allow skipping the Nones?…

Continue reading