In this post, I want to discuss some of the experiences I had with a project
that I recently finished, gisht.
By “finished” I mean that I don’t anticipate developing any new major features for it,
though smaller things (bug fixes, non-code work) are of course still very possible.
I’m thinking this is as much “done” as most software projects can ever hope to be.
Thus, it is probably the best time for a recap / summary / postmortem / etc. —
something to recount the lessons learned, and assess the choices made.
Some context
The original purpose of gisht was to facilitate download & execution of GitHub gists
straight from the command line:
$ gisht Xion/git-outgoing # run the https://gist.github.com/Xion/git-outgoing gist
I wrote its first version in Python
because I’ve accumulated a sizable number of small & useful scripts
(for Git, Unix, Python, etc.) which were all posted as gists.
Sure, I could download them manually to ~/bin
every time I used a new machine
but that’s rather cumbersome, and I’m quite lazy.
Well, lazy and impatient :)
I noticed pretty fast that the speed tax of Python
is basically unacceptable for a program like gisht.
What I’m referring to here is not the speed of code execution, however,
but only the startup time of the Python interpreter.
Irrespective of the machine, operating system, or language version,
it doesn’t seem to go lower than about one hundred milliseconds;
empirically, it’s often 2 or 3 times higher than that.
For the common case of finding a cached gist (no downloads)
and doing a simple fork+exec,
this startup time was very noticeable and extremely jarring.
It also precluded some more sophisticated uses for gisht,
like putting its invocation into the shell’s $PROMPT.
Speed: delivered
And so the obvious solution emerged:
let’s rewrite it in Rust!…
Because if I’m executing code straight from the internet,
I should at least do it in a safe language.
But jokes aside, it is obvious that a language compiling to native code
is likely a good pick if you want to optimize for startup speed.
So while the choice of Rust was in large part educational
(gisht was one of my first projects to be written in it),
it definitely hasn’t disappointed there.
Even without any intentional optimization efforts,
the app still runs instantaneously.
I tried to take some measurements using the time command,
but it never showed more than 0.001s.
Perceptibly, it is at least on par with git,
so that’s acceptable for me :)
Can’t segfault if your code doesn’t build
Achieving the performance objective wouldn’t do us much good, however,
if the road to get there involved excessive penalties on productivity.
Such negative impact could manifest in many ways,
including troublesome debugging due to a tricky runtime,
or difficulty in getting the code to compile in the first place.
If you’ve had even passing contact with Rust,
you’d expect the latter to be much more likely than the former.
Indeed, Rust’s very design eschews runtime flexibility to a ridiculous degree
(in its “safe” mode, at least),
while also forcing you to absorb subtle & complex ideas
to even get your code past the compiler.
The reward is an increased likelihood that your program will behave as intended —
although it’s definitely not on the level of “if it compiles, it works”
that can be offered by Haskell or Idris.
But since gisht is hardly mission critical,
I didn’t actually care too much about this increased reliability.
I don’t think it’s likely that Rust would buy me much over something like modern C++.
And if I were to really do some kind of cost-benefit analysis of several languages
— rather than going with Rust simply to learn it better —
then it would be hard to justify it over something like Go.
It scales
So the real question is: has Rust not hampered my productivity too much?
Having the benefit of hindsight,
I’m happy to say that the trade-off was definitely acceptable :)
One thing I was particularly satisfied with was the language’s scalability.
What I mean here is the ability to adapt as the project grows,
but also to start quickly and remain nimble
while the codebase is still pretty small.
Many languages (most, perhaps) are naturally tailored towards the large end,
doing their best to make it more bearable to work with big codebases.
In turn, they often forget about helping projects take off in the first place.
Between complicated build systems and dependency managers (Java),
or a virtual lack of either (C++),
it can be really hard to get going in a “serious” language like this.
On the other hand, languages like Python make it very easy to start up
and achieve relatively impressive results.
Some people, however, report having encountered problems
once the code evolves past a certain size.
While I’m actually
very unsympathetic to those claims,
I realize perception plays a significant role here,
making those anecdotal experiences into a sort of self-fulfilling prophecy.
This perception problem should almost certainly spare Rust,
as it’s a natively compiled and statically typed language,
with a respectable type system to boot.
There is also some evidence
that the language works well in large projects already.
So the only question that we might want to ask is:
how easy is it to actually start a project in Rust,
and carry it towards some kind of MVP?
Based on my experiences with gisht,
I can say that it is, in fact, quite easy.
Thanks mostly to the impressive Swiss army knife of cargo
— acting as both package manager and a rudimentary build system —
it was almost Python-trivial to cook a “Hello World” program
that does something tangible, like
talk to a JSON API.
From there, it only took a few coding sessions to grow it
into a functioning prototype.
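Just to give a sense of scale, here is a minimal sketch of such a first program, assuming the serde_json crate as the only dependency; the JSON body is a made-up stand-in for what a gist API could return, and the HTTP call itself is elided:

extern crate serde_json;

use serde_json::Value;

fn main() {
    // Pretend this body came back from a gist-hosting API over HTTP.
    let body = r#"{"id": "12345", "description": "git-outgoing", "public": true}"#;
    let gist: Value = serde_json::from_str(body).expect("malformed JSON");
    println!("gist {}: {}", gist["id"], gist["description"]);
}

From this kind of seed, adding real HTTP calls and argument handling is mostly a matter of pulling in more crates.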
Abstractions galore
As part of rewriting gisht from Python to Rust,
I also wanted to fix some longstanding issues that limited its capabilities.
The most important one was the hopeless coupling to GitHub
and their particular flavor of gists.
Sure, this is where the project even got its name from,
but people use a dozen different services to share code snippets,
and it should be entirely possible to support them all.
Here’s where it became necessary to utilize
the abstraction capabilities that Rust has to offer.
It was somewhat obvious to define a Host trait,
but of course its exact form had to be shaped over numerous iterations.
Along the way, it even turned out that Result<Option<T>> and Option<Result<T>>
are sometimes both necessary as return types :)
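For illustration only, here is roughly the shape such a trait could take, reconstructed from the method signatures that appear later in this post (the Gist fields and FetchMode variants are made up):

use std::io;

pub struct Gist { /* id, owner, local path, ... */ }
pub enum FetchMode { Auto, Always }  // hypothetical variants

pub trait Host {
    /// None means “this URL doesn’t belong to my service”;
    /// Some(Err(...)) means “it does, but resolving it failed”.
    fn resolve_url(&self, url: &str) -> Option<io::Result<Gist>>;

    /// Download (or refresh) the gist into the local cache.
    fn fetch_gist(&self, gist: &Gist, mode: FetchMode) -> io::Result<()>;
}

Note how the Option<io::Result<Gist>> return type encodes exactly the distinction mentioned above.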
Besides cleaner architecture,
another neat thing about an explicit abstraction is
the ability to slice a concept into smaller pieces —
and then put some of them back together.
While the Host trait could support a very diverse set of gist services and pastebins,
many of them turned out to be just slight variations on one central theme.
Because of this similarity, it was possible to introduce a single Basic implementation
which handles multiple services through varying sets of URL patterns.
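The gist (ahem) of that idea fits in a couple of lines; this toy function is merely illustrative, and the real Basic type obviously does much more than pattern substitution:

/// Turn a host’s URL pattern and a gist ID into a concrete URL.
fn gist_url(url_pattern: &str, gist_id: &str) -> String {
    url_pattern.replace("${id}", gist_id)
}

fn main() {
    // "AbC9" is a made-up gist ID.
    assert_eq!(gist_url("http://sprunge.us/${id}", "AbC9"),
               "http://sprunge.us/AbC9");
}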
Devices like these aren’t of course specific to Rust:
interfaces (traits) and classes are a staple of OO languages in general.
But some other techniques were more idiomatic;
the concept of iterators, for example,
is flexible enough to accommodate
looping over a GitHub user’s gists,
even as they read directly from HTTP responses.
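To illustrate the idea (and only that; this is not the actual gisht code), an iterator that transparently pulls in successive pages could be structured like this, with the fetch_page closure standing in for the real HTTP call:

// Yields gist IDs one by one, fetching further pages lazily as needed.
struct GistsIterator<F> {
    fetch_page: F,               // page URL -> (gists on that page, next page URL)
    next_page: Option<String>,
    buffered: Vec<String>,
}

impl<F> Iterator for GistsIterator<F>
    where F: FnMut(&str) -> (Vec<String>, Option<String>)
{
    type Item = String;

    fn next(&mut self) -> Option<String> {
        loop {
            if let Some(gist) = self.buffered.pop() {
                return Some(gist);
            }
            // The buffer is exhausted -- fetch the next page, if any.
            let url = match self.next_page.take() {
                Some(url) => url,
                None => return None,
            };
            let (gists, next) = (self.fetch_page)(&url);
            self.buffered = gists;
            self.next_page = next;
        }
    }
}

The caller can then use all the usual iterator adapters (filter, take, collect, and so on) without caring that more HTTP requests may happen under the hood.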
Hacking time
Not everything was sunshine and rainbows, though.
Take clap, for example.
It’s mostly a very good crate for parsing command line arguments,
but it couldn’t quite cope with the unusual requirements that gisht had.
To make gisht Foo/bar work alongside gisht run Foo/bar,
it was necessary to analyze argv before even handing it over to clap.
This turned out to be surprisingly tricky to get right.
Like, really tricky, with edge cases and stuff.
But as is often the case in software,
the answer turned out to be yet another layer of indirection plus
a copious amount of tests.
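For a taste of what that preprocessing looks like in principle, here is a simplified sketch (not the real gisht code; the command list is made up, and most of the actual edge cases, such as global flags appearing before the gist argument, are conveniently ignored):

use std::env;

const COMMANDS: &'static [&'static str] = &["run", "which", "print", "open"];

/// If the first argument is not a known command (and not a flag),
/// pretend the user typed `run` in front of it.
fn preprocessed_argv() -> Vec<String> {
    let mut argv: Vec<String> = env::args().collect();
    if let Some(first_arg) = argv.get(1).cloned() {
        let looks_like_flag = first_arg.starts_with('-');
        let is_command = COMMANDS.contains(&first_arg.as_str());
        if !looks_like_flag && !is_command {
            argv.insert(1, "run".to_owned());
        }
    }
    argv  // this is what eventually gets handed over to clap
}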
In another instance, however, direct library support was crucial.
It so happened that hyper, the crate I’ve been using for HTTP requests,
didn’t handle the Link:
response header out of the box.
This was a stumbling block that prevented the gist iterator (mentioned earlier)
from correctly handling pagination in the responses from GitHub API.
Thankfully, having the Header abstraction in hyper
meant it was possible to add the missing support
in a relatively straightforward manner.
Yes, it’s not a universal implementation
that’d be suitable for every HTTP client,
but it does the job for gisht just fine.
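Conceptually, the parsing itself is not rocket science; a heavily simplified version (which ignores a number of RFC 5988 details and is not what actually landed in gisht) could look like this:

/// Parse a Link: header value into (URL, rel) pairs, e.g.
/// `<https://api.github.com/gists?page=2>; rel="next"`
/// becomes [("https://api.github.com/gists?page=2", "next")].
fn parse_link_header(value: &str) -> Vec<(String, String)> {
    let mut links = Vec::new();
    for part in value.split(',') {
        let mut pieces = part.split(';');
        let url = match pieces.next() {
            Some(url) => url.trim().trim_matches(|c: char| c == '<' || c == '>').to_owned(),
            None => continue,
        };
        for param in pieces {
            let param = param.trim();
            if param.starts_with("rel=") {
                let rel = param["rel=".len()..].trim_matches('"').to_owned();
                links.push((url.clone(), rel));
            }
        }
    }
    links
}

The bulk of the work is then fitting something like this into the HTTP library’s header machinery, so that the pagination logic can simply ask for the “next” link.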
Test-Reluctant Development
And so the program kept growing steadily over the months,
most notably by adding support for more and more gist hosts.
Eventually, some of them would fall into a sort of twilight zone.
They weren’t complicated enough (the way GitHub is)
to warrant writing a completely new Host implementation,
but they also couldn’t be handled via the Basic structure alone.
A good example would be sprunge.us:
mostly an ordinary pastebin,
except for its optional syntax highlighting
which may add some “junk” to the otherwise regular URLs.
In order to handle those odd cases,
I went for a classic wrapper/decorator pattern which, in its essence,
boils down to something like this:
pub struct Sprunge {
    inner: Basic,
}

impl Sprunge {
    pub fn new() -> Self {
        Sprunge{inner: Basic::new(ID, "sprunge.us",
                                  "http://sprunge.us/${id}", ...)}
    }
}

impl Host for Sprunge {
    // override & wrap methods that require custom logic:
    fn resolve_url(&self, url: &str) -> Option<io::Result<Gist>> {
        let mut url_obj = try_opt!(Url::parse(url).ok());
        url_obj.set_query(None);
        self.inner.resolve_url(url_obj.to_string().as_str())
    }

    // passthrough to the `Basic` struct for others:
    fn fetch_gist(&self, gist: &Gist, mode: FetchMode) -> io::Result<()> {
        self.inner.fetch_gist(gist, mode)
    }

    // (etc.)
}
Despite the noticeable boilerplate of a few pass-through methods,
I was pretty happy with this solution, at least initially.
After a few more unusual hosts, however,
it became cumbersome to fix all the edge cases
by looking only at the final output of the inner Basic
implementation.
The code was evidently asking for some tests,
if only to check how the inner structure is being called.
Shouldn’t be too hard, right?… Yeah, that’s what I thought, too.
The reality, unfortunately, fell very short of those expectations.
Stubs, mocks, fakes — test doubles in general —
are a dark and forgotten corner of Rust
that almost no one seems to pay any attention to.
Absent proper library support — much less a language one —
the only way forward was to roll up my sleeves
and implement
a fake Host
from scratch.
But that was just the beginning.
How do you seamlessly inject this fake implementation into the wrapper
so that it replaces the Basic
struct for testing?
If you are not careful and go for the “obvious” solution — a trait object:
pub struct Sprunge {
inner: Box<Host>,
}
you’ll soon realize that you need not just a Box,
but at least an Rc (or maybe even Arc).
Without this kind of shared ownership,
you’ll lose your chance to interrogate the test double once you hand it over to the wrapper.
This, in turn, will heavily limit your ability to write effective tests.
What’s the non-obvious approach, then?
The full rationale would probably warrant a separate post,
but the working recipe looks more or less like this:
- First, parametrize the wrapper with its inner type:
  pub struct Sprunge<T: Host> { inner: T }.
- Put that in an internal module with the correct visibility setup:
  mod internal {
      pub struct Sprunge<T: Host> {
          pub(super) inner: T,
      }
  }
- Make the regular (“production”) version of the wrapper into an alias,
  giving it the type parameter that you’ve been using directly:
  pub type Sprunge = internal::Sprunge<Basic>;
- Change the new constructor to instantiate the internal type.
- In tests, create the wrapper with a fake inner object inside
  (see the sketch right after this list).
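To make that last step concrete, here is a sketch of what such a test could look like; FakeHost and its recorded_urls method are hypothetical stand-ins for the hand-rolled test double described earlier:

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn resolve_url_strips_query_params() {
        // Construct the wrapper directly around the fake inner host.
        let sprunge = internal::Sprunge{inner: FakeHost::new()};
        let _ = sprunge.resolve_url("http://sprunge.us/AbC9?py");
        // Interrogate the fake through its out-of-band interface.
        assert_eq!(sprunge.inner.recorded_urls(), vec!["http://sprunge.us/AbC9"]);
    }
}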
As you can see in
the real example,
this convoluted technique removes the need for any pointer indirection.
It also permits you to
access the out-of-band interface
that a fake object would normally expose.
It’s a shame, though, that so much work is required for something
that should be very simple.
As it appears, testing is still a neglected topic in Rust.
Packing up
It wasn’t just Rust that played a notable role in the development of gisht.
Pretty soon after getting the app to a presentable state,
it became clear that a mere cargo build
wouldn’t do everything
that’s necessary to carry out a complete build.
It could do more, admittedly,
if I’d had the foresight to explore Cargo build scripts
a little more thoroughly.
But overall, I don’t regret dropping back to my trusty ol’ pick: Python.
Like in a few previous projects, I used the Invoke task runner
for both the crucial and the auxiliary automation tasks.
It is a relatively powerful tool
— and probably the best in its class in Python that I know of —
though it can be a bit capricious if you want to
really fine-tune it.
But it does make it much easier to organize your automation code,
to reuse it between tasks, and to (ahem) invoke those tasks in a convenient manner.
In any case, it certainly beats a collection of disconnected Bash scripts ;)
What have I automated in this way, you may ask?
Well, a couple of small things; those include:
- embedding the current Git commit hash into the binary,
  to help identify the exact revision in the logs of any potential bug reports
- after a successful build, replacing the Usage section in README
  with the program’s --help output
- generating completion scripts for popular shells
  by invoking the binary with a magic hidden flag (courtesy of clap; sketched below)
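For that last item, the Rust side can be fairly small; here is a rough sketch of wiring up such a hidden flag with clap (the flag name is invented, and only Bash is handled for brevity):

extern crate clap;

use std::io;
use clap::{App, Arg, Shell};

fn main() {
    let mut app = App::new("gisht")
        .arg(Arg::with_name("complete-bash")
            .long("complete-bash")
            .hidden(true));     // not shown in --help

    let matches = app.clone().get_matches();
    if matches.is_present("complete-bash") {
        // Dump the generated completion script to stdout and exit.
        app.gen_completions_to("gisht", Shell::Bash, &mut io::stdout());
        return;
    }
    // ... normal operation would continue here ...
}

The Invoke task then only needs to run the freshly built binary with that flag and save the output to a file that the release package installs.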
Undoubtedly the biggest task I relegated to Python/Invoke
was the preparation of release packages.
When it comes to the various Linuxes (currently Debian and Red Hat flavors),
this wasn’t particularly complicated.
Major thanks are due to the amazing fpm tool here,
which I recommend to anyone who needs to package their software in a distro-compatible manner.
Homebrew, however — or more precisely, OS X itself — was quite a different story.
Many, many
failed attempts were needed to even get it to build on Travis,
and the additional dependency on Python was
partially to blame.
To be fair, however, most of the pain was exclusively due to OpenSSL;
getting that thing to build is always loads of “fun”,
especially in such an opaque and poorly debuggable environment as Travis.
The wrap
There’s probably a lot of minor things and tidbits I could’ve mentioned along the way,
but the story so far has most likely covered all the important topics.
Let’s wrap it up then, and highlight some interesting points in the classic Yay/Meh/Nay manner.
Yay
- It was definitely a good choice to rewrite gisht specifically in Rust.
Besides all the advantages I’ve mentioned already,
it is also worth noting that the language went through about 10 minor version bumps
while I was working on this project.
Of all those new releases,
I don’t recall a single one that would introduce a breaking change.
- Most of the Rust ecosystem (third-party libraries) was a joy to use,
and very easy to get started with.
Honorable mention goes to serde_json and how easy it was to
transition the code
from rustc_serialize, which I had used at first.
- With the possible exception of sucking in node.js as a huge dependency of your project
  and using Grunt, there is probably no better way of writing automation & support code than Python.
There may eventually be some Rust-based task runners that could try to compete,
but I’m not very convinced about using a compiled language for this purpose
(and especially one that takes so long to build).
Meh
- While the clap crate is quite configurable and pretty straightforward to use,
it does lack at least one feature
that’d be very nice for gisht.
Additionally, working with raw clap is often a little tedious,
as it doesn’t assist you in translating parsed flags into your own configuration types,
and thus requires
shuffling those bits
manually.
- Being a de facto standard for continuous integration in open-source projects,
  Travis CI could be a little less finicky.
In almost every project I decide to use it for,
I end up with about half a dozen commits
that frantically try to fix silly configuration issues,
all before even a simple .travis.yml works as intended.
Providing a way to test CI builds locally would be an obvious way to avoid this churn.
Nay
- Testing in Rust is such a weird animal.
On one hand, there is a first-class, out-of-the-box support for unit tests
(and even integration tests) right in the toolchain.
On the other hand, the relevant parts of the ecosystem are immature or lacking,
as evidenced by the dreary story of mocking and stubbing.
It’s no surprise that there is a long way to catch up to languages with the strongest testing culture
(Java and C#/.NET), but it’s disappointing to see Rust outclassed
even by C++.
- Getting anything to build reliably on OSX in a CI environment is already a tall order.
  But if it involves things such as OpenSSL, then it quickly goes from bad to terrible.
I’m really not amused anymore by how this “Just Works” system often turns out to hardly work at all.
Since I don’t want to end on such a negative note,
I feel compelled to state the obvious fact: every technology choice is a trade-off.
In the case of this project, however, the drawbacks were heavily outweighed by the benefits.
For this reason, I can definitely recommend the software stack I’ve just described
to anyone developing non-trivial, cross-platform command line tools.