Recap of the gisht project

Posted on Fri 24 November 2017 in Programming • Tagged with Rust, gisht, CLI, GitHub, Python, testing

In this post, I want to discuss some of the experiences I had with a project that I recently finished, gisht. By “finished” I mean that I don’t anticipate developing any new major features for it, though smaller things, such as bug fixes or non-code stuff, are of course still very possible.

I’m thinking this is as much “done” as most software projects can ever hope to be. Thus, it is probably the best time for a recap / summary / postmortem / etc. — something to recount the lessons learned, and assess the choices made.

Some context

The original purpose of gisht was to facilitate download & execution of GitHub gists straight from the command line:

$ gisht Xion/git-outgoing  # run the https://gist.github.com/Xion/git-outgoing gist

I initially wrote its first version in Python because I’ve accumulated a sizable number of small & useful scripts (for Git, Unix, Python, etc.) which were all posted as gists. Sure, I could download them manually to ~/bin every time I used a new machine but that’s rather cumbersome, and I’m quite lazy.

Well, lazy and impatient :) I noticed pretty fast that the speed tax of Python is basically unacceptable for a program like gisht.

What I’m referring to here is not the speed of code execution, however, but only the startup time of Python interpreter. Irrespective of the machine, operating system, or language version, it doesn’t seem to go lower than about one hundred milliseconds; empirically, it’s often 2 or 3 times higher than that. For the common case of finding a cached gist (no downloads) and doing a simple fork+exec, this startup time was very noticeable and extremely jarring. It also precluded some more sophisticated uses for gisht, like putting its invocation into the shell’s $PROMPT1.

Speed: delivered

And so the obvious solution emerged: let’s rewrite it in Rust!…

Because if I’m executing code straight from the internet, I should at least do it in a safe language.

But jokes aside, it is obvious that a language compiling to native code is likely a good pick if you want to optimize for startup speed. So while the choice of Rust was in large part educational (gisht was one of my first projects to be written in it), it definitely hasn’t disappointed there.

Even without any intentional optimization efforts, the app still runs instantaneously. I tried to take some measurements using the time command, but it never registered more than 0.001s. Perceptibly, it is at least on par with git, so that’s acceptable to me :)

Can’t segfault if your code doesn’t build

Achieving the performance objective wouldn’t do us much good, however, if the road to get there involved excessive penalties on productivity. Such negative impact could manifest in many ways, including troublesome debugging due to a tricky runtime2, or difficulty in getting the code to compile in the first place.

If you’ve had even passing contact with Rust, you’d expect the latter to be much more likely than the former.

Indeed, Rust’s very design eschews runtime flexibility to a ridiculous degree (in its “safe” mode, at least), while also forcing you to absorb subtle & complex ideas just to get your code past the compiler. The reward is an increased likelihood that your program will behave as intended — although it’s definitely not on the level of “if it compiles, it works” that Haskell or Idris can offer.

But since gisht is hardly mission critical, I didn’t actually care too much about this increased reliability. I don’t think it’s likely that Rust would buy me much over something like modern C++. And if I were to really do some kind of cost-benefit analysis of several languages — rather than going with Rust simply to learn it better — then it would be hard to justify it over something like Go.

It scales

So the real question is: has Rust not hampered my productivity too much? Having the benefit of hindsight, I’m happy to say that the trade-off was definitely acceptable :)

One thing I was particularly satisfied with was the language’s scalability. What I mean here is the ability to adapt as the project grows, but also to start quickly and remain nimble while the codebase is still pretty small.

Many languages (most, perhaps) are naturally tailored towards the large end, doing their best to make it more bearable to work with big codebases. In turn, they often forget about helping projects take off in the first place. Between complicated build systems and dependency managers (Java) and a virtual lack of either (C++), it can be really hard to get going in a “serious” language like this.

On the other hand, languages like Python make it very easy to start up and achieve relatively impressive results. Some people, however, report having encountered problems once the code evolves past a certain size. While I’m actually very unsympathetic to those claims, I realize perception plays a significant role here, turning those anecdotal experiences into a sort of self-fulfilling prophecy.

This perception problem should almost certainly spare Rust, as it’s a natively compiled and statically typed language, with a respectable type system to boot. There is also some evidence that the language already works well in large projects. So the only question we might want to ask is: how easy is it to actually start a project in Rust and carry it towards some kind of MVP?

Based on my experiences with gisht, I can say that it is, in fact, quite easy. Thanks mostly to the impressive Swiss army knife of cargo — acting as both package manager and a rudimentary build system — it was almost Python-trivial to cook a “Hello World” program that does something tangible, like talk to a JSON API. From there, it only took a few coding sessions to grow it into a functioning prototype.

Abstractions galore

As part of rewriting gisht from Python to Rust, I also wanted to fix some longstanding issues that limited its capabilities.

The most important one was the hopeless coupling to GitHub and their particular flavor of gists. Sure, this is where the project even got its name from, but people use a dozen different services to share code snippets, and it should be entirely possible to support them all.

Here’s where it became necessary to utilize the abstraction capabilities that Rust has to offer. It was somewhat obvious to define a Host trait, but of course its exact form had to be shaped over numerous iterations. Along the way, it even turned out that Result<Option<T>> and Option<Result<T>> are sometimes both necessary as return types :)
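
To give a rough idea, here is a simplified sketch of what such a trait can look like, based on the two methods that appear in the code later in this post. Gist and FetchMode are reduced to bare stubs here, and the real trait has more methods than this:

use std::io;

// Bare stubs for types that gisht defines elsewhere.
pub struct Gist;
pub enum FetchMode { Auto }  // variant name made up for this sketch

/// A single gist hosting service (GitHub, sprunge.us, and so on).
pub trait Host {
    /// Try to interpret a URL as pointing to a gist on this host.
    /// `None` means "not my URL"; `Some(Err(..))` means "mine, but resolving it failed".
    fn resolve_url(&self, url: &str) -> Option<io::Result<Gist>>;

    /// Download the gist, or make sure the cached copy is up to date.
    fn fetch_gist(&self, gist: &Gist, mode: FetchMode) -> io::Result<()>;
}

The Option<io::Result<Gist>> returned by resolve_url is exactly one of those cases: “not a URL for this host” and “the right host, but resolution failed” are two different answers.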

Besides cleaner architecture, another neat thing about an explicit abstraction is the ability to slice a concept into smaller pieces — and then put some of them back together. While the Host trait could support a very diverse set of gist services and pastebins, many of them turned out to be just a slight variation of one central theme. Because of this similarity, it was possible to introduce a single Basic implementation which handles multiple services through varying sets of URL patterns.
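
To illustrate, here is a stand-in for Basic reduced to just a name and a URL pattern, showing how several pastebins can share one implementation; the names and URL patterns below are merely examples:

// A stand-in for gisht's Basic host: only a name and a URL pattern,
// with ${id} as the placeholder for the gist ID.
struct Basic {
    name: &'static str,
    url_pattern: &'static str,
}

impl Basic {
    fn new(name: &'static str, url_pattern: &'static str) -> Basic {
        Basic { name, url_pattern }
    }
}

// Several pastebins differ only in their URL patterns,
// so a single structure can cover them all.
fn basic_hosts() -> Vec<Basic> {
    vec![
        Basic::new("sprunge", "http://sprunge.us/${id}"),
        Basic::new("hastebin", "https://hastebin.com/raw/${id}"),
    ]
}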

Devices like these aren’t of course specific to Rust: interfaces (traits) and classes are a staple of OO languages in general. But some other techniques were more idiomatic; the concept of iterators, for example, is flexible enough to accommodate looping over a GitHub user’s gists, even though they are read directly from HTTP responses.
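
Here is a rough, hypothetical sketch of how such an iterator can be structured; it is not gisht’s actual code, and fetch_gists_page stands in for an HTTP helper that returns one page of gists plus the URL of the next page (taken from the Link: header), if there is one:

// Stand-ins for the purpose of this sketch.
struct Gist;

fn fetch_gists_page(url: &str) -> (Vec<Gist>, Option<String>) {
    unimplemented!("issue an HTTP GET against {} and parse the JSON response", url)
}

// Lazily iterates over all gists of a user, downloading further
// pages of the API response only when the previous one runs out.
struct UserGists {
    next_page_url: Option<String>,
    current_page: std::vec::IntoIter<Gist>,
}

impl Iterator for UserGists {
    type Item = Gist;

    fn next(&mut self) -> Option<Gist> {
        loop {
            // First, drain the page that has already been downloaded.
            if let Some(gist) = self.current_page.next() {
                return Some(gist);
            }
            // Then fetch the next page, if the API says there is one.
            let url = match self.next_page_url.take() {
                Some(url) => url,
                None => return None,
            };
            let (gists, next) = fetch_gists_page(&url);
            self.current_page = gists.into_iter();
            self.next_page_url = next;
        }
    }
}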

Hacking time

Not everything was sunshine and rainbows, though.

Take clap, for example. It’s mostly a very good crate for parsing command line arguments, but it couldn’t quite cope with the unusual requirements that gisht had. To make gisht Foo/bar work alongside gisht run Foo/bar, it was necessary to analyze argv before even handing it over to clap. This turned out to be surprisingly tricky to get right. Like, really tricky, with edge cases and stuff. But as is often the case in software, the answer turned out to be yet another layer of indirection plus a copious amount of tests.
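
The core of that extra layer is deciding whether the implied run command has to be inserted before the arguments are handed over to clap. A simplified sketch (not gisht’s actual code, and the list of command names is illustrative):

use std::env;

// Subcommands that clap knows about (an illustrative list).
// Anything else in the first position is assumed to be a gist URI,
// in which case the implied "run" command is inserted before it.
const COMMANDS: &[&str] = &["run", "which", "print", "open", "info"];

fn preprocess_argv() -> Vec<String> {
    let mut argv: Vec<String> = env::args().collect();
    if let Some(first_arg) = argv.get(1).cloned() {
        let is_flag = first_arg.starts_with('-');
        let is_command = COMMANDS.iter().any(|&cmd| cmd == first_arg.as_str());
        if !is_flag && !is_command {
            // `gisht Foo/bar` becomes `gisht run Foo/bar`.
            argv.insert(1, "run".to_owned());
        }
    }
    // The result is what actually gets passed to clap
    // (e.g. through App::get_matches_from).
    argv
}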

In another instance, however, direct library support was crucial.

It so happened that hyper, the crate I’ve been using for HTTP requests, didn’t handle the Link: response header out of the box3. This was a stumbling block that prevented the gist iterator (mentioned earlier) from correctly handling pagination in the responses from the GitHub API. Thankfully, having the Header abstraction in hyper meant it was possible to add the missing support in a relatively straightforward manner. Yes, it’s not a universal implementation that’d be suitable for every HTTP client, but it does the job for gisht just fine.
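
The interesting part is parsing the header value itself, which essentially boils down to splitting it into <url>; rel=... segments. Here is a standalone sketch of just that step; the plumbing needed to plug it into hyper’s Header machinery is omitted:

// Sketch: turn a Link: header value such as
//   <https://api.github.com/...?page=2>; rel="next", <...>; rel="last"
// into (rel, url) pairs. Malformed segments are silently skipped.
fn parse_link_header(value: &str) -> Vec<(String, String)> {
    value.split(',')
        .filter_map(|segment| {
            let mut parts = segment.trim().splitn(2, ';');
            let url = parts.next()?
                .trim().trim_matches(|c: char| c == '<' || c == '>');
            let rel = parts.next()?
                .trim().trim_start_matches("rel=").trim_matches('"');
            Some((rel.to_owned(), url.to_owned()))
        })
        .collect()
}

From there, the iterator only needs to look for the pair whose rel is next.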

Test-Reluctant Development

And so the program kept growing steadily over the months, most notably through more and more gist hosts it could now support.

Eventually, some of them would fall into a sort of twilight zone. They weren’t complicated enough to warrant writing a completely new Host implementation, the way GitHub did, but they also couldn’t be handled by the Basic structure alone. A good example would be sprunge.us: mostly an ordinary pastebin, except for its optional syntax highlighting, which may add some “junk” to the otherwise regular URLs.

In order to handle those odd cases, I went for a classic wrapper/decorator pattern which, in its essence, boils down to something like this:

pub struct Sprunge {
    inner: Basic,
}

impl Sprunge {
    pub fn new() -> Self {
        Sprunge{inner: Basic::new(ID, "sprunge.us",
                                  "http://sprunge.us/${id}", ...)}
    }
}

impl Host for Sprunge {
    // override & wrap methods that require custom logic:
    fn resolve_url(&self, url: &str) -> Option<io::Result<Gist>> {
        let mut url_obj = try_opt!(Url::parse(url).ok());
        url_obj.set_query(None);
        self.inner.resolve_url(url_obj.to_string().as_str())
    }

    // passthrough to the `Basic` struct for others:
    fn fetch_gist(&self, gist: &Gist, mode: FetchMode) -> io::Result<()> {
        self.inner.fetch_gist(gist, mode)
    }
    // (etc.)
}

Despite the noticeable boilerplate of a few pass-through methods, I was pretty happy with this solution, at least initially. After a few more unusual hosts, however, it became cumbersome to fix all the edge cases by looking only at the final output of the inner Basic implementation. The code was evidently asking for some tests, if only to check how the inner structure is being called.

Shouldn’t be too hard, right?… Yeah, that’s what I thought, too.

The reality, unfortunately, fell very short of those expectations. Stubs, mocks, fakes — test doubles in general — are a dark and forgotten corner of Rust that almost no one seems to pay any attention to. Absent proper library support — much less language support — the only way forward was to roll up my sleeves and implement a fake Host from scratch.

But that was just the beginning. How do you seamlessly inject this fake implementation into the wrapper so that it replaces the Basic struct for testing? If you are not careful and go for the “obvious” solution — a trait object:

pub struct Sprunge {
    inner: Box<Host>,
}

you’ll soon realize that you need not just a Box, but at least an Rc (or maybe even Arc). Without this kind of shared ownership, you’ll lose your chance to interrogate the test double once you hand it over to the wrapper. This, in turn, will heavily limit your ability to write effective tests.

What’s the non-obvious approach, then? The full rationale would probably warrant a separate post, but the working recipe looks more or less like this:

  • First, parametrize the wrapper with its inner type: pub struct Sprunge<T: Host> { inner: T }.

  • Put that in an internal module with the correct visibility setup:

    mod internal {
        pub struct Sprunge<T: Host> {
            pub(super) inner: T,
        }
    }
    
  • Make the regular (“production”) version of the wrapper into an alias, giving it the type parameter that you’ve been using directly4:

    pub type Sprunge = internal::Sprunge<Basic>;
    
  • Change the new constructor to instantiate the internal type.

  • In tests, create the wrapper with a fake inner object inside.

As you can see in the real example, this convoluted technique removes the need for any pointer indirection. It also permits you to access the out-of-band interface that a fake object would normally expose.
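
To make this concrete, here is roughly what the test side can look like. It builds on the simplified Host trait and stubs sketched earlier, and assumes the Host impl has been made generic over the inner type as well; FakeHost is, of course, made up for the purpose of this example:

#[cfg(test)]
mod tests {
    use std::cell::RefCell;
    use std::io;
    use super::{internal, FetchMode, Gist, Host};

    // A hand-rolled test double: it records every URL it is asked to resolve.
    struct FakeHost {
        resolved_urls: RefCell<Vec<String>>,
    }

    impl Host for FakeHost {
        fn resolve_url(&self, url: &str) -> Option<io::Result<Gist>> {
            self.resolved_urls.borrow_mut().push(url.to_owned());
            Some(Ok(Gist))
        }
        fn fetch_gist(&self, _: &Gist, _: FetchMode) -> io::Result<()> {
            Ok(())
        }
    }

    #[test]
    fn resolve_url_strips_the_query_string() {
        let sprunge = internal::Sprunge {
            inner: FakeHost { resolved_urls: RefCell::new(vec![]) },
        };

        let _ = sprunge.resolve_url("http://sprunge.us/abcdef?rust");

        // The out-of-band interface of the fake reveals what the wrapper
        // actually passed through to its inner host.
        assert_eq!(*sprunge.inner.resolved_urls.borrow(),
                   vec!["http://sprunge.us/abcdef"]);
    }
}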

It’s a shame, though, that so much work is required for something that should be very simple. As it appears, testing is still a neglected topic in Rust.

Packing up

It wasn’t just Rust that played a notable role in the development of gisht.

Pretty soon after getting the app to a presentable state, it became clear that a mere cargo build wouldn’t do everything that’s necessary to carry out a complete build. It could have done more, admittedly, had I had the foresight to explore Cargo build scripts a little more thoroughly. But overall, I don’t regret dropping back to my trusty ol’ pick: Python.

Like in a few previous projects, I used the Invoke task runner for both the crucial and the auxiliary automation tasks. It is a relatively powerful tool — and probably the best in its class in Python that I know of — though it can be a bit capricious if you want to really fine-tune it. But it does make it much easier to organize your automation code, to reuse it between tasks, and to (ahem) invoke those tasks in a convenient manner.

In any case, it certainly beats a collection of disconnected Bash scripts ;)

What have I automated in this way, you may ask? Well, a couple of small things; those include:

  • embedding of the current Git commit hash into the binary, to help identify the exact revision in the logs of any potential bug reports5 (see the build.rs sketch right after this list)

  • after a successful build, replacing the Usage section in README with the program’s --help output

  • generating completion scripts for popular shells by invoking the binary with a magic hidden flag (courtesy of clap)
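
The first of those items, embedding the commit hash, is also the one that footnote 5 says eventually moved into a build.rs script. A minimal sketch of that approach, where GIT_REV is an arbitrary name for the environment variable:

// build.rs -- run by Cargo before compiling the crate itself.
use std::process::Command;

fn main() {
    // Ask Git for the (abbreviated) hash of the current commit.
    let output = Command::new("git")
        .args(&["rev-parse", "--short", "HEAD"])
        .output()
        .expect("failed to run git");
    let hash = String::from_utf8_lossy(&output.stdout);

    // Expose it to the crate as a compile-time environment variable;
    // the program can then read it with env!("GIT_REV").
    println!("cargo:rustc-env=GIT_REV={}", hash.trim());
}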

Undoubtedly the biggest task that I relegated to Python/Invoke was the preparation of release packages. When it comes to the various Linuxes (currently the Debian and Red Hat flavors), this wasn’t particularly complicated. Major thanks are due to the amazing fpm tool here, which I recommend to anyone who needs to package their software in a distro-compatible manner.

Homebrew, however — or more precisely, OS X itself — was quite a different story. Many, many failed attempts were needed to even get it to build on Travis, and the additional dependency on Python was partially to blame. To be fair, however, most of the pain was exclusively due to OpenSSL; getting that thing to build is always loads of “fun”, especially in such an opaque and poorly debuggable environment as Travis.

The wrap

There’s probably a lot of minor things and tidbits I could’ve mentioned along the way, but the story so far has most likely covered all the important topics. Let’s wrap it up then, and highlight some interesting points in the classic Yay/Meh/Nay manner.

Yay
  • It was definitely a good choice to rewrite gisht specifically in Rust. Besides all the advantages I’ve mentioned already, it is also worth noting that the language went through about 10 minor version bumps while I was working on this project. Of all those new releases, I don’t recall a single one that would introduce a breaking change.

  • Most of the Rust ecosystem (third-party libraries) was a joy to use, and very easy to get started with. Honorable mention goes to serde_json and how easy it was to transition the code from rustc_serialize that I had used at first.

  • With the possible exception of sucking in node.js as a huge dependency of your project and using Grunt, there is probably no better way of writing automation & support code than Python. There may eventually be some Rust-based task runners that could try to compete, but I’m not very convinced about using a compiled language for this purpose (and especially one that takes so long to build).

Meh
  • While the clap crate is quite configurable and pretty straightforward to use, it does lack at least one feature that’d be very nice for gisht. Additionally, working with raw clap is often a little tedious, as it doesn’t assist you in translating parsed flags into your own configuration types, and thus requires shuffling those bits manually6.

  • Being the de facto standard for continuous integration in open-source projects, Travis CI could be a little less finicky. In almost every project I decide to use it for, I end up with about half a dozen commits that frantically try to fix silly configuration issues, all before even a simple .travis.yml works as intended. Providing a way to test CI builds locally would be an obvious way to avoid this churn.

Nay
  • Testing in Rust is such a weird animal. On one hand, there is a first-class, out-of-the-box support for unit tests (and even integration tests) right in the toolchain. On the other hand, the relevant parts of the ecosystem are immature or lacking, as evidenced by the dreary story of mocking and stubbing. It’s no surprise that there is a long way to catch up to languages with the strongest testing culture (Java and C#/.NET7), but it’s disappointing to see Rust outclassed even by C++.

  • Getting anything to build reliably on OSX in a CI environment is already a tall order. But if it involves things such as OpenSSL, then it quickly goes from bad to terrible. I’m really not amused anymore by how this “Just Works” system often turns out to hardly work at all.

Since I don’t want to end on such a negative note, I feel compelled to state the obvious fact: every technology choice is a trade-off. In case of this project, however, the drawbacks were heavily outweighed by the benefits.

For this reason, I can definitely recommend the software stack I’ve just described to anyone developing non-trivial, cross-platform command line tools.


  1. This is not an isolated complaint, by the way, as the interpreter startup time has recently emerged as an important issue to many developers of the Python language. 

  2. Which may also include a practical lack thereof. 

  3. It does handle it now, fortunately. 

  4. Observant readers may notice that we’re exposing a technically private type (internal::Sprunge) through a publicly visible type alias. If that type was actually private, this would trigger a compiler warning which is slated to become a hard error at some point in the future. But, amusingly, we can fool the compiler by making it a public type inside a private module, which is exactly what we’re doing here. 

  5. This has since been rewritten and is now done in build.rs — but that’s only because I implemented the relevant Cargo feature myself :) 

  6. For an alternative approach that doesn’t seem to have this problem, check out the structopt crate. 

  7. Dynamically typed languages, due to their rich runtime, are basically a class of their own when it comes to testing ease, so it wouldn’t really be fair to hold them up for comparison. 


In Microsoft we trust

Posted on Fri 08 April 2016 in Thoughts • Tagged with Microsoft, Windows, GitHub, Apple, Facebook, Google, tech culture

Just like many other people, I was following news from the last week’s BUILD conference with piqued interest. The ability to run Linux userland programs on Windows — including, of course, bash — is something to be excited about. If nothing else, it should dramatically improve Windows support of new programming languages that seem to pop up all the time.

There was something else, however, that I couldn’t help but notice. The reactions of tech communities to this and similar developments focused very frequently on Microsoft itself.

The beloved “new Microsoft”, as some call it, embraces open source, supports Linux, and generally does almost a full 180 with their stance on proprietary vs. free software. The circumstances fit this narrative rather snugly, too: a new CEO makes a clean break with the past to pivot the company in this new world ruled by mobile and cloud.

Still… love? Even considering how grotesquely exaggerated Internet comments are most of the time, that’s quite a declaration. The sentiment is nowhere near isolated, either. But the question is not whether this infatuation has any rational merit, or whether those feelings will eventually turn out to be misplaced.

The question is: why does it exist at all? We are talking about a company here, a for-profit organization. How can such a language even enter the picture?

Then I realized this is not really a new phenomenon. Quite the opposite: the broad developer community seems to always need a company to champion its core values. Nowadays, we’re simply trying to find someone new to carry the standard.

Why? Because we feel that our old heroes have forsaken us.

Hall of past fame

Take GitHub, for example. Once a darling of the open source community, it’s been suffering sharp criticism for many months now. Rightfully or not, many people aren’t exactly excited — to put it mildly — about changes to the policy and atmosphere at GitHub, typified by the ill-fated meritocracy rug. The widely backed and long-standing plea for a few critical features has only recently stopped falling on deaf ears. And in the background, there are always concerns about GitHub simply becoming too big, and exerting too much control over the open source ecosystem.

Nota bene, the very same ecosystem it had once been lauded for nurturing.

Among the other flagship tech companies, Apple and Facebook have never garnered much good will. Sure, they are recognized for the pure utilitarian value of the hardware they produce, or for the convenience, broad applicability, and stability of the APIs they offer. Facebook may be scoring some additional points for trying to sort out the mess of frontend development, but many say it’s not doing anybody any favors. Neither company is easy to portray as a paragon of openness, though, and it’s probably easier to argue the exact opposite.

And then there is Google, of course1. Some time in the past few years, a palpable shift occurred in how the company is perceived by the techie crowd. It’s difficult to pinpoint the exact pivotal moment, and the one event that leaps to attention doesn’t seem sufficient to explain it. But the tone has been set, allowing news to be molded to fit it. Add this to the usual backdrop of complaints about the onerous interview process and general fearmongering, and the picture doesn’t look very bright.

Shades of grey

No similar woes seem to have been plaguing Microsoft as of late, even though their record of recent “unwholesome” deeds isn’t exactly clean either. Does it mean they are indeed the most fitting candidate for a (new) enterprise ally of the hacker community?…

Or maybe we can finally recognize the whole notion as the utterly silly concept it actually is. Hard to shake though it may be, this quasi-Manichean mentality of assigning labels of virtue or sin is, at best, naive idealism. At worst, it’s a peculiar kind of harmful partisanship that technologists are particularly susceptible to. You may recognize it as something that has a very long tradition in the hacker community.

But it doesn’t mean it’s a tradition worth keeping.


  1. I have no illusions I will be attributed any objectivity, but it bears mentioning again that nothing I say here is representative of anything but my own opinions. 
