Cessen's Ramblings

2019-01-09

Rust Community Norms for Unsafe Code

I recently released Ropey 1.0, a text rope library for Rust. Ropey uses unsafe code internally, and its use of unsafe unsurprisingly came up in the 1.0 release thread on Reddit.

The ensuing discussion (especially thanks to Shnatsel) helped me significantly reduce the amount of unsafe code in Ropey with minimal (though not non-existent) performance degradation. But the whole thing nevertheless got me thinking about unsafe code and community norms around it, and I figured writing some of those thoughts down might be useful.

My hope is this post will be part of a community-wide discussion, and I would love for others (likely smarter than me) to write their thoughts on this topic as well. This post is simply my take on it.

Different Use-cases

I've gotten the impression that, on the whole, the Rust community's attitude towards unsafe code is something along the lines of, "Don't ever use it unless you're a wizard... and you're not a wizard." Unsafe code, it seems to me, is seen as this incredibly dangerous thing, and if it's in your crate or application then there's something wrong.

(Having said that, I'm not at all sure whether this impression is actually accurate, but I imagine if I've gotten that impression, many others likely have as well.)

In any case, I absolutely don't see unsafe code that way, and I think this is because my use-cases for Rust are different.

As I understand it, Rust more-or-less began its life at Mozilla with the goal of making browsers more secure. From that standpoint I largely agree with avoiding unsafe code. We've seen time and time again that memory safety bugs are both easy to introduce and a significant attack vector. Eliminating that whole class of bugs is good for security.

However, for my own projects I'm mostly uninterested in the web. I suspect this puts me in a minority, both within the Rust community and within the software development community as a whole. The vast majority of software developed today seems to be web-facing in one way or another. But the software I'm most interested in writing runs locally on people's computers and is not web-facing. For example, my 3d renderer.

Now, I'm not saying there aren't security concerns with local software. Local software has a way of eventually finding itself on networks (for example, render farms in the case of 3d rendering). Nevertheless, the cost-benefit analysis is different: trading some risk of unsafety for better performance is almost always the right choice for something like an offline 3d renderer.

So for me Rust's memory safety guarantees aren't about security, they're about making software development easier. Tracking down memory safety bugs is a pain—in my experience, far more of a pain than fighting the borrow checker. But if I need to drop into unsafe code in a few places to squeeze more performance out of my renderer, I'm certainly going to do it.
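To make the trade-off concrete, here's a minimal sketch of the kind of thing I mean. This is a hypothetical example, not code from Ropey: it hoists a bounds check out of a hot loop once, then uses unchecked indexing inside the loop.

```rust
// Hypothetical example (not from Ropey): gather-sum over a slice,
// with the bounds checks hoisted out of the hot loop.
fn sum_bytes(data: &[u8], indices: &[usize]) -> u64 {
    // Establish the safety invariant up front, once.
    assert!(indices.iter().all(|&i| i < data.len()));
    let mut total = 0u64;
    for &i in indices {
        // SAFETY: every index was verified against data.len() above,
        // and neither slice is mutated inside this loop.
        total += unsafe { *data.get_unchecked(i) } as u64;
    }
    total
}

fn main() {
    let data = [10u8, 20, 30, 40];
    let indices = [0, 2, 3];
    println!("{}", sum_bytes(&data, &indices)); // 10 + 30 + 40 = 80
}
```

Whether this actually beats the safe version depends on the workload and on how well the optimizer elides the checks on its own, which is exactly why it's worth measuring before reaching for unsafe.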

Of course, there are a lot of use-cases where that is absolutely the wrong trade-off, and I suspect that includes most software being written today. But I think it's worth making this distinction: different software has (legitimately) different priorities. And I would like to see more acknowledgment of that in the Rust ecosystem.

Which leads me to the next part of this post...

Marking Crates

Ropey has a prominent notice in its readme about its use of unsafe code. I put it there precisely because of varying use-cases: people who are writing software where security is a high priority probably shouldn't use Ropey, or should only utilize it in a compartmentalized way. On the other hand, people who are writing high performance text editors that can handle crazy-huge files without breaking a sweat...? That's precisely what I designed Ropey for.

Ropey's readme notice—or something like it—is something I would like to see adopted as a norm in the Rust community. Different software has different needs, and I don't think a one-size-fits-all attitude towards unsafe code makes sense. But I do think we all need to be forthright about the priorities of our crates. In fact, that forthrightness probably shouldn't be limited to just the use of unsafe code.

So I guess what I'm saying is: I don't know exactly what I want this to look like, but I would like there to be some generally accepted way for crates to advertise what their priorities are. This would allow people to better choose crates with trade-offs and cost/benefit analysis appropriate to their own projects.

Safety Feature Flag

At the most recent Seattle Rust Meetup I had a great conversation about unsafety with Ivan (don't know his last name, alas, but thanks Ivan!) and he had an excellent idea for Ropey that I think may be more widely applicable: provide a feature flag that switches to an all-safe version of Ropey. In Ropey's case this is pretty straightforward: the unsafe code is well compartmentalized, and could be easily swapped out with safe (but slower / more memory hungry) equivalents based on a feature flag.

I am seriously considering this for Ropey, and I think it can apply to other crates as well. Maybe having a standard flag for this would be good. It obviously wouldn't be useful for every crate, but it would allow crate consumers to make this trade-off themselves whenever a crate does provide the flag.
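A sketch of what that could look like, with a hypothetical feature name (`unsafe_opt`) and a deliberately tiny function standing in for real internals:

```rust
// Hypothetical sketch of the feature-flag idea. In Cargo.toml:
//
//     [features]
//     unsafe_opt = []   # opt in to the faster unsafe code paths
//
// The crate then selects an implementation at compile time:

#[cfg(feature = "unsafe_opt")]
fn byte_at(data: &[u8], i: usize) -> u8 {
    debug_assert!(i < data.len());
    // SAFETY: internal callers uphold i < data.len();
    // checked in debug builds via the assert above.
    unsafe { *data.get_unchecked(i) }
}

#[cfg(not(feature = "unsafe_opt"))]
fn byte_at(data: &[u8], i: usize) -> u8 {
    // All-safe fallback: the bounds check stays in.
    data[i]
}

fn main() {
    let data = [1u8, 2, 3];
    println!("{}", byte_at(&data, 1));
}
```

One wrinkle worth noting: Cargo features are additive by design, so a flag that means "turn unsafe off" composes a bit awkwardly across a dependency graph. That's part of why a community-standard convention, rather than every crate inventing its own, would be valuable.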

But Can't We Just Make Safe Code Faster?

One of the things I legitimately love about the Rust ecosystem is the push to get unsafe code compartmentalized into crates with safe APIs. Having a rich set of crates that are working hard to make unsafe code correct, and expose it safely for others, is super awesome.
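The pattern, in miniature, looks something like this hypothetical example: the unsafe code lives inside a type that maintains its own invariant, so the public API stays safe for everyone else.

```rust
// Hypothetical sketch: unsafe confined inside a type whose public
// API is safe because the type enforces its invariant at construction.
struct Palette {
    colors: Vec<u32>,
}

impl Palette {
    fn new(colors: Vec<u32>) -> Palette {
        // Invariant: the palette is never empty.
        assert!(!colors.is_empty(), "palette must be non-empty");
        Palette { colors }
    }

    /// Safe for any index: out-of-range indices wrap around,
    /// even though the body uses unchecked indexing internally.
    fn color(&self, i: usize) -> u32 {
        let idx = i % self.colors.len();
        // SAFETY: idx < colors.len() by construction of `idx`,
        // and `colors` is non-empty (enforced in `new`).
        unsafe { *self.colors.get_unchecked(idx) }
    }
}

fn main() {
    let p = Palette::new(vec![0xFF0000, 0x00FF00]);
    println!("{:#x}", p.color(3)); // index 3 wraps to index 1
}
```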

However, this will never be able to cover all possible situations. Covering reasonably common cases is feasible, but eventually it becomes a game of whac-a-mole. There are always going to be things that won't be covered in the ecosystem, especially the more niche you get.

Moreover, some optimizations are specific to the needs of a holistic design. (John Carmack's talk about systems engineering explains this much better than I can, and is also just a great talk.) And sometimes these sorts of optimizations will require unsafe code, and because of their specificity won't be generally useful as a separate crate.

For these and other reasons, I believe there will always be legitimate cases for using unsafe code in one-off ways. The more we can shrink the number of such cases, the better, for sure. But it will never shrink to zero.

Wrapping Up

I don't mean any of this to be an "excuse" for people using unsafe code haphazardly. Even in my own case, I was able to significantly reduce (though not eliminate) the unsafe code in Ropey with a little prodding from Shnatsel on Reddit. When you're in a performance mindset it's easy to reach for the lowest level before you strictly need to.

But legitimate uses of unsafe code are inevitable, and I think a "know and broadcast your use-case" ethos would be healthier and more useful for the Rust community than a more-or-less "don't use unsafe code" attitude.