Cessen's Ramblings

2014-10-27

Zed: a Code Editor

I like to write code. And like many such people, I am constantly in search of the perfect code editor. Of course, such a tool does not actually exist. But nevertheless, from time to time I like to try new editors that seem to present something new and interesting, to see if I can improve my code editing experience.

Most recently, over the last year or so, I've been playing with editors built on top of web technology. Specifically, I've been playing with Light Table, Brackets, Atom, and Zed.

I started off with Light Table—which is a really cool and innovative project, and I certainly recommend checking it out—but I pretty quickly migrated to Brackets (which is a bit more conventional) for most of my coding.

Brackets is a really cool editor, with an obvious focus on UX concerns. The editor itself looks beautiful, and I appreciated how it eschewed tabs in favor of a simple vertical list of open documents. Despite Brackets being targeted at web development, I primarily used it for C++ coding. And it worked wonderfully. The only real complaints I had about it at the time were that its scrolling and editing were a bit laggy, and it didn't support split views. (Incidentally, it now supports split view.)

Then GitHub announced Atom, and not too long after that released it as open source. Atom has split view capabilities, and in general looked like a nice editor, so I gave it a try. Despite the split view feature, and ostensibly being better suited to non-web-development coding, I found I just didn't like it as much as Brackets. So I went back to Brackets for a while.

Then I somehow stumbled upon Zed. I don't actually recall how I found it, but I did. And boy am I glad I did.

Zed, in my opinion, is the best code editor I have ever used. It makes some really bold UX decisions:

  • There is no manual saving. The editor automatically and continuously saves your documents as you are editing. This takes some getting used to, but it's actually really fantastic. It means documents are never in a "dirty" state, and you never have to worry about saving.
  • Zed has an infinite undo stack that is preserved across editing sessions. You can close a file, close Zed, and restart your computer if you like, and when you open the file again you still have your full undo stack all the way back to the first time you edited that file. This works extremely well in combination with the continuous saving.
  • No tabs or open-document lists, just quick-open shortcuts. Since documents are never in an unsaved state, there isn't any need for an "open" set of files. The only files that are open are the files currently displayed on screen for editing. You can jump between documents easily using various shortcuts.

Zed's unique (dare I say innovative?) approach means it takes some getting used to. But after using it for a few weeks, I find it really difficult to go back to other editors. Zed really smooths out the editing workflow in a way that I've never seen before, and it is quite fantastic. It really changes the feel of working with a code base. Trying to go back to other editors now feels very much like a step backwards.

Moreover, despite being built on top of web technologies, Zed is surprisingly snappy. Unlike Light Table, Brackets, and Atom—all of which feel sluggish and laggy—Zed feels almost like a native application. I have yet to experience any lag in scrolling or editing except in really egregious circumstances. Opening files is also extremely snappy for all but the largest of files.

There is plenty more to Zed than what I've outlined above. It is a project-oriented editor (you start a session by selecting a root directory for the editing session), so it is not well suited to editing individual documents. But what you gain from that is project-wide code indexing for jumping to symbol definitions, project-wide search, etc. It is a very capable code editor in general, with features like multi-cursor support. And to top it all off, it's cross-platform and open source.

I foresee this being my code editor of choice for quite some time. I highly recommend checking it out.

2014-06-25

Psychopath Renderer Website

I've created a blog/website for my 3d renderer: http://psychopath.io

Woo hoo!

2014-02-14

Ray Reordering and Breadth-First Ray Tracing

As I mentioned in my post about random number generation, I've been working on an experimental 3d renderer named "Psychopath".

I haven't posted much about it, but I would like to change that. Psychopath is easily the largest one-man personal project I've ever undertaken, and it's something I regularly get excited and obsessed about, disappearing into my own little programming world, often not eating for extended periods because I don't want to stop working on it.

So I'm going to start by writing about a core aspect of Psychopath that makes it different from most renderers I've seen out there: ray reordering.

Psychopath is what is commonly referred to as a "path tracer", which means (in over-simplified terms) that it is based on tracing rays of light through the scene, bouncing all over the place.

The traditional way to do this kind of ray tracing is to trace one ray through the scene at a time. This has the advantage that you only have to store one ray at a time in memory, and also has the advantage of a large body of research to draw from for making it as fast as possible. It has the disadvantage, however, of accessing the scene in a very incoherent way.

There are a couple of alternative ways of doing ray tracing, both of which are based around the idea of handling many rays at once, and both of which have many variations (some of which blur the distinction between the two).

The first is called "ray reordering". The basic idea behind ray reordering is to queue many rays to be traced, and adjust the order in which they are actually traced to improve access patterns of the scene. Sometimes this means just reordering the rays based on some criteria before any of them have been traced, and then tracing them in the typical one-at-a-time fashion. But typically ray reordering schemes are more complex, and involve partially tracing rays through the scene and pausing them at strategic points to be resumed later, allowing more fine-grained reordering.
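For the simpler "sort first, then trace" flavor, a sketch might bucket rays by the octant of their direction, so that rays which traverse the scene in broadly similar orders end up adjacent in memory. (The sort key and data layout here are illustrative assumptions, not the scheme any particular renderer uses.)

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Ray {
    float ox, oy, oz;  // origin
    float dx, dy, dz;  // direction
};

// Map a ray's direction to one of 8 octants (one bit per axis sign).
// Rays in the same octant visit the scene in broadly similar orders.
uint32_t direction_octant(const Ray& r) {
    return (r.dx < 0.0f ? 1u : 0u)
         | (r.dy < 0.0f ? 2u : 0u)
         | (r.dz < 0.0f ? 4u : 0u);
}

// Reorder a queued batch of rays so rays with similar directions are
// adjacent, improving coherence of scene access when they are
// subsequently traced one at a time.
void reorder_by_octant(std::vector<Ray>& rays) {
    std::stable_sort(rays.begin(), rays.end(),
                     [](const Ray& a, const Ray& b) {
                         return direction_octant(a) < direction_octant(b);
                     });
}
```

The more sophisticated pause-and-resume schemes described above layer queuing and per-ray traversal state on top of this same basic idea.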

The second is called "breadth-first ray tracing". Breadth-first ray tracing, like ray reordering, also deals with many rays at once. However, while ray reordering is still fundamentally dealing with tracing individual rays through the scene (just rearranged), breadth-first ray tracing deals with an entire batch of rays at every stage of the ray tracing process. So, for example, when traversing the acceleration structure of the scene, breadth-first ray tracing will test every relevant ray against a node when visiting it.
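That per-node batch test can be sketched as a partition: test every active ray against the node's bounds and move the hitters to the front, so that the hitting prefix is what descends into the node's children. (The slab test and data layout here are illustrative assumptions.)

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Ray { float ox, oy, oz, dx, dy, dz, max_t; };
struct AABB { float min[3], max[3]; };

// Standard slab test: does the ray hit the box within [0, max_t]?
bool hit_aabb(const Ray& r, const AABB& b) {
    float o[3] = {r.ox, r.oy, r.oz};
    float d[3] = {r.dx, r.dy, r.dz};
    float tmin = 0.0f, tmax = r.max_t;
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / d[i];
        float t0 = (b.min[i] - o[i]) * inv;
        float t1 = (b.max[i] - o[i]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Breadth-first node visit: test the whole batch [begin, end) against
// one node's bounds, partitioning so the rays that hit come first.
// Returns the end of the hitting prefix, which is what descends into
// the node's children.
size_t visit_node(std::vector<Ray>& rays, size_t begin, size_t end,
                  const AABB& bounds) {
    auto mid = std::partition(rays.begin() + begin, rays.begin() + end,
                              [&](const Ray& r) { return hit_aabb(r, bounds); });
    return size_t(mid - rays.begin());
}
```

The win is that the node's data is fetched once per batch instead of once per ray.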

So what does this all have to do with Psychopath? Well, Psychopath has a weird quirk that most ray tracers don't have: Psychopath dynamically dices surfaces into sub-pixel geometry before testing rays against them. And it doesn't do this in a preparation stage before rendering starts, it does it on-demand when a ray needs to be tested against a surface. This is much like the Reyes rendering architecture.

The first implementation of Psychopath accomplished this with standard one-ray-at-a-time ray tracing and a geometry cache. Whenever a surface was diced, the result was stored in a fixed-size LRU cache to be used again by later rays. That worked pretty well, but doing things that way has some problems. First, there was a hard performance cliff when the geometry cache wasn't large enough. Especially for more complex scenes, that could become a real problem. Second, since the cache was accessed separately for every single ray, there was a lot of fine-grained locking going on for multi-threading, causing a lot of thread contention.

So the second implementation of Psychopath used ray reordering to allow the cache access (and thus locking) to be amortized over many rays. I used the ray reordering scheme from the paper "Two-Level Ray Tracing with Reordering for Highly Complex Scenes" by Hanika et al. This worked remarkably well at solving both issues. The ray reordering amortized not only cache access over many rays, but also cache misses. Even with a tiny cache that would normally slow rendering to a crawl, this new scheme rendered reasonably quickly.

This method worked really well, right up until I wanted to add object instancing. The trouble with object instancing is that it turns the scene acceleration structure from a tree into a DAG. And the reason that poses an issue is that traversing a DAG requires more state to be maintained. With ray reordering, you have to maintain traversal state for each of a large number of rays. When traversing a tree, you can make that state very small (e.g. this paper). But when traversing a DAG, you have to maintain a stack for each ray, which significantly limits the number of rays that you can hold in RAM at once.

This is where breadth-first ray tracing comes in. With breadth-first ray tracing, you can maintain just a single stack for your entire set of rays. This allows you to easily and efficiently handle instancing, among other things. There are downsides, however. In general, it tends to be notably slower than single-ray tracing or ray reordering (at least with the current research out there). But nevertheless, it seems to fit the goals of Psychopath better than single-ray tracing or ray reordering, so I've decided to go that direction and see how fast I can make it.
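The "single stack for the whole batch" idea can be sketched as an iterative traversal whose stack entries are (node, ray range) pairs. Rays are partitioned in place at each node, so the active set is always a contiguous range, and no per-ray traversal state exists at all. Because children are referenced by index, two parents can share a child, so the same code handles a DAG. (This is a bare-bones illustration under assumed data structures, not Psychopath's stream-tracing code.)

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Caller initializes hit_leaf to false before tracing.
struct Ray { float o[3], d[3]; bool hit_leaf; };

struct Node {
    float bmin[3], bmax[3];
    // Child indices into the node array.  Because these are indices
    // rather than owned pointers, two parents may share a child, so
    // the structure can be a DAG (as with object instancing).
    std::vector<size_t> children;  // empty => leaf
};

// Standard slab test against a node's bounding box.
bool hit_box(const Ray& r, const Node& n) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / r.d[i];
        float t0 = (n.bmin[i] - r.o[i]) * inv;
        float t1 = (n.bmax[i] - r.o[i]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Trace a whole batch of rays with ONE traversal stack.  Each stack
// entry is a node plus the range of rays still active at that node.
void trace_batch(const std::vector<Node>& nodes, std::vector<Ray>& rays) {
    struct Entry { size_t node, begin, end; };
    std::vector<Entry> stack{{0, 0, rays.size()}};  // start at the root
    while (!stack.empty()) {
        Entry e = stack.back();
        stack.pop_back();
        // Partition so the rays that hit this node come first.
        auto mid = std::partition(
            rays.begin() + e.begin, rays.begin() + e.end,
            [&](const Ray& r) { return hit_box(r, nodes[e.node]); });
        size_t m = size_t(mid - rays.begin());
        if (m == e.begin) continue;  // no ray hit this node
        const Node& n = nodes[e.node];
        if (n.children.empty()) {
            for (size_t i = e.begin; i < m; ++i) rays[i].hit_leaf = true;
        } else {
            for (size_t c : n.children) stack.push_back({c, e.begin, m});
        }
    }
}
```

Note that reordering rays inside a subrange never changes which rays a pending stack entry refers to, since every entry's range always contains the same set of rays, just possibly shuffled.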

I have a basic implementation of breadth-first ray tracing in the Stream Tracing branch of Psychopath. It is indeed slower than ray reordering on most scenes (although there are some interesting exceptions). But I'm hoping I can narrow that gap in performance. We'll see.

2012-06-26

Pseudo-Random Number Generators

Holy crap it's been a long time since I've blogged here. And you know what? I'm fine with that. I'll blog when I actually have something I want to share. Like now, for example.

So, I've been pecking away at a 3d renderer that I've dubbed "Psychopath". I've put it up on GitHub: https://github.com/cessen/psychopath_cpp

Anyway, I'm discovering all sorts of nifty little things as I'm working on this, many of which are handy and general-purpose. The latest one is a simple pseudo-random number generator that is quite robust.

For most simulation applications (advanced 3d rendering can be considered one), the random functions included in typical C and C++ standard libraries are insufficient. Moreover, it is important to be able to guarantee that the RNG you are using is consistent across platforms and satisfies specific properties of randomness. Therefore it is generally a good idea, in these applications, to include a custom RNG implementation and use that.

The go-to RNG for such purposes these days is the Mersenne Twister, and as such that is the RNG I've been using in Psychopath up to this point. But the code for it is quite complex. And as it turns out, there are other RNGs out there that are pretty much just as good, but have far simpler code.

Several such RNGs are covered in the short and accessible paper "Good Practice in (Pseudo) Random Number Generation for Bioinformatics Applications" by David Jones. I have opted to use his JKISS generator, which is a variation on George Marsaglia's KISS (Keep It Simple Stupid) RNG.

You can see my implementation here.
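To give a sense of just how simple it is, here is the JKISS generator essentially as given in Jones's paper, wrapped in a small struct. The seed constants are the defaults from the paper; if you re-seed it yourself, y must never be zero or the xorshift component gets stuck at zero.

```cpp
#include <cstdint>

// JKISS combines three simple generators — a linear congruential
// generator, a xorshift generator, and a multiply-with-carry
// generator — and sums their outputs.  The combination passes the
// heavy statistical test batteries described in Jones's paper.
struct JKISS {
    uint32_t x = 123456789;
    uint32_t y = 987654321;  // must never be seeded to zero
    uint32_t z = 43219876;
    uint32_t c = 6543217;

    uint32_t next() {
        x = 314527869 * x + 1234567;             // LCG
        y ^= y << 5; y ^= y >> 7; y ^= y << 22;  // xorshift
        uint64_t t = 4294584393ULL * z + c;      // multiply-with-carry
        c = uint32_t(t >> 32);
        z = uint32_t(t);
        return x + y + z;
    }
};
```

Because the state is just four 32-bit words and the update is a handful of arithmetic ops, it is trivial to keep a separate generator per thread, which also sidesteps any synchronization concerns.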

The generator has very good statistical properties, passing a large battery of intensive tests as outlined in the paper. In fact, it appears to be just as robust as the Mersenne Twister in almost every respect. The only way in which it is less robust is period length. Mersenne Twister has the enormous period 2^19937-1, whereas JKISS has a period of roughly 2^127. JKISS's period is still very large, however, and is more than sufficient for most practical applications, 3d rendering included.

For applications that require larger periods, several other simple-yet-robust generators are also listed in the paper, some of which have enormous periods exceeding even the Mersenne Twister.