Consider this excerpt from the latest entry in my current favorite series of programming advice posts: “Role Of Algorithms”, by matklad:

“Algorithms” are a useful skill not because you use it at work every day, but because they train you to be better at particular aspects of software engineering.

...

Second, algorithms teach about properties and invariants. Some lucky people get those skills from a hard math background, but algorithms are a much more accessible way to learn them, as everything is very visual, immediately testable, and has very short and clear feedback loop.

There's a lot here I agree with, or at least consider worth thinking about. However, the article is focused on the process of learning to program, and I'd like to suggest a different perspective too: rather than “you don't use algorithms at work every day”, every single function you write can be considered an algorithm. Many of them are trivial algorithms, but if you have experience with looking through the algorithmic lens, you can notice useful facts as you write (or read), like:

  • “Oops, this is going to be O(N²); can I do it more efficiently?”
  • “This almost has an invariant or postcondition that …, but it could be violated if …, so let's think about where to check for that and what we want to do about it then, or what will happen if we don't check.”
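As a toy illustration of the first bullet (hypothetical code, not drawn from any particular project): the quadratic version hides a linear scan inside its loop, and the algorithmic lens is what makes that scan jump out.

```rust
use std::collections::HashSet;

// O(N²): `Vec::contains` scans the whole output-so-far for every element.
fn dedup_quadratic(items: &[String]) -> Vec<String> {
    let mut out: Vec<String> = Vec::new();
    for item in items {
        if !out.contains(item) {
            out.push(item.clone());
        }
    }
    out
}

// O(N): a HashSet membership test is amortized constant time.
fn dedup_linear(items: &[String]) -> Vec<String> {
    let mut seen = HashSet::new();
    let mut out = Vec::new();
    for item in items {
        // `insert` returns true only the first time a value is seen.
        if seen.insert(item.clone()) {
            out.push(item.clone());
        }
    }
    out
}

fn main() {
    let data: Vec<String> =
        ["a", "b", "a", "c", "b"].iter().map(|s| s.to_string()).collect();
    assert_eq!(dedup_quadratic(&data), dedup_linear(&data));
    println!("{:?}", dedup_linear(&data));
}
```

Both versions preserve first-occurrence order, so the outputs agree; only the cost differs.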

While I worked at Google I spent a lot of time doing testing and code review. One of the things I learned from this is:

A test must document what it is testing.

This doesn’t have to be an explicit comment (it might be adequately explained by the name of the test, or the code in it), but either way, it should be obvious to maintainers what the goal of the test is — what property the code under test should have. “This test should continue to pass” is not a sufficient explanation of the goal. Why? Consider these scenarios:

  • A test failed. Does that mean…

    • …the test successfully detected the loss of a desired behavior? (This is the ideal case, that we hope to enable by writing and running tests.)

    • …the test incidentally depended on something that is changing but is not significant, such as ordering of something unordered-but-deterministic, or the exact sequence of some method calls? (Ideally, test assertions will be written precisely so that they accept everything that meets requirements, but this is not always feasible or correctly anticipated.)

    • …the test incidentally depended on something that is being intentionally changed, but it wasn’t intending to test that part of the system, which has its own tests?

  • A test no longer compiles because you deleted a function (or other declaration/item/symbol) it used. Does that mean…

    • …the test demonstrates a reason why you should not delete that function?

    • …the test needs to be rewritten to test the thing it is about, without incidentally using that function?

    • …the function is necessary to perform this test, so it should continue to exist but be private?

If tests specify the property they are looking for, then maintainers can quickly decide how to respond to a test failure — whether to count it a bug in the code under test, or to change or delete the test.

If they do not, then maintainers will waste time keeping the code conformant to obsolete requirements, or digging through revision history to determine the original intent.
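To make this concrete in Rust (a made-up example, not from any real test suite): the comment states the property and who relies on it, so a future failure can be triaged without archaeology.

```rust
/// Property under test: `sort_by_key` must be *stable* — elements with
/// equal keys keep their original relative order. (In this made-up
/// scenario, the renderer's draw order relies on that stability.)
/// If this fails, the bug is in the sort, not in this test's expectations.
#[test]
fn sort_by_key_is_stable() {
    let mut pairs = vec![("b", 1), ("a", 2), ("b", 3)];
    pairs.sort_by_key(|&(k, _)| k);
    // The two "b" entries must still appear with 1 before 3.
    assert_eq!(pairs, vec![("a", 2), ("b", 1), ("b", 3)]);
}
```

Compare the same assertion with no stated goal: when a library swap makes it fail, nobody knows whether stability was a requirement or an accident of the old implementation.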

The piece of software I've mostly been working on recently is still All is Cubes (posts). However, this one project has sprawled out into a lot of things to do with Rust.

For example, something about my coding style (or maybe my attention to error messages) seems to turn up compiler bugs — such as #80772, #88630, #95324, #95873, #97205, #104025, #105645, #108481, and #109067. (Which is not to say that Rust is dreadfully buggy — on average, the experience is reliable and pleasant.) I'm also taking some steps into contributing to the compiler, so I can get some bugs (and pet peeves) fixed myself. (The first step, though, was simply running my own code against rustc nightly builds, so I can help spot these kinds of bugs before they get released.)

I've also taken some code from All is Cubes and split it out into libraries that might be useful for others.

  • The first one was exhaust, a trait-and-macro library that provides the ability to take a type (that implements the Exhaust trait) and generate all possible values of that type. The original motivation was to improve on (for my purposes) strum::IntoEnumIterator (which does this for enums but always leaves the enum’s fields with the Default value) by generating enums and structs arbitrarily recursively.

    (Exhaustive iteration is perhaps surprisingly feasible even in bigger domains than a single enum; if you have a simple piece of arithmetic, for example, it only takes a few seconds to run it on every 32-bit integer or floating-point value, and look for specific outcomes or build a histogram of the results.)

  • The second one, published just yesterday, is rendiff, a (slightly) novel image diffing algorithm which I invented to compare the output of All is Cubes’ renderer test cases. Its value is that it is able to compensate for the results of rounding errors on the positions of the edges of objects in the scene — instead of such errors counting against a budget of allowable wrong pixels, they're just counted as correct, by observing that they exist at a neighboring position in the other image.
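The core idea can be sketched in a few lines (a much-simplified, one-directional illustration of my own — the actual rendiff algorithm is more careful than this): a pixel counts as matching if its color appears at the same or any neighboring position in the other image, so a one-pixel edge shift is forgiven.

```rust
// Simplified sketch of neighbor-tolerant image comparison; NOT the actual
// rendiff algorithm. Images are row-major Vec<u32> pixels.
struct Image {
    width: i32,
    height: i32,
    pixels: Vec<u32>,
}

impl Image {
    fn get(&self, x: i32, y: i32) -> Option<u32> {
        if x < 0 || y < 0 || x >= self.width || y >= self.height {
            None
        } else {
            Some(self.pixels[(y * self.width + x) as usize])
        }
    }
}

/// Count pixels of `a` whose color appears neither at the same position
/// nor at any of the 8 neighboring positions in `b`.
fn strict_mismatches(a: &Image, b: &Image) -> usize {
    let mut count = 0;
    for y in 0..a.height {
        for x in 0..a.width {
            let pa = a.get(x, y).unwrap();
            let near_match = (-1..=1)
                .any(|dy| (-1..=1).any(|dx| b.get(x + dx, y + dy) == Some(pa)));
            if !near_match {
                count += 1;
            }
        }
    }
    count
}

fn main() {
    // The same vertical edge, shifted one pixel by rounding:
    // every pixel finds its color within one pixel's distance.
    let a = Image { width: 4, height: 1, pixels: vec![1, 1, 2, 2] };
    let b = Image { width: 4, height: 1, pixels: vec![1, 2, 2, 2] };
    assert_eq!(strict_mismatches(&a, &b), 0);
    println!("strict mismatches: {}", strict_mismatches(&a, &b));
}
```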

I woke up last night with a great feature idea which on further examination was totally inapplicable to All is Cubes in its current state. (I was for some dream-ish reason imagining top-down tile- and turn-based movement, and had the idea to change the cursor depending on whether a click was a normal move or an only-allowed-because-we're-debugging teleport. This is totally unlike anything I've done yet and makes no sense for a first-person free movement game. But I might think about cursor/crosshair changes to signal what clicking on a block would do.)

That seems like a good excuse to write a status update. Since my last post I've made significant progress, but there are still large missing pieces compared to the original JavaScript version.

(The hazard in any rewrite, of course, is second-system effect — “we know what mistakes we made last time, so let's make zero of them this time”, with the result that you add both constraints and features, and overengineer the second version until you have a complex system that doesn't work. I'm trying to pay close attention to signs of overconstraint.)

[screenshot]

Now done:

  • There's a web server you can run (aic-server) that will serve the web app version; right now it's just static files with no client-server features.

  • Recursive blocks exist, and they can be rendered both in the WebGL and raytracing modes.

  • There's an actual camera/character component, so we can have perspective projection, WASD movement (but not yet mouselook), and collision.

    For collision, right now the body is considered a point, but I'm in the middle of adding axis-aligned box collisions. I've improved on the original implementation in that I'm using the raycasting algorithm rather than making three separate axis-aligned moves, so we have true “continuous collision detection” and fast objects will never pass through walls or collide with things that aren't actually in their path.

  • You can click on blocks to remove them (but not place new ones).

  • Most of the lighting algorithm from the original, with the addition of RGB color.

    Also new in this implementation, Space has an explicit field for the “sky color” which is used both for rendering and for illuminating blocks from outside the bounds. This actually reduces the number of constants used in the code, but also gets us closer to “physically based rendering”, and allows having “night” scenes without needing to put a roof over everything. (I expect to eventually generalize from a single color to a skybox of some sort, for outdoor directional lighting and having a visible horizon, sun, or other decorative elements.)

  • Rendering space in chunks instead of a single list of vertices that has to be recomputed for every change.

  • Added a data structure (EvaluatedBlock) for caching computed details of blocks like whether their faces are opaque, and used it to correctly implement interior surface removal and lighting. This will also be critical for efficiently supporting things like rotated variants of blocks. (In the JS version, the Block type was a JS object which memoized this information, but here, Block is designed to be lightweight and copiable (because I've replaced having a Blockset defining numeric IDs with passing around the actual Block and letting the Space handle allocating IDs), so it's less desirable to be storing computed values in Block.)

  • Made nearly all of the GL/luminance rendering code not wasm-specific. That way, we can support "desktop application" as an option if we want to (I might do this solely for purposes of being able to graphically debug physics tests) and there is less code that can only be compiled with the wasm cross-compilation target.

  • Integrated embedded_graphics to allow us to draw text (and other 2D graphics) into voxels. (That library was convenient because it came with fonts and because it allows implementing new drawing targets as the minimal interface "write this color (whatever you mean by color) to the pixel at these coordinates".) I plan to use this for building possibly the entire user interface out of voxels — but for now it's also an additional tool for test content generation.
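That minimal interface can be sketched like so (a hand-rolled illustration, not the actual embedded_graphics DrawTarget trait): the target only knows how to write one pixel, and shared drawing primitives are built on top, whether the “pixels” are screen pixels or voxels.

```rust
// Hand-rolled sketch of the "minimal drawing target" idea; NOT the real
// embedded_graphics API.
trait PixelTarget {
    type Color;
    fn set_pixel(&mut self, x: i32, y: i32, color: Self::Color);
}

/// A voxel-backed target: "pixels" become colored cubes in a z=0 plane.
struct VoxelSlab {
    size: i32,
    voxels: Vec<u8>, // color indices, row-major
}

impl PixelTarget for VoxelSlab {
    type Color = u8;
    fn set_pixel(&mut self, x: i32, y: i32, color: u8) {
        if (0..self.size).contains(&x) && (0..self.size).contains(&y) {
            self.voxels[(y * self.size + x) as usize] = color;
        }
    }
}

/// A primitive written once, usable with any target: a horizontal line.
fn draw_hline<T: PixelTarget>(t: &mut T, x0: i32, x1: i32, y: i32, c: T::Color)
where
    T::Color: Copy,
{
    for x in x0..=x1 {
        t.set_pixel(x, y, c);
    }
}

fn main() {
    let mut slab = VoxelSlab { size: 4, voxels: vec![0; 16] };
    draw_hline(&mut slab, 0, 3, 1, 7);
    // Row y=1 (indices 4..8) is now filled with color 7.
    assert_eq!(slab.voxels[4..8], [7, 7, 7, 7]);
    println!("{:?}", slab.voxels);
}
```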

Still to do that original Cubes had:

  • Mouselook/pointer lock.
  • Block selection UI and placement.
  • Any UI at all other than movement and targeting blocks. I've got ambitious plans to build the UI itself out of blocks, which both fits the "recursive self-defining blocks" theme and means I can do less platform-specific UI code (while running headlong down the path of problematically from-scratch inaccessible video game UI).
  • Collision with recursive subcubes rather than whole cubes (so slopes/stairs and other smaller-than-an-entire-cube blocks work as expected).
  • Persistence (saving to disk).
  • Lots and lots of currently unhandled edge cases and "reallocate this buffer bigger" cases.

Stuff I want to do that's entirely new:

  • Networking: if not multiplayer, then at least having the web client save its world data to a server. I've probably already gone a bit too far down the path of writing a data model without consideration for networking.

In my previous post, I said “Rust solved the method chaining problem!” Let me explain.

It's popular these days to have “builders” or “fluent interfaces”, where you write code like

let house = HouseBuilder()
    .bedrooms(2)
    .bathrooms(2)
    .garage(true)
    .build();

The catch here is that (in a “conventional” memory-safe object-oriented language, not Rust) each of the methods here has the option of:

  1. Mutating self/this/recipient of the message (I'll say self from here on), and then returning self.
  2. Returning a different object with the new configuration.
  3. “Both”: returning a new object which wraps self, and declaring it a contract violation for the caller to use self further (with or without actually documenting that contract).

The problem — in my opinion — with the fluent interface pattern by itself is that it’s underconstrained in this way: in a type (1) case, which is often the simplest to implement, the caller is free to completely ignore the return values,

let hb = HouseBuilder();
hb.bedrooms(2);
hb.bathrooms(2);
hb.garage(true);
let house = hb.build();

but this means that the fluent interface cannot change from a type (1) implementation to a type (2) or (3) one, even if this is a non-breaking change to the intended usage pattern. Or to look at it from the “callee misbehaves” angle rather than “caller misbehaves”, the builder is free to return something other than self, thus causing the results to differ depending on whether the caller used chained calls or not.

(Why is this a problem? From my perspective on software engineering, it is highly desirable to, whenever possible, remove unused degrees of freedom so that the interaction between two modules contains no elements that were not consciously designed in.)


Now here's the neat thing I noticed about Rust in this regard: Rust prevents this confusion from happening by default!

In Rust, there is no garbage collector and no arbitrary object-reference graph: by default, everything is either owned (stored in memory belonging to the caller, like a non-pointer variable or field in C) or borrowed (referred to by a “reference” which is statically checked to last no longer than the object does via its ownership). The consequence of this is that every method must explicitly take an owned or borrowed self, and this means you can't equivocate between writing a setter and writing a chaining method:

impl HouseBuilder {
    /// This is a setter. It mutates the builder passed by reference.
    fn set_bedrooms(&mut self, bedrooms: usize) {
        self.bedrooms = bedrooms;
    }

    /// This is a method that consumes self and returns a new object of
    /// the same type; “is it the same object” is not a meaningful question.
    /// Notice the lack of “&”, meaning by-reference, on “self”.
    fn bedrooms(mut self, bedrooms: usize) -> HouseBuilder {
        // This assignment mutates the *local variable* “self”, which the
        // caller cannot observe because the value was *moved* out of the
        // caller's ownership.
        self.bedrooms = bedrooms;
        self                       // return value
    }
}

Now, it's possible to write a setter that can be used in chaining fashion:

    fn set_bedrooms(&mut self, bedrooms: usize) -> &mut HouseBuilder {
        self.bedrooms = bedrooms;
        self
    }

But because references have to refer to objects owned by something, a method with this signature cannot just decide to return a different object instead. Well, unless it decides to return some object that's global, allocated-and-leaked, or present in some larger but non-global context. (And, having such a method will contaminate the entire rest of the builder interface with the obligation to either take &mut self everywhere or make the builder an implicitly copyable type, both of which would look funny.)

So this isn't a perfect guarantee that everything that looks like a method chain/fluent interface is nonsurprising. But it's pretty neat, I think.


Here's the rest of the code you'd need to compile and play with the snippets above:

struct HouseBuilder {
    bedrooms: usize,
}

impl HouseBuilder {
    fn new() -> Self {
        HouseBuilder {
            bedrooms: 0
        }
    }

    fn build(self) -> String {
        format!("Home sweet {}br home!", self.bedrooms)
    }
}

fn main() {
    let h = HouseBuilder::new()
        .bedrooms(3)
        .build();
    println!("{:?}", h);
}

I've now been programming in Rust for over a month (since the end of July). Some thoughts:

  • It feels a lot like Haskell. Of course, Rust has no mechanism for enforcing/preferring lack of side effects, but the memory management, which avoids using a garbage collection algorithm in favor of statically analyzable object lifetimes, gives a very similar feeling of being a force which shapes every aspect of your program. Instead of having to figure out how to, at any given code location, fit all the information you want to preserve for the future into a return value, you instead get to store it somewhere with a plain old side effect, but you have to prove that that side effect won't conflict with anything else.

    And, of course, there are algebraic data types and type classes, er, traits.

  • It's nice to be, for once, living in a world where there's a library for everything, and you can just use them by declaring a dependency and recompiling. Of course, there are risks here (unvetted code, libraries doing unsound unsafe, unmaintained libraries you get entangled with), but I hadn't had a chance to have this experience at all before.

  • The standard library design sure is a fan of short names like we're back in the age of “linker only recognizes 8 characters of symbol name”. I don't mind too much, and if it helps win over C programmers, I'm all in favor.

  • They (mostly) solved the method chaining problem! (This got long, so it's another post.)

I've been getting back into playing Minecraft recently, and getting back into that frame of mind caused me to take another look at my block-game-engine project titled "Cubes" (previous posts).

I've fixed some API-change bitrot in Cubes so that it's runnable on current browsers; unfortunately the GitHub Pages build is broken so the running version that I'd otherwise link to isn't updated. (I intend to fix that.)

The bigger news is that I've decided to rewrite it. Why?

  • There's some inconsistency in the design of how block rotation works, and the way I've thought of to fix it is to start with a completely different strategy: instead of rotation being a feature of a block's behavior, there will be the general notion of blocks derived from other block definitions, so “this block but rotated” is such a derivation.

  • I'd like to start with a client-server architecture from the beginning, to support the options of both multiplayer and ✨ saving to the cloud! ✨ — I mean, having a server which stores the world data instead of fitting it all into browser local storage.

  • I've been looking for an excuse to learn Rust. And, if it works as I hope, I'll be able to program with a much better tradeoff between performance and high-level code.

The new version is already on GitHub. I've given it the name “All is Cubes”, because “Cubes” really was a placeholder from the beginning and it's too generic.

I'm currently working on porting (with improvements) various core data structures and algorithms from the original version — the first one being the voxel raycasting algorithm, which I then used to implement a raytracer that outputs to the terminal. (Conveniently, "ASCII art" is low-resolution and thus doesn't require too many rays.) And after getting that solid, I set up compiling the Rust code into WebAssembly to run in web browsers and render with WebGL.
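The heart of such a raycaster is grid traversal in the style of Amanatides & Woo: step from cube to cube, always crossing whichever axis plane the ray reaches soonest. Here's a sketch of the idea (my own illustration, not All is Cubes' actual implementation):

```rust
/// Sketch of voxel ray traversal (Amanatides & Woo style); NOT the actual
/// All is Cubes code. Returns the integer cubes the ray passes through,
/// in order, for `limit` steps past the starting cube.
fn raycast(origin: [f64; 3], dir: [f64; 3], limit: usize) -> Vec<[i32; 3]> {
    let mut cube = [
        origin[0].floor() as i32,
        origin[1].floor() as i32,
        origin[2].floor() as i32,
    ];
    let mut step = [0i32; 3];
    let mut t_max = [f64::INFINITY; 3]; // t at which the next plane is crossed
    let mut t_delta = [f64::INFINITY; 3]; // t to traverse one whole cube
    for axis in 0..3 {
        if dir[axis] != 0.0 {
            step[axis] = if dir[axis] > 0.0 { 1 } else { -1 };
            t_delta[axis] = (1.0 / dir[axis]).abs();
            let boundary = if dir[axis] > 0.0 {
                (cube[axis] + 1) as f64
            } else {
                cube[axis] as f64
            };
            t_max[axis] = (boundary - origin[axis]) / dir[axis];
        }
    }
    let mut visited = vec![cube];
    for _ in 0..limit {
        // Advance along whichever axis's plane is crossed soonest.
        let axis = (0usize..3)
            .min_by(|&a, &b| t_max[a].partial_cmp(&t_max[b]).unwrap())
            .unwrap();
        cube[axis] += step[axis];
        t_max[axis] += t_delta[axis];
        visited.push(cube);
    }
    visited
}

fn main() {
    // A ray along +x from the center of cube (0,0,0) visits successive x cubes.
    let cubes = raycast([0.5, 0.5, 0.5], [1.0, 0.0, 0.0], 3);
    assert_eq!(cubes, vec![[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]]);
    println!("{:?}", cubes);
}
```

Because the ray advances one plane crossing at a time, no cube along the path is ever skipped, which is also what makes this traversal usable for the continuous collision detection mentioned above.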

[console screenshot] [WebGL screenshot]

(In the unlikely event that anyone cares, I haven't quite decided what to do with the post tags; I think that I will switch to tagging them all with all is cubes, but I might or might not go back and apply that to the old posts on the grounds that having a tag that gets everything is good and I'm not really giving the rewrite a different name so much as taking the opportunity to replace the placeholder at a convenient time.)

Here’s another idea for a video game.

The theme of the game is “be consistent”. It's a minimalist-styled 2D platformer. The core mechanic is that whatever you do the first time, the game makes it so that that was the right action. Examples of how this could work:

  • At the start, you're standing at the center of a 2×2 checkerboard of background colors (plus appropriate greebles, not perfect squares). Say the top left and bottom right is darkish and the other quadrants are lightish. If you move left, then the darkish stuff is sky, the lightish stuff is ground, and the level extends to the left. If you move right, the darkish stuff is ground, and the level extends to the right.

  • The first time you need to jump, if you press W or up then that's the jump key, or if you press the space bar then that's the jump key. The other key does something else. (This might interact poorly with an initial “push all the keys to see what they do”, though.)

  • You meet a floaty pointy thing. If you walk into it, it turns out to be a pickup. If you shoot it or jump on it, it turns out to be an enemy.
  • If you jump in the little pool of water, the game has underwater sections or secrets. If you jump over the little pool, water is deadly.

(I could say some meta-commentary about how I haven't been blogging much and I've made a resolution to get back to it and it'll be good for me and so on, but I think I've done that too many times already, so let's get right to the actual thing...)

When I wrote Cubes (a browser-based “Minecraft-like”), one of the components I built was a facility for key-bindings — that is, allowing the user to choose which keys (or mouse buttons, or gamepad buttons) to assign to which functions (move left, fly up, place block, etc.) and then generically handling calling the right functions when the event occurs.

Now, I want to use that in some other programs. But in order for it to exist as a separate library, it needs a name. I have failed to think of any good ones for months. Suggestions wanted.

Preferably, the name should hint that it supports the gamepad API as well as keyboard and mouse. It should not end in “.js”, because that's a cliché. Also for reference, the other library that arose out of Cubes development I named Measviz (which I chose as a portmanteau, and for having almost zero existing usage according to web searches).

(The working draft name is web-input-mapper, which is fairly descriptive but also thoroughly clunky.)

One of the nice things about Common Lisp is the pervasive use of (its notion of) symbol objects for names. For those unfamiliar, I'll give a quick introduction to the relevant parts of their semantics before going on to my actual proposal for a “good parts version”.

A CL symbol is an object (value, if you prefer). A symbol has a name (which is a string). A CL package is a map from strings to symbols (and the string key is always equal to the symbol's name). A symbol may be in zero or more packages. (Note in particular that symbol names need not be unique except within a single package.)

Everywhere in CL that something is named — a variable, a function, a class, etc. — the name is a symbol object. (This is not impractical because the syntax makes it easy to write symbols; in fact, easier than writing strings, because they are unquoted.)

The significance of this is that the programmer need never give significance to characters within a string name in order to avoid collisions. Namespacing of explicitly written symbols is handled by packages; namespacing of programmatically generated symbols is handled by simply never putting them in any package (thus, they are accessible only by passing references); these are known as gensyms.

Now, I don't mean to say that CL is perfect; it fails by way of conflating too many different facilities on a single symbol (lexical variables, dynamic variables, global non-lexical definitions, ...), and some of the multiple purposes motivate programmers to use naming conventions. But I think that there is value in the symbol system because it discourages the mistake of providing an interface which requires inventing unique string names.

(One thinking along capability lines might ask — why use names rather than references at all? Narrowly, think about method names (selectors, for the Smalltalk/ObjC fans) and module exports; broadly, distribution and bootstrapping.)


So, here’s my current thought on a “good parts version”, specifically designed for an E-style language with deep equality/immutability and no global mutable state.

There is a notion of name, which includes three concrete types:

  1. A symbol is an object which has a string-valued name, and whose identity depends solely on that string.
  2. A gensym also has a name, but has a unique identity (selfish, in E terms). Some applications might reject gensyms since they are not data.
  3. A space-name holds two names and its identity depends solely on that combination. (That is, it is a “pair” or “cons” specifically of names.)

Note that these three kinds of objects are all immutable, and use no table structures, and yet can produce the same characteristics of names which I mentioned above. (For implementation, the identity of a name as above defined can be turned into pointer identity using hash consing, a generalization of interning.) Some particular examples and notes:

  • A CL symbol in a package corresponds to a pair of two symbols, or perhaps a gensym and a symbol. This correspondence is not exact, of course. (In particular, there is no notion here of the set of exported symbols in a package. But that's the sort of thing you have to be willing to give up to obtain a system without global mutable state. And you can still imagine 'linting' for unexpected symbols.)
  • The space-name type means that names can be arbitrary binary trees. If we consistently give the left side a “namespace” interpretation and the right side a “local name” one, then we have a system, I think, where people can carve out all sorts of namespaces without ever fearing collisions or conflicts, should it become necessary. Which probably means it's massively overdesigned (cf. "worse is better").
  • Actual use case example: Suppose one wishes to define (for arbitrary use) a subtype of some well-known interface, which adds one method. There is a risk that your choice of name for that method conflicts with someone else's different subtype. Under this system, you can construct a space-name whose two components are a large random number (i.e. a unique ID) acting as the namespace, and a symbol which is your chosen simple name. One can imagine syntax and tools which make it easy to forget about the large random number and merely use the simple name.
  • It's unclear to me how these names would be used inside the lexical variable syntax of a language, if they would at all; I suspect the answer is that they would not be, or mostly confined to machine-generated-code cases. The primary focus here is improving the default characteristics of a straightforwardly written program which uses a map from names to values in some way.
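As a sketch of the data model (my illustration in Rust, not part of the proposal itself): symbols compare by name, gensyms only by the unique identity handed out at creation, and space-names structurally.

```rust
use std::rc::Rc;
use std::sync::atomic::{AtomicU64, Ordering};

/// Illustrative sketch of the three name types described above.
#[derive(Clone, PartialEq, Eq, Debug)]
enum Name {
    /// Identity depends solely on the string.
    Symbol(String),
    /// Identity is the unique id assigned at creation; the label is only
    /// human-readable decoration. Ids never repeat, so two distinct
    /// gensyms never compare equal.
    Gensym(u64, String),
    /// A pair of names; identity is structural (namespace, local name).
    Space(Rc<Name>, Rc<Name>),
}

static NEXT_GENSYM: AtomicU64 = AtomicU64::new(0);

fn symbol(name: &str) -> Name {
    Name::Symbol(name.to_string())
}

fn gensym(label: &str) -> Name {
    Name::Gensym(NEXT_GENSYM.fetch_add(1, Ordering::Relaxed), label.to_string())
}

fn space(namespace: Name, local: Name) -> Name {
    Name::Space(Rc::new(namespace), Rc::new(local))
}

fn main() {
    // Symbols with the same name are the same name.
    assert_eq!(symbol("x"), symbol("x"));
    // Gensyms are unique even when identically labeled.
    assert_ne!(gensym("x"), gensym("x"));
    // Space-names compare structurally, so namespaces nest without tables.
    assert_eq!(space(symbol("ns"), symbol("x")), space(symbol("ns"), symbol("x")));
    assert_ne!(space(symbol("ns"), symbol("x")), space(symbol("other"), symbol("x")));
    println!("ok");
}
```

(Hash consing, as mentioned, would go underneath this: intern each structurally-equal `Name` once so equality becomes pointer comparison.)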

(This is all very half-baked — I'm just publishing it on the grounds described in my previous post: in the long run I'll have more ideas than I ever implement, and this is statistically likely to be one of them, so I might as well publish it and hope someone else finds some use for it; if nothing else, I can stop feeling any obligation to remember it in full detail.)

I have come to realize that I have more ideas for programs than I'll ever have time to write. (This means they're not actually all that significant, on average — see all that's been said on ‘ideas vs. execution’.) But maybe I have the time to scribble a blog post about them, and that's stuff to blog about, if nothing else.

So, a video game idea I had today: reverse bullet-hell shooter.

A regular bullet-hell shooter is a game where you move in a 2D space dodging an immense number of mostly dumb instant-death projectiles launched in mostly predefined patterns, and trying to shoot back with dinkier, but better aimed, weapons. Instead, here you design the bullet pattern so as to trap and kill AI enemies doing the dodging.

The roles seem a bit similar to tower defense, but the space of strategies would be considerably more, ah, bumpy, since you're not doing a little bit of damage at a time and how it plays out depends strongly on the AI's choices.

That's probably the downfall of this idea: either the outcome is basically butterfly effect random due to enemy AI decisions and you mostly lose, or there are trivial ways to design undodgeable bullet patterns and you mostly win. I don't immediately see how to make the space of inputs and outcomes “smooth” enough.

Let's say you have two or more independent Git branches, and you want to make sure the combination of them works correctly, but aren't ready to permanently merge or rebase them together. You can do a merge and discard it (either by resetting afterward or using a temporary branch), but that takes extra commands when you're done with the trial. Here's the script I put together to eliminate all unnecessary steps:

#!/bin/sh
set -e
set -x
git checkout --detach HEAD
git merge --no-edit -- "$@"

In a single command, this merges HEAD and any branches given as arguments and leaves you at the merge as a detached HEAD. This means that when you're done with it you can just switch back to your branch (git checkout - is a shortcut for that) and the merge is forgotten. If you committed changes on top of the merge, git checkout will tell you about them and you can transplant them to a real branch with git cherry-pick.

“:”

Thursday, February 21st, 2013 21:34

When Larry Wall was designing Perl 6, he started with lots of community proposals, from which he made the following observation:

I also discovered Larry's First Law of Language Redesign: Everyone wants the colon.

When I was recently trying to redesign E, I found that this holds true even if only one person is involved in the process. One of the solutions considered was having “:” and “ :” be two different tokens…

I really haven't been posting very much, have I? It's mainly the job occupying most of my “creative energy”, but I've also been doing a little bit of this and that, never quite finishing anything to the point of feeling like writing it up.

On the programming-projects front, I'm attempting to extract two reusable libraries from Cubes for the benefit of other web-based games.

  • Measviz takes performance-measurement data (frames per second and whatever else you want) and presents (in HTML) a compact widget with graphs; my excuse for not announcing it is that the API needs revision, and I haven't thought of a good toy example to put in the documentation-and-demo page I'm writing, but if you're willing to deal with later upgrades it's ready to use now.
  • The other library is a generalized keybinding library (generalized in that it also handles gamepads/joysticks, which are completely different). You define the commands in your application, and it handles feeding events into them. Commands can be polled, or you can receive callbacks on press and release, with optional independent autorepeat. It's currently in need of a good name, and also of API cleanup.
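The shape of that API, sketched in Rust (hypothetical names — the real library is JavaScript and differs in detail): raw inputs from any device map to named commands, which the application then polls.

```rust
use std::collections::HashMap;

// Hypothetical sketch of the keybinding library's shape; not its real API.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum Input {
    Key(char),
    GamepadButton(u8),
}

struct Commands {
    bindings: HashMap<Input, String>, // raw input -> command name
    pressed: HashMap<String, bool>,   // command name -> current state
}

impl Commands {
    fn new() -> Self {
        Commands { bindings: HashMap::new(), pressed: HashMap::new() }
    }

    fn bind(&mut self, input: Input, command: &str) {
        self.bindings.insert(input, command.to_string());
        self.pressed.insert(command.to_string(), false);
    }

    /// Feed a raw event in; the bound command's state changes, regardless
    /// of whether the source was keyboard, mouse, or gamepad.
    fn event(&mut self, input: Input, down: bool) {
        if let Some(command) = self.bindings.get(&input).cloned() {
            self.pressed.insert(command, down);
        }
    }

    /// Polling interface; press/release callbacks would sit alongside this.
    fn is_pressed(&self, command: &str) -> bool {
        *self.pressed.get(command).unwrap_or(&false)
    }
}

fn main() {
    let mut c = Commands::new();
    c.bind(Input::Key('w'), "move_forward");
    c.bind(Input::GamepadButton(0), "jump");
    c.event(Input::Key('w'), true);
    assert!(c.is_pressed("move_forward"));
    assert!(!c.is_pressed("jump"));
    println!("ok");
}
```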

I've been making some sketches towards a redesign of E (list archive pointer: starting here), basically to take into account everything we've learned over the years without being constrained by compatibility, but it hasn't gotten very far, partly because language syntax is hard — all options are bad. (The current E syntax is pretty good for usability, but it has some particularly verbose/sea-of-punctuation corner cases, and I'd also like to see a simpler syntax, with more facilities moved into code libraries.)

stdin, stdout, stderr, stdcpu, stdmem, stdfs
  1. Premise: Any attack on a password — whether online (login attempts) or offline (hash cracking) — will be designed so that the more likely a given password is, out of the space of all possible passwords, the less work is required to recover that password (unless a trivial amount of work is required to discover any possible password).

  2. From (1), there exists a probability distribution of passwords.

  3. Premise: There is a (practical) maximum length for passwords.

  4. From (3), the set of possible passwords is finite.

  5. From (2) and (4), there is a minimum probability in that distribution.

  6. Use one of the passwords which has that minimum probability.

(There are at least two ways this doesn't work.)

A couple weekends ago, I was musing that among my electronic devices there was no radio — as in AM/FM, not WiFi and Bluetooth and NFC and etc. Of course, radio is not exactly the most generally useful of information or entertainment sources, but it still has some things to be said for it, such as being independent of Internet connections.

Another thing that came to mind was my idle curiosity about software-defined radio. So, having read that Wikipedia article, it led me to an article with a neat list of radio hardware, including frequency range, sampling rate (≈ bandwidth) and price. Sort by price, and — $20, eh? Maybe I’ll play around with this.

That price was for RTL-SDR — not a specific product, but any of several USB digital TV receivers built around the Realtek RTL2832U chip, which happens to have a mode where it sends raw downshifted samples to the host computer — intended to provide FM radio reception in software without requiring additional hardware for the task. But there's plenty of room to do other things with it.

I specifically bought the “ezTV”/“ezcap” device, from this Amazon listing by seller NooElec (who also sells on eBay, I hear) (note: not actually $20). One of the complications in this story is that different (later?) models of the same device have slightly different hardware which cannot tune as wide a frequency range. (Side note: when buying from Amazon, what you actually get depends on the “seller” you choose, not just the product listing, and as far as I know, any seller can claim to sell any product. If you see a product with mixed “this is a fake!” and “no it's not!” reviews, you're probably seeing different sellers for the same product listing.)

Of course, the point of SDR is to turn hardware problems into software problems — so I then had a software problem. Specifically, my favorite source for unixy software is MacPorts, but they have an obsolete version of GNU Radio. GNU Radio is a library for building software radios, and it is what is used by the first-thing-to-try recommendation on the Osmocom RTL-SDR page (linked above), multimode.py. The MacPorts version of GNU Radio, 3.3.0, is too old for the RTL-SDR component, which requires 3.5.3 or later. So I ended up building it from source, which took a bit of tinkering. (I'm working on contributing an updated port for MacPorts, however.)

I've had plenty of fun just using it “scanner” style, seeing what I can pick up. A coworker and friend who is into aviation posed a problem — receive and decode VOR navigation signals — which has led to several evenings of fiddling with GNU Radio Companion, and reading up on digital signal processing while I wait for compiles and test results at work. (It sort-of works!)
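For the curious, the essential trick at the end of that pipeline is small: VOR encodes the station-relative bearing as the phase difference between two 30 Hz signal components. A toy sketch of just that step, in Python — the actual demodulation that recovers the two tones from the RF (which is what the GNU Radio flowgraph does) is omitted, and all numbers here are invented:

```python
# Recovering a bearing from the phase difference of two 30 Hz tones.
import cmath, math

RATE = 48000       # sample rate, Hz (chosen so 30 Hz fits an exact
N = RATE           # number of cycles in N samples)
TONE = 30.0        # both VOR components are 30 Hz tones
BEARING = 137.0    # degrees; the quantity we want to recover

def tone(phase_deg):
    w = 2 * math.pi * TONE / RATE
    p = math.radians(phase_deg)
    return [math.cos(w * n + p) for n in range(N)]

reference = tone(0.0)
variable = tone(-BEARING)   # variable signal lags by the bearing

def phase_of(signal):
    # One-bin DFT: correlate against a complex exponential at 30 Hz
    # to extract the tone's phase.
    w = 2 * math.pi * TONE / RATE
    acc = sum(s * cmath.exp(-1j * w * n) for n, s in enumerate(signal))
    return cmath.phase(acc)

bearing = math.degrees(phase_of(reference) - phase_of(variable)) % 360
assert abs(bearing - BEARING) < 0.1
```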

This is also notable as the one time in my life where a ferrite bead on a cable actually did something — putting one on the computer end of the USB extension cord noticeably reduced the noise level. (And, of course, there remains a large, metallic hardware problem: antennas!)

(I could say more, such as the detailed fiddling to build GNU Radio, and various useful links, but it's taken me long enough to get around to writing this much. Let me know if you'd like me to expand on any particular technical details.)

Started a new project, GLToyJS; I’m porting my GLToy to WebGL. The advantage, besides using a higher-level language and modern OpenGL (shaders!), is that it is more cross-platform, rather than being a Mac-only screensaver. The disadvantage is that it’s not a screensaver at all, but a web page; I plan to add a wrapper to fix that, and I have a working proof of concept.

So far I’ve put together the core framework and ported 6 of the original 13 effects (most of the in-my-current-opinion good ones, of course). An additional feature is that an effect’s parameters are described in JSON, which will be used to allow you to save a particularly good random result for future viewing. (I could just put them in the URL, in fact — I think I’ll try that next.)
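The put-them-in-the-URL idea can be sketched like so — in Python for brevity (the real thing would be JavaScript), and with invented parameter names — by packing the JSON description into a URL fragment:

```python
# Serialize effect parameters to JSON and pack them into a URL fragment,
# so a particularly good random result can be bookmarked and shared.
import base64, json

def params_to_fragment(params):
    raw = json.dumps(params, separators=(",", ":")).encode("utf-8")
    return base64.urlsafe_b64encode(raw).decode("ascii").rstrip("=")

def fragment_to_params(fragment):
    padded = fragment + "=" * (-len(fragment) % 4)  # restore padding
    return json.loads(base64.urlsafe_b64decode(padded))

params = {"effect": "particles", "count": 5000, "hue": 0.62}
url = "https://example.com/gltoyjs/#" + params_to_fragment(params)

# Round-trips losslessly:
assert fragment_to_params(url.split("#", 1)[1]) == params
```

(Using the fragment rather than a query string keeps the parameters out of server logs and avoids a page reload when they change.)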

I haven't yet created any new effects, so nothing takes obvious advantage of the additional capabilities provided by shaders (other than refinements such as Phong-rather-than-Gouraud lighting and GPU-side particle systems). I also wrote a sketchy compatibility layer for the GLSL Sandbox’s interface so that you can drop in a fragment shader from there to make an effect; a possible thing to do would be automatically downloading from their gallery (if politeness and copyright law permit).

It's not published as a web page anywhere yet, but it should be and I’ll let you know as soon as it is.

The draft-standard Gamepad API allows JavaScript in the browser to obtain input from connected gamepad/joystick devices. This is of course useful for games, so I have worked on adding support for it to Cubes.

This (about) is the only Gamepad API demo I found that worked with arbitrary gamepads (or rather, the junk one I had around) rather than ignoring or crashing on anything that wasn't a known-to-it device such as a PS3 or Xbox controller. (It's part of a game framework called Construct 2, but I haven't investigated that further.) It was critical to my early setup in making sure that I had a compatible gamepad and browser configuration.

(There's a reason for libraries having information about specific devices — the Gamepad API just gives you a list of inputs and doesn't tell you what the buttons should be called in the user interface — and these days you're almost expected to have pictures of the buttons, too. But there's no reason not to have a fallback, too. Incidentally, the USB HID protocol which most gamepads use is capable of including some information about the layout/function of buttons, but this information is often incorrect and the Gamepad API does not expose it.)

In order to integrate gamepad support into Cubes, I used Toji's Game Shim library, a very nice lightweight library which simply adapts browser-provided interfaces to the current draft standards, so that you can use the Gamepad API — as well as requestAnimationFrame, fullscreen, and pointer lock — without filling your code with conditionals or browser-specific prefixes.

An early stage in the development of lighting in Cubes (long since past).