Consider this part of matklad's current series of programming advice posts, Role Of Algorithms:

“Algorithms” are a useful skill not because you use it at work every day, but because they train you to be better at particular aspects of software engineering.

...

Second, algorithms teach about properties and invariants. Some lucky people get those skills from a hard math background, but algorithms are a much more accessible way to learn them, as everything is very visual, immediately testable, and has very short and clear feedback loop.

There's a lot here I agree with, or at least consider worth thinking about. However, the article is focused on the process of learning to program, and I'd like to suggest a different perspective too: rather than “you don't use algorithms at work every day”, every single function you write can be considered an algorithm. Many of them are trivial algorithms, but if you have experience looking through the algorithmic lens, you can notice useful facts as you write (or read), like:

  • “Oops, this is going to be O(N²); can I do it more efficiently?”
  • “This almost has an invariant or postcondition that …, but it could be violated if …, so let's think about where to check for that and what we want to do about it then, or what will happen if we don't check.”
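
As a toy illustration of the first bullet — the kind of thing that jumps out once you're used to the algorithmic lens — here's a sketch in Rust (the function and its purpose are invented for the example):

use std::collections::HashSet;

// O(N²): for each name, scan the rest of the list again.
fn has_duplicate_quadratic(names: &[String]) -> bool {
    names
        .iter()
        .enumerate()
        .any(|(i, a)| names.iter().skip(i + 1).any(|b| a == b))
}

// O(N) expected: a hash set remembers what we've already seen.
fn has_duplicate_linear(names: &[String]) -> bool {
    let mut seen = HashSet::new();
    !names.iter().all(|name| seen.insert(name))
}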

While I worked at Google I spent a lot of time doing testing and code review. One of the things I learned from this is:

A test must document what it is testing.

This doesn’t have to be an explicit comment (it might be adequately explained by the name of the test, or the code in it), but either way, it should be obvious to maintainers what the goal of the test is — what property the code under test should have. “This test should continue to pass” is not a sufficient explanation of the goal. Why? Consider these scenarios:

  • A test failed. Does that mean…

    • …the test successfully detected the loss of a desired behavior? (This is the ideal case, that we hope to enable by writing and running tests.)

    • …the test incidentally depended on something that is changing but is not significant, such as ordering of something unordered-but-deterministic, or the exact sequence of some method calls? (Ideally, test assertions will be written precisely so that they accept everything that meets requirements, but this is not always feasible or correctly anticipated.)

    • …the test incidentally depended on something that is being intentionally changed, but it wasn’t intending to test that part of the system, which has its own tests?

  • A test no longer compiles because you deleted a function (or other declaration/item/symbol) it used. Does that mean…

    • …the test demonstrates a reason why you should not delete that function?

    • …the test needs to be rewritten to test the thing it is about, without incidentally using that function?

    • …the function is necessary to perform this test, so it should continue to exist but be private?

If tests specify the property they are looking for, then maintainers can quickly decide how to respond to a test failure — whether to count it a bug in the code under test, or to change or delete the test.

If they do not, then maintainers will waste time keeping the code conformant to obsolete requirements, or digging through revision history to determine the original intent.
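
Here's a small illustration of the kind of documentation I mean, as a Rust test. Message, to_wire, and the caching rationale are all invented for the example; the point is that the comment states the property that must hold and why, not merely that the current output is expected:

#[derive(Debug, PartialEq)]
struct Message { text: String }

// Hypothetical encoder: length-prefixed UTF-8.
fn to_wire(m: &Message) -> Vec<u8> {
    let bytes = m.text.as_bytes();
    let mut out = Vec::with_capacity(4 + bytes.len());
    out.extend_from_slice(&(bytes.len() as u32).to_le_bytes());
    out.extend_from_slice(bytes);
    out
}

/// Property under test: encoding is deterministic — equal messages encode to
/// identical bytes — because clients (in this made-up scenario) cache messages
/// by their encoded form. If this fails, either that property was lost, or the
/// wire format changed on purpose and the cache-keying strategy must change too.
#[test]
fn to_wire_is_deterministic() {
    let a = Message { text: "hello".to_string() };
    let b = Message { text: "hello".to_string() };
    assert_eq!(to_wire(&a), to_wire(&b));
}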

The piece of software I've mostly been working on recently is still All is Cubes (posts). However, this one project has sprawled out into a lot of things to do with Rust.

For example, something about my coding style (or maybe my attention to error messages) seems to turn up compiler bugs — such as #80772, #88630, #95324, #95873, #97205, #104025, #105645, #108481, and #109067. (Which is not to say that Rust is dreadfully buggy — on average, the experience is reliable and pleasant.) I'm also taking some steps into contributing to the compiler, so I can get some bugs (and pet peeves) fixed myself. (The first step, though, was simply running my own code against rustc nightly builds, so I can help spot these kinds of bugs before they get released.)

I've also taken some code from All is Cubes and split it out into libraries that might be useful for others.

  • The first one was exhaust, a trait-and-macro library that provides the ability to take a type (that implements the Exhaust trait) and generate all possible values of that type. The original motivation was to improve on (for my purposes) strum::IntoEnumIterator (which does this for enums but always leaves the enum’s fields with the Default value) by generating enums and structs arbitrarily recursively.

    (Exhaustive iteration is perhaps surprisingly feasible even in bigger domains than a single enum; if you have a simple piece of arithmetic, for example, it only takes a few seconds to run it on every 32-bit integer or floating-point value and look for specific outcomes or build a histogram of the results — a concrete sketch follows this list.)

  • The second one, published just yesterday, is rendiff, a (slightly) novel image diffing algorithm which I invented to compare the output of All is Cubes’ renderer test cases. Its value is that it is able to compensate for the results of rounding errors on the positions of the edges of objects in the scene — instead of such errors counting against a budget of allowable wrong pixels, they're just counted as correct, by observing that the same pixel value exists at a neighboring position in the other image. (The gist is sketched below.)
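
Here's the sketch promised above, backing up the “a few seconds” claim — plain Rust rather than the exhaust crate itself, asking a specific question of every 32-bit float:

// How many finite f32 values x satisfy x + 1.0 == x (adding 1 does nothing)?
// A release build walks all 2³² bit patterns in a few seconds.
fn main() {
    let mut count: u64 = 0;
    for bits in 0..=u32::MAX {
        let x = f32::from_bits(bits);
        if x.is_finite() && x + 1.0 == x {
            count += 1;
        }
    }
    println!("{count} finite f32 values absorb + 1.0");
}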
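
And the gist of rendiff's neighbor tolerance, as promised — a simplified sketch of the idea, not the crate's actual code or API; it treats images as rows of grayscale bytes and ignores the thresholds and bookkeeping a real tool needs:

// A pixel is accepted if its value also appears at the same position or one of
// the 8 neighboring positions in the other image, so an edge displaced by one
// pixel due to rounding is not counted as wrong. Checked in both directions.
fn count_mismatches(a: &[Vec<u8>], b: &[Vec<u8>]) -> usize {
    let (h, w) = (a.len(), a[0].len());
    let matches_near = |img: &[Vec<u8>], y: usize, x: usize, value: u8| -> bool {
        (-1..=1).any(|dy: isize| {
            (-1..=1).any(|dx: isize| {
                let (ny, nx) = (y as isize + dy, x as isize + dx);
                ny >= 0
                    && nx >= 0
                    && (ny as usize) < h
                    && (nx as usize) < w
                    && img[ny as usize][nx as usize] == value
            })
        })
    };
    let mut mismatches = 0;
    for y in 0..h {
        for x in 0..w {
            if !(matches_near(b, y, x, a[y][x]) && matches_near(a, y, x, b[y][x])) {
                mismatches += 1;
            }
        }
    }
    mismatches
}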

end and beginning

Saturday, May 6th, 2023 14:23

One month ago, April 7, 2023, I left my job at Google.

At the time, I was working on Earth Engine — “a planetary-scale platform for Earth science data & analysis”, or if you're looking at it through my lens, a cloud computing service which interprets parallel, pure-functional programs interleaved with image processing and database queries. (It’s a really interesting system, and if you'd like to check it out yourself, it's free for non-commercial use. Play with big data and make interesting pictures!)

One of the elements of the plan I made for the future after this choice was that, this year, I would get to take some time to pursue my personal projects and interests unencumbered by “having a day job” or intellectual property contracts — and I have been doing that.

Now, as you can see from the fact that my blog has been as silent this last month as it was for the last two years, that didn't instantly bring back all the energy for blogging I had in 2012. But, step one is to break the awkward silence. Step two is to post about stuff I've been doing. Maybe someday I'll get to step “update my website”.

I woke up last night with a great feature idea which on further examination was totally inapplicable to All is Cubes in its current state. (I was for some dream-ish reason imagining top-down tile- and turn-based movement, and had the idea to change the cursor depending on whether a click was a normal move or an only-allowed-because-we're-debugging teleport. This is totally unlike anything I've done yet and makes no sense for a first-person free movement game. But I might think about cursor/crosshair changes to signal what clicking on a block would do.)

That seems like a good excuse to write a status update. Since my last post I've made significant progress, but there are still large missing pieces compared to the original JavaScript version.

(The hazard in any rewrite, of course, is second-system effect — “we know what mistakes we made last time, so let's make zero of them this time”, with the result that you add both constraints and features, and overengineer the second version until you have a complex system that doesn't work. I'm trying to pay close attention to signs of overconstraint.)

[screenshot]

Now done:

  • There's a web server you can run (aic-server) that will serve the web app version; right now it's just static files with no client-server features.

  • Recursive blocks exist, and they can be rendered both in the WebGL and raytracing modes.

  • There's an actual camera/character component, so we can have perspective projection, WASD movement (but not yet mouselook), and collision.

    For collision, right now the body is considered a point, but I'm in the middle of adding axis-aligned box collisions. I've improved on the original implementation in that I'm using the raycasting algorithm rather than making three separate axis-aligned moves, so we have true “continuous collision detection” and fast objects will never pass through walls or collide with things that aren't actually in their path.

  • You can click on blocks to remove them (but not place new ones).

  • Most of the lighting algorithm from the original, with the addition of RGB color.

    Also new in this implementation, Space has an explicit field for the “sky color” which is used both for rendering and for illuminating blocks from outside the bounds. This actually reduces the number of constants used in the code, but also gets us closer to “physically based rendering”, and allows having “night” scenes without needing to put a roof over everything. (I expect to eventually generalize from a single color to a skybox of some sort, for outdoor directional lighting and having a visible horizon, sun, or other decorative elements.)

  • Rendering space in chunks instead of a single list of vertices that has to be recomputed for every change.

  • Added a data structure (EvaluatedBlock) for caching computed details of blocks like whether their faces are opaque, and used it to correctly implement interior surface removal and lighting. This will also be critical for efficiently supporting things like rotated variants of blocks. (In the JS version, the Block type was a JS object which memoized this information, but here, Block is designed to be lightweight and copiable (because I've replaced having a Blockset defining numeric IDs with passing around the actual Block and letting the Space handle allocating IDs), so it's less desirable to be storing computed values in Block.)

  • Made nearly all of the GL/luminance rendering code not wasm-specific. That way, we can support "desktop application" as an option if we want to (I might do this solely for purposes of being able to graphically debug physics tests) and there is less code that can only be compiled with the wasm cross-compilation target.

  • Integrated embedded_graphics to allow us to draw text (and other 2D graphics) into voxels. (That library was convenient because it came with fonts and because it allows implementing new drawing targets as the minimal interface "write this color (whatever you mean by color) to the pixel at these coordinates".) I plan to use this for building possibly the entire user interface out of voxels — but for now it's also an additional tool for test content generation.
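
To make that last point concrete, the “minimal interface” has roughly this shape. This is my paraphrase, not embedded_graphics' actual trait, and the voxel types here are stand-ins rather than All is Cubes' real ones:

// The drawing surface only has to know how to put a “color” at a coordinate;
// the graphics library supplies fonts, shapes, and layout on top of this.
trait PixelTarget {
    type Color;
    fn set_pixel(&mut self, x: i32, y: i32, color: Self::Color);
}

// Hypothetical adapter: 2D drawing lands on one z-plane of a voxel grid.
struct VoxelPlane<'a> {
    voxels: &'a mut Vec<Vec<Vec<u8>>>, // stand-in for the real Space type
    z: usize,
}

impl PixelTarget for VoxelPlane<'_> {
    type Color = u8; // stand-in for a block-id “color”
    fn set_pixel(&mut self, x: i32, y: i32, color: u8) {
        if x < 0 || y < 0 {
            return;
        }
        if let Some(row) = self.voxels.get_mut(self.z).and_then(|p| p.get_mut(y as usize)) {
            if let Some(cell) = row.get_mut(x as usize) {
                *cell = color;
            }
        }
    }
}

In the real thing the “color” is a block and the target is a region of a Space; the point is just how little the adapter has to implement.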

Still to do (features the original Cubes had):

  • Mouselook/pointer lock.
  • Block selection UI and placement.
  • Any UI at all other than movement and targeting blocks. I've got ambitious plans to build the UI itself out of blocks, which both fits the "recursive self-defining blocks" theme and means I can do less platform-specific UI code (while running headlong down the path of problematically from-scratch inaccessible video game UI).
  • Collision with recursive subcubes rather than whole cubes (so slopes/stairs and other smaller-than-an-entire-cube blocks work as expected).
  • Persistence (saving to disk).
  • Lots and lots of currently unhandled edge cases and "reallocate this buffer bigger" cases.

Stuff I want to do that's entirely new:

  • Networking; if not multiplayer, at least the web client saves its world data to a server. I've probably already gone a bit too far down the path of writing a data model without consideration for networking.

In my previous post, I said “Rust solved the method chaining problem!” Let me explain.

It's popular these days to have “builders” or “fluent interfaces”, where you write code like

let house = HouseBuilder()
    .bedrooms(2)
    .bathrooms(2)
    .garage(true)
    .build();

The catch here is that (in a “conventional” memory-safe object-oriented language, not Rust) each of the methods here has the option of:

  1. Mutating self/this/recipient of the message (I'll say self from here on), and then returning self.
  2. Returning a different object with the new configuration.
  3. “Both”: returning a new object which wraps self, and declaring it a contract violation for the caller to use self further (with or without actually documenting that contract).

The problem — in my opinion — with the fluent interface pattern by itself is that it’s underconstrained in this way: in a type (1) case, which is often the simplest to implement, the caller is free to completely ignore the return values,

let hb = HouseBuilder();
hb.bedrooms(2);
hb.bathrooms(2);
hb.garage(true);
let house = hb.build();

but this means that the fluent interface cannot change from a type 1 implementation to a type 2 or 3, even if this is a non-breaking change to the intended usage pattern. Or to look at it from the “callee misbehaves” angle rather than “caller misbehaves”, the builder is free to return something other than self, thus causing the results to differ depending on whether the caller used chained calls or not.

(Why is this a problem? From my perspective on software engineering, it is highly desirable to, whenever possible, remove unused degrees of freedom so that the interaction between two modules contains no elements that were not consciously designed in.)


Now here's the neat thing I noticed about Rust in this regard: Rust prevents this confusion from happening by default!

In Rust, there is no garbage collector and no arbitrary object-reference graph: by default, everything is either owned (stored in memory belonging to the caller, like a non-pointer variable or field in C) or borrowed (referred to by a “reference” which is statically checked to last no longer than the object does via its ownership). The consequence of this is that every method must explicitly take an owned or borrowed self, and this means you can't equivocate between writing a setter and writing a chaining method:

impl HouseBuilder {
    /// This is a setter. It mutates the builder passed by reference.
    fn set_bedrooms(&mut self, bedrooms: usize) {
        self.bedrooms = bedrooms;
    }

    /// This is a method that consumes self and returns a new object of
    /// the same type; “is it the same object” is not a meaningful question.
    /// Notice the lack of “&”, meaning by-reference, on “self”.
    fn bedrooms(mut self, bedrooms: usize) -> HouseBuilder {
        // This assignment mutates the *local variable* “self”, which the
        // caller cannot observe because the value was *moved* out of the
        // caller's ownership.
        self.bedrooms = bedrooms;
        self                       // return value
    }
}

Now, it's possible to write a setter that can be used in chaining fashion:

    fn set_bedrooms(&mut self, bedrooms: usize) -> &mut HouseBuilder {
        self.bedrooms = bedrooms;
        self
    }

But because references have to refer to objects owned by something, a method with this signature cannot just decide to return a different object instead. Well, unless it decides to return some object that's global, allocated-and-leaked, or present in some larger but non-global context. (And, having such a method will contaminate the entire rest of the builder interface with the obligation to either take &mut self everywhere or make the builder an implicitly copyable type, both of which would look funny.)

So this isn't a perfect guarantee that everything that looks like a method chain/fluent interface is nonsurprising. But it's pretty neat, I think.


Here's the rest of the code you'd need to compile and play with the snippets above:
struct HouseBuilder {
    bedrooms: usize,
}

impl HouseBuilder {
    fn new() -> Self {
        HouseBuilder {
            bedrooms: 0
        }
    }

    fn build(self) -> String {
        format!("Home sweet {}br home!", self.bedrooms)
    }
}

fn main() {
    let h = HouseBuilder::new()
        .bedrooms(3)
        .build();
    println!("{:?}", h);
}

I've now been programming in Rust for over a month (since the end of July). Some thoughts:

  • It feels a lot like Haskell. Of course, Rust has no mechanism for enforcing/preferring lack of side effects, but the memory management, which avoids using a garbage collection algorithm in favor of statically analyzable object lifetimes, gives a very similar feeling of being a force which shapes every aspect of your program. Instead of having to figure out how to, at any given code location, fit all the information you want to preserve for the future into a return value, you instead get to store it somewhere with a plain old side effect, but you have to prove that that side effect won't conflict with anything else.

    And, of course, there are algebraic data types and type classes, er, traits.

  • It's nice to be, for once, living in a world where there's a library for everything and you can just use them by declaring a dependency and recompiling. Of course, there are risks here (unvetted code, libraries that might be doing unsound unsafe, unmaintained libraries you get entangled with), but I haven't had a chance to have this experience at all before.

  • The standard library design sure is a fan of short names like we're back in the age of “linker only recognizes 8 characters of symbol name”. I don't mind too much, and if it helps win over C programmers, I'm all in favor.

  • They (mostly) solved the method chaining problem! (This got long, so it's another post.)

I've been getting back into playing Minecraft recently, and getting back into that frame of mind caused me to take another look at my block-game-engine project titled "Cubes" (previous posts).

I've fixed some API-change bitrot in Cubes so that it's runnable on current browsers; unfortunately the GitHub Pages build is broken so the running version that I'd otherwise link to isn't updated. (I intend to fix that.)

The bigger news is that I've decided to rewrite it. Why?

  • There's some inconsistency in the design of how block rotation works, and the way I've thought of to fix it is to start with a completely different strategy: instead of rotation being a feature of a block's behavior, there will be the general notion of blocks derived from other block definitions, so “this block but rotated” is such a derivation.

  • I'd like to start with a client-server architecture from the beginning, to support the options of both multiplayer and ✨ saving to the cloud!✨ — I mean, having a server which stores the world data instead of fitting it all into browser local storage.

  • I've been looking for an excuse to learn Rust. And, if it works as I hope, I'll be able to program with a much better tradeoff between performance and high-level code.

The new version is already on GitHub. I've given it the name “All is Cubes”, because “Cubes” really was a placeholder from the beginning and it's too generic.

I'm currently working on porting (with improvements) various core data structures and algorithms from the original version — the first one being the voxel raycasting algorithm, which I then used to implement a raytracer that outputs to the terminal. (Conveniently, "ASCII art" is low-resolution and thus doesn't require too many rays.) And after getting that solid, I set up compiling the Rust code into WebAssembly to run in web browsers and render with WebGL.

[console screenshot] [WebGL screenshot]
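For the curious, the voxel raycasting mentioned above is in the family of classic grid-traversal (“DDA”, Amanatides & Woo-style) algorithms. Here is a generic sketch of that idea — not the actual All is Cubes implementation — which walks from cube to cube along a ray, always crossing the nearest grid plane next:

// Enumerate the integer cube coordinates a ray passes through, in order,
// up to max_steps. Assumes direction is not the zero vector.
fn raycast(origin: [f64; 3], direction: [f64; 3], max_steps: usize) -> Vec<[i64; 3]> {
    let mut cube = [
        origin[0].floor() as i64,
        origin[1].floor() as i64,
        origin[2].floor() as i64,
    ];
    let mut step = [0i64; 3]; // which way we move on each axis
    let mut t_max = [f64::INFINITY; 3]; // ray parameter t at the next grid plane, per axis
    let mut t_delta = [f64::INFINITY; 3]; // t needed to cross one whole cube, per axis
    for axis in 0..3 {
        if direction[axis] > 0.0 {
            step[axis] = 1;
            t_max[axis] = (cube[axis] as f64 + 1.0 - origin[axis]) / direction[axis];
            t_delta[axis] = 1.0 / direction[axis];
        } else if direction[axis] < 0.0 {
            step[axis] = -1;
            t_max[axis] = (origin[axis] - cube[axis] as f64) / -direction[axis];
            t_delta[axis] = 1.0 / -direction[axis];
        }
    }
    let mut visited = vec![cube];
    for _ in 0..max_steps {
        // Step along whichever axis reaches its next grid plane soonest.
        let axis = (0..3)
            .min_by(|&a, &b| t_max[a].partial_cmp(&t_max[b]).unwrap())
            .unwrap();
        cube[axis] += step[axis];
        t_max[axis] += t_delta[axis];
        visited.push(cube);
    }
    visited
}

Each step is constant work and visits exactly one new cube, which is what makes it suitable both for rendering (cast a ray per pixel and stop at the first non-empty cube) and for other grid queries.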

(In the unlikely event that anyone cares, I haven't quite decided what to do with the post tags; I think that I will switch to tagging them all with all is cubes, but I might or might not go back and apply that to the old posts on the grounds that having a tag that gets everything is good and I'm not really giving the rewrite a different name so much as taking the opportunity to replace the placeholder at a convenient time.)

Just about a year ago, I acquired a 3D printer.

I've been casually reading about the progress of hobbyist 3D printing since the RepRap days, but sometime in late 2017 I decided to seriously consider whether I should get one for myself. I'd already purchased items from printing services (i.materialise and Shapeways), both existing and of my own design, and that experience taught me that I wanted to do more and to iterate cheaper and faster. As a sanity check, I made a list of further things I thought I could make with one — not general themes but specific items to solve problems I had. That list was immediately filled with ten or so items, so I bought one.

I purchased a Prusa i3 MK3 kit printer (despite, at the time, uncertainty about whether the MK3 design was flawed, as a lot of people were reporting quality and reliability issues), set it up in April 2018, and have been printing things ever since (with very little trouble).

I've been posting my designs on Thingiverse (pictures there) and on GitHub — rather, those that I have declared finished and documented. There are another 30 or so that aren't published yet.

(You can also see hints of some other ‘new hobbies’ in what I've been posting, but I'm overly fond of putting things in chronological or at least dependency order.)

A Visual Introduction to DSP for SDR now includes some slides on digital modulation.

I wrote these a long time ago but the wording was targeted at an amateur radio audience. I finally got around to tweaking it to be slightly more general and publishing the result.

HTTPS, finally

Thursday, June 28th, 2018 10:34

In further news of updating my personal web presence, I have finally set up HTTPS for switchb.org. As I write this I'm working on updating all the links to it that I control.

The thing I found underdocumented in Let's Encrypt/Certbot is: if you want to (or must) manually edit the HTTP server configuration, what should the edits be? What I concluded was:

<VirtualHost *:443>
  ServerName YOUR DOMAIN NAME
  Include /etc/letsencrypt/options-ssl-apache.conf
  SSLCertificateFile /etc/letsencrypt/live/YOUR DOMAIN OR CERT NAME/cert.pem
  SSLCertificateKeyFile /etc/letsencrypt/live/YOUR DOMAIN OR CERT NAME/privkey.pem
  SSLCertificateChainFile /etc/letsencrypt/live/YOUR DOMAIN OR CERT NAME/chain.pem

  ...rest of configuration for this virtual host...
</VirtualHost>

Notes:

  • /etc/letsencrypt/options-ssl-apache.conf (which of course may be in a different location depending on your OS and package manager) contains the basic configuration to enable SSL (SSLEngine on) and certbot-recommended cipher options.
  • You have to have a separate VirtualHost entry for *:443 and *:80; there's no way to copy the common configuration between them, as far as I've heard.
  • By "CERT NAME" I mean the name assigned to a multi-domain-name certificate if you have requested one. You can find out the certificate names with the command certbot certificates. For a single domain it will be identical to the domain name.

As of right now, I've imported my blog contents from LiveJournal to Dreamwidth. Everything older than this entry was originally posted on LJ.

I've been procrastinating doing anything for a long time, because of the feeling that I really should move to a self-hosted blog that I can guarantee is forever unchanged. However, I haven't found satisfactory software or put much effort into it at all, and I think it's well past the point where having a place to write is more important to me than having the perfect customized URL-never-changes-again solution.

Background

In software-defined radio, there are well-established ways of visually representing the signal(s) in the entire bandwidth available from the hardware; we create a plot where the horizontal axis is frequency (using the Fourier transform to obtain the data). Then either the vertical axis is amplitude (creating an ordinary graph, sometimes called panorama) or the vertical axis is time and color is amplitude (creating a waterfall plot).

Here is an example of ShinySDR's spectrum display which includes both types (y=amplitude above and y=time below):

A further refinement is to display in the graph not just the most recent data but average or overlay many. In the above image, the blue fill color in the upper section is an overlay (both color and height correspond to amplitude), the green line is the average, and the red line is the peak amplitude over the same time interval.

We can see signals across an immensely wide spectrum (subject to hardware limitations), but is there a way to hear them meaningfully? Yes, there is, with caveats.

What's pictured above is a small portion of the band assigned to aviation use — these frequencies are used primarily for communication between aircraft in flight and air traffic control ground stations. The most significant thing about these communications is that there are a lot of different frequencies for different purposes, so if you're trying to hear “what's in the area”, you have to monitor all of them.

The conventional solution to this problem is a scanner, which is a radio receiver programmed to rapidly step through a range of frequencies and stop if a signal is detected. Scanners have disadvantages: they will miss the beginning of a signal, and they require a threshold set to trade off between missing weak signals and false-triggering on noise.

An alternative, specific to AM modulation (which is used by aircraft), is to make a receiver with very poor selectivity — selectivity being the ability to receive only a specific channel and ignore other signals. (Historically, when RF electronic design was less well understood and components had worse characteristics, selectivity was a specification one would care about, but only if one lived in an area with closely-spaced radio stations — today, every receiver has good selectivity.)

I'm going to explain how to build an unselective receiver in software, and then refine this to create spatial audio — that is, the frequency of the signal shall correspond to the stereo panning of the output audio. This is the audio analogue of the spectrum display.

Of course, this is an AM receiver and so it will only make intelligible sound for amplitude-modulated signals. However, many signals will produce some sound in an AM receiver. The exception is that a clean frequency-modulated (FM) or phase-modulated signal will produce silence, because its amplitude is theoretically constant, but this silence is still audibly distinct from background noise (if the signal is intermittent), and transmitted signals often do not have perfect constant amplitude.

Implementation

A normal software AM demodulator has a structure like the following block diagram (some irrelevant details omitted). The RF signal is low-pass filtered and downsampled to select the desired signal, then demodulated by taking the magnitude (which produces an audio signal with a DC offset corresponding to the carrier).

In order to produce an unselective receiver, we omit the RF filter step, and therefore also the downsampling — so we are demodulating at the full RF sample rate. The resulting real signal must then be low-pass filtered and downsampled to produce a usable audio sample rate (and because the high-frequency content is not interesting; see below), so we have now “just” swapped the two main components of the receiver.
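
As a sketch of that swapped structure (a simplification in plain Rust, mono rather than stereo, with a crude boxcar average standing in for a properly designed low-pass filter — the real thing is built from GNU Radio blocks):

// Unselective AM demodulation in the swapped order described above: take the
// magnitude of *every* complex RF sample first, then low-pass filter and
// decimate the resulting real signal down to an audio sample rate.
// decimation = rf_sample_rate / audio_sample_rate.
fn demodulate_unselective(rf: &[(f32, f32)], decimation: usize) -> Vec<f32> {
    // Magnitude of each complex sample: the AM envelope of everything in the
    // band at once (plus a DC offset from every carrier present).
    let envelope: Vec<f32> = rf.iter().map(|&(i, q)| (i * i + q * q).sqrt()).collect();

    // Low-pass filter and downsample: averaging each block of `decimation`
    // samples is the crudest possible filter, but it shows where the
    // filtering moved to in the chain.
    envelope
        .chunks_exact(decimation)
        .map(|block| block.iter().sum::<f32>() / decimation as f32)
        .collect()
}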

This simple change works quite well. Two or more simultaneous AM signals can be received with clear stereo separation.

One interesting outcome is that, unlike the normal AM receiver, the audio noise when there is no signal is quieter (assuming AGC is present before the demodulator block in both cases) — this conveniently means that no squelch function is needed.

The reason for this is obvious-in-hindsight: loosely speaking, most of the noise power will be at RF frequencies and outside of the audio passband. In order to have a strong output signal, the input signal must contain a significant amount of power in a narrow band to serve as the AM carrier and sideband. (I haven't put any math to this theory, so it could be nonsense.)

Adding stereo

In order to produce the spatial audio, we want the audio signal amplitude, in a single stereo channel, to vary with frequency. And that is simply a filter with a sawtooth frequency response. The signal path is split for the two stereo channels, with opposite-slope filters. (AGC must be applied before the split.)
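
The weighting itself is almost nothing — the design work is in realizing it as a pair of filters applied to the RF signal before demodulation. As a sketch of the weighting (not ShinySDR's actual implementation), the two channel gains for a signal at a given position in the received band are just complementary ramps:

// Gains for the left and right audio channels as a function of where a signal
// sits in the received band (0.0 = one band edge, 1.0 = the other). Applied
// as filters before AM demodulation, these opposite slopes make the stereo
// position track radio frequency.
fn channel_gains(band_position: f32) -> (f32, f32) {
    let p = band_position.clamp(0.0, 1.0);
    (1.0 - p, p) // (left, right)
}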

An undesired effect is that near the band edges, since the filter has a steep but not perfectly sharp transition from full-left to full-right, there is a lot of slope detection (output from frequency-modulated signals) that does not occur anywhere else in the band.

This design can of course be applied to more than two audio channels; using surround sound would avoid the need for steepness of the filter at the edges and map the inherently circular digitized spectrum to a circular space, so it's worth trying.

Notes

I've implemented this in ShinySDR (and it is perhaps the first novel DSP feature I've put in). Just click the “AM unselective” mode button.

Some “directions for future research”:

As I mentioned above, this is useless for listening to FM signals. Is there some technique which can do the same for FM? Naïvely creating an “unselective FM receiver” seems like it would be a recipe for horrible noise, because to an FM demodulator, noise looks like a very loud signal (the apparent frequency is jumping randomly within the band, and frequency maps to amplitude of the output).

If we declare that the output need not be intelligible at all, is there a way to make a receiver that will respond to localized signal power independent of modulation? Can we make an unmodulated carrier act like an AM signal? (CW receivers do this using the BFO but that is dependent on input frequency.)

The usual definition of the decibel is of course that the dB value y is related to the proportion x by

y = 10 · log10(x).

It bothers me a bit that there's two operations in there. After all, if we expect that y can be manipulated as a logarithm is, shouldn't there be simply some log base we can use, since changing log base is also a multiplication (rather, division, but same difference) operation? With a small amount of algebra I found that there is:

y = log_(10^0.1)(x).
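
Spelled out, the small amount of algebra is just the change-of-base identity (in LaTeX notation):

10 \cdot \log_{10}(x)
  = \frac{\log_{10}(x)}{0.1}
  = \frac{\log_{10}(x)}{\log_{10}\!\left(10^{0.1}\right)}
  = \log_{10^{0.1}}(x)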

Of course, this is not all that additionally useful in most cases. If you're using a calculator or a programming language, you usually have log_e and maybe log10, and 10·log10 will have less floating-point error than involving the irrational value 10^0.1. If you're doing things by hand, you either have a table (or memorized approximations) of dB (or log10) values and are done already, or you have a tedious job which carrying around 10^0.1 is not going to help.

As vaguely promised before, another update on what I've been working on for the past couple of years:

ShinySDR (why yes, I am terrible at naming things) is a software-defined radio receiver application.

Specifically, it is in the same space as Gqrx, SDR#, HDSDR, etc.: a program which runs on your computer (as opposed to embedded in a standalone radio) and uses a peripheral device (rtl-sdr, HackRF, USRP, etc.) for the RF interface. Given such a device, it can be used to listen to or otherwise decode a variety of radio transmissions (including the AM and FM broadcast bands everyone knows, but also shortwave, amateur radio, two-way radios, certain kinds of telemetry including aircraft positions, and more as I get around to it).

ShinySDR is basically my “I want my own one of these” project (the UI still shows signs of “I’ll just do what Gqrx did for now”), but it does have some unique features. I'll just quote myself from the README:

I (Kevin Reid) created ShinySDR out of dissatisfaction with the user interface of other SDR applications that were available to me. The overall goal is to make, not necessarily the most capable or efficient SDR application, but rather one which is, shall we say, not clunky.

Here’s some reasons for you to use ShinySDR:

  • Remote operation via browser-based UI: The receiver can be listened to and remotely controlled over a LAN or the Internet, as well as from the same machine the actual hardware is connected to. Required network bandwidth: 3 Mb/s to 8 Mb/s, depending on settings.

    Phone/tablet compatible (though not pretty yet). Internet access is not required for local or LAN operation.

  • Persistent waterfall display: You can zoom, pan, and retune without losing any of the displayed history, whereas many other programs will discard anything which is temporarily offscreen, or the whole thing if the window is resized. If you zoom in to get a look at one signal, you can zoom out again.

  • Frequency database: Jump to favorite stations; catalog signals you hear; import published tables of band, channel, and station info; take notes. (Note: Saving changes to disk is not yet well-tested.)

  • Map: Plot station locations from the frequency database, position data from APRS and ADS-B, and mark your own location on the map. (Caveat: No basemap, i.e. streets and borders, is currently present.)

Supported modes:

  • Audio: AM, FM, WFM, SSB, CW.
  • Other: APRS, Mode S/ADS-B, VOR.

If you’re a developer, here’s why you should consider working on ShinySDR (or: here’s why I wrote my own rather than contributing to another application):

  • All server code is Python, and has no mandatory build or install step.

  • Plugin system allows adding support for new modes (types of modulation) and hardware devices.

  • Demodulators prototyped in GNU Radio Companion can be turned into plugins with very little additional code. Control UI can be automatically generated or customized and is based on a generic networking layer.

On the other hand, you may find that the shiny thing is lacking substance: if you’re looking for functional features, we do not have the most modes, the best filters, or the lowest CPU usage. Many features are half-implemented (though I try not to have things that blatantly don’t work). There’s probably lots of code that will make a real DSP expert cringe.

Now that I've finally written this introduction post, I hope to get around to further posts related to the project.

At the moment, I'm working on adding the ability to transmit (given appropriate hardware), and secondarily improving the frequency database subsystem (particularly to have a useful collection of built-in databases and allow you to pick which ones you want to see).

Side note: ShinySDR may hold the current record for most popular program I've written by myself; at least, it's got 106 stars on GitHub. (Speaking of which: ShinySDR doesn't have a page anywhere on my own web site. Need to fix that — probably starting with a general topics/radio. Eventually I hope to have a publicly accessible demo instance, but there’s a few things I want to do to make it more multiuser and robust first.)

My interactive presentation on digital signal processing (previous post with video) is now available on the web, at visual-dsp.switchb.org! More details, source code, etc. at the site.

(P.S. I'll also be at the next meetup, which is tomorrow, January 21, but I don’t have another talk planned. (Why yes, I did procrastinate getting this site set up until a convenient semi-deadline.))

I have really failed to get around to blogging what I've been doing lately, which is all software-defined radio. Let's start fixing that, in reverse order.

Yesterday, I went to a Bay Area SDR meetup, “Cyberspectrum”, organized by Balint Seeber, and gave a presentation on visual representations of digital signals and DSP operations. It was very well received. This video is a recording of the entire event, with my talk starting at 12:30.

Here’s another idea for a video game.

The theme of the game is “be consistent”. It's a minimalist-styled 2D platformer. The core mechanic is that whatever you do the first time, the game makes it so that that was the right action. Examples of how this could work:

  • At the start, you're standing at the center of a 2×2 checkerboard of background colors (plus appropriate greebles, not perfect squares). Say the top left and bottom right are darkish and the other quadrants are lightish. If you move left, then the darkish stuff is sky, the lightish stuff is ground, and the level extends to the left. If you move right, the darkish stuff is ground, and the level extends to the right.

  • The first time you need to jump, if you press W or up then that's the jump key, or if you press the space bar then that's the jump key. The other key does something else. (This might interact poorly with an initial “push all the keys to see what they do”, though.)

  • You meet a floaty pointy thing. If you walk into it, it turns out to be a pickup. If you shoot it or jump on it, it turns out to be an enemy.
  • If you jump in the little pool of water, the game has underwater sections or secrets. If you jump over the little pool, water is deadly.

(I could say some meta-commentary about how I haven't been blogging much and I've made a resolution to get back to it and it'll be good for me and so on, but I think I've done that too many times already, so let's get right to the actual thing...)

When I wrote Cubes (a browser-based “Minecraft-like”), one of the components I built was a facility for key-bindings — that is, allowing the user to choose which keys (or mouse buttons, or gamepad buttons) to assign to which functions (move left, fly up, place block, etc.) and then generically handling calling the right functions when the event occurs.

Now, I want to use that in some other programs. But in order for it to exist as a separate library, it needs a name. I have failed to think of any good ones for months. Suggestions wanted.

Preferably, the name should hint that it supports the gamepad API as well as keyboard and mouse. It should not end in “.js”, because that's a cliché. Also for reference, the other library that arose out of Cubes development I named Measviz (which I chose as a portmanteau and for having almost zero existing usage according to web searches).

(The working draft name is web-input-mapper, which is fairly descriptive but also thoroughly clunky.)