We're back, babyyyy (& my 2023 music retrospective)

So, first item of business – my blog is back up and running! I had a fake post1 here explaining as much (though I forgot to mirror it on RSS, oops), but I upgraded to a new laptop recently, and in the process lost a bunch of data I needed to keep this site running. It took a while, but I’ve sorted it out. It’s possible that I may have missed something, and if I find that’s the case, I should still be able to rebuild it from a snapshot I have.

File Managers

In 2023-12, I got a nag email from Jam Software, passive-aggressively letting me know that I was using TreeSize on more machines than I was licensed for. Perhaps they meant my old laptop, from which I can’t delicense because said computer is an unbootable mess of corrupted data. But honestly, it’s hard to say what they meant; the email was as self-contradictory as it was condescending. TreeSize is great software, but a practice like this makes Jam a company I can’t recommend, and I’ve removed the links to their site accordingly.
Microsoft’s File (or Windows) Explorer1 has never been good2. Early Windows felt like a GUI for the sake of a GUI, built as competition to the Macintosh. The Mac’s Finder was itself quite simple, and never really grew into anything for power users. This makes sense for Apple, but Microsoft started off with a weak simulacrum of Finder and never really got around to embracing its power users. Before Windows was ever released, Peter Norton was selling an incredibly powerful file manager for DOS, Norton Commander3.

GemiNaut's clever solution to a peculiar problem

I’m a big proponent of the web being leaner and more text-based. In light of how strongly the web has veered in the opposite direction, it’s probably a radical position to say that I think less of the web should have any visual styling attached to it at all. More text channels where a reader can maintain a consistent, custom reading experience feel like a better solution than a bunch of disparate-looking sites all with their own color schemes, custom fonts, and massive headers1.

I often use text-based web browsers like Lynx and WebbIE. I also tend to follow a lot of people who maintain very webring-esque sites, even moreso than mine. But there is more internet than just the HTTP-based World Wide Web. Gopher is, or was, depending on your outlook, an alternative protocol to HTTP. It was more focused on documents that kind of reference one another in a more bidirectional way, and because it never really got off the ground in the way HTTP did, it also never really got the CSS treatment; it’s really just about structured text. Despite most of the information about Gopher on the web being historical retrospectives, enthusiasts of a similar mind to me are keeping the protocol alive2.

Then there’s Gemini3. Gemini is a sort of modern take on Gopher. For nerds like me, it’s wonderful that such an effort exists. If you’re interested in the unstyled side of the internet, Gemini is worth looking into. I do think it needs a bit of love, however; curl maintainer Daniel Stenberg has pointed out how lacking the implementation details are. I disagree with a few of Daniel’s points; Gemini falls into a lot of ‘trappings’ that HTTP escaped because HTTP development steered toward mass appeal. Gemini is for a small web, one for weirdos like me. The specification and implementation issues seem very real, however, and while I don’t think Gemini can or should get WWW-level acceptance, an RSS-sized niche would be nice, at least, and for that to happen, software needs a spec clear enough to know how it should work.

All of this only really matters for background context. I’ll likely post more of my thoughts on a textual internet in the future, and I’ll likely also be dipping my toes in publishing on a Gemini site. The point of this post, however, is to talk about a strange problem that happens with unstyled text-based content. While there are certainly far fewer distractions between the reader and the content, there’s also a sort of brain drain that comes from sites being visually indistinguishable from one another. I always just kind of assumed this was one of those annoyances that would never really be important enough to try to solve. Hell, the way most software development is going these days, I don’t really expect to see any new problem-solving happening in the UX sphere. But I recently stumbled across a browser that solves this in a very clever way.

GemiNaut4 is an open-source Gemini and Gopher browser for Windows that uses an identicon-esque visual system to help distinguish sites. Identicons are visual representations of hash values, typically used for a similar problem – making visually distinct icons for default users on a site. If everyone’s default icon is, say, an egg, then every new user looks the same. Creating a simple visual from a hash helps keep users looking distinct by default. I’ve often seen them used on password inputs as well – if you recognize the identicon, you know you’ve typed your password in correctly without having the password itself revealed.

Don Parks, who created the original identicon, did so to ‘enhance commenter identity’ on his blog5. But he knew there was more to it than this:

I originally came up with this idea to be used as an easy means of visually distinguishing multiple units of information, anything that can be reduced to bits. It’s not just IPs but also people, places, and things.

IMHO, too much of the web what we read are textual or numeric information which are not easy to distinguish at a glance when they are jumbled up together. So I think adding visual identifiers will make the user experience much more enjoyable.

-“Identicon Explained” by Don Parks via Wayback Machine

And indeed, browser extensions also exist for using identicons in lieu of favicons; other folks have pieced together the value in tying them to URLs. But GemiNaut uses visual representations of hashes like these to create patterned borders around the simple hypertext of Gopher and Gemini sites. The end result is clean pages that remain visually consistent, yet are distinctly framed based on domain. It only exists in one of GemiNaut’s several themes, and I wish these themes were customizable. Selfishly, I also wish more software would adopt this use of hash visualization.
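
To make the idea a bit more concrete, here’s a rough sketch of the general technique (emphatically not GemiNaut’s actual code): hash a domain, then use the digest bits to fill a small, symmetric grid. The same domain always yields the same pattern, and different domains almost never yield the same one.

    # A toy hash visualization: turn a domain into a small, deterministic pattern.
    # Illustration of the technique only, not GemiNaut's implementation.
    import hashlib

    def identicon_grid(domain, size=5):
        digest = hashlib.sha256(domain.encode("utf-8")).digest()
        rows = []
        for y in range(size):
            cells = []
            for x in range(size):
                mirrored_x = min(x, size - 1 - x)            # mirror for left/right symmetry
                bit_index = y * size + mirrored_x
                bit = (digest[bit_index // 8] >> (bit_index % 8)) & 1
                cells.append("##" if bit else "..")
            rows.append("".join(cells))
        return "\n".join(rows)

    print(identicon_grid("example.com"))
    print()
    print(identicon_grid("gemini.circumlunar.space"))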

Aside from browsing Gemini and Gopher, GemiNaut includes Duckling, a proxy for converting the ‘small web’ to Gemini. The parser has three modes: text-based, simplified, and verbose. The first is, as one might expect, just the straight text of a page. Of the other two, simplified is so stripped-down that apparently this blog isn’t ‘small’ enough to fully function in it6. But it does work pretty well in verbose mode, though it lacks the keyboard navigation of Lynx, WebbIE, or even heavy ol’ Firefox.

I had long been looking for a decent Windows Gopher client, and was happy to find one that also supports Gemini and HTTP with the Duckling proxy enabled in GemiNaut. But truly, I’d like to see more development in general for the text-based web. All the big browsers contain ‘reader modes,’ which reformat visually frustrating pages into clean text. ‘Read later’ services like Instapaper do the same. RSS still exists and presents stripped-down versions of web content. There is still a desire for an unstyled web, and it would be great to see more of the software that exists in support of it adopting hash visualizations for distinction.


TOTP: It's not Google Authenticator

I’ve been meaning to write about this since Twitter announced that only the eight-dollar-checkmark class would have access to SMS-based 2-factor authentication (2FA)1. Infosec circles got back into heated debates about the security implications of SMS-based authentication compared to the risk of losing access to the more-secure option of TOTP. This post isn’t really about that debate, but the major takeaways from either side are that any 2FA is better than none, that SMS is the lower-friction option more people will actually use, and that SMS is vulnerable to attacks (SIM swapping chief among them) that TOTP is not.

User friction is a very real issue, and TOTP will always be more frictional than SMS; I can’t solve that in this post. Personally, I prefer to use TOTP when available due to the risk of a SIM-swapping attack2. This post, however, is more concerned with the matter of keeping your secret portable and within your control if you decide to use TOTP for 2FA.

If you’ve made it this far without knowing what TOTP is, well, that’s almost certainly by design. I would hazard that most people who are aware of it know it exclusively as Google Authenticator. Getting an increasingly-vital, open standard to be almost exclusively associated with one shitty app from one shitty company is certainly very good for that company, but very bad for everyone else. So the first order of business here is to clarify that whenever you see a site advertising 2FA via ‘Google Authenticator,’ what they actually mean is TOTP, or more accurately RFC 6238, an open standard3. Additionally, if you’re reading this and you currently implement TOTP on a site you manage or are planning to, I implore you to describe it accurately (including Google Authenticator as one of several options, if necessary) rather than feeding into the belief that the magical six-digit codes are a product of Alphabet.

So what, then, is TOTP? Even if you know it isn’t A Google Thing, the mechanism by which a QR code turns into a steady stream of six-digit codes is not entirely obvious. This is, typically, how we set up TOTP – we’re given a QR code which we photograph with our authenticator app, and suddenly we have TOTP codes. The QR code itself contains just a few pieces of URI-encoded data. This may include some specifics about the length of the code to be generated, the timing to be used, the hash method being used, and where the code is intended to be used. Crucially, it also contains an important secret – the cryptographic key that, along with a known time reference, is the foundation from which the codes are cryptographically generated. Essentially, a very strong password is kept secure, and from this an easily-digestible temporary code is generated based on time. Because it comes from a cryptographic hash function, exposing one (or more) of these codes does not have the same security implications as exposing the key itself.
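
To demystify that pipeline a little, here’s a minimal sketch of RFC 6238 in Python. The otpauth:// URI and the secret inside it are made-up examples of the sort of thing a provisioning QR code decodes to; the math itself (HMAC-SHA1 over a time-based counter, then the truncation from RFC 4226) is just the standard.

    # A minimal TOTP sketch. The URI below is a fabricated example of what a
    # provisioning QR code contains; real authenticator apps do this same math.
    import base64, hashlib, hmac, struct, time, urllib.parse

    def totp_from_uri(uri, now=None):
        query = dict(urllib.parse.parse_qsl(urllib.parse.urlparse(uri).query))
        secret = query["secret"]                          # base32-encoded shared key
        digits = int(query.get("digits", 6))
        period = int(query.get("period", 30))
        key = base64.b32decode(secret + "=" * (-len(secret) % 8), casefold=True)
        counter = int((time.time() if now is None else now) // period)
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                           # RFC 4226 dynamic truncation
        value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(value % 10 ** digits).zfill(digits)

    example = ("otpauth://totp/Example:you%40example.com"
               "?secret=JBSWY3DPEHPK3PXP&issuer=Example&digits=6&period=30")
    print(totp_from_uri(example))

Because the only moving parts are the shared key and the clock, any app that can hold onto that key can generate the exact same codes, which is precisely why the portability question below matters.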

Keeping the key itself secret is, in fact, extremely important. Vendor lock-in aside, I assume this partially contributes to the opacity of what happens in between scanning the QR code and having a functional 2FA setup. A large part of the debate over whether ‘Google Authenticator’ is a good 2FA solution is the fact that once your secret is in the Google Authenticator app, it is not coming out. If your app data gets corrupted, or if something misbehaves during a phone transition, you’re out of luck. Hopefully you’ve kept the recovery codes for your accounts safe somewhere. If to you, as to most people, TOTP means Google Authenticator, then this is a very real concern. One goof could simultaneously lock you out of all of your accounts that are important enough to you that you enabled their 2FA.

When I was de-Googling myself years ago, I went through the somewhat-laborious process of generating all new codes to put into Authy. In addition to (or in lieu of, I’m not entirely sure) local storage, Authy keeps your TOTP info in the cloud, allowing you to keep several devices in sync, including a desktop app. While this is a better solution than Google Authenticator, I’m not linking to it as I still think it’s a pretty bad one. The desktop app is an awful web-browser-masquerading-as-desktop-software creation. The system of PINs and passwords to access your account is convoluted. And, while in theory you can put the desktop app into a debug mode and extract your data, there’s no officially-supported path toward data portability. The unofficial method could go away at any time; in fact, while I will credit Indrek Ardel with the original method4, it seemingly no longer works and one must find more recent forks that do. On top of this, the aforementioned bad desktop app and confusing set of passwords meant that it was still just easier to start fresh with new codes when I recently switched away from Authy. Finally, Authy is another corporate product. It’s owned by Twilio, and they seem to want a piece of that lock-in pie as well, offering their own 2FA service that is a quasi-proprietary implementation of TOTP5, as outlined by Ardel.

For years, I’ve been using various KeePass implementations in conjunction with one another as a portable password management solution. I can keep a copy of the database in my OneDrive (or whatever cloud storage I happen to have access to; right now it’s OneDrive but frankly that’s because it’s cheap — not because it’s good) and have access to it from my phone and various computers. I can sync copies to flash drives if necessary, or drop a copy on an M-Disc with other important files to stash in a safe. I was, for a long time, using an unmaintained fork, KeePassX, because it simply vibes better with how I want computers to look and feel than its replacement, KeePassXC, does. On mobile, I’ve been using Strongbox6. At some point, I noticed they added support for TOTP codes! The app will happily scan a QR code and add the relevant data to an entry.

This was interesting and novel, and I was already thinking about moving all of my codes into it, simply because storing them that way meant the data was easily recoverable. If I wanted to switch again in the future, I now had access to the secret and any other relevant parameters, and could generate a new QR code from them if need be. But then I happened to notice that KeePassXC, the desktop software I had been avoiding, also supports TOTP codes. And Strongbox’s implementation is fully compatible with KeePassXC’s! This changed things – suddenly this was a portable solution for accessing my TOTP codes and not merely the data behind them. I generated new codes for everything I use (and upgraded my security on a few things that had implemented TOTP without my noticing) and ditched Authy.

While you can add TOTP codes directly in the KeePassXC desktop app, you can’t do it directly from a QR code. Windows is fond of capturing screenshots to the clipboard7; I would love to see an option in KeePassXC that scans an image in the clipboard for a QR code (and then clears the clipboard). Getting codes out is extremely straightforward. Since the data is just in normal entries in my database, a code I scan in via Strongbox will show up in KeePassXC once OneDrive catches up. It is worth noting that this rather shatters the ‘something you know / something you have’ model of 2FA, but the flexibility is there to manage codes and passwords however the user is comfortable. The most important aspect for me was liberating my TOTP data from a series of lockboxes for which I lacked the key.
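
For what it’s worth, the workflow I’m wishing for isn’t much code. Here’s a rough sketch using Pillow and pyzbar; to be clear, this isn’t anything KeePassXC actually ships, just an illustration of what ‘scan the clipboard for a QR code’ could look like.

    # Sketch only: grab an image from the clipboard and look for an otpauth:// QR code.
    # Requires the pillow and pyzbar packages (pyzbar also needs the zbar library).
    from PIL import Image, ImageGrab
    from pyzbar.pyzbar import decode

    def otpauth_from_clipboard():
        image = ImageGrab.grabclipboard()            # Windows/macOS: screenshot, if any
        if not isinstance(image, Image.Image):       # None, or a list of file paths
            return None
        for symbol in decode(image):                 # find any QR codes in the image
            data = symbol.data.decode("utf-8")
            if data.startswith("otpauth://"):
                return data                          # the provisioning URI, ready to store
        return None

    print(otpauth_from_clipboard())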

Ultimately, I don’t think average users care much about data portability until they’re forced to. By the time their hands are forced, the path of least resistance tends to just be to stick with the vendor that’s locked them in8. With TOTP, the ramifications of this can be extremely annoying. More importantly, however, I think Google has done a very good job at preventing users from even knowing that TOTP portability is possible. Whether I convince anyone to store their codes in KeePass databases or not is immaterial; I really just want people to know they have options, and why they might want to use them. I want people to give just a small amount of thought to the implications of having a login credential that you not only have zero knowledge of, but also have zero access to. Frankly, I want people to stop doing free advertising for Google. And finally, I genuinely want a return to an internet where, occasionally, we make our users learn one little technical term instead of letting multi-billion dollar corporations coöpt everything good.


Rawwwwwr, let's talk about Wavosaur

Okay, so I promise I’m actually working on my 2022 media retrospective post, but I’ve also been itching to write about a particular piece of software that I’ve been getting a lot of use out of lately. I’ve been dabbling a bit with music production in tracker software, a style which is built entirely1 around the use of samples. As such, I’ve found myself needing to work directly on waveforms, editing samples out of pieces of media I’ve stolen or recordings I’ve made directly2. Having used Adobe Audition as both a multitracker and a wave editor for a long time, I rather like its approach as a dual-purpose tool. I do not, however, like Adobe, nor do I really want to wait for Audition to start up when I’m just chopping up waves. It’s too much tool for my current needs. I’ve also used Audacity in the past, which is a multitracker that certainly can function as a wave editor if you want it to. But, among other issues, it’s just not pleasant to use. So I’ve looked into a number of wave editors over the past few weeks, and have primarily settled on Wavosaur.

Wavosaur is not perfect software; I have a few quibbles that I’ll bring up in a bit. It is, however, really good software, with a no-nonsense interface that at least tries to be unintrusive, and is largely user-customizable. It’s quick to launch, and quick to load files. By default, it will attempt to3 load everything that was open when it was last exited; this can be disabled to make things even quicker. And while this is true of pretty much any audio editing software, it supports the import of raw binary data as well as enough actual media formats that I can open up an MP4 video of an episode of Arthur that I downloaded from some sketchball site and start slicing up its audio without issue.

Navigating waves is pretty straightforward. The scrollwheel is assigned to zoom instead of scroll, which I do not like; an option for this would be great. It’s not a huge deal, however, since I’m moving around more by zooming than by scrolling in the first place. Zoom in and out are not bound to the keyboard by default; I set horizontal zoom to Ctrl+/- and vertical to Ctrl+Alt+/-. I might remove the modifiers from vertical altogether, but my point is more that binding them to something logical makes navigation much easier, along with Ctrl+E and Ctrl+R, the default bindings for zooming to selection and zooming out all the way.

Wavosaur can deal with two different sorts of markers, and these are stored within the .wav file itself. Normal markers can be used to identify all manner of things in the file. No data (like a name, for example) can be stored along with the marker, so somewhat sparing use is probably best, but to my knowledge there is no limit to the number of markers that can be added. Other software does allow similar markers to be named and then navigated by name, but to my knowledge none of it stores these markers in a standardized way in the .wav file itself. I also haven’t seen other wave editing software that supports the other sort of marker that Wavosaur supports – loop markers. There can only be one pair of these — an in and an out — per file. Set your loops to the note’s sustain duration, and you have a very basic implementation of envelope control. While I don’t know of other software that writes this information, both trackers that I’m currently playing with — MilkyTracker and Renoise — will read it4. Wavosaur doesn’t really have a way to preview loop points in context, unfortunately, but the fact that it reads and writes them still makes for a useful starting point within the tracker.
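
As far as I can tell, these loop points live in the standard RIFF ‘smpl’ chunk, which is why other software can read them even though few editors bother to write them. Here’s a quick sketch of digging them back out of a file; the filename is just a placeholder.

    # Pull loop points out of a .wav file's 'smpl' chunk (start/end in audio samples).
    import struct

    def read_loop_points(path):
        with open(path, "rb") as f:
            data = f.read()
        pos = 12                                         # skip the 'RIFF....WAVE' header
        while pos + 8 <= len(data):
            chunk_id = data[pos:pos + 4]
            size = struct.unpack("<I", data[pos + 4:pos + 8])[0]
            if chunk_id == b"smpl":
                body = data[pos + 8:pos + 8 + size]
                num_loops = struct.unpack("<I", body[28:32])[0]
                return [struct.unpack("<II", body[44 + i * 24:52 + i * 24])
                        for i in range(num_loops)]
            pos += 8 + size + (size & 1)                 # chunks are word-aligned
        return []

    print(read_loop_points("kick.wav"))                  # 'kick.wav' is a placeholder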

My second-most-used wave editor over the past few weeks has been NCH WavePad5. Aside from the aforementioned loops, WavePad lacks two features that really make Wavosaur shine for sample creation. The first is the ability to snap to zero-crossings. Doing this helps to ensure that samples won’t end up popping when they trigger (or, with loop points, retrigger). This can easily be enabled and disabled in the menus, though toggling it can’t be bound to a key for some reason. The second is the ability to universally display time in audio samples6 instead of hours, minutes, and seconds. When fully zoomed in, WavePad switches to time based on audio samples, but I couldn’t find a way to set it as a permanent display. Often, with trackers, it’s advantageous to have a fairly intimate knowledge of how many audio samples you’re dealing with in a given sample. Being able to permanently set the display this way in Wavosaur is very helpful.
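
If zero-crossings are an unfamiliar concept: cutting a waveform anywhere other than a point where it passes through zero leaves a discontinuity, which plays back as a click or pop. Snapping is conceptually just nudging the selection boundary to the nearest sign change, something like this little numpy sketch.

    # Conceptual sketch of snap-to-zero-crossing: move an index to the nearest
    # point where consecutive samples change sign.
    import numpy as np

    def nearest_zero_crossing(samples, index):
        signs = np.signbit(samples)
        crossings = np.where(signs[:-1] != signs[1:])[0]   # last sample before each crossing
        if crossings.size == 0:
            return index
        return int(crossings[np.argmin(np.abs(crossings - index))])

    # e.g. snapping a rough selection start inside a sine burst
    wave = np.sin(np.linspace(0, 20 * np.pi, 5000))
    print(nearest_zero_crossing(wave, 1234))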

Wavosaur allows for resampling to an arbitrary sample rate. It has inbuilt pitch- and time-shifting, and a few basic effects like filters. For everything else, it supports VST in a straightforward way. You can build up a rack and preview things live, editing VST parameters while playing a looped selection of audio, and applying once things sound right. There’s some MIDI functionality, though I’m not sure the extent of it. Basic volume automation is included and works well enough. A wealth of visualization tools – spectrum analyzers and oscilloscopes and such – are included, and even have little widget versions that can live in the toolbar. It includes calculation tools for note frequency, delay, and BPM; BPM detection can also automatically place markers on beats. If you set markers at beats in this way, or manually, it will scramble audio based on markers for you.

I said I had a few quibbles that I’d like to get to. I already sort of mentioned one – while keyboard control is decent, not everything can be keybound. Like toggling snap-to-zero-crossings, there are quite a few actions that I would really like to have keyboard control over. Currently you can easily select between marker points by double-clicking within them, but the same can’t be done from the keyboard; overall, selection could use more granular control via menus and the keyboard. One very annoying thing is that doing an undo action resets the horizontal zoom out to 100%. If I’ve zoomed in on a section of audio that I’m looking to slice out into a new sample, I don’t want to lose that view if I need to correct a goofball mistake I made. Finally, something that a lot of good software has spoiled me for is a one-step process for making a new file from a selection. Right now it’s a two-step process of copying and pasting-as-new, which is fine. But it does sort of add up when you’re chopping up a bunch of samples. These are all pretty minor issues, and overall I think Wavosaur is a great little waveform editor. If you’re working with samples for trackers, I think it may be the best choice (on Windows, at least).


Some things I have been meaning to write about but haven't

So… I have a few posts that I’ve sort of been working on, but they’re involved. I have others that I just haven’t been motivated to actually work on; motivation in general has been difficult lately. And there have been some things I’ve played with or thought about recently, but I just can’t figure out a way to sort of give those things the narrative structure that I hope for when I’m writing here.

Revisiting the travel chess computer

Computers are interesting things. When we think of computers, we tend to think of general-purpose computers – our laptops, smartphones, servers and mainframes, things that run a vast array of programs composed of hundreds of thousands of instructions spanning a multitude of chips. When I was younger, general-purpose computers were more-or-less hobbyist items for home users. Single-purpose computers still exist everywhere, but there was certainly a time when having a relatively cheap, often relatively small computing device for a specific task was either preferable to doing that task on a general-purpose computer, or perhaps the only way to do it. Something like a simple four-function calculator was a far more commonplace device before our phones became more than just phones.

Chess poses an interesting problem here. By modern standards, it doesn’t take much to make a decently-performing chess computer. The computer I’ll be discussing later in this post, the Saitek Kasparov Travel Champion 2100, runs on a 10MHz processor with 1KB of RAM and 32KB of program ROM (including a large opening library). It plays at a respectable ~2000 ELO2. This was released in 1994, a time when the general-purpose computer was becoming more of a household item. The Pentium had just been released; a Micron desktop PC with a 90MHz Pentium and 8MB of RAM was selling for $2,499 (the equivalent of $4,988 in 2022, adjusting for inflation)3. 486s were still available; a less-capable but still well-kitted-out 33MHz 486 with 4MB of RAM went for $1,399 ($2,797 in 2022 dollars). Chessmaster 4000 Turbo would run on one of these 486s, albeit without making the recommended specs. It cost $59.95 ($119.85 in 2022 dollars)4, and while it’s hard to get a sense of the ELO it performed at, players today still seem to find value in all of the old Chessmaster games; they may not play at an advanced club level, but they were decent engines considering they were marketed to the general public. A more enthusiast-level software package, Fritz 3, was selling for 149 DEM5, which I can’t really translate to 2022 USD, but suffice it to say… it wasn’t cheap. Fritz 3 advertised a 2800 ELO6; a tester at the time estimated it around 2440 ELO. Interestingly, when that tester turned Turbo off, reducing their machine from a 50MHz 486 to 4.77MHz, ELO only dropped by about 100 points.

All of this is to say that capable chess engines don’t need a ton of processing power. At a time when general-purpose computers weren’t ubiquitous in the home, a low-spec dedicated chess computer made a lot of sense. The earliest dedicated home chess computers resembled calculators, lacking boards and only giving moves via an LED display, accepting them via button presses. Following this were sensory boards, accepting moves via pressure sensors under the spaces. These were available in full-sized boards as well as travel boards, the latter of which used small pegged pieces on proportionally small boards with (typically clamshell) lids for travel.

In 2022, we all have incredibly powerful computers on our desks, in our laps, and in our purses. Stockfish 15, one of the most powerful engines available, is free open source software. Chess.com is an incredible resource even at the free level, powered by the commercially-available Komodo engine. Full-size electronic boards still exist, which can interface with PCs or dedicated chess computers. Some of these products are pretty neat – DGT makes boards that recognize every piece and Raspberry Pi-based computers built into chess clocks. There is an undying joy in being able to play an AI (or an online opponent) on a real, physical, full-sized board.

The market for portable chess computers has pretty much dried up, however. Pegboard travel sets eventually gave way to LCD handhelds with resistive touchscreens and rather janky segment-based piece indicators. These were more compact than the pegboards, and they required less fiddling7 and setup. The advent of the smartphone, however, really made these into relics; a good engine on even the lowest-end modern phone is just a better experience in every single way. On iOS, tChess powered by the Stobor engine is a great app at the free level, and its pro features are well-worth the $8 asking price. The aforementioned chess.com app is excellent as well.

When I was quite young, I improved my chess skills by playing on a 1985 Novag Piccolo that my parents got me at a local flea market. I loved this pegboard-based computer – the sensory board which indicated moves via rank-and-file LEDs, the minimalist set of button inputs, even the company’s logo. It was just a cool device. It is, of course, a pretty weak machine. Miniaturization and low-power chips just weren’t at the state that they are now, and travel boards suffered significantly compared to their full-sized contemporaries. The Piccolo has been user-rated at around 900 ELO; it doesn’t know about things like threefold repetition, and it lacks an opening book.

I’ve been trying to get back into chess, and I decided that I wanted a pegboard chess computer. Even though the feeling pales in comparison to a full-sized board, I don’t have a ton of space, I tend to operate out of my bed, and I have that nostalgic itch for something resembling my childhood Novag. Unfortunately, things didn’t improve much beyond the capabilities of said Novag during the pegboard era. I would still love to find one of the few decent pegboard Novags – the Amber or Amigo would be nice finds. But I ended up getting a good deal on a computer I had done some research on, the aforementioned Saitek Kasparov Travel Champion 2100 (from hereon simply referred to as the 2100).

I knew the 2100 was a decent little computer with a near-2000 ELO8 and a 6000 half-move opening library. I liked that it offered both a rank-and-file LED readout and a coordinate readout on its seven-segment LCD. Knowing that these pegboard computers struggled to achieve parity with their full-sized counterparts, I was pretty surprised to find some above-and-beyond features that I was familiar with from PC chess engines. The LCD can show a wealth of information, including a continuous readout of what the computer thinks the best move is. A coaching mode is present, where the computer will warn you when pieces are under attack and notify you if it believes you’ve made a blunder. A random mode is present, choosing the computer’s moves randomly from its top handful of best options instead of always choosing what it believes is the best of the best. You can select from themed opening books or disable the opening library entirely. These are all neat features that I really wasn’t expecting from a pegboard computer9.

I can see why the 2100 tends to command a high price on the secondary market – if you want a traditional pegboard chess computer, it seems like a hard one to beat. I’m certainly intrigued by some of the modern solutions – the roll-up Square Off PRO looks incredibly clever10. But for a compact yet tactile solution that I can tune down to my current skill level or allow to absolutely blast me, the 2100 checks a lot of unexpected boxes. As I mentioned, these travel units died out for good reason; I can play a quick game on chess.com against Komodo and get an incredibly detailed, plain-language analysis afterward that highlights key moments and lets me play out various ‘what if?’ scenarios. I do this nearly every day as of late. Purchasing a nearly-three-decade-old chess computer may have been a silly move. But it’s a different experience compared to poking at an app on my phone. It’s tactile, it’s uncluttered. It’s scaled down, but there’s still something about just staring at a board and moving pieces around. I still use my phone more, but the 2100 offers something different, and it offers that alongside a decent engine with a flexible interface11. Maybe one of these days someone will come out with a travel eboard, but I doubt it. Solutions like the Square Off PRO are likely the direction portable chess computers are headed. This is fine; it’s a niche market. I’m just glad a handful of decent models were produced during the pegboard era, and I’m happy to have acquired the Saitek Kasparov Travel Champion 2100.


A doughnut in my ear: the Sony Linkbuds

Ever since I saw Techmoan’s video about the new Sony Linkbuds, truly wireless1 earbuds with an open design made possible by virtue of a doughnut-shaped driver, I’ve been enthralled. I always prefer open headphones, which can be tricky when you’re buying things meant to go in your ear. Even within the realm of full-sized, over-the-ear cans, it’s a niche market. People like having a silent, black background. I understand this, but it isn’t for me. For one thing, silence gives me anxiety. For another, the sort of platonic ideal folks tend to have for music – the live performance – is never a silent black box either. Ambient sound exists; even the much-misunderstood 4′33″ by John Cage is more of an exercise in appreciating ambient sound than it is an exercise in silence. Perhaps that’s a pretentious way of looking at things, but this widespread belief that audiophile greatness starts in a vacuum has certainly left the market with a dearth of open designs.

Earbuds themselves are a dying breed. In-ear monitors (IEMs) direct sound through a nozzle directly into the ear canal, where their tips are inserted. This gives a tight physical connection to the sound, and it – once again – isolates the listener from the world better, leading to a more silent experience. I’ve used – and enjoyed – a handful of semi-open IEMs, but… IEM fit is tricky. My ears are different enough in size that I generally need a different tip size for either ear. Even when I do get the ‘right’ fit, it nearly always feels like a delicate balance, and one that requires me to sit a certain way, move very little, and avoid shifting my jaw at all. For quite some time now, I’ve been using Master and Dynamic’s MW-07 Plus. Their design is such that an additional piece of silicone butts up against the back of the ear’s antihelix for additional support, minimizing fit issues significantly. They also sound great. I like these enough that I own three pairs of them2. Getting them seated properly can still be an issue, though, and… they aren’t open. They do provide an ‘ambient listening’ mode that’s sort of a reverse of active noise cancelling – using the inbuilt microphones to pick up ambient noise and inject it into the stream. It’s better than nothing. A new problem has started to manifest with the MW-07s in which that additional piece of silicone doesn’t always fit over the IEM tightly enough, and it obscures the sensor that detects whether or not the IEM is in your ear. The result has been a lot of unintentional pausing, and a lot of frustration.

I spend a fair amount of time listening to a Walkman or a DAP using full-size cans (generally Sennheiser HD-650s), but I also do like the convenience of casual listening from my stupid phone with no headphone jack via Bluetooth. Right now, this means either one of my several pairs of MW-07s, or the weird little doughnuts that are the Sony Linkbuds. I’ve been putting the Linkbuds through their paces for a couple of weeks now, and they’ve quickly become my favorite solution for casual listening. I will get into their caveats – which are not minor – but the TL;DR is that they sound good enough, they fit well, and they’re just… pleasant to use. I know the hot take is to say that Sony lost their flair for innovation and experimentation in the ‘90s or whatever, but they are still doing interesting things. It may not be particularly impressive on a technical level, but someone still had to greenlight the R&D for designing a custom doughnut-shaped driver for the Linkbuds. It’s a shot in the dark for an already-niche product market. These aren’t going to be for everyone, but if the idea of a truly wireless earbud with a gaping hole in the middle to allow ambient sound in is appealing to you – I think Sony did good.

Comfort

To start, the Linkbuds are extremely comfortable. Unlike any IEM I’ve used, they quickly disappear from my ear. If I shake my head, I’ll notice the weight there, but they stay in place fine. Being earbuds instead of IEMs, there are no tips to worry about sizing. But like the MW-07s, there is an additional bit of silicone – in this case, a tiny little hoop that catches behind the top of the antihelix. These are included in five sizes, and they help with positioning enough that choosing the ‘wrong’ size is detrimental to sound and not just the security of the earbud in the ear. They seem too flimsy to do anything, but they’re vital to the fit, and that flimsiness ensures that they remain light and comfortable. Aftermarket manufacturers are selling replacements for these; I’ve acquired some pink ones to make them a bit more me. The amount of silicone contacting the skin is low enough to keep itchiness to a minimum during extended wear – a discomfort that became a reality after wearing the MW-07s for long stretches.

Sound

The Linkbuds are not an audiophile-grade experience. Compared to the MW-07s, they’re… thin. But they don’t sound bad; they don’t sound particularly cheap or tinny. Their sound is rather hard to describe. Some folks have done frequency response charting3 of them, and… yeah the low end rolls off early and it rolls off hard. This can be compensated for quite serviceably with the inbuilt equalizer (more on this shortly), but these are never going to hit you with thick sub-bass. Music that relies heavily on this will sound a bit thin. Occasionally, a piece of pop music like Kero Kero Bonito’s ‘Waking Up’ will surprise me with just how much the production leans on the low-end. But for the most part, the equalizer gets the upper bass present enough that music tends to sound full enough to be satisfying.

There is one really peaky little frequency range somewhere in the 2500Hz band. I first noticed it on µ-Ziq’s ‘Blainville’; the repeating squeal noise was… unbearable. This manifested in a few other tracks as well4, but was also tameable through equalization. Beyond these frequency response issues, it’s tricky to talk about the sound of them. They sound big. Not necessarily in terms of soundstage, but the scope of the reproduced sound itself feels more like it’s coming from large cans firing haphazardly into my ears than tiny little doughnuts resting precisely inside them. I assume this can largely be attributed to the good fit – I’ve used high-end wired earbuds like the Hifiman ES100, and when they’re properly positioned they sound great… but keeping them properly positioned is tough. Soundstage is fine, imaging is fine. I actually enjoy them quite a bit for well-recorded classical, particularly pieces for chamber ensembles. In a recording like Nexus and Sō Percussion’s performance of Music for Mallet Instruments, Voices, and Organ, not only do the instruments feel like they exist in a physical space, you can almost sense where on the instrument a given note is being struck.

The app

I’ve mentioned the equalizer twice now, but before I can talk about that, I have to talk about the app. In general, a product is less appealing to me if it involves an app – this tends to mean some functionality only exists in a terrible piece of software that probably won’t exist anymore in three years. This is true of the Linkbuds as well, but two things make me reluctant to care about it: the functionality feels pretty set-it-and-forget-it to me, and they’re already bound to a phone by virtue of design. The app lets you set quite a few things including some strange 3d spatial stuff that I haven’t tested, a listening profile designed to liven up low-bitrate lossy compression, and integration with other apps. This integration is very limited, only supporting Spotify (which you shouldn’t do) and a few other things I hadn’t heard of6. It also lets you set the language for notifications (for low battery and the like), and upgrade the firmware. Then there’s the equalizer – five bands, plus a vague ‘Clear Bass’ slider. I’ve found I’m happiest with the following settings:

Clear Bass: +7
400Hz: +1
1kHz: ±0
2.5kHz: -4
6.3kHz: -3
16kHz: -3

This obviously isn’t going to work miracles with the sub-bass, but it does bring enough bass presence to make for a fuller sound, and it smooths out that peak in the 2.5kHz band. The equalizer has a bunch of presets, and lets you store three of your own presets. Frustratingly, while the app supports a bunch of different Sony headphones, it’s also a different app than the one used for Sony speakers.

A final thing that the app allows for is the setting of the four tap commands that are available to you – twice or thrice on either Linkbud. These are limited to a handful of presets – one plays/pauses and skips to next track, one is volume up/down, one is next/previous track, etc. I wish these were just fully customizable. I find it easier to adjust volume with the physical buttons on my phone, so I’m using pause/next and next/previous. I’d love to tweak this for a couple of reasons – not having a redundant next command, and swapping the order of next and previous. Regardless, this is more useful than the hardcoded two buttons on my MW-07s. And while tapping on the Linkbuds feels silly vs. pressing an actual button… it is much easier.

A few final notes

Battery life is bad. I get it, the shape of them and the fact that half of the unit is a doughnut-shaped driver means there isn’t much room for a battery. But the reality is that the MW-07s last long enough to get through a workday, and the Linkbuds just… won’t. Which sucks, because getting through each new slogging day of work pretty much requires a constant stream of high-energy music. The case they come in doesn’t have a great battery either, and this is less forgivable.

Compared to the MW-07s, I really like the way the case feels. It’s made of the same plastic as the Linkbuds themselves, which just… has a nice feel to it. The case is also just weighted in a very pleasing weeble-wobbly way. The Linkbuds snap into the case very positively, whereas the MW-07s just kind of flop into place. The Linkbuds’ case has a single LED, which reports the battery status of the case itself when you open it, and each Linkbud when you snap them into place. It only seems to report vague green and orange levels. The MW-07 case, on the other hand, has three LEDs which clearly correspond to case, and left and right. These LEDs have three vague levels instead of two.

One last silly detail that the Linkbuds get better than the MW-07 is the volume that they use for their own sounds. Tap confirmations and low battery notifications are soft sounds, played at reasonable volumes. The MW-07’s notification for switching on ambient listening mode is just a little too loud, and the low battery notification is absolutely alarming. This is something that a lot of companies seem to neglect – generic units are usually terrible about it. Master and Dynamic certainly tried harder than generic vendors, but Sony did it right. It’s a little thing, but little things add up.

I guess this post largely serves to take away my audiophile cred, but the reality as I age and my life gets more complicated is that there’s listening as an activity and then there’s listening as background. The activity is akin to enjoying a 15-year Macallan Fine Oak while background listening just gets you through the day like a few shots of rail vodka. The Linkbuds serve my casual background listening needs really well, and they sound perfectly fine doing it. They pale in comparison to my Sennheiser IE-800s, but… they’re supposed to. They’re doing a different job. And while my MW-07s may sound better, they’re increasingly not worth the hassle when I want to both listen to music and move my body. I hope Sony makes a second version of these. I want more doughnut-shaped drivers out in the world. I want Sony to really go ham on such an open design. I want Sony to keep being weird. But mostly I just want to know I’ll be able to get a replacement pair a few years down the line, because I think I’m going to want to keep using these for a while.


The low end of the high end

Recently, Techmoan posted a video about his daily driver Walkman. This sort of pushed me to go back and finish this post that I had a half-hearted outline of regarding my daily driver Walkman. I don’t really have an exotic collection; my interesting pieces are along the lines of a My First Sony1 and the WM-EX999, notable for its two playback heads, allowing for precise azimuth settings for both directions of play. I also don’t really take my Walkmans out much; they just hang out near me as I do my day job. What I want out of a ‘daily driver,’ therefore, isn’t something that stands out by being the most compact or affordable. Rather, it’s just reliable, pleasant to use, sounds good, and has the tape select options I need (Dolby B and I/II formulation).

The deck that I’ve ended up on to fill this role is the WM-DD11. Readers familiar with Walkman nomenclature will recognize ‘DD’ as indicative that the deck uses Sony’s Disc Drive mechanism. These mechanisms use a servo-controlled motor that butts up against the capstan via a disc, leaving the sole belt path for the takeup hub. They provide good speed accuracy, largely impervious to rotation and movement of the deck. They’re mechanically simple and quite reliable, with the exception of an infamously fragile piece – the center gear. Made of a deterioration-prone plastic, this gear has failed on essentially every DD Walkman out there. While the decks continue working for some time after the gear cracks, a horrid clicking sound is emitted with every rotation. Some folks fill the inevitable crack with epoxy, buying the gear some time. Replacements are also available. But every DD deck out there either doesn’t work, clicks, or has been repaired in some way or another.

My WM-DD11 does not have any center-gear-related issues, nor has the gear been replaced or repaired. Unlike most DD models2, the WM-DD11 has no center gear. DD models were high-end models, and the WM-DD11 sat in the strange middle-ground of a stripped down, low-end version of a high-end design. This is, to me, what makes the WM-DD11 special. It’s what makes it an interesting conversation piece, and it’s also what makes it a great daily driver. Like most DD models3, it only plays unidirectionally. This is, perhaps, inconvenient for a daily driver, but it also removes the b-side azimuth issue that affects bidirectional models4. Like most DD models, it has manual controls for a couple of tape settings – Dolby B on/off and Type I/Types II & IV. And while it lacks the quartz-lock that some DD models5 had implemented by this point, the standard servo-driven disc drive system is still more accurate and stable than other low-end models of the era.

The similarities largely stop there, though. Pressing ‘play’ on the deck immediately reveals the primary difference – lacking the soft-touch logic controls of most DD models, the WM-DD11 has a mechanical ‘piano-keys’ type transport. Unlike most piano keys, Sony did premium the buttons up a bit by keeping them in the standard DD position, on the face opposite the door. This means there’s a larger mechanical path than if they were positioned directly above the head, though I doubt this complexity really affects reliability much. People often malign mechanical transports, but I rather like the physical connection between button and mechanism. They tend to feel more reliable to me as well; soft-touch mechanisms still have mechanical bits, they just have to be controlled by the integration of some motor.

With the DD models, specifically, this tracks. The cursed gear facilitates things like tape-end detection in the soft-touch DD models. I certainly don’t think Sony knew this plastic was going to deteriorate; I don’t think they knew all the capacitors they were buying in the ‘80s were going to leak after a couple of decades either. But, despite the fact that there are only a handful more internal bits in the soft-touch transport, one of these has a critical fault. The gear itself is odd – a large, donut-shaped thing that goes around a metal core. It wouldn’t surprise me if this design led to the use of a plastic that wasn’t so thoroughly time-tested.

Costs were cut in some other places – the tape head stays with the body and not the door, and there are some plasticky bits that certainly don’t have the premium feel of other DD models. But the WM-DD11 fits in a market segment that seems underappreciated to me. It’s high-end in the ways that matter, while being stripped down elsewhere. That middle-ground rarely seems to exist these days, with performance going hand-in-hand with luxury and the low-end solely existing at the bottom of the barrel. It’s a false binary presumably created by the need to sexy up anything decent enough to market. It’s hard to sell half the features, but I wish companies would try. I want the low end of the high-end market to exist. I want products like my reliable, simple, yet still very performant WM-DD11 to exist.


Commercial music media, a tier list

I’ve owned a lot of audio equipment over the years. Radio receivers, (pre)amplifiers, and equalizers of course, but more importantly the devices required for listening to… many different forms of media. I was late to the party for plenty of them, never an early-adopter and often only dipping my toes into a medium after it was entirely out of production. At some point, streaming happened, and new physical formats just kinda… stopped.

On Wordle and Fragmentation

So, the New York Times bought the word game phenomenon Wordle1 for ‘low seven figures,’ or expressed in more human terms, ‘upward of a million dollars.’ I’m happy that Josh Wardle got his bag, though I despise the NYT for things like rampant copaganda, warmongering, transphobic editorial practices, and puzzlingly enough, boot-licking anti-labor covid jokes. It seems logical that Wordle will eventually get wrapped in with the other games that the NYT bundles alongside its crossword section, itself mired in controversy.

2020 & 2021 Media retrospective

At the end of every year, I try to do a bit of a media retrospective of my favorite stuff that came out that year. I neglected to do one last year, what with all the things going on. But, some good art has happened over the last two years. Particularly music, in my opinion, which was originally all I was going to stuff into this post. But, I opted to add video games and movies to the mix.

A dismal sea of color

I have been deeply into audio equipment for as long as I can remember. When I was in high school, I was always scouring Goodwills and Hamfests for the next old thing that would bump up my hi-fi game and look good while doing it. The latter part wasn’t difficult; not everything was Bang & Olufsen, but audio equipment from the ‘70s and ‘80s pretty much universally looks interesting if not outright lovely.

Experiencing Tetris Effect

In 1984, Alexei Pajitnov wrote Tetris for the Elektronika 60 computer. This was not a home computer by any stretch of the imagination; it was a Soviet interpretation of a DEC LSI-11, itself a shrunk-down version of the PDP-11. It had no display capabilities of its own, and this initial release of Tetris had to be played on a text-mode terminal that communicated with the computer. Pajitnov, working at the Soviet Academy of Sciences, was tasked with demonstrating the limits and capabilities of the equipment being developed.

A few of my favorite: Woodcased pencils (with erasers)

Throughout this piece, I link to products on CWPE. This post has been a couple of months in the making, and in the midst of my idleness, CWPE announced that they’re closing down in 2021-11. I’ll try to find other stockists in the future and update the links, but at least two of the recommendations were exclusive to the shop, so… all around disappointing.
I have a Thing I’ve been meaning to start trying to write and draw. And while I keep failing to start trying in a meaningful sense, I have done the most important first step – going through a bunch of my woodcased pencils, buying a couple more to try out, and figuring out what still feels best to me. You get more of a selection, and a better selection if you go for pencils without that nubby little eraser on the end.

On Heathcliff and hackish image manipulation

This should probably just be two posts, but it’s been months since I posted anything and I’m just going to go for it. But if you just want to see me talk about a terrible bodge-job of a shell script, scroll down a bit.
For a while I’ve had this idea to start a Twitter bot that posts a strip made up of a random Heathcliff panel paired with a random Heathcliff caption. There are a few reasons for this, the first of which is that under Peter Gallagher’s tenure, Heathcliff has gotten… weird. Recurring themes include friendly but inexplicable robots, helmets that communicate what their wearer is thinking (maybe?), the Garbage Ape, the magical levitating properties of bubblegum1, the meat tank… the strip has gotten to be a real experience for every possible state of the human mind.
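
The bodge itself lives further down in the original post, but the core idea, pairing a random panel with a random caption, is only a few lines in any language. Here’s a hypothetical Python version, with made-up directory names, just to illustrate:

    # Illustration only, not the shell script from the post: pick a random panel
    # image and a random caption image and stack them into one strip.
    import random
    from pathlib import Path
    from PIL import Image

    panels = list(Path("panels").glob("*.png"))      # hypothetical folders of cropped
    captions = list(Path("captions").glob("*.png"))  # panels and captions

    panel = Image.open(random.choice(panels))
    caption = Image.open(random.choice(captions))

    strip = Image.new("RGB", (max(panel.width, caption.width),
                              panel.height + caption.height), "white")
    strip.paste(panel, (0, 0))
    strip.paste(caption, (0, panel.height))
    strip.save("strip.png")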

Sony's resin tubes

Sony has a history of making ‘lifestyle’ consumer electronics alongside their more boring, everyday items. From the 1980s My First Sony line designed to indoctrinate children into brand loyalty1 to the beautiful clutch-like Vaio P palmtop, the company has never been afraid to experiment with form, function, and fashion. Occasionally, they’ll release wild products like the XEL-1 which read like concepts but actually get released, albeit at silly prices. One such item was the made-to-order NSA-PF1 ‘Sountina’, a six-foot tall speaker released in or around 2008.

You need a Torx T10 driver to disassemble the 8BitDo Arcade Stick

Not too long ago, I decided to get myself an 8BitDo Arcade Stick. If you’ve spent much time here, you might’ve noticed I’m rather into retrogaming. I grew up with joystick-based consoles and arcades, and while I’m happy using a modern gamepad these days, I do often wish I had that arcade feel when I’m emulating an older system. I was also drawn to the tinkering nature of an arcade stick; the actual joysticks and buttons are largely standardized, modular parts.

The voice of a wizard hacking away

My pals at Sandy Pug Games have opened up preorders for WIZARDPUNK, a zine of various wizard stories and whatnot. It’s full of brilliant work, and I highly recommend checking it out! I have a little epistolary slice-of-life piece in it, which I’m honestly pretty proud of. In addition to this, I was asked if something rather curious was possible, if there was any way some audio-producing computer code could be squished down to a reasonable size such that someone could theoretically type it in.
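
I won’t spoil how that turned out here, but as a general illustration of just how small audio-generating code can get, a classic bytebeat-style formula fits in a line or two. This isn’t the piece from the zine, just a well-known minimal example written out in Python; it produces raw unsigned 8-bit, 8kHz samples that most wave editors can import.

    # A classic minimal bytebeat formula (t & t >> 8): thirty seconds of raw 8-bit audio.
    # Nothing to do with the zine piece itself; just proof that tiny code can sing.
    with open("out.raw", "wb") as f:
        f.write(bytes((t & t >> 8) & 255 for t in range(8000 * 30)))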

Two slim keyboards

I’m writing this on a keyboard I ordered from Drop quite some time ago, the Morgrie RKB68. My daily driver up until now has been an also-recently-acquired Keychron K3. Both of these keyboards use slim switches; prior to this, I was using another Keychron keyboard with full-size Keychron optical switches. I do much of my writing/whatever from bed, and the way I configure myself doesn’t really work out well with a traditional mechanical keyboard; the overall height is just too chunky. Fortunately, a lot of progress is being made in the mechanical keyboard space; simply getting a Bluetooth model was an exercise in frustration just a few years ago.

I should say that I am a clicky-clacky typist. My favorite switches ever are the IBM Model M buckling springs, but in a modern setting I gravitate toward Matias’s take on the Alps switch. Few keyboards/keycaps are designed around Alps, so the next best switches for me are Cherry Greens. This is the sort of baseline that I’m working with for this post-that-approximates-a-review. As I mentioned, I had been using a Keychron K6, with Keychron’s optical simulacra of Cherry Blue switches. Blue is already a step down from Green for me, but I was making do with it. Optical switches are conceptually quite interesting to me; the core mechanical elements that provide the tactile satisfaction can be left in place while changing the electronic element to something solid state. Had I not wanted to dabble in this, I could’ve bought the hot-swappable version of the K6 and swapped in some Cherry Greens. I’m glad I didn’t, because as I mentioned, the keyboard is just too chunky for the intended use-case. You mention this sort of thing around mechanical keyboard groups, and you get chastised, because of course it’s chunky! The big fat switches make the magic! Which… both things can be true. It can be an unfortunate reality while still being… the reality.

The optical switches themselves were… okay? Most of the keyboard was fine, though not quite comparable to Cherry Blues, but the wider-than-letter keys? They squeaked like a poorly-oiled mouse. It was quite annoying. Yet the concept still compelled me enough that I opted for the optical switches on the K3 as well. These switches are definitely better in that they are not squeaky! And overall, they feel less mushy as well. Putting aside the size advantage, these actually feel better to type on than the full-sized Keychron optical switches. The other keyboard that I received, the Morgrie, uses traditional mechanical switches, albeit in a slim form-factor by Kailh. While this post is the first long(ish) thing that I’ve typed on the Morgrie, I have put time into testing it, using it for day-to-day typing and speed tests. And I have thoughts about both keyboards…

Size, and form factor

Initially, I was put off by the size of the Morgrie. It is approximately the same depth as the K3, but noticeably wider. Despite this, it has a full row fewer of keys; the K3 has an actual function row. I use function keys fairly infrequently; I think my most common usage is F2 to edit tags in Acrobat1. I’ll touch on this more in the layout section, but it’s worth noting that there are just many more keys on the smaller keyboard. The reason the Morgrie is so much larger is that it has a fairly prominent bezel surrounding the keys. I feel like this might annoy me on a desk, but it’s kind of nice having a place to rest the ol’ thumbs when typing in bed. The thick aluminum (I believe) bezel also makes the Morgrie heavy compared to the K3. It is solid. It feels well-built; the K3, while perfectly fine, feels flimsy in comparison. Overall, I don’t mind the size of the Morgrie as much as I expected, but the K3 gets the credit for its ability to cram far more into a noticeably smaller footprint.

Layout

These are both compact keyboards without number pads. They both have cursor keys and a right-hand column for page navigation keys. My laptop has a similar configuration. One thing that I’ve learned is that this stack of four keys on the right-hand side is a common decision for navigation keys – but unlike traditional layouts, nobody has decided on a standard for this. The three keyboards have these four keys from top-to-bottom:

I don’t have much of an opinion between my laptop and the K3 except that I wish they were consistent. I guess the K3 makes more sense to me, but they’re both fine. The Morgrie, on the other hand, is nonsense. Delete is at the bottom, as far from Backspace as possible. This is ludicrous. Less egregious, but annoying to me, is Pg Up and Pg Dn being on the function layer instead of Home and End. I’d also prefer Insert be on the primary layer instead of PrtSc, but at least it has Insert – this key cannot be input from the K3 at all (that keyboard only comes backlit, and wastes a perfectly good key on backlight control). Considering WSL2 seemingly has no direct interaction with the Windows clipboard, and I have to rely heavily on Shift+Insert… this was miserable.

Despite the Morgrie displaying symbols for brightness, transport, volume, &c. on the function layer, it seems to send the codes for Function keys. I could go either way with this, as I really only miss having Mute as quick access, and as previously mentioned, I only really use F2. A second function layer would have been nice here; my old K6 (which also lacks a function row) worked this way. Esc and `/~ share a key, with the latter on the function layer. Despite my heavy vim usage, I don’t touch Esc much. Since I remap Caps Lock to Ctrl on every machine I own, Ctrl+[ is less of a stretch despite being a chord. With this in mind, I’d prefer `/~ on the primary layer, but I understand the decision. The extra row of the K3 pays off here.

The only other notable difference is the location of the Fn key: next to Space on the Morgrie, and one to the right (between Alt and Ctrl) on the K3. I use these modifiers infrequently, and don’t really have a preference, though again… standardization would be nice. Overall, it’s hard to say which layout I prefer; they each have a unique critical failure: the lack of Insert on the K3 and the absurd positioning of Delete on the Morgrie. Utterly bizarre decisions.

Switches and keycaps

TL;DR: The Morgrie wins on both fronts. The keycaps are PBT and feel great; K3 has ABS keycaps with extremely visible sprues. I got the Morgrie in white with orange lettering, and it’s rather pretty. Depending on which backlight you get, the K3 keys are either light or dark grey, with clear lettering for the backlight. They’re unoffensive, but the white Morgrie is just… kind of fun. I don’t know my way around keycap profiles very well, but the K3 uses slightly curved chiclet-style keys, while the Morgrie is more traditional. I don’t have a super strong opinion on this; I find that I orient myself more easily but get lost quickly on chiclets, and therefore type more quickly overall on more traditional caps.

I mentioned that the slim Keychron optical switches are nicer than the full-sized Keychron optical switches. This is certainly true, but the Morgrie’s Kailh Choc switches are much nicer than both. The Kailh Whites supposedly have a lower actuation force than the Keychron Oranges2, but it feels higher. All in all, I find the Kailhs to be a much nicer typing experience. If I sell off some stuff, I might try the traditional mechanical Gateron version of the K3. At the very least, if I found myself preferring the size/form-factor of the K3, I could replace them with Kailh switches now that I know I’m a fan.

The K3 switches have one very cool thing going for them – they accept regular Cherry keycaps. Obviously full-sized ones will be a bit chunky on the board, but still at a lower profile than the same caps on full-sized switches. More importantly, it’s just… an obvious standard. It’s wild to me that both Cherry and Kailh opted to come up with new, incompatible keycap mounts for their low profile switches. This was always a problem with Alps as well; so many people use Cherry that caps for anything else are hard to come by. The K3 switches are also hot-swappable, and optical switches of course don’t rely on a mechanism that will wear. I doubt this is really a sticking point. Finally, one of the keys on my K3 was improperly assembled from the factory, the spring was all out of place; I easily disassembled this and repaired it myself, but… shame I had to.

Miscellanea

I mentioned the K3 only comes backlit. There are two versions: RGB- and white-LED backlit keys. I opted for white; RGB LEDs just… don’t look very good, in my experience. Being a touch-typist, I tend to disable backlighting anyway, and would prefer a version with a useful key and no backlight in lieu of a key I accidentally press constantly, forcing me to cycle through a bunch of ridiculous effects. Lastly, while there is an option to turn the backlight off entirely, there is no option to turn it on entirely; the closest thing is an effect where every key is on, but any given key briefly shuts off when you depress it. This is silly. Oh, there’s also no brightness setting. My laptop, by comparison, has no ridiculous effects and two brightness settings. This is useful!

Both keyboards have Bluetooth, both support three devices. The K3 uses a function layer for this, whereas the Morgrie has three dedicated buttons. I have no preference on this. I haven’t had any issues with Bluetooth on either board yet, though I also haven’t really stress-tested it. Unlike my Bluetooth IEMs that I pair to my phone, I don’t really have a need to test how far I can stray from my device. Both keyboards charge via USB-C, and can in fact be used wired via USB-C. The K3 has a switch to go between wired and wireless, the Morgrie does not. I’m sure there’s an advantage to one of those approaches, but I’m not going to try to suss it out.

The Morgrie has a nice tactile pushbutton for power on the back, while the K3 has a tiny slide switch. Both are fine, but the Morgrie is nicer in my opinion. The K3 will go into a sleep mode quickly; the Morgrie does not seem to, with the company claiming to have one of the longest standby times. I’d rather the keyboard just go to sleep. The K3’s bezel-less, chiclet design makes for easy cleaning; despite this, it came with a thin plastic cover. Neither of these things is true of the Morgrie.

Ultimately, I really think I prefer the Morgrie, and I’m tempted to buy another in the lovely powder blue color. It’s just very nice to type on. The solid build, the Kailh switches, the comfortable keys… I get on with it well. I sure do think that Delete placement is regrettable, though.


Digirule 2U

I keep meaning to post about SISEA, but like… I don’t have anything to say that others haven’t said better. Much like SESTA/FOSTA, this bill is a direct attack on sex workers under a thin anti-trafficking guise. Listen to what sex workers are saying about this. Contact the folks who are supposed to listen to us. Let’s do what we can to stop this garbage.
I’ve written about single board computers before, and have bought and briefly played with a modern board from Wichit Sirichote. I’d meant to write about my experience with this board, but I haven’t actually gotten too far into the weeds with it yet. I need to either find a wall-wart that will power it, or else hook up my bench supply to mess with it, and… my attention span hasn’t always proven up to the task.

Medium/Message: Music and Medium

A new series! I have at least two posts planned for this series, and hopefully I’ll come up with more in the future. The idea is to highlight art that is inseparable from the medium used to record and/or distribute it. As far as this post is concerned, I want to discuss creative uses of the media that a consumer would purchase. Particularly, music or musical experiences that couldn’t exist outside of the medium they were made for.

Rediscovering Compact Cassettes

This year has been a long decade, and seeking little pleasures has been of the utmost importance. Working from home has left me with the opportunity to listen to music more often as I work. I tend not to work in the room with my turntable, so this has largely been a matter of listening from my phone. This is fine, but I know I tend to have very different listening patterns when I’m listening on my phone vs.

Super Mario Bros. 35

I go on a tangent toward the end of this post about my fear regarding preservation when Switch Online inevitably shutters. However, since posting this, I have learned that SMB35 was planned to be shut down at the end of March 2021. This is absurd, and likely warrants its own post, but it’s worth mentioning that my fears are not only warranted but grossly underestimated.
Super Mario World was likely the first smooth-scrolling platformer that I ever played, albeit briefly at a family friend’s house. Later, PC games like Jill of the Jungle and Jazz Jackrabbit were the first of the genre that I owned and played heavily. It wasn’t until a bit later in life that I got an NES and fell in love with… well, a ton of games for the system, but most relevantly the first and third Super Mario Bros.

Replacing a Fan

Update as of 2023-12: About a year after this post, the second fan failed in the same way. Then in mid-2023, the battery got real, real puffy. I’m not using it as my primary laptop anymore, but it’s working well still after replacing all of those things.
It’s been less than a year since I purchased my HP Spectre x360, and while I have mostly been very happy with it1, the left fan started honking and making automobile-engine-attempting-to-turn-over sounds. I probably should’ve sent it in for warranty service, but I opted to replace it myself, with only minor damage. The damage was precisely the sort of thing I predicted – I believe I snapped one of the blasted plastic clips that hold everything together these days2, and I misjudged the sort of connector that the fan uses and mangled it a bit in the process.

Don't turn off the lights

Sigh, so, I feel like every post from the past few months has contained some version of this statement, but… I’ve started writing a number of things lately, and just haven’t had the motivation or whatever to finish them. Some are daunting longer-format pieces that require research and/or illustration, others are smaller filler bits that just don’t ultimately seem worth following through with. I’m handling things pretty well during this pandemic, but… being creative and seeing even the smallest projects through to completion… it’s tough right now.

I bought another four-function calculator

Something I find rather amusing is that despite my owning… a lot of classic HP calculators1, this here blog only has posts about one old Sinclair calculator (which is, at least, a postfix machine) and one modern four-function, single-step Casio calculator (that somehow costs $300). And, as of today… yet another modern Casio calculator. I actually do want to write something about the HPs at some point, but… they’re well-known and well-loved. I’m excited about this Casio because it’s a weird throwback (that, like the S100, I had to import), and because it intersects two of my collector focuses: calculators and retro video games.

The mid-1970s brought mass production of several LCD technologies, which meant that pocket LCD calculators (and even early handheld video game consoles) were a readily obtainable thing by the early 1980s. Handheld video games were in their infancy, and seeking inspiration from calculators seemed to be a running theme. Mattel’s Auto Race came to fruition out of a desire to reuse readily-available calculator-sized LED technology in the 1970s; Gunpei Yokoi was supposedly inspired to merge games with watches (in, of course, the Game & Watch series) after watching someone fiddle idly with a calculator. Casio took a pretty direct approach with this, releasing a series of calculators with games built in. Later games had screens with both normal calculator readouts and custom-shaped electrodes to present primitive graphics (like the Game & Watch units, or all those old terrible Tiger handhelds), some of which were rather large for renditions of games like Pachinko. The first, however, was essentially a bog-standard calculator as far as hardware was concerned2: regular 8-digit 7-segment display, regular keypad. I suspect this was largely to test the reception of the format before committing to anything larger; aside from the keypad graphics, the addition of the speaker, and the ROM mask… it looks like everything could’ve been lifted off of the production line for any number of their calculators: the LC-310 and LC-827 have identical layouts.

This was the MG-880, and it was clearly enough of a hit to demonstrate the viability of pocket calculators with dedicated game modes. The game itself is simple. Numbers come in from the right side of the screen in a line. The player is also represented by a number, which they increment by pressing the decimal separator/aim key. When the player presses the plus/fire key, the closest matching digit is destroyed. These enemy numbers come in ever-faster waves, and once they collide with you, it’s game over. Liquid Crystal has more information on the MG-880 here.
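If it’s hard to picture from prose alone, here’s a rough sketch of those rules in Lua – my own approximation for illustration, not a faithful recreation of the MG-880’s timing, scoring, or bonus behavior:

    -- Rough sketch of the MG-880 game logic as described above; an
    -- approximation for illustration, not a faithful recreation.
    local invaders = {}   -- digits marching in from the right; index 1 is closest
    local aim = 0         -- the digit representing the player

    -- the decimal separator/aim key increments the player's digit (wrapping at 9)
    local function press_aim()
      aim = (aim + 1) % 10
    end

    -- the plus/fire key destroys the closest invader matching the player's digit
    local function press_fire()
      for i, digit in ipairs(invaders) do
        if digit == aim then
          table.remove(invaders, i)
          return true     -- hit
        end
      end
      return false        -- shot wasted
    end

    -- each step of the wave, another digit enters; once the line reaches the
    -- player's position, it's game over
    local function advance()
      table.insert(invaders, math.random(0, 9))
      return #invaders < 8
    end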

So that’s all very interesting (if you’re the same type of nerd I am), but I mentioned I was going to be talking about a modern Casio calculator in this post. About three years ago, Casio decided to essentially rerelease (remaster?) the MG-880 in a modern case; this is the SL-880. I haven’t owned an MG-880 before, so I can’t say that the game is perfectly recreated down to timing and randomization and what-have-you, but based on what I’ve read/seen of the original, it’s as faithful a recreation as one needs. In fact, while the calculator has been upgraded to ten digits, the game remains confined to the MG-880’s classic eight. Other upgrades to the calculator side of things include dual-power, backspace, negation, memory clear, tax rate functions (common on modern Japanese calculators) and square root3. You can also turn off the in-game beeping, which was not possible on the MG-880. The SL-880 is missing one thing from its predecessor, however: the melody mode. In addition to game mode, the speaker allowed for a melody mode where different keys simply mapped to different notes. The only disappointing thing about this omission is how charming it is seeing the solfège printed above the keys.

So was the SL-880 worth importing? Honestly, yes. The calculator itself feels impossibly light and a bit cheap, but it is… a calculator that isn’t the S100 in the year 2020. The game holds up better than I expected. It is, of course, still a game where you furiously mash two keys as numbers appear on a screen, but given the limitations? Casio made a pretty decent calculator game in 1980. More important to me, however, is where it sits in video game history. One might say I should just seek out an original MG-880 for that purpose, and… perhaps I will, some day4. But I think there’s something special about Casio deciding to release a throwback edition of such an interesting moment in video game history. And while the MG-880 was a success, it certainly isn’t as much of a pop culture icon as, say, the NES. This relative obscurity is likely why I find this much more charming than rereleases like the NES Classic Edition. It feels like Casio largely made it not to appeal to collectors, but to commemorate their own history.


Learning opsec with Nermal

A few years back, (ISC)2’s charitable trust, the Center for Cyber Safety & Education partnered up with Paws, Inc. to create four comic books putting Garfield and friends in various educational cyber situations. The topics are privacy, safe posting, downloading, and cyberbullying. The fact that the Center for Cyber Safety & Education has, seemingly, three websites all dedicated to pushing this (one, two, three), the fact that they all demand you accept their usage of cookies, the fact that the Center seems proud to partner with Nielsen and Amazon… none of these things scream ‘privacy awareness’ to me.

Monster Care Squad

Monster Care Squad funded! Late pledges can still be made on the Kickstarter page.

I started writing this post late May. Well before the Kickstarter started. I wrote a lot; I hated it all, it all felt like I was parroting some bullshit press releases. I wouldn’t care, except… I read an extremely early copy of the rules, and I was so excited to write about it. But, I mean, the world… sucks right now. I’m horribly depressed and unmotivated. I’m floating between highs and lows, but… nothing is great. I’m doing lots of retail therapy; collecting films I meant to watch, filling up gaps in my manga collection, I bought the dang perfect scale replica of the Ohmu from Nausicaä. Shit is hard. And I’m glad in these times, folks are creating… happy things.

See, Monster Care Squad is a TTRPG from my pals at Sandy Pug Games that… is exceedingly gentle. My initial take in May was that it was fantasy James Herriot; I know I’m not alone in making this connection. You roam its world, Ald-Amura, fixing up monsters who have been afflicted by a poison: the False Gold. Somewhat uniquely, monsters in this world are… well, they coexist with humans, they’re… not vilified. And accordingly, you play a roaming monster veterinarian who never encounters combat. That’s not the sort of game this is. You heal; healing is the end goal, the level-up trigger, the apex of the narrative arc. You may need to slap a monster around to get it to accept your anaesthetic, but… fighting is ancillary here. It is a gentle game, a healing game.

I think part of why I struggled to finish this post was that… there’s a lot of rules to dive into, and again… I fall into some trash PR writing very easily. I will say that a core dice mechanic is that of control, which shifts what dice you use based on how much you’ve succeeded or failed up until that point. It’s a neat system that makes my maths-brain dance. But honestly… all these bits are great, but they mean nothing without realizing how much heart is in the game. And, I have known this from the beginning, I know these people and I know that they care; I’ve read the initial text, and I know that it cares. But…

…here’s the thing. The Kickstarter is going very well. Which isn’t to say you shouldn’t back it; you should! But… the team is doing something amazing. They’ve set up a grants system for what amounts to fanfiction. They’re not claiming ownership over anything that comes of it; they’re essentially not setting any rules at all. They’re asking people to apply, submit community works, and potentially get paid under a patronage sort of system1. Creators potentially get paid to develop whatever the hell they want, and then… they keep the ownership. This is the sort of shit I’ve always been pushing for. This is the sort of shit that we all need to be doing when we get a wee bit of power, yet are still stuck in this capitalist hellscape. Fixing stuff on a large scale is… kind of hard to even fathom. But on a medium scale… Sandy Pug Games is doing something that feels unprecedented to me for a small games company. This is a big fucking deal.

I don’t know how to wrap this up. Monster Care Squad is… so exciting to me. I imagine it would be exciting to anyone who happens across this blog. More importantly, the creators are finding new ways to… be genuinely good. Which is… what you’re meant to be doing in the game; in Ald-Amura, you’re a selfless professional. It’s some full-circle shit, and I’m here for it. I hope you are too. Redundant link, just in case, y’all.


All of the Windows Explorers, together at last (external)

I have quite a few posts lined up, and I’m excited about all of them, but… I’m very stressed, and writing is very hard right now. So in the meantime, this post title-links to a very cool recent writeup by Gravislizard, a streamer (&c.) whose dives into retro computing I really admire. The linked post compares basically every notable revision to Windows Explorer since… before it was even called Explorer. Twenty little writeups complete with screenshots, from Windows 1.04 to Windows 10. Lovely little trip through history.


Hollow hearts

An interesting thing that I’ve noticed over the past few years of internetting is how we’ve established conventions around like, favorite, &c. buttons, and how frustrating it is when sites break those conventions. The meaning of such a button is largely determined by its context; saving for later (say bookmarking, or wishlisting) for an e-commerce site, acknowledgement or praise for social media, and somewhere in between those two for blogs and other content consumption platforms. This isn’t a hard rule, obviously. Twitter, for example, has a bookmarking function, but also lets you easily browse through liked tweets. Bookmarking is a more buried option, as its intent isn’t to display praise, and I would guess that because of this intentional design decision, a lot of people simply use likes in lieu of bookmarks.

Iconography is also generally pretty standard, often hearts or stars. This defines context in its own way; users famously had concerns when Twitter moved from stars to hearts. Which makes a lot of sense – slapping a star on the tweet ‘My cat Piddles died this AM, RIP’ has a pretty different vibe than a heart. Since this happened retroactively to everything anyone had ever starred… it was certainly jarring.

Other iconography certainly exists; bookmark-looking things clearly define their intent, pins do the same to a lesser extent, bells indicate notifications1, and sites with voting systems will often use thumbs-up/down or up/down arrows for this tri-state toggle. Medium, notably, went from a straightforward ‘recommend’ (heart) system to ‘claps’, a convoluted variable-praise system represented by a hand. While dunking on Medium is not the purpose of this post, I think it’s worth mentioning that this shift was enough to essentially prevent me from ever reading anything on the site again2. Having to rate any given article from 1-50, and then sit around clicking as I worry about that decision is anxiety-inducing agony, especially when I know it affects authors’ rankings and/or payouts. It also feels incredibly detached from the real-world phenomenon it’s supposed to mimic. Clapping for a performer in an isolated void is a very different experience than reacting in real-time with the rest of an audience. But to get back on track, clapping additionally violates our expectations by no longer being a toggle. It increases, maxes out, and if you want to reset it to zero, you have to hunt for that option.

Which brings me to my point, and my frustration. These things are usually a toggle between a hollow heart or star3 and a filled one: ♡/♥︎ or ☆/★. This is very easy to understand, it mimics the checkboxes and radio buttons we’ve been using on computers for decades. Empty thing off, filled thing on. So long as we know that this icon has meaning, and that meaning brings with it a binary on/off state4, a hollow or filled icon indicates what state the content is in. If a user can’t toggle this (a notification, say), it’s simply an indicator. If a user can, then… well, it’s a toggle, and there’s likely a counter nearby to indicate how many others have smashed that like button.

This is great, and intuitive, and it works very effectively. Which is why it’s extremely frustrating when sites violate this principle. Bandcamp, for example, uses a hollow heart at the top of the page to take you to your ‘collection,’ a library which is a superset of your wishlist. Wishlisting is represented by a separate on/off heart toggle. This toggle, on an individual track/album page, has a text description next to the heart; the collection button at the top of the page does not. This is utterly backward, as the toggle works intuitively, and the button… has no meaning until you click it5. Etsy, on the other hand, uses a hollow heart at the top to bring you to your favorites page. But it does two things right: it has a text label, and it brings you only to things that are directly connected with a matching heart toggle.

GoComics is an equally perplexing mess of filled hearts. A comic itself has both a heart (like) and a pin (save)6. Both are always filled, with the toggle being represented by a slight change in brightness: 88% for off, 68% for on. It’s very subtle and hard to scan. These are actual toggles, however, unlike in their comments section. Their comments also have filled hearts to indicate likes, but they only serve as indicators. To actually like a comment, you must click a text-only link that says ‘Like,’ and isn’t even next to the heart. At this point, the text does the same absurdly-slight brightness shift from #00A8E1 to #0082AE. While it’s difficult to scan the comic’s heart icon’s brightness shift, the comment’s ‘Like’ text’s brightness shift is nearly imperceptible. A comment’s heart icon doesn’t even appear until there’s at least one like, and clicking it just brings up a list of users who have liked it. Suffice it to say, I click this accidentally on a near-daily basis. Humorously, GoComics understands the hollow/filled standard: they use it on their notifications bell icon.

These are just two examples in a sea of designs that prioritize aesthetics over intuition and ease of use. Medium tacks a filled star on after the read-time estimate for no apparent reason. Lex has both a functional heart and star toggle on every post, but no immediate explanation as to what differentiates them. Amazon seemingly has a heart toggle on its mobile app, but not its website, and it’s unclear what differentiates this from the regular wishlist. Ultimately, I don’t think this is a space that needs innovation (like, arguably, Medium’s claps), or one that merits subtle aesthetics. Folks have largely realized the perils of excessively abstracting ordinary checkboxes and radio buttons, and this relatively new breed of binary toggle should intuitively work in exactly the same way.


Dismantle each and every police force. (external)

Title link goes to the donation page for Black Visions Collective. I don’t have much to say here, honestly. I’ve been kind of going about my business, writing and creating things as a way to distract my mind. Which, frankly, is the textbook definition of white privilege. I have a bunch of dorky shit that I’d love to write about, but… at this point, saying nothing may as well be an act of violence.

I’ve never had a positive encounter with the police, yet I’ve still survived all of them, come out unharmed. I truly hope that people are seeing cops instigating violence, posing as taxi drivers, taking a knee for a photo op before spraying peaceful crowds with chemical agents, showing off their might with ominous coyote brown vehicles, yelling ‘if you do not move, you will be dead’ at protestors from their armored trucks… I hope people who have given the police the benefit of the doubt are seeing this bullshit and realizing just how wrong it all is.

If there’s protest action happening in your city, there’s almost certainly an abuse of power going with it. Funds in Minneapolis, NYC, LA… they all need support. But pay attention to your community as well. Lift up those who need it, however you can. Tear down systems of oppression. Public safety can exist outside of this structure. Fuck the police.


Experiencing the Casio S100

I have a modest collection of calculators – mostly HP, with a few other curiosities thrown in. Some of these have come with not-insignificant price tags attached, due to rarity, collectibility/desirability, present-day usefulness1, &c. Yet, despite a strong desire for Casio’s 2015 release, ‘The Special One’ (models S100 and S200), I could never justify importing one for the ~$300 asking price.

The S1002 is an incredibly simple calculator; it does basic arithmetic, percents, square root, basic memory functions, and some financial bits like rate exchange, tax calculation, and grand total accumulation. Sliders select decimal point fixing and rounding rules. It is, seemingly, functionally identical to the ~$40 heavy-duty Casio JS-20B. Physically, the two share some properties as well – doubleshot keys with ergonomic curvature, three-key rollover, dual solar/battery power, 12-digit display. So… why $300?

The S100 is a showpiece, plain and simple. A 50th-anniversary tribute to the Casio 001, an early desktop calculator with memory3, and the beginning of Casio’s electronic4 calculator business. On the S100’s website, Casio calls out other notable calculators from their history: the compact 6-digit Mini from 1972, and the 0.8mm thick SL-800 from 1983. The S100 is a celebration of decades worth of innovations. Yet it celebrates not by innovating itself, but by refining. It’s an extravagant, luxury version of a product that Casio has been optimizing for half a century.

To this end, the S100 is made in Casio’s control factory in Yamagata Prefecture, Japan. In keeping with the purpose of this factory, assembly and inspection are largely done by hand. Casio brags about the double-sided anti-reflective coating on an FSTN display. The keys are comparable to well-designed laptop keys, with a ‘V-shaped gear link structure.’ The chassis is machined from a single bit of aluminum. It’s all very excessive for a calculator that doesn’t even have trig functions.

I wouldn’t be writing all of this if I hadn’t actually acquired one, right? Certainly, I still paid too much for something so silly, but I did finally find a good deal on a used S100 in black. So is it, in Casio’s words, ‘breathtaking, unsurpassed elegance?’ I mean… it is quite nice. It’s worth noting that I don’t have any experience with Casio’s similar-yet-priced-for-humans-to-actually-use calculators. But I can say that the display is the finest basic seven-segment LCD that I’ve seen. The keys feel great, and the tactility combined with the overall layout makes it possible to calculate very quickly5. It has a satisfying heft about it, and it’s clear that a lot of attention-to-detail went into it6.

But… let’s say you really were considering plonking down $300 on this thing. Any number of classic HPs can be acquired for less (and they all have better key-feel): a 41C/CX/CV or 42S, a 71B, a 15C, an oddity like the 22S. You could get a Compucorp 324G. Any number of exotic slide rules. My point is, $300 will buy you a lot of cool calculating history… or one incredibly fancy showpiece. I guess I’m glad they made it, and I guess I’m glad I own one. But it’d be hard to recommend one as an acquisition to all but the most intense calculator nerds.


The 1st Dictionary With Attitude

2020-07-20 update: the dinguses at Viacom have shuttered garfield dot com; I had to update a footnote to reflect this.
2023-12-09 update: This product no longer exists, almost certainly because of the shit-suckers at Viacom. I had to remove a link because of this, and some other links were dead too. Footnotes should reflect where links were previously.

A sort of running theme with Paws, Inc. over the years has been licensing Garfield assets to any and every taker and seeing what sticks. Browsing merch prototypes from Paws HQ on eBay shows an incredible variety of oft-freakish attempts at materializing Garf into our 3-dimensional world. StickerYou has a bunch of Garf assets available for making custom stickers. For some reason, a Canadian restaurant exists that sells pizza approximately in the shape of Garf’s head1. Jim Davis is known for his support of education, which has led to collaborations like Garf assets in an educational 3D programming environment. It sort of comes as no surprise, then, that Paws, Inc. teamed up with Merriam-Webster to create The Merriam-Webster and Garfield Dictionary2.

Physically, the dictionary is compact-sized and lacks thumb indices. It comes in paperback and library-bound editions. It runs 816 pages, including all of the supplementary material. Textually, it largely reads like a nermal3 Merriam-Webster dictionary. It has a how-to-use section including a pronunciation guide, the dictionary itself, sections on names of places, people, mythological figures, &c., a style guide, and a list of sources. I’m unable to tell what pre-existing edition of the Merriam-Webster this is based on, but it is definitely pared down a bit to be more ‘family-friendly’: there are no swear words, giggly words like ‘butt’ and ‘poop’ lack their giggly definitions, but sexual anatomic terms like ‘penis’ and ‘anus’ are present as are non-slangy terms for sexual acts like ‘masturbation’ and ‘cunnilingus.’ It also contains typical charts like a table of the elements, and various illustrations.

There are two things that Garf up this dictionary. First, nearly every page has one definition in a callout box with Garf pointing to the definition. On the same page, there will be a Garfield strip that uses that word in some capacity. This continues through the section of names, locations, &c. The preface tells us that these strips were ‘specially chosen by Merriam-Webster editors,’ and it absolutely makes sense to me that some dictionary randos did this rather than anyone well-versed in the world of Garf. Abu Dhabi would be an obvious choice for a strip, yet that page contains no strip at all. This strip in which Jon tells Garfield his picture is in the dictionary next to the word ‘lazy’ is, in fact in the dictionary… to illustrate the word ‘session.’ There are a handful of these little things that would’ve really made for some cute in-jokes, but alas. The other Garfy bit is ‘Garfield’s Daffy Definitions,’ a three-page supplement at the end wherein words like ‘Arbuckle,’ ‘cat,’ ‘diet,’ ‘lasagna,’ ‘Odie,’ and ‘Pooky’ are defined by Garf himself. The section also includes definitions that serve as weird digs at school and teachers, presumably to make the kids feel empowered.

And that’s it, that’s The Merriam-Webster and Garfield Dictionary. It’s a perfectly useful, reasonable dictionary that would serve the average needs of adults as well as children, just… with Garfield. So why am I even talking about it? Part of it is certainly that it’s just one of the more interesting Garfield-related objects that I own, and despite being a mashup of two big brands… nobody seems to know about it. Every time I mention it, folks either think I’m joking or simply ask… why. In that sense, I think it’s an interesting object worth making known. In a sense that is a bit more dear to me… I’m worried about the fate of a lot of these odd Garf collabs now that Viacom owns Paws, Inc. There have already been some damning changes in the world of Paws; notably, U.S. Acres, another Jim Davis strip4 and one which has never been printed in its entirety in book form, was recently removed from GoComics5. This may have been in the cards before the acquisition, it may be entirely on Andrews McMeel, but… it feels like things are changing. And I can’t imagine the capitalist clowns at Viacom6 leaving all of these bizarre collaborations intact. If The Merriam-Webster and Garfield Dictionary goes out of print… will anyone even notice? Will anyone care7? It won’t be the end of the world, certainly, but… I do feel some sort of obligation to talk about and document some of these oddities. And if anyone out there was looking for a new dictionary, well… you just got one more option.


Yet another baffling UX decision from Adobe

As of mid-June 2020, Adobe seems to have fixed this. Whether it was a bug or a poor decision is hard to say. I’m leaving this post up for two reasons: first, it is entirely believable that Adobe would do this intentionally; and second, regardless it’s still a good case study in the impacts of this sort of decision.

Adobe apparently updated Acrobat DC recently, which I’m only aware of because of a completely inexplicable change that’s wreaking havoc on my muscle memory (and therefore, my productivity). I haven’t seen any sort of update notification, no changelogs. But on multiple computers spanning multiple Creative Cloud accounts, this change popped up out of the blue. The change? Online help is now accessed via F2 instead of F1.

Actually, this isn’t true. Presumably, sensing that such a change would break years of muscle memory for folks who use F1 to access help1 and/or realizing that this change completely violates a de facto standard that has been nearly universal across software for decades, Adobe actually decided to assign both F1 and F2 to online help. F2 is, however, the key blessed with being revealed in the Help menu.

So, good! Adobe didn’t break anyone’s muscle memory! Except… for those of us who spend all day in Acrobat doing accessibility work. As I wrote in a 2017 post about efficiently using the keyboard in Acrobat, F2 is – was, rather – the way to edit tags (and other elements in the left-hand panel) from the keyboard2.

Properly doing accessibility work in Acrobat often requires going through an entire document tag-by-tag. Unlike, say, plaintext editing of an HTML file, this is accomplished via a graphical tree view in Acrobat. It is comically inefficient for such a crucial task; attempting to make the most of it was largely the purpose of that earlier post. Fortunately, there is a new way to edit tags via the keyboard: Ctrl+F2.

This is an incredibly awkward chord, and I have Caps Lock remapped to Ctrl; it’s far, far more awkward using the actual Ctrl key. But let’s pretend for a minute that it’s no more miserable to press than F2. I cannot see any reason why this decision was made. It presumably won’t be used by folks who have muscle memory and/or decades worth of knowledge that F1 invokes online help. It isn’t (currently, maybe they do plan to remap F1) freeing up an additional key. It breaks the muscle memory of users who need to manipulate tags, objects, &c. It’s completely inexplicable, and therefore entirely predictable for the UX monsters at Adobe.

It’s worth noting, in closing, that this isn’t solely an accessibility issue. However, it’s extremely frustrating that there is one tool in this world that actually allows accessibility professionals to examine and edit the core structural elements of PDFs, and that the developers of this tool have so little respect for the folks who need to do this work. I could come up with countless features that would improve the efficiency of my process3, yet… Adobe instead insists on remapping keyboard shortcuts that make the process even slightly manageable. Keyboard shortcuts that I’ve been using for versions upon versions. It’s incredibly disheartening.


A test of three zippers

2023-12-09 update: I have a new laptop, and for related reasons I’m also rebuilding this blog. I redid the test in this post on the new machine (AMD Ryzen 9 7940HS @ 4.00 GHz w/ Radeon 780M Graphics; 48GB RAM). When I was doing this/revisiting this post, I realized I didn’t note what 7-Zip settings I was using. On this machine, at ‘fast’ and ‘fastest’ (which seem to run identically), it is faster than Windows (16 vs 26 seconds), producing a file that’s 9MB larger. At ‘normal’, it produced a smaller test file than Windows, but took 1:17. WinZip with OpenCL enabled won the speed test at 14 seconds for the third-smallest file. Strangely, it didn’t really use much of the GPU. Without OpenCL enabled, WinZip produced the smallest file and took 23 seconds.
I’m in the middle of quite a few posts, and honestly… this one should be pretty short because I had no idea I’d be writing it. I’m trying to make my Windows experience as pleasant as possible (that itself is an upcoming post), and part of that has involved looking for a good archive tool. Windows handles ZIP files well enough, but it’s kind of a barebones approach and it doesn’t handle any of the other major archive formats that I’m aware of.

Geometry Expressions

I’ve written before about the geometry construction language, Eukleides. In that post, I said that ‘I [was] drawn to Eukleides because it is a language […], and not a mouse-flinging WYSIWYG virtual compass.’ Those WYSIWYG mouse-flingers are known as interactive geometry software (IGS), and I’ve never been a huge fan of them. Most of them are built in Java, and it shows. Even beyond Java issues, they largely feel made by interns employed by mathematicians rather than folks who have read The Design of Everyday Things. At the same time, complicated constructions like Gauss’s 17-gon1 can quickly become unwieldy in written code. I have experimented with many, though never really settled on an IGS.

Geometry Expressions (GX) is currently on sale for $10 (instead of $99) due to the pandemic. Saltire, the maker, hasn’t stated when this sale will end, which… is fair. I had previously played with the trial of GX, and found it to be… pretty usable, but also I need to really be in a mood to drop a hundred bucks on hobbyist (I mean, for me) maths software. At $10, I decided to take the plunge. Here are my thoughts so far.

The good

The UI/UX isn’t that bad!
I don’t think this is written in Java? But it might be. It’s cross-platform (macOS/Windows) so it’s entirely possible that it’s written in… something weird. The UX fits in fine with Windows, I can only imagine it’s kind of awful on macOS, but… I haven’t tested that yet. Just a gut feeling. There are UI/UX quirks that I’ll get into later, but it’s… manageable!
Export options
One thing that I really don’t love about Eukleides is that you can basically just export to EPS. I then have to separately convert this to SVG, and from there, post-process the SVG. Eukleides also only lets you draw from like… eleven basic colors? GX is clearly built with exporting in mind, and integral to this is the fact that you can… well… use color good! But the actual export options are also great. Native SVG, our old friend EPS, your normal raster formats, and… interesting things. Lua, which I haven’t tried yet, and both animated GIFs and interactive HTML/JS, neither of which I’ve done anything interesting with yet.
Interactivity
I mean, it is called an interactive geometry system, but it is really rather magical how that all comes together. Assuming everything is glued together correctly (another topic for later), you can just drag points around and watch your construction work with different parameters. So, in the simple angle bisector shown below, dragging points A or C around will change the angle of ∠ABC, and the construction will adjust, changing ∠ABM and ∠MBC accordingly.
Robust toolset
I guess you wouldn’t really expect less, but Eukleides, for instance, really kind of gives you the bare minimum for objects and the like. GX has fifteen drawing tools which behave as expected. It has fourteen methods of constraint – for instance, in the illustration below, radius r is a constraint. I constructed the first circle and applied the constraint. I could have constructed the other two circles at any size – as soon as I applied the constraint, r, they were bound to it. These can be units as well; a square with side lengths constrained to 2 has double the side length as one constrained to 1. It has fourteen built in construction tools, which don’t interest me much as my use-case is largely doing constructions from scratch. Finally, it has eleven calculations, such as the angle calculations in the construction below.

[Figure: basic angle bisector constructed in Geometry Expressions – points A, B, C, H, K, and M, circles of radius r, and measured angles z0 ≈ z1 ≈ 0.4294711]

All in all, it’s a pretty nice tool for the things I want to use it for. But, unsurprisingly, there are some pretty frustrating snags.

The bad

Incident snapping does not work well
I said pretty frustrating, but I’m starting off with an incredibly frustrating UX gaffe. In the above construction, I followed the method of doing an angle bisector by hand with a compass and straightedge. Since I was doing this on an IGS that could precisely measure things for me, however, the construction itself had to be quite precise. I tried several times, and kept coming up with angles that were slightly off. The problem was that center points H and K were not quite aligned with the intersection of circle B along ∠ABC. Why did this happen? When creating the two intersecting circles (H and K), the cursor changed to a design that clearly indicated it was snapping incident to the relevant intersection. Additionally, the two intersecting objects were highlighted. But it didn’t actually snap. The only way to get this to work was to use the construction tools to make intersection points; the circle tool was willing to snap incident to an existing point just fine. This is absurd. Certainly the solution is to make snapping function across the board, but if that can’t be done, don’t make the UI change such that it appears as though that’s happening. I don’t know how such a decision can be shipped.
It’s easy to feel like you have to undo a lot
The tools are pretty good for constructing and the like, but… less so for touching things up or fixing goofs. There were plenty of times where I created things and just felt kind of… lost in either how I accidentally made a thing, or how to get a thing to do what I wanted vs. just… getting it right from the outset. For instance, I haven’t figured out how to rotate a polygon around its midpoint, only vertices (and these rotations don’t seem to have any shift-constraints, nor do translations). A mild example, but little things like that make the toolset feel less fleshed-out than I’d like.
Nonstandard UX behaviors
To an extent, this still feels like some mathematician’s hired hand whipped up some controls without studying, say, Illustrator. There aren’t keyboard shortcuts that I can find for the tools (the menu doesn’t even have alt-keys defined, which is infuriating). Scroll-wheel zooms (actually, I believe it scales the document, which is even sillier) instead of… scrolling (this is one of my biggest pet peeves in image editing software). Scroll/pan is achieved by holding right-click instead of Space. Et cetera. It’s not as bad as many that I’ve played with, but it can be cumbersome to use.
Unclean SVG export
I’m glad I can export right to SVG! But the export is… a lot. There’s a lot of extra stuff in there, and weird behaviors like every digit in 0.4294711 up there being a separate textbox. I actually imported it into Illustrator and cleaned a couple of things up (there were some points and extra bits that I couldn’t quite figure out how to get rid of, &c.) but… the text is really small! It’s not font-size, and my knowledge of SVG isn’t quite at the point where I’m going to solve it for this post. And while I apologize for the text being difficult to read, it does help demonstrate that the SVG output is just a bit much. I also had to touch up the rightward double arrow; GX’s export opted to find this in a Symbol font instead of using U+21D2. Little things, but room for improvement.

In conclusion

For $102? I’m happy with this purchase. It doesn’t do too much; it’s not a full-featured CAS with an IGS built in. Because of this, all of your tools are right there in front of you and are fairly self-explanatory. The UX could use some polish, but it isn’t terrible. There are a lot of export options, and hopefully I can figure out how to do something fun with the interactive ones. I don’t know that I would be bothering to write this if I was just checking out the trial at a $99 price point, however. It’s specialized software, and I get that; we’re also increasingly numb to the work that goes into software and the value of said work. But, boy, if I was going to pay full-price? I sure as hell would want keyboard shortcuts, functioning snapping, and just a little bit of general UX touch-up.

If you’re reading this, and you’re a recreational maths nerd, and you’re stuck at home, and Saltire is still offering GX for $10… I think it’s hard to pass up.


Quarantine food: 7DAYS Croissants

Early on in isolation times, I figured out that 7DAYS Croissants were readily available, delivered quickly, and infinitely comforting. I have since tried all six flavors, and in between writing far more meaningful things, I feel like I should rank them. This isn’t a sponsored post, I’m just bored and hungry.

Least satisfying: Dulce de Leche (Caramel)

I had extremely high hopes for this one, being a huge fan of pretty much anything caramel.

Backward compatibility in operating systems

Earlier this week, Tom Scott posted a video to YouTube about the forbidden filenames in Windows. It’s an interesting subject that comes up often in discussions of computing esoterica, and Scott does an excellent job of explaining it without being too heavy on tech knowledge. Then the video pivots; what was ostensibly a discussion on one little Windows quirk turns into a broader discussion on backward compatibility, and this inevitably turns into a matter of Apple vs. Microsoft. At this point, I think Scott does Apple a bit of a disservice.

If you’ve read much of my material here, you’ll know I don’t have much of a horse in this race; I’m not in love with either company or their products. I’m writing this post from WSL/Ubuntu under Windows 10, a truly unholy matrimony of software. And while I could easily list off my disappointments with macOS, I genuinely find Windows an absolute shame to use as a day-to-day, personal operating system. One of my largest issues is how much of it is steeped in weird legacy garbage. A prime example is the fact that Windows 10 has both ‘Settings’ and ‘Control Panel’ applications, with two entirely different user experiences and a seemingly random Venn diagram of what is accessible from where.

This all comes down to Microsoft’s obsession with backward compatibility, which has its ups and downs. Apple prioritizes a streamlined, smooth experience over backward compatibility, yet they’ve still gone out of their way to support a reasonable amount of backward compatibility throughout their history. They’ve transitioned processor architecture twice1, each time adding a translation layer to the operating system to extend the service life of software. I think they do precisely the right amount of backward compatibility to reduce bloat and confusion2. It makes for a better everyday, personal operating system.

That doesn’t make it, however, a better operating system overall; it would be absurd to assume that one approach can be generally declared better. Microsoft’s level of obsession in this regard is crucial for, say, enterprise activities, small businesses that can’t afford to upgrade decades-old accounting software, and gaming. There is absolutely comfort in knowing that you can run (with varying levels of success) Microsoft Works from 2007 on your brand new machine. It’s incredibly valuable, and it requires a ton of due diligence from the Windows team.

So, this isn’t to knock Microsoft at all, but it is why I think dismissing Apple for a lack of backward compatibility is an imperfect assessment. I’ve been thinking about this sort of thing a lot lately as I decide what to do moving forward with this machine – do I dual-boot or try to live full-time in Windows 10 with WSL. And I’ve been thinking about it a lot precisely because of how unpleasant I find Windows3 to be. Thinking about that has made me examine why, and what my ideal computing experience is. Which is another post for another day, as I continue to try to make my Windows experience as usable as possible. Also, I’m not in any way trying to put down Scott’s video, which I highly recommend everyone watch; it was enjoyable even with prior knowledge of the forbidden filenames. It just happened to time perfectly with my own thoughts on levels of backward compatibility.


Solving puzzles using Sentient Lang

I’ve been playing a mobile room-escaping-themed puzzle game (I believe the title is simply Can You Escape 50 Rooms) with a friend, and there was a certain puzzle that we got stuck on. By stuck, I mean that we certainly would’ve figured it out eventually, but it was more frustrating than fun, and it consumed enough time that I thought up a fun way to cheat. I am not against cheating at puzzles that are failing to provide me with joy, or that I’m simply unable to complete, but I have a sort of personal principle that if I’m going to cheat, I’m going to attempt to learn or develop something in the process.

Caltrops

I love four-sided dice (which I will refer to from here on as d4s, in keeping with standard notation). I also love clean, simple dice mechanics in TTRPGs. Many of these use d6s, Fate uses d3s in the shape of d6s, some use only a percentile set or a single d20. I’m certainly not about to say that there aren’t any d4-based systems out there. But I have not encountered one on my own time, and my love of these pointy little bits has had me thinking about potential workings for a while now. And while I don’t have anything resembling a system here, I had some interesting thoughts and had my computer roll a few tens of millions of digital dice for me, and I’d like to lay out a few initial thoughts that may, some day, turn into something.

The TL;DR is this: players can, for any resolution1, roll two, three, or four d4s. If every die has the same value, regardless of what this value is, that counts as a special. Otherwise, the values are summed with 1s and 2s treated as negative (so, -1, -2, +3, +4). And that’s it, roll complete! What is a special, exactly? Well, I don’t really know. My initial thought was that the all-of-a-kind roll would be a critical success. After seeing the maths, and thinking about what I would opt to do in any given situation, though, I came to believe that the all-of-a-kind roll should certainly be special in some way, but likely in a more interesting and dynamic way than just ‘you score very big’. This could be a trigger for something special on your character sheet related to whatever thing you are rolling for, or it could be a cue for the GM to pause the action and shift course. It should certainly always be something positive, but I don’t think the traditional crit mentality quite fits.

I’ll get into the numbers in more detail in a minute, but the key takeaways are:

Ignoring specials for a minute, we see a clear advantage to rolling more dice. Generally speaking, we will trend toward getting higher values, and the likeliest values for us to get on a given roll are better. When we factor in specials, rolling two dice becomes a lot more attractive; specials come up 25% of the time! Which is a very cool way to shift the balance, in my mind, but it’s also why it needs to be something other than just ‘BIG SMASH’. Make it too strong, and it basically becomes the universal choice. Making it more dynamic or narrative seems like a likely way to make the decision meaningful for players. Another possibility is a potential cooldown mechanic where rolling two specials in an encounter would force that character to cut out; that would likely leave the 3d4 option unused, however, as players would roll 2d4 until hitting a special, and then switch directly to 4d4.
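(A quick sanity check on that: the odds of all n d4s matching are 4 out of 4^n, or (1/4)^(n-1) – 25% for two dice, 6.25% for three, and about 1.6% for four – which is exactly where the simulated numbers below land.)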

I wrote a quick and dirty Lua3 script to let me roll a few tens of millions of virtual dice and run the numbers. The resultant percentage table is below. My initial script only returned the number of specials, positives, negatives, and zeroes. Upon seeing the steep decline toward 0% specials on rolls of more than 4 dice, I decided I was only going to do further testing on 2, 3, and 4. I’ve included the percentages of specials for 5, 6, 7, and 8 dice just to show the trend.
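If you want to run the numbers yourself, a minimal from-scratch version of that kind of simulation – a sketch of the approach, not my original script – looks something like this:

    -- Minimal sketch of the Caltrops simulation (not the original script).
    -- All dice matching counts as a 'special'; otherwise faces 1 and 2 count
    -- as -1 and -2, and faces 3 and 4 count as +3 and +4.
    math.randomseed(os.time())

    local function roll(n)
      local first, all_same, sum = nil, true, 0
      for _ = 1, n do
        local face = math.random(4)
        if first == nil then first = face elseif face ~= first then all_same = false end
        sum = sum + (face <= 2 and -face or face)
      end
      if all_same then return "special" end
      return sum
    end

    local trials = 10000000
    for _, n in ipairs({2, 3, 4}) do
      local counts = {}
      for _ = 1, trials do
        local result = roll(n)
        counts[result] = (counts[result] or 0) + 1
      end
      print(string.format("%dd4: special %.1f%%", n, 100 * (counts.special or 0) / trials))
      for value = -2 * n, 4 * n do
        if counts[value] then
          print(string.format("  %3d: %4.1f%%", value, 100 * counts[value] / trials))
        end
      end
    end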

Result percentages in the Caltrops concept
Result      2d4    3d4    4d4    5d4    6d4    7d4     8d4
Special      25    6.3    1.6    0.4    0.1    0.025   0.006
-7            0      0    1.6
-6            0      0    2.3
-5            0    4.7    1.6
-4            0    4.7      0
-3         12.5      0    1.6
-2            0      0    6.3
-1            0    4.7    9.4
 0            0   14.1    6.3
 1         12.5   14.1    1.6
 2           25    4.7    2.3
 3         12.5      0    9.4
 4            0    4.7   14.1
 5            0   14.1    9.4
 6            0   14.1    2.3
 7         12.5    4.7    1.6
 8            0      0    6.3
 9            0      0    9.4
10            0    4.7    6.3
11            0    4.7    1.6
12            0      0      0
13            0      0    1.6
14            0      0    2.3
15            0      0    1.6

One final (for now) takeaway after having stared at these numbers in multiple forms. I mentioned using ‘special’ instead of ‘critical’ because a traditional critical would make a 2d4 roll too powerful; you’ll get that hit 25% of the time. There’s another truth to 2d4 rolls, however, and that is that the chance of negative rolls is the lowest: 12.5% of 2d4 rolls are negative, 14.1% of 3d4 rolls are negative, and 22.8% of 4d4 rolls are negative. Every negative 2d4 roll is -3, however, and the chance of getting -3 or lower for 3d4 is 9.4% and for 4d4 is 7.1%. This raises a question as to what is a better motivator. You’re more likely to get a negative with more dice, and it’s possible to get a worse negative, but the trend is toward a better negative (the above numbers didn’t reflect zero; the likeliest non-positive result for 3d4 is, in fact, zero). It’s worth running through how this plays out and deciding whether negative values matter, or simply the fact that a negative was, in fact, rolled. My instinct says stay with values, but that doesn’t take into account the feeling of how the dice are treating you.

Clearly there are a lot of ‘what ifs’ to work through, and there’s a lot more involved in practical testing than just rolling millions and millions of dice. But I do think I’m on to something interesting here, something simple, but with slightly-less-than-simple decisions to make.


Unicode bloats and emoji kitchens

Unicode 13 is coming, and bringing with it a handful of exciting things. Particular to my interests is a new Legacy Computing section with characters like seven-segment display numerals and graphics characters like those found on the Commodore 64 and other machines of the era. Of course, new emoji are coming as well, including among other things a magic wand, a beaver, and the trans pride flag (finally!). Unicode is doing a lot of necessary language work behind the scenes as well; the 12.

On Animal Crossing and native UX

Nintendo (of Australia) has revealed that Animal Crossing: New Horizons will only support one island per console. Different cartridge? Same island. Different user account? Same island. This obviously reads as some money-grabbing garbage (that they’re releasing a special edition Switch alongside the game doesn’t help), but there’s another issue here that I feel will largely go untouched-upon. Using a computer these days is a horrible mess, and to me this is largely due to the use of non-native UI widgets.

On the Kensington Expert Wireless (and other pointing devices)

I’ve expressed once or twice before my disappointment with the current selection of pointing devices. This hasn’t improved much, if any. To make matters worse, Trackpoints are becoming less and less common on laptops. Such is the case with my HP Spectre, a deficiency I knew would be an issue going into things. When I was writing about pointing devices back in 2016, I ended up acquiring a Logitech MX Master. I still use that mouse, and also own an MX Master 2. They are incredibly good mice, the closest thing that I have found to the perfect mouse.

Thinking of pointing devices to use with the Spectre, I immediately figured I’d get an MX Anywhere to toss in the pouch of my laptop sleeve. What a horrible mistake. The truly standout feature of the MX Master is its wheel. It scrolls with individual clicks like wheel mice of yore until a specific speed is reached, at which point it freewheels like a runaway train. It’s the perfect physical manifestation of inertial scrolling. It also, notably, still clicks to perform the duty of middle-click. Both of these things are broken on the MX Anywhere – you have to manually select freewheel or click scrolling, and you do that by depressing the wheel. Middle click is a separate button below the wheel, with no regard for muscle memory. I returned the MX Anywhere and will likely just buy a cheap slim mouse to throw in the sleeve; it seems unlikely there are any travel-sized mice out there with modern inertial scrolling.

I also have considered that I might need a pointing device other than the touchscreen for certain higher-precision activities while lounging in bed. And, three paragraphs in, we get to the meat of this post: my experiences so far with a trackball, the Kensington Expert Wireless. Trackballs, even more than mice, feel resistant to progress. Only a handful of notable companies are producing trackballs, and of the available models, relatively few are Bluetooth. Kensington has been making versions of the Expert for over twenty years, and the latest change came four years ago with the introduction of the Bluetooth model. The basic layout that has remained unchanged over the years is a large ball surrounded by four large buttons at the corners. The current iterations, both wired and wireless, also have a ring around the ball for scrolling.

Most modern trackballs seem to have a traditional scroll wheel. This, to me, is absurd. You’re not getting modern inertial scrolling with these (even Logitech’s MX-branded trackball has traditional clicky scrolling), and you have a perfectly good device capable of inertia right in front of you: the ball. I would love to see a designer simply dedicate a button, in hardware or firmware, to switching the ball into scroll mode. As it stands, however, Kensington’s ring is the least obtrusive of the lot, and the four buttons are all very easily accessed. And, while it is a bit convoluted, ball-scrolling behavior is attainable in Windows1 via software.

The first bit of the puzzle is the official KensingtonWorks software. This allows configuration of what each of the four buttons does, as well as the upper two buttons pressed together, and the lower two buttons pressed together. These upper and lower chords do have a limitation – it seems they aren’t held, they’re only momentary presses. There’s also no way to achieve the desired ball-scrolling effect here, so this stage is just minor tweaks to buttons. By default, starting at the upper-left and moving clockwise, the buttons are middle-click, back, right-click, left-click. I use middle-click more than right-click, and thought that swapping these would make sense, but the pinky-stretching actually made that a bad choice. I ultimately settled on swapping middle-click and back, and assigning forward to the upper two buttons pressed together. I haven’t decided what to do with the lower two in concert yet.

The next step is a third-party bit of software, X-Mouse Button Control. From here, I’ve intercepted middle-click to be ‘Change Movement to Scroll’. Within this option, I have it set to lock the scrolling axis based on movement, and to simply send a middle-click if there’s no movement. Thus, clicking the upper-right button sends a middle-click whereas holding it and flicking the ball around turns into scrolling. It works so well that I am again shocked that this isn’t scrolling behavior being designed into any trackballs.

I would love to see Kensington integrate this behavior into firmware or KensingtonWorks. I would love to see Kensington replace the scroll ring with the SlimBlade’s rotation-detecting ball sensor. I would love to see Kensington release a Bluetooth version of the SlimBlade. But for now, I have a pretty clean solution: an unobtrusive, solid-feeling trackball with decent customization options in a software layer.


The new mobile Tetris is a travesty

A few more technical notes as I’ve unfortunately put more time into N3TWORK’s Tetris: it does use guideline scoring, which I assumed but… the awkward placement of the score made it hard to confirm (and it gives no notification for any moves other than Tetris); leveling is fixed-goal (which makes sense: you lose faster and get to watch another ad!) and tops out at level 15 (EA’s Tetris used variable-goal leveling and didn’t max out); it never reaches nor approaches 20G (I’m pretty sure EA’s Tetris did; if it didn’t, it got far closer).

It’s probably pretty obvious by now that I love Tetris. Enough so that I was able to write a 1200-word post detailing my favorite Tetrises. It is, then, incredibly disheartening that I feel forced to write two posts in one month (back-to-back, even) about modern Tetris implementations that are just absolutely terrible. Unfortunately, this also renders part of the aforementioned list of favorite Tetrises outdated1. Until recently, Electronic Arts (EA) was the developer for Tetris on mobile. As of last year, the ridiculously-named N3TWORK is the exclusive rights-holder to mobile Tetris. Once upon a time, this would simply mean that EA could no longer make or sell a new Tetris game on the respective platform, but it’s 2020 and all technology is hell. So, as of April 21, 2020, EA’s mobile Tetris will simply… stop working. I’m sure EA was forced into some phone-home scheme that would allow such a thing to happen, and I’m not exaggerating when I say that the ability for such a thing to happen should be 100% illegal.

Capitalist technohell aside, there’s a new mobile Tetris in town! In my 2019 video game retrospective, I pointed out that “[a]pparently there’s a battle royale Tetris game coming to mobile as well, which is exciting.” This game (Tetris Royale) will, of course, also be made by N3TWORK, and I have to say… I am no longer excited. While EA’s mobile Tetris was essentially a perfect implementation, N3TWORK’s is an unplayable steaming shit. The controls are utterly broken – one’s finger must be lifted in between swiping sideways for lateral movement and swiping down for a hard drop. Bonuses aren’t acknowledged (I’m unsure if they’re scored properly or not at the moment) for T-spins, back-to-backs, or combos – only Tetrises. And visually, the game is a nightmare.

Compare these screenshots (EA on the left, N3TWORK on the right). EA’s app has a bunch of black space at the top and bottom, as it was never updated for X-sized iPhones. N3TWORK’s has been made for modern phones, but it… does nothing useful with that space. In fact, it is objectively worse because the score is floating so far away from the field. One of the big reasons that EA’s made my list of favorite Tetrises is the boxes for the next piece and hold. The backgrounds of these boxes are the same color as the piece, which means that if you know your Guideline colors, even the slightest hint of these out of the corner of your eye tells you the necessary information. N3TWORK’s does not do this. To be fair, this is also something I miss from all of the other implementations I enjoy. However, N3TWORK goes far beyond the normal level of disappointment by making their next and hold pieces nearly invisible to an eye focused on the grid. There is absolutely no reason for them to be so small, it’s just a foolish design decision that makes the game objectively less playable. On top of that, the colors in these boxes are absurdly pale, making color-based recognition difficult as well. It’s worth noting that there are five different skins. Of these, the one in the screenshot is the only one that bothers to color the hold/next boxes at all. It’s absurd. The bizarre pseudo-3D effect and half-baked ‘90s-hacker-film aesthetic are distracting (though fitting for a company called N3TWORK) and ugly, but that’s a personal opinion. You’d be hard-pressed to make an argument about the other aforementioned visual issues not making the game objectively worse to play at a high level.

EA’s Tetris also had excellent stats tracking, both per-game and over time. It would graph out scores over the course of a week or a month. It had some silly additional modes beyond Marathon, but for someone who primarily plays Endless Marathon at a relatively high level, it was the perfect companion. My stats didn’t carry over from my last phone, but I’m glad I cleared over 35,000 lines with EA’s Tetris on my current phone. I will keep an eye on updates to N3TWORK’s Tetris, but a lot would have to change for me to pay for it or even continue to play it for free. It is utterly, devastatingly disappointing.


Tetris Microcard vs. Tetris Micro Arcade

This is going to be an attempt to review two ostensibly similar products, one discontinued that paved the way for the other. Both are pocket-sized Tetris games, officially licensed and generally adherent to the Guideline. They follow the same basic physical format, and comparing them should be pretty straightforward (it is, actually; one is good and the other is bad). I think that properly comparing them, however, requires examining the technical decisions that were made, and for this we need to back up and establish a couple of other things. This is because the first product, the discontinued one from 2017, is based on the Arduboy platform.

Arduboy is a tiny open gaming console that vaguely resembles a Game Boy, based on the Arduino ‘open-source electronics platform’. Arduino kits are typically used to ease the embedded microcontroller portion of hardware products. It’s a dinky 16MHz ATmega processor, with enough flash memory to hold (in the case of Arduboy) one game at a time. Tetris Microcard, released in 2017, took this overall platform, rotated the physical format so it was more like a Game Boy Micro (orienting the display portrait in the process – perfect for Tetris), and matched it with a custom port in ROM. Both the Arduboy and Tetris Microcard were manufactured by Seeed Studio, a fabrication shop that also sells a number of premanufactured devices based around these sorts of microcontrollers. I doubt these were manufactured in massive quantities. All of this together put the Microcard’s release price at a whopping $60.

Onward to the 2019 release of Tetris Micro Arcade. It retains the basic physical format of the Microcard, but is no longer based on the Arduboy platform or manufactured by Seeed Studio. Mass-produced by Super Impulse alongside (currently) five other games in the same format, Micro Arcade sells for a more consumer-friendly $15-20. Some have speculated that these run Arduinos as well, but I suspect this is simply because of the obvious evolutionary path from the Microcard. My suspicion all along has been that these run on a Famicom-on-a-chip. Opening the case up, I found that the processor has (of course) been epoxied over, but it certainly doesn’t look like the format of an Arduino’s ATmega. Regardless, even if it is the same platform, it is a wildly different ROM, and one that fits its role as a cheap, mass-produced device, devoid of love.

That is to say, the Micro Arcade ROM is… bad. Really, really bad. It plays through the background music (“Korobeiniki”) once, and then just… stops. At some point after that, the screen just blanked white on mine, even though the game was still technically playing in the background. There are no lines to delineate between minos in a tetrimino, which always feels like a Programming 101 port to me. There’s no ghost piece. It doesn’t save high scores1 (Microcard has a ten-spot leaderboard). Despite largely adhering to the guideline (pieces are colored correctly, at least, and rotation is SRS2) it feels terribly unofficial.

Which isn’t to say that the Microcard was a perfect port either. Its pieces were not the correct colors, because the screen was monochrome3. It showed one ‘next’ piece compared to Micro Arcade’s three. But aside from the price difference… that’s all Micro Arcade has going for it. The screen blanking may be a glitch on mine, or something that will be patched in a future revision, but I’m not the only one reporting this issue. Even if that wasn’t an issue, and even if the music didn’t randomly cut out, I would still play Microcard over Micro Arcade in a heartbeat. It feels like Tetris to me, vs. a knockoff.

I may put more effort into figuring out what’s under the hood. Delidding the epoxied ASIC isn’t entirely in my wheelhouse, but I also don’t care about destroying this thing. I may also try to dump the ROM at some point, which could theoretically provide some insight.


2019, a personal video game retrospective

Last year, I did a sort of year in review post which began with an explanation of the difficulty in creating such a post. I don’t tend to consume a lot of media as it comes out, and… 2019 was even worse in that regard. I think my escapism was fairly concentrated this year in two media: video games and comics. Hopefully I’ll do a second post on the latter after sorting out what all actually came out this last year. But for now: VIDEO GAMES.


On computers, particularly the HP Spectre x360

This is about the third piece I’ve written on (loosely) this subject; perhaps it will be the one I actually publish. I’d been thinking a lot about computers lately, and what my needs would be in my next machine. I’ve long considered myself a Mac user, despite currently owning one Mac and four PCs (two of which I use with regularity). Apple has been incredibly disappointing to me lately, on both hardware and software fronts. On the other hand, I still truly hate using a non-Unix OS, and there are plenty of other points of contention that make Windows my least favorite modern OS. My approach on my Lenovo X220 (a machine which I will be keeping and using for writing, I suspect) is to dual-boot with Ubuntu as my default. This is viable, though I need to pay closer attention to partitioning, and likely add an exFAT part or the like for a shared space. I’m currently uncertain whether I’ll continue with that approach on my new machine, or attempt KVM with GPU passthrough.

At any rate, I was looking for a two-in-one (which Apple refuses to make), yet something at least somewhat powerful. If I wasn’t going to go for a two-in-one, I wanted something very powerful, and something with a trackpoint1. I think trackpads are the absolute worst pointing devices in existence, and I hate that they’re the norm. I had been looking for a while, and ended up semi-impulsively pulling the trigger when a very good sale landed on the HP Spectre x360 (13″). I’m still working on getting it set up (debating on a Linux distro, messing with the new version of WSL, making Windows tolerable, &c.), but I’m using it (under Windows gVim, egad) to write this post.


(Finally) playing Pokémon

Despite growing up firmly in the Pokémon era, I had only played Pokémon Snap, Pokémon: Magikarp Jump, Pokémon Go, and a handful of games on the Pokémon Mini console. That is to say, I had never played a main-series Pokémon game until now, with Shield. I know I’ve been writing about video games a lot lately, and I really should do some maths or something instead. But, I’m an exhausted person in an exhausting world, and video games are giving me a lot of joy. I also know that I’m not particularly qualified to write a review on a game which I have nearly no background with; this isn’t intended to be a review. It’s just been a very interesting experience breaking into a well-known, well-loved franchise 23 years and eight generations late.

To get the end of the story out of the way, I am really enjoying Pokémon Shield, and I intend to go back and play through previous generations of Pokémon games. I can tell that I am nearing the end of Shield, and my sole complaint would really be the length of the game. Not in an ‘I paid $60 for this!!!’ sort of way, just… I’m having a good time, I want more. Part of why I’m having a good time is that there’s an obvious formula that works here; the franchise is successful for a reason. The narrative is present but not so deep that it demands undivided attention. The collection element is engaging, and even without the ‘gotta catch ‘em all’ mindset, it means there’s always something new to find. The RPG system itself is interesting to me as well, with every possible move having a cost, that cost system not replenishing over time, and no ability to skip a turn. On the surface it feels like it should be unforgiving, but it works and forces decision-making over just brute-forcing every battle with one well-designed monster.

My appreciation goes beyond the gameplay, however, since Pokémon is such a cultural powerhouse. Simply due to the sort of cultural world I inhabit, Pokémon fan art1 crosses my path a lot. And I’ve always enjoyed it! The little monsters are cute, and folks who want to reinterpret them generally gravitate toward the cutest of the cute. But now it feels personal: I can go out and find this creature, or if I already have, I know how it operates. I realize this is not a novel concept; obviously one will have a greater appreciation for art that they relate to beyond its surface level. But it’s interesting to me how much that appreciation has shifted for me, despite already having absorbed a fair amount of franchise knowledge simply by its cultural saturation.

Part of the reason, I suppose, that I never got into the franchise is that it has always been centered around Nintendo’s mobile consoles. I never owned mobile consoles2 until much later in life – my first was a DS Lite. What I didn’t realize was that, from the beginning of the series, this focus on mobile meant there was a multiplayer aspect. If you truly wanted to ‘catch ‘em all’, you had to link up and trade with a friend who had the other version. A dear friend of mine (who has been very helpful in getting me up to speed on the basics) has Sword, and while we haven’t traded monsters, we have been sharing our finds with one another. It’s cute, and it’s clear that this culture of sharing has been baked into the series from the beginning. I had no concept of this before; I deeply appreciate it now.

I guess that’s about all I have to say. I firmly believe that Pokémon Shield is a good game. It could be the worst game in the series, for all I know; that wouldn’t really matter. It has been a thing to share with friends, a thing to connect me to a community, and it has me convinced that I should go back and play the older games. To me, that’s good enough.


Cats, dogs, and birbs (according to my phone)

2021-02 update: Because the turds at Viacom have removed all of the cross-posts of Garfield comics from Garfield.com, I have changed the link to the Garfield comic in the birds section to point to GoComics. This is bullshit.

I’ve never really used iOS’s automatic thing-detection for photo categories before, but I was looking for a specific picture of a dog from my ~8 years worth of photos, so I gave it a shot.

The 231 photos my phone thinks are of cats include:

The 214 photos my phone thinks are of dogs include:

The 76 photos my phone thinks are of birds include:

NIRB, Birb don’t want nirb scirbs a scirb is a birb that can’t get nirb lirb from birb!


Garfield Kart: Furious Racing is out, but whatever

Well, Garfield Kart: Furious Racing officially lands in the U.S. today, which means a review is in order. Not of that game, of course, but of Garfield GO – Paws, Inc.’s 2017 response to the similarly-named and certainly better-known creation by The Pokémon Company. Much like Pokémon Go, you play on a map, based on your actual location, tapping things to interact with them. Also like Pokémon Go, you can play in an AR-style mode where the objects you interact with are superimposed on camera footage of the real world around you, or you can disable this to play on static backgrounds. In AR mode, you have to rotate yourself around to find things and aim very carefully; it’s a frustrating experience just for the sake of seeing a Garf floating above your sad desk. I never enjoyed playing Pokémon Go this way either, personally.

Like so many Garf games, Garfield GO feels like a shell of a game with a half-hearted Garf theme slapped on. Even with my limited knowledge of Pokémon lore1, I knew that Pokémon Go made sense: you found cute monsters out in the wild and trapped them in tiny balls. While there’s a battle element to it and all, a core part of the Pokémon Go experience was just finding all of these different creatures and watching them evolve. The Garf imitation, on the other hand… involves you throwing food into Garf’s bowl. One of four types of food (lasagna, pizza, donut, cake); one of one type of Garf.

So if you’re not collecting different bizarro Garfs (which would have been 100% more rad in every way, tbh), what exactly is the point? Well, after you catch feed a Garf, he disappears in a cloud of smoke before appearing next to a treasure chest, fidgeting and pointing at it as though it contains the directions for defusing a bomb that’s strapped to his chest. It does not, of course; it contains coins, hats, comics, and trinkets. Which I guess I have to dive into now.


IDBiG: Implicitly-Dimensioned Binary Grid

A while back, I got an idea stuck in my head. If one wanted to, say, scratch a series of tick-marks into small bits of metal to serialize and identify them, how would one do this? Small is key here; easily printing decimal numbers may not be possible, and something like a binary number may be limited in length. The obvious solution, to me, is a flexible, two-dimensional binary grid. But what happens when you make a mark like this:

Keyboards, old and new

Reading a typewriter-themed Garfield strip recently, I got to wondering whether or not my typewriter (a Brother Charger 11) even had 44 keys. It does, barely. Despite modern computer keyboards still using the same core QWERTY layout from the 1800s, things are different enough that this was a perfectly reasonable thing to be unsure of. Then I got to thinking about all of these differences, as well as the weird holdovers (QWERTY itself notwithstanding) and… here, I suppose, are just a bunch of those things that I find interesting.

Objects

I had a plan to submit something into the 200 Word RPG Challenge this year. I wrote a thing, meant to polish it up a bit, didn’t, figured I’d just submit it anyway, and forgot. I don’t think the thing is particularly good, and given that my first task was to bind myself to a word limit, it is not particularly well-written either. And, maybe I’ll edit it so it reads like a human wrote it at some point, or maybe I’ll build it into something bigger and with more purpose. But unless/until that happens: here are 195 words describing a little roleplaying concept about the objects around you. Do with it what you will.


Objects is a light filler game (aim for 45-60 min) for 1 GM and at least 2 players about examining your surroundings with a macro lens and bringing life to the inanimate. Each player should look around the room and choose an object to roleplay. They inform the GM, but not one another. They should briefly consult with the GM what free actions are available to them – a soda bottle can likely roll freely, but bouncing to a height will be a challenge. The GM announces their goal: a rendezvous point, and potentially another object from the room that must be brought to said point. Challenge actions are resolved via 1d4, with the GM deciding whether success is a 2+, 3+, or 4 depending on difficulty. Failures should still move the player(s) forward, just not quite as they’d hoped. When player characters manage to run into each other, those players can reveal to one another what they are, and they can work together. Aside from the basic challenges of movement, finding one another, and rendezvousing, the GM should bring other objects in the room to life as challenges (the ottoman is trapped!) and NPCs.


I am writing about the goose game

I did not intend to write about Untitled Goose Game. It has been written about exhaustively, the core bits of it reviewed and dissected from Kotaku to Entertainment Weekly, from Polygon to Time. The best piece about it, or possibly anything, has already been written. Folks talking about and posting fan art of the game has dramatically brightened up what has been a fairly bleak time in internet discourse. I have nothing to add, because everything has already been said about this game multiple times by myriad people. And yet.

Initially I was a bit frustrated by the game, as its controls are… not great. But even when I was trapped in a weird rotational loop with the farmer, annoyed that it felt like I was playing a hastily-coded shareware title from the late ‘90s, I didn’t want to stop. All was forgiven, I just wanted more goose. I beat the game, which prompts you with a handful of additional tasks. I thought, I’ll do these here and there amid other games. The next day, I wanted more goose, and promptly powered through these tasks. I watched some streamers do these additional tasks despite just having done them, because, more goose. Which, I suppose is why I’m writing this. It’s just another avenue to more goose.

The game is silly and low-stakes, and I feel like saying ‘spoilers ahead’ is kind of ridiculous. But I also think a big part of the game’s charm is figuring things out for yourself, finding weird little details, experiencing the whole thing fresh. So, with that in mind… Spoilers ahead, here are the goosey little details that brought me the most joy:


An accidental PDF bomb

Recently I was tasked with ensuring a two-page document was §508 compliant, something that I do every day. I didn’t really expect any hang-ups; even the most complicated two-page PDF is still only two pages. I got through the first page with ease. Navigating via the Tags panel, as I do, I landed on a table in the second page. Acrobat immediately stopped responding. Frustrating, but Acrobat is not the stablest of software, so I didn’t think much of it.

Solo: Islands of the Heart

Solo: Islands of the Heart is, in the words of the developers, “A contemplative puzzler set on a gorgeous and surreal archipelago” wherein the player “Reflect[s] on love’s place in [their] life with a personal and introspective branching narrative.” This sounds like peak me: I love puzzles, surreal landscapes, love, and introspection! To top it off, the game offers some flexibility regarding gender representation; you’re not automatically forced into a binary heteronormative default. I snatched it up pretty quickly after learning about it (and confirming that it at least attempted to be queer-friendly) and completed a run after a few days of casual pick-up-and-put-down play. While I’m not sure that it was quite what I’d hoped it would be, it made enough of an impression on me that I feel the need to write about it. Be warned, there may be some things that resemble spoilers ahead, but the game is very much dependent upon what you put into it, so I’m not even sure spoiling is… a thing.

The basics…

The basic gist of the game is that you hop around from island to island trying to activate totems. There are two pieces to each; you activate a small one, which shines a light at a large one which you can then talk to. Talking to a large totem prompts a question related to love, after which a new island opens up. There are some other minor puzzles along the way, like helping smitten dogs reach one another or watering gardens; these are all optional and don’t move anything forward in the game. Puzzles involve moving five different types of boxes around, generally so you can move upward to a place you can’t reach, or float via parachute to a far away bit of land. They are, for the most part, pretty simple and somewhat flexible in terms of solving. They can be frustrating in terms of gauging just how high up or far out you need to be to land on that island – suddenly you’re in the water again swimming back to your pile of boxes.

In my experience, there was a considerable disconnect between the ‘do a box puzzle’ and the ‘talk about your love life’ elements. I suspect that part of the idea here is to allow the introspective side of your brain some time to relax by running the lateral thinking bits instead. And, as a whole, I didn’t really mind that disconnect – but it stacked up with other things. I mentioned that it was fairly easy to misjudge just how high or far out you’d need to coax the boxes, lest you plunge into the sea. This happened to me quite a lot, often multiple times on the same puzzle in later stages. Swimming is slow, and faster swimming is achieved by hitting a certain rhythm with the swim button. This decision, too, I can easily justify as an exercise in mindfulness instead of impatiently button-mashing. But these things compound – things start feeling like busy work keeping you at bay while the totems think of something to ask.

Regarding the questions…

The questions the totems ask are not trivial, they run a fairly wide gamut and certainly lend themselves to introspection. Early on, one basically asked if I was polyamorous which… is honestly a very important sort of acknowledgement in a game like this. You’re asked how important things like sex and shared values are; you’re asked if you would abandon your family for a lover. You’re also asked questions that relate more directly to the path you choose at the beginning – that is, are you in love, have you loved and lost, or have you never loved at all. It’s easy, when answering this at the beginning of the game, to fall into the trap of your character being you. And, to be fair, I think that it would be a waste of energy to not align your choices in the game with your personal life and feelings. But, it’s important to keep a bit of distance, as the game will occasionally contradict your answers or dive into things that quite possibly aren’t at all applicable to your situation.

For example, having chosen in earnest the ‘in love once, but not now’ option, I was asked a lot of questions as to why I thought the relationship failed. One was about time, did I think time played a role. After answering ‘no’, the next question basically opened with ‘okay, but time basically had to play into it’, directly contradicting my honest response. This was the first moment where I got annoyed and began to realize I needed to distance myself from the little tiny on-screen version of me that I was shaping. Some of the responses were, to me, absurd to the point of throwing me right out of the game’s depth, such as “You can’t fully hate what you don’t fully love”. But again, the key was to answer honestly while consciously separating myself from my avatar.

About those gender options…

I’d be remiss to not touch on the matter of gender. You can independently choose one of three body styles for your character, and one of three ‘genders’. While the game refers to it as gender and gives you the option of male, female, and non-binary, what it actually means is pronouns. To be clear, I’m glad that they put an effort into making this game inclusive, I’m glad that you can use they/them pronouns. But that’s not gender, and there’s no reason not to call it what it is. Both you and your partner1 get the three options; you can change yours at any time. It’s a root-level option in the pause menu, right with ‘Back to the main menu’ and ‘Settings’. This is absolutely the right way to handle a thing, and should be seen as an example for all developers to follow. Your partner is static upon initial choosing, which… is honestly a little weird, given the player’s flexibility. I would like to see this reconsidered.

In closing…

I’m glad that I played this game. I’d have to be very cautious in recommending it, however: it’s very short, it’s not great as a puzzle game, and the disconnects mentioned (between puzzle and introspection, between player and avatar) are a little tricky to reckon with. I doubt there’s much in the way of replay value – even writing this, I’d like to go through the beginning again to pull some direct quotations but at the same time… I really don’t want to. I might play through a different path if I find myself in love again, but even that feels like a toss-up. Still, there aren’t a lot of games doing this sort of emotional introspective adventure, and I think there’s a lot of value in it. And even though the matter of gender may be a bit flawed, enough of an attempt was made such that the game feels fairly inclusive (or, at least, not intentionally exclusive).


Font changes, hopefully no major issues

Short meta-post. Until 2019-08-20, I was using Font Library as a CDN for the two webfonts1 that I use on this site: Hack for code blocks and other monospaced needs, and Gentium for everything else. Font Library was, for at least a week, down, leading to upsettingly long load times. I temporarily just removed the appropriate <link>s, allowing the site to render in the user’s default monospace and serif fonts, respectively. Font Library is back up, now, but the downtime made me think about alternative solutions. I sure as hell was not going to subject my audience to Google as the CDN. And I realized, I don’t really have any need for a CDN, why make the additional external requests? Why worry at all about a third-party’s uptime? So, I am currently hosting the copies of Gentium and Hack that the site uses. I’m not entirely sure it’s the same version of Gentium2, so I may need to poke around, say, math posts and see if any glyphs are missing. Otherwise, I think this is the best solution and should be relatively problem-free.

I think it’s worth briefly mentioning why Font Library was down. Microsoft, citing trade restrictions, started banning Iranian, Syrian, and Crimean hosting on GitHub. The Bassel Khartabil Fellowship was one such banned project, based in Syria. My understanding is that Font Library was not directly affected by this, but, having been built partially on Khartabil’s work, they removed their site from GitHub in solidarity and in opposition to the policy. I mention this because it’s important. It was a bold move for Font Library to have that much downtime out of principle, and I applaud them for it. I would not be at all dissuaded from continuing to use their service, except… again the whole thing made me realize there’s really no practical reason for me to use any CDN for font hosting.

One final meta note, I have updated all posts of the category ‘lgbt’ to ‘lgbtqia’ instead. I think it’s just a habit of being of a certain age; I generally find myself defaulting to the four-letter initialism. But, there’s no reason not to try to be better and more inclusive, and this is such a simple update to make, it’s rather ridiculous not to.


The poetics of TTRPGs

I have often expressed, in a pseudo-jest of oversimplification, that I prefer novellas to novels, short stories to novellas, and poems to short stories. I have always been more drawn to the meditative experience of an impossibly-concise framework than the contemplative experience that length and breadth brings. That isn’t to argue that either experience is objectively better, more difficult to create, nor more serious or worthy of being canonized as art – I, myself, personally just find something extremely satisfying in art that I can hold in a single breath. That oxygenates my blood and travels throughout me.

At Gen Con this year1, I had the opportunity to play Alex Roberts’ For the Queen, a short, card-based, no-dice-no-masters TTRPG. The basic gist is that all of the players are on a journey in wartime with their queen, and characters and narratives unfold as players answer questions prompted by the deck of cards. You don’t really need a table, you don’t need to write anything. It’s an incredibly distilled essence of roleplaying. The experience soaked into me, stuck in my mind. A week later, I was trying to figure out why, and how, and it occurred to me that the game is a poem.

My mind repeatedly wandered to another game that I love, that similarly demands I pore over its delicacy: The Quiet Year by Avery Alder. The Quiet Year is also free of masters, and also deck-based2. Cards lay out events that happen during a given season, and players use these events to draw a map that tells the story of a community. Two common themes between these games are cards and lack of a master, but I don’t think either of those elements specifically makes a game a poem. Cards prompting events are randomized, but it’s not the chaotic, make-or-break randomness of chucking a D20 at your GM. An egalitarian system free of masters adds an odd aura of intimacy within the group. They’re poetic elements, certainly, but that’s kind of like saying that in literature, everything that rhymes or
looks like
this
is apoem(period)

And certainly, there are a bunch of formalized rules that we can scrutinize and calculate and determine that aha! A given piece of written or spoken word simply must be a poem! But that’s clearly not what I mean, and I don’t know that it’s productive to try to break down countless elements and rule sets to establish an encyclopedic guide as to whether or not a given TTRPG will give me this lingering satiety. To me, it’s simply about feeling, much of which I believe comes from crossing boundaries, challenging expectations, and doing it all with the crash of shocking brevity.

Let’s talk about a game I haven’t actually played, Orc Stabr by Liam Ginty and Gabriel Komisar3. Fitting on a single sheet, it is a simple game (though a game made more traditional by way of both dice and masters). I suspect it is a fairly quick game, but again… I have not played it. Aside from the game itself, however, there was an additional experience layered onto it, a bit of a metagame if you will. It was launched on Kickstarter, and all of the materials for it were written from the perspective of its orc designer, Limm Ghomizar. Backers could get a full sheet of paper, or a hand-torn half-sheet of paper, encouraging them to find other backers to form a full, playable sheet with. Every sheet had something custom done to it – crayon doodles, recipes, custom rules, handprints, all manner of weird things that simply served to make each copy human, personal, and unique. Seeing folks post about their copies when they received them and just knowing that everyone was getting some different bit of weird was an act of art in itself. And that had that lingering feeling of something once, seemingly rigid, being shattered in the medium.

Clever means of introducing interactivity to narratives have always existed outside what we understand and refer to as gaming. Things like Fluxus’ event scores, the Theatre of the Oppressed, Choose Your Own Adventure novels. Community storytelling has always been a thing, and presumably ‘interactive storytelling, but with rules’ is not a particularly novel concept either. It’s almost certainly unfair, then, to presume that there’s really anything new about what feels like a Gygaxian mold being broken. But I do feel like I’m seeing more and more of this sort of thing being done very intentionally in a space dominated by long-campaign, dice-laden, hack’n’slash systems. There’s a vibrancy to the sense of art and emotion that is being put into games, and that I think seethes through the players of these games.

And that, to me, is poetry.


WTPDF: Role Mapping

PDF 1.7 supports a limited number of standard tags, limited enough that I can freely list them here: Document, Part, Article, Section, Division, Block quotation, Caption, Table of Contents (TOC), TOC item, Index, Paragraph, Heading, six hierarchical Heading levels, List, List item, List item body, Label, Table, Table row, Table header cell, Table data cell, Table header row group, Table body row group, Table footer row group, Span, Quotation, Note, Reference, Bibliography entry, Code, Link, Annotation, Figure, Formula, and Form.

Changing and updating the brhfl dot com template

It’s been a while since we had a good meta post here, which makes for a good excuse to perform a major overhaul on my template. In seriousness, this has been a long time coming. For starters, my site wouldn’t render on versions of Hugo past 0.47.1. While not a huge deal to keep old copies around, it only becomes more work as the versions roll by. None of the changes that I’ve made to support Hugo should have any visible effect on the site. But I’ve also been meaning to play around with revamping the navigation at the top. I was using this hacked-together ‘drawer’ type system to hide and reveal the categories, archive, and et cetera sections. I have preserved an archived copy of the home page with the old template intact for demonstration purposes. But I’m not doing that anymore, and let’s start there.


Acrobat: The disparity of tagging methods

Prompted both by troubleshooting a comrade’s accessibility work (related to this short RPG collection which you should absolutely check out!) and a recent instance of tags in a work document turning to random bytes, I thought it might be valuable to briefly go over the three main ways to tag elements in a tagged PDF in Adobe Acrobat. Ultimately they should all do the same thing, but because it’s an Adobe product, they all come with their own unique quirks.

(Retro) Single-board computers

Single-board computers from the early microcomputing era have always fascinated me. Oft-unhoused machines resembling motherboards with calculator-esque keypads and a handful of seven-segment LEDs for a display1, their purpose was to train prospective engineers on the operations of new microprocessors like the Intel 8080 and MOS 6502. Some, like MOS’s KIM-1 were quite affordable, and gave hobbyists a platform to learn on, experiment with, and build up into something bigger.

The KIM-1 is, to me, the archetypal single-board. Initially released by MOS and kept in production by Commodore, it had a six-digit display, 23-key input pad, 6502 processor, and a pair of 6530 RIOT chips. MOS pioneered manufacturing technology that allowed for a far higher yield of chips than competitors, making the KIM-1 a device that hobbyists could actually afford. I would love to acquire one, but unfortunately they are not nearly as affordable these days, often fetching around $1,000 at auction. Humorously, clones like the SYM-1 that were far more expensive when they were released are not nearly as collectable and sell at more reasonable rates. Even these are a bit pricy, however, and you never know if they’ll arrive operable. If they do, it’s a crapshoot how long that will remain true.

Other notable single-boards like the Science of Cambridge (Sinclair) MK14 and the Ferguson Big Board rarely even show up on eBay. The MK14 is another unit that I would absolutely love to own – I have a soft spot for Clive Sinclair’s wild cost-cut creations. This seems extremely unlikely, however, leaving me to resort to emulation. Likewise for the KIM-1, a good emulator humorously exists for the Commodore 64.

History has a way of repeating itself, I suppose, and I think a lot of that retro hobbyist experience lives on in tiny modern single-board computers like the Raspberry Pi and Arduino. I’m glad these exist, I’d be happy to use one if I had a specific need, but they don’t particularly interest me from a recreational computing perspective. Given that these modern descendants don’t scratch that itch, and the rarity and uncertainty of vintage units, I was very excited to recently stumble across Thai engineer Wichit Sirichote’s various single-board kits for classic microprocessors. Built examples are available on eBay. The usual suspects are there: 8080, 8088, 8086, Z80, 68008, 6502; some odd ducks as well like the CDP1802.

I have ordered, and plan to write about, the cheapest offering: the 8051, which sells in built form for $85, shipped from Thailand. The 8051 was an Intel creation for embedded/industrial systems, and is an unfamiliar architecture for me. If it all works out how I hope it will, I wouldn’t mind acquiring the 6502, Z80, CDP1802 and/or one of the 808xs. I’d love to see a version using the SC/MP (as used in the Cambridge MK14), but I’m not sure there are any modern clones available2. For now, I will do some recreational experiments with the 8051, perhaps hitting a code golf challenge or two. While this can’t be quite the same as unboxing a KIM-1, I love that somebody is making these machines. And not just one or two, but like… a bunch. Recreational computing lives.


MINOL and the languages of the early micros

This post was updated in May 2020 with an explanatory footnote sent in by a reader.

When I started playing with VTL-2, another small and obscure language was included in the same download: MINOL. Inspired by BASIC syntax and written by a high-schooler in 1976, it “has a string-handling capability, but only single-byte, integer arithmetic and left-to-right expression evaluation.” What I am assuming is the official spec (PDF) was seemingly submitted over several letters to, and subsequently published by, the magazine “Dr. Dobb’s Journal of Computer Calisthenics and Orthodontia.” This article described the purpose and syntax of the language, as well as the code for the Altair interpreter.

MINOL has 12 statements: LET, PR(int), IN(put), GOTO, IF, CALL, END, NEW, RUN, CLEAR, LIST, and OS (to exit the interpreter). As quoted above, there is integer arithmetic (+-*/), and there are three relational operators: =, <, and the inexplicably-designated #1 for not equal. Line numbers are single-byte, with a maximum of 254 lines. Statements can be separated with a colon. Exclamation points are random numbers. If (immediately) running a line without a line number, GOTO calls its line number 0. Rudimentary string-handling seems to be the big sell. This basically entails automatically separating a string into individual code points and popping them into memory locations, as well as some means of inverting this process. An included sample program inputs two strings and counts the number of instances of the second string in the first; with each string being just a run of code points contiguous in memory, it is certainly functional.
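
I won’t pretend to write working MINOL here, but the gist of that sample program – strings as flat runs of byte values, scanned for matches – translates to something like this Lua sketch (the function name and test strings are mine):

-- Lua sketch of the byte-scanning approach described above; not MINOL, just the idea.
local function count_occurrences(haystack, needle)
  -- explode each string into an array of code points, much as MINOL's string handling does
  local h = { haystack:byte(1, #haystack) }
  local n = { needle:byte(1, #needle) }
  local count = 0
  for i = 1, #h - #n + 1 do
    local match = true
    for j = 1, #n do
      if h[i + j - 1] ~= n[j] then match = false; break end
    end
    if match then count = count + 1 end
  end
  return count
end

print(count_occurrences("banana", "an"))  --> 2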

Is MINOL interesting, as a hobbyist/golf language? I may very well try one or two string-based challenges with it. Its limitations are quirky and could make for a fun challenge. I think more than anything, however, I’m just fascinated by this scenario that the Altair and similar early micros presented. Later micros like the Commodore PET booted right into whatever version of BASIC the company had written or licensed for the machine, but these early micros were very barebones. Working within the system restrictions, making small interpreters, and designing them around specific uses was a very real thing. It’s hard to imagine languages like MINOL or VTL-2 with their terse, obscure, limited syntaxes emerging in a world where every machine boots instantly into Microsoft BASIC.

Once again, I don’t know how much value there is in preserving these homebrew languages of yore, but as I mentioned when discussing VTL-2, folks nowadays generate esoteric languages just to mimic Arnold Schwarzenegger’s speaking mannerisms. Given that climate, I think there’s a pretty strong case to keep these things alive, at least in a hobbyist capacity. And given the needs of early micro hobbyists, I find the design of these languages absolutely fascinating. I’m hopeful that I can dig up others.


Time again for anger

Hey gang, generally if I take a step back here from esoteric programming languages or video game gripes, it’s to cry through my fists at the regressions this administration is landing upon LGBTQ+ folks. Which, more of that seems to be happening this week, so that’s something; please hug your queer friends & if you’re queer hug yourself. But, in case you somehow missed it, the reproductive freedom and bodily autonomy of folks with uteri is being rapidly destroyed. This is, in two words, fucking abhorrent.

So I’m doing this thing again where, paralyzed by the news and unable to write anything regardless, I use this space to beg you to throw a couple of bucks to some folks who need it. Like…

Stay strong. Stay angry.


VTL-2: golfing and preservation

I’ve been playing with Gary Shannon and Frank McCoy’s minimalist programming language from the ‘70s, VTL-2 (PDF), as of late. Written for the Altair 8800 and 680 machines, it was designed around being very small – the interpreter takes 768 bytes in ROM. It has quite a few tricks for staying lean: it assumes the programmer knows what they’re doing and therefore offers no errors, it uses system variables in lieu of many standard commands, and it requires that the results of expressions be assigned to variables. It is in some ways terse, and its quirks evoke a lot of the fun of the constrained languages of yore. So, I’ve been playing with it to come up with some solutions for challenges on Programming Puzzles and Code Golf Stack Exchange (PPCG)1.

I mentioned that it is in some ways terse, and this is a sticking point for code golf. VTL-2 is a line-numbered language, and lines are rather expensive in terms of byte count. Without factoring in any commands, any given line requires 4 bytes: (always) two for the line number, a mandatory space after the line number, and a mandatory CR at the end of the line. So, at a minimum, a line takes 5 bytes. This became obvious when golfing a recent challenge:

3 B=('/A)*0+%+1

saved a byte over

3 B='/A
4 B=%+1

These almost certainly look nonsensical, and that is largely because of two of the things I mentioned above: the result of an expression always needs to be assigned to a variable, and a lot of things are handled via system variables instead of commands. For example, ' in the code above is a system variable containing a random number. There is no modulo nor remainder command; rather, % is a system variable containing the remainder of the last division operation. Thus, originally, I thought I had to do a division and then grab that variable on the next line. As long as the division is performed, however, I can just destroy the result (*0) and add the mod variable, making it a single shot. It’s a waste of our poor Altair’s CPU cycles, but I’m simulating that on modern x64 hardware anyway. And despite the extra characters, it still saves a byte2.
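
In more familiar terms, that golfed line boils down to something like ‘a random number from 1 to A’ – here’s a loose Lua rendering (the 32767 range is a stand-in for the random variable, not anything VTL-2 actually specifies):

-- Loose Lua equivalent of 3 B=('/A)*0+%+1; the random range is a stand-in, not a spec value.
local A = 6
local r = math.random(0, 32767)                -- the ' system variable: some random number
local B = math.floor(r / A) * 0 + (r % A) + 1  -- quotient thrown away; remainder + 1 lands in 1..A
print(B)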

Other notable system variables include ? for input and output:

1 A=?
2 ?="Hello, world! You typed "
3 ?=A

Line 1 takes input – it assigns the I/O variable, ?, to variable A. Line 2 prints “Hello, world! You typed ”, and then line 3 prints the contents of variable A. Lines 2 and 3 assign values to the I/O variable. The system variable # handles line numbers. When assigned to another variable (I=#), it simply returns the current line number. When given an assignment (#=20), it’s akin to a GOTO. The former behavior seems like it could come in handy for golf: if you need to assign an initial value to a variable anyway, you’re going to be spending 4 bytes on the line for it. Therefore, it may pay to, say, initialize a counter by using its line number: 32 I=#.

Evaluation happens left-to-right, with functional parentheses. Conditionals always evaluate to a 1 for true and a 0 for false. Assigning a 0 to the line number variable in this way is ignored. With that in mind, we can say IF A==25 GOTO 100 with the assignment #=A=25*100. A=25 is evaluated to a 1 or a 0 first, and this is multiplied by 100 and # is assigned accordingly. ! contains the last line that executed a #= plus 1, and therefore #=! is effectively a RETURN.
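
In loose Lua terms (target is my name, not VTL-2’s), that conditional jump works out to:

-- Loose Lua rendering of #=A=25*100. VTL-2 ignores an assignment of 0 to #,
-- so the jump to line 100 only happens when A is 25.
local A = 25
local target = ((A == 25) and 1 or 0) * 100  -- strict left-to-right: the comparison first, then *100
if target ~= 0 then
  print("GOTO " .. target)                   -- stand-in for the interpreter jumping to line 100
end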

There’s obviously more to the language, which I may get into in a future post3. Outside of the syntactical quirks which make it interesting for hobbyist coding, the matter of running the thing makes it less than ideal for programming challenges. Generally speaking, challenges on PPCG only require that a valid interpreter exists, not that one exists in an online interpreter environment such as Try It Online (TIO). In order to futz around in VTL-2, I’m running a MITS Altair 8800 emulator and loading the VTL-2 ROM. TIO, notably, doesn’t include emulation of a machine from the ‘70s with a bundle of obscure programming language ROMs on the side.

This brings me to my final point: how much effort is being put into preserving the lesser-known programming languages of yore, and how much should be? I personally think there’s a lot of value in it. I’ve become so smitten with VTL-2 because it is a beautiful piece of art and a brilliant piece of engineering. Many languages of that era were, by the necessity of balancing ROM/RAM limitations with functionality and ease of use. Yet, there’s no practical reason to run VTL-2 today. It’s hard to even justify the practicality of programming in dc, despite its continued existence and its inclusion being a requirement for POSIX compliance. New esoteric languages pop up all the time, often for golfing or for sheer novelty, yet little to no effort seems to be out there to preserve the experimental languages of yesteryear. We’re seeing discussions on how streaming gaming platforms will affect preservation, we have archive.org hosting a ton of web-based emulators and ROMs, we have hardware like Applesauce allowing for absolutely precise copies to be made of Apple II diskettes. Yet we’re simply letting retro languages… languish.

To be clear, I don’t think that this sort of preservation is akin to protecting dying human languages. But I do think these forgotten relics are worth saving. Is “Hello, world!” enough? An archive of documentation? An interpreter that runs on an emulator of a machine from 1975? I don’t know. I don’t feel like I have the authority to make that call. But I do think we’re losing a lot of history, and we don’t even know it.


The unsettling meows of a Garf

This post is about the 2007 Nintendo DS game, Garfield’s Nightmare. While it would not be terribly off-brand for me to review a 12-year-old video game based on a syndicated comic strip, I don’t really plan to do that. Because honestly, there isn’t much to review. It’s a serviceable platformer with very little in the way of challenge. There are some hidden things you can find, some very lightweight box-moving challenges, some enemies to stomp on. It’s a simple game, and, you know… it’s fine.

Gameplay is actually extremely similar to the developer’s earlier GBA games based on the Maya the Bee franchise: Maya the Bee: The Great Adventure and Maya the Bee: Sweet Gold. The developer in question is Shin’en Multimedia, a studio made up of – I shit you not – a bunch of current and former demosceners. This makes more sense when you look at, say, their first GBA game, Iridion 3D, which is incredibly impressive from a technical standpoint, or even their recent F-Zero-esque Wii U/Switch title, Fast Racing Neo/Fast RMX. Aside from demos, the Abyss1 group dabbled in games early on with Rise of the Rabbits and Rise of the Rabbits 2 – both, of course, for the Amiga. They developed Rinkapink for the GBC. While it doesn’t appear to have ever been published2, it seems they used bits of it for Ravensburger’s Käpt’n Blaubärs verrückte Schatzsuche. A promotional brochure for Rinkapink seems to sell their demoscene experience, pitching them as a company that can avoid “bad programming, flickering graphics, and awful music”, which… makes a lot of sense! You don’t win at demo parties without knowing how to make the most of a given system. Abyss was and is particularly known for its music, at the time largely done by Manfred Linzner, the lead programmer on Iridion 3D, Maya the Bee: Sweet Gold, and, yes, Garfield’s Nightmare. They developed trackers and audio toolchains for the Amiga (AHX) and Game Boy (GHX). They’re still releasing audio demos.

What does any of this really have to do with Garfield’s Nightmare? Likely not much, but it sure is fascinating. If anything I think it explains how technically competent this game is while also being a pretty sub-par Garfield experience. Which brings me to something that I highly doubt was intentional and can only imagine was a byproduct of a team of highly-skilled demosceners having agreed to take on a licensed title about a syndicated comic strip cat: Garfield’s Nightmare is actually fairly nightmarish. Not in a blatantly scary, horrorish way, but rather in its completely disquieting approach to what Garfield’s world is. The basic premise is that Garfield ate too much (shocker) before going to bed, and is now stuck in his own nightmare. But throughout the game, he really doesn’t seem concerned himself. Either he has good enough lucid dream control abilities to will himself into perfect calmness, or else he’s just oddly resigned to being in this nightmare world that he is, of course, ostensibly trying to escape. It doesn’t make any sense, and the disconnect that it presents as perfectly normal is more and more discomforting the more one thinks about it.

This isn’t the only weird disconnect. Aside from spiders (which Garfield does canonically hate)3, none of the enemies are things that bother canon Garfield, or even things that exist in his world as we know it. They seem like entirely generic platformer enemies (for instance, a turtle thing with a cannon built into its back) yet they’re in a very specific licensed setting. I’m sure the studio just didn’t want to cough up the handful of dollars to license a sound bite or two of Lorenzo Music’s voice, but Garfield meows when he gets injured in this game. It shouldn’t be unsettling to hear a cat meow, but I assure you it is extremely so to hear what sounds like a sample of a real live cat coming out of Garfield. There’s no lasagna in sight; pizza stands in for health points and donuts are akin to coins. There are hidden doors that lead to brief minigame reprieves in the real world, but this version of the real world is cold and empty; it feels like the Garfield who is in the nightmare has himself fallen asleep and is experiencing a nightmare version of the real world. Even the box-moving puzzles feel planned and placed, which… Obviously they were, by Peter Weiss of Shin’en, but it makes the nightmare feel like an escape room situation that someone has built for the sole purpose of torturing Garfield. On the surface it’s almost certainly just a bunch of half-hearted design decisions, but it adds up and makes for an unnerving, uncanny experience.

So, should you play the game? I don’t know. I mean you can grab one on eBay for like six bucks, and if you let your mind really take in the nightmare world, it’s… weird. It’s fascinating to think about how the developers, active demosceners, got into the DS development program and got shit on for making a Santa Claus demo that they couldn’t link to because of licensing violations months before releasing this oddity. Everything about Garfield’s Nightmare is just weird, and that in itself is worth quite a few donuts to me.


VT100 Line Drawing

One of those totally useful1 things that crosses my mind occasionally is recompiling a version of dc that won’t choke on characters above code point 127. Among other reasons, occasionally code golf questions come up that really want box drawing characters used for some reason, and it just isn’t possible in dc. Except, I got to thinking… it absolutely is on a VT100, and xterm supports the same escape codes. I just haven’t really explored them.
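
A rough sketch of the idea – assuming GNU dc, whose P command prints strings verbatim and numbers as raw bytes, and a terminal that honors the VT100 charset escapes – would be to switch to the DEC special graphics set with ESC ( 0, draw with plain lowercase letters, and switch back with ESC ( B:

27P [(0]P       # ESC ( 0: select the DEC special graphics character set
[lqqqk]P 10P    # l/q/k come out as the top corners and a horizontal rule
[x   x]P 10P    # x comes out as a vertical bar
[mqqqj]P 10P    # m/j close out the bottom of the box
27P [(B]P       # ESC ( B: back to plain ASCII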

Allocations

My ‘daily driver’ USB drive gave up the ghost recently, and after having secured a replacement1, it was time for the always-fun task of formatting. I could’ve left things as-is, but the stock partition was FAT32 with 32K block allocations – the same as the old drive, which wasn’t ideal given that I tend to keep a lot of small files around. While not the end of the world, I was really hoping to set the new drive up with smaller block allocations.
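
For a rough sense of the waste involved: a 1 KiB file on 32K clusters still occupies a full 32K on disk, so a couple thousand such files burn through something like 60 MB of pure slack; at 4K clusters, the same files would waste only a few megabytes.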

The avocado with legs

Avo is the first bit of media from British company Playdeo, whose lofty introduction describes the things they’re creating as ‘television you can touch’. A lot of the general buzz around Avo has described it as augmented reality with prerecorded video, which seems apt. Told over eight short episodes, Avo is a lightweight mystery that befalls the quirky scientist Billie and her sentient ambulatory avocado, Avo. The player controls Avo, walking the stubby-legged fruit around and picking things up while Billie explains the situation and tells you what she needs. At its core, it’s a typical adventure game mechanically – walk around, pick things up, bring them to a place. But it’s all done seamlessly in this fully video-based real setting.

“Seamless” is kind of a strong word, I suppose. While exploring, the video is simply short loops of, say, Billie working at her desk in the background. There are still cutscenes, but because you’re already in the world and bound to preset camera angles, they just kind of… happen in place. So despite there still being two distinct modes, they do blend together in a fairly seamless way. The story is cute and simple, there are fun nerdy jokes scattered throughout (I had a good chuckle at Billie’s large cardboard box labelled ‘Klein bottles with moebius strips inside’), and the core mechanic works well. Avo is enjoyable and potentially worthy of recommendation, albeit with some caveats.

For starters, the game requires you to agree to a privacy policy like… pretty much immediately. I definitely have issues with content (especially paid content) tracking me for marketing purposes, but unfortunately I am used to it. Actually having to accept a privacy policy before entering the game just (and this may be unwarranted) feels more ominous than usual. Perhaps requirements are tighter for a company based in the UK, but it feels extreme for such an airy game. I was going to play it regardless because I was curious about what Playdeo was doing with the format, but I would encourage folks to actually read through the thing and weigh the pros and cons.

The other weird thing to me is the matter of the beans. Beans are scattered throughout the game. They serve three purposes: they give you an idea of paths you’re supposed to explore, they make Avo move slightly faster for some reason, and they are also the in-game currency to buy the episodes. While I suppose you could simply replay each episode a ton of times and collect enough beans to get the next one, doing so would be wildly impractical. Episodes cost 1,000 beans, and there aren’t hundreds (much less a thousand) of beans scattered throughout any given level. Which makes sense: Playdeo wants you to actually spend money on the game. For this, I do not blame them, and I do not think the game is overpriced (the bean bundle at $6 will get you through the whole thing). I do think that forcing it into this free-to-play framework is just weird. Awkward.

Many negative reviews on the App Store are from folks who don’t want to pay and actually are going the bean-collection route. The alternative that they would prefer is lowering the cost of the episodes. I mentioned that I don’t think the game is overpriced. I do think that the complete undervaluation of mobile apps means that a great number of people will think it’s overpriced. Especially since it’s much more of a story than it is a game. I think these are challenges that Playdeo is going to need to overcome. First, either ditch free-to-play or come up with a far less clumsy approach to it. Second, make the content more of a game and less of a poke-the-television exercise. Avo largely feels like a proof of concept. As proofs of concept go, however, it is an incredibly charming one. And I would still recommend it as an experience for folks who are comfortable with the data collection.


On Twine (and my first Twine project)

Twine is, by its own definition, “an open-source tool for telling interactive, nonlinear stories”. I, personally, would call it a templating language for HTML-based interactive fiction. I have finally gotten around to experimenting with it, and… I find it to be missing the mark in the way that many templating systems tend to, and the way that many ‘friendly’ languages tend to.

Before I dive into my struggles with Twine, I’d just like to drop a link to the result of this experiment: yum yum you are a bread, which I guess I’ll just call a bread simulator? I don’t know. It’s silly, it’s fun, it’s bread. Also, minor spoilers in the rest of the post.


Tetris 99

I rather enjoy Tetris. Tetris has changed a lot from the pre-Guideline games I grew up with. I’m glad the Guideline exists and has made for a largely consistent experience among recent Tetris titles. But I still haven’t adapted perfectly to, say, a world with T-spins – a move that simply didn’t exist in my formative Tetris years. Over the years, more and more multiplayer Tetris games have been released as well, the strategies of which are completely antithetical to the way I play solo. To put it lightly, I have never been good at multiplayer Tetris – some of the stronger AIs in Puyo Puyo Tetris’s story mode even frustrate me.

So when Nintendo announced Tetris 99, a battle royale match between (guess how many) players, I was skeptical. Not that I thought the game would be bad1, but I definitely thought I’d be bad at it, which would simply make it… not super fun for me. But, due to there simply being so many players and a large degree of randomness in how much you’ll be targeted for attacks (additional bricks), simply being decent can keep you alive for a considerable portion of the round. I’ve only played a handful of games, maxing out at 9th place (and dropping out nearly immediately at 74th once!), but I’m really enjoying it so far. Something about seeing 49 other players’ teeny tiny Tetris screens on either side of the screen is quite engaging (and honestly a bit humorous).

You can, either manually or according to four rule sets, choose which of those 98 others you are targeting. The mechanisms for this are not made entirely clear – in fact, they aren’t really explained at all, you just kind of have to stumble across them and suss out how they work by name. Likewise, because the rounds are short (and, to an extent, shorter the worse you are at the game) it’s hard to get into a groove, and there isn’t really a mechanism for practicing. If one didn’t already have other Guideline-era Tetris games, and particularly games with a multiplayer experience, I feel like they’d be a bit sunk here. Those minor quibbles are the closest things that I have to real complaints about the game. I’m curious how they’ll monetize it. The mobile Tetris games from EA have additional soundtracks that can be unlocked with coins won in-game (or purchased). Perhaps Tetris 99 will end up with a bit of this, or additional skins. Perhaps it’s just an incentive for Switch Online. For now, save for needing a Switch Online account, it is completely free… and it is a blast.


Curtailing Amazon purchases

Amazon is… decidedly not a great company, and as time passes, this seems to be more and more true. Every few months, a new call to boycott seems to enter the public discourse, which is almost certainly as warranted as it is impractical. That’s not what this is, however — aside from the fact that a seemingly infinite catalog of affordable1 items is an incredible boon for disabled folks and folks that simply don’t have ready access to a wealth of brick-and-mortar stores, actually boycotting Amazon seems rather impossible given that their big money-maker these days is AWS. But I have been beyond disappointed with Amazon’s customer service lately, and this is compounded by core elements of the shopping experience.

I’ll get the petty personal complaint out of the way first. I have had a lot of problems with Amazon’s customer support over the past couple of years, only increasing as time goes on. The real kicker was trying to get any sort of resolution (or even acknowledgement!) about two shipments that were lost around the same time, ultimately translating into several hundreds of dollars worth of unrecoverable Things. Four interactions with customer support yielded four contradictory responses (paraphrased):


RIP, Wii Shop Channel

A sad loss – Nintendo shuttered the Wii Shop Channel today. This was advertised well ahead of time; hopefully most people who care were able to retrieve and back up everything they wanted to. I haven’t powered my Wii up in quite some time, so likewise… hopefully I don’t have any gaps in my downloads. People are (rightfully) disappointed with Nintendo (I guess this is the first major console download marketplace to disappear?), but I don’t really think it’s sensible to focus our ire on Nintendo specifically – this is the nature of the download beast1.

Assuming one can readily dump downloads, then I suppose from an archive perspective the data can be passed around eternally. Beyond that, however, I fail to believe that any of these markets will outlive the silicon in a cartridge. It would surprise me if they outlived properly-stored optical media. I’m glad that a lot of Switch games are being released in both download and cartridge form – even indie titles via small-batch entities like Limited Run Games. Cartridges are still patched via downloads, and these patches are stored on the device (not the cartridge), so that could become its own issue, but the base game should stay functional for a very, very long time.

Anyway, nothing I’ve said here is particularly groundbreaking. It’s sad that the Wii Shop is no more, but… it was inevitable. One thing that has, fortunately, been archived: that lovely, lovely theme music.


Be gone, 2018

I don’t really consume a lot of current media1, and have accordingly joked that if I made a best-media-I-consumed-in-2018 list, it would just be re-reading Sailor Moon and a bunch of video games from the early 2000s. But, digging a bit deeper, 2018 was one of the rare years that I did consume slightly more current cultural artifacts. So, why the fuck not: let’s list off the best of the best that 2018 had to offer me.

I’m not going into movies, because I watched very few 2018 movies (and in general, I am disappointed by movies). I would have included Mary and the Witch’s Flower, but that was 2017 somehow. Holy heck, this year was a horrifying blur. I did just see The Favourite, which I thought was very good, but it just seems a bit… inappropriate to make any sort of judgment call when I’ve focused so little of my time on film. Also, graphic novels/manga aside, I definitely did not read any 2018 books in 2018, so… there’s that. Okay!


365 Numbers

I’m not one to put stock into New Year’s resolutions, but I do occasionally have ideas for little things-to-do during a given year. One such idea for 2019 is to set my homepage such that I’ll be redirected to the Wikipedia entry for the number of the current day (out of 365, so February 14 will return the page for 45 (Number)). This is easily attained with a small bit of HTML stored locally:
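
<!DOCTYPE html>
<!-- A sketch of the idea: work out the day of the year, then bounce to the
     matching Wikipedia article. Details are illustrative; on a leap year,
     December 31 will ask for day 366. -->
<html>
<head>
<meta charset="utf-8">
<title>365 Numbers</title>
<script>
var now   = new Date();
var jan1  = Date.UTC(now.getFullYear(), 0, 1);
var today = Date.UTC(now.getFullYear(), now.getMonth(), now.getDate());
var day   = (today - jan1) / 86400000 + 1;  // February 14 works out to 45
location.replace("https://en.wikipedia.org/wiki/" + day + "_(number)");
</script>
</head>
<body></body>
</html>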

Site updates, supporting open source software, &c.

Haven’t done a meta post since August, so now seems like as good a time as any to discuss a few things going on behind the scenes at brhfl dot com. For starters, back in November, I updated my About page. It was something I forced myself to write when I launched this pink blog, and it was… pretty strained writing. I think it reads a bit more naturally and in my voice now, and also better reflects what I’m actually writing about in this space. I also published my (p)review of Americana in November, which was an important thing to write. Unfortunately, it coincided with Font Library, the host of the fonts I use here, being down. This made me realize that I rely on quite a few free and/or open source products, and that I should probably round up ways to support them all. I’ll get to that at the end of this post, though it’s a thought process that started back in November.


Portal, Commodore 64 style

I’ve been thinking a lot about empathy and emotion in video games lately, and this has really given me the itch to play through Portal again. This weekend, I did just that… sort of. Jamie Fuller1 has released a 2D adaptation of the classic for the Commodore 64 (C64), and it is pure joy. It’s quick – 20 levels with brief introductions from GLaDOS, completable in around a half hour. The C64 had a two-button mouse peripheral (the 13512) but it was uncommon enough that even graphical environments like GEOS supported moving the cursor around with a joystick. Very few games had compatibility with the mouse, and here we are in 2018 adding one more – using WAD to move and the mouse to aim/fire is a perfect translation of Portal’s modern PC controls. If you’re not playing on a real C64 with a real 1351, VICE emulates the mouse, and it works great on archive.org’s browser-based implementation as well.


The VCSthetic

The Atari VCS, better known as the 2600, was an important part of my formative years with technology. It remains a system that I enjoy via emulation, and while recently playing through some games for a future set of posts, I started to think about what exactly made so many of the (particularly lesser-quality) games have such a unique aesthetic to them. The first third-party video game company, Activision, was famously started by ex-Atari employees who wanted credit and believed the system was better suited to original titles than hacked-together arcade ports. They were correct on this point, as pretty much any given Activision game looks better than any given Atari game for the VCS. Imagic, too, was made up of ex-Atari employees, and their games were pretty visually impressive as well. Atari had some better titles toward the end of their run, but for the most part their games and those of most third-parties are visually uninspiring. Yet the things that make them uninspiring are all rather unique to the system:


A few of my favorite: Tetrises (Tetrii? Tetrodes?)

I spent a couple of weeks writing this, and of course remembered More Thoughts basically as soon as I uploaded it. For starters, I had somehow completely forgotten about Minna no Soft Series: Tetris Advance for the GBA, which is a somewhat difficult to find Japanese release superior to Tetris Worlds in every imaginable way. Second, I neglected to mention leveling details and have updated the Puyo Puyo Tetris and mobile sections accordingly (as of 10-28).

Tetris, the ‘killer app’ of the Game Boy and proven-timeless time-sink has a pretty bizarre history. Alexey Pajitnov originally wrote it as a proof-of-concept for a Soviet computer that lacked graphics capability. Pajitnov’s coworkers ported the game to the IBM PC, and its availability on consumer hardware meant that unofficial ports popped up across the globe, and licensing deals were struck without Pajitnov’s involvement. Facing some difficult decisions regarding licensing, Pajitnov gave the Soviet Union the rights to the game. Licensing was then handled through a state-sponsored company known as Elorg (the famous Game Boy pack-in deal was during the Elorg era). During this period, brick colors and rules were inconsistent from this Tetris to that Tetris. Some games branded Tetris during this era bore next-to-no resemblance to the game we all know and love.

The Elorg deal was temporary by design, and some years later Pajitnov got the rights back and formed The Tetris Company. The Tetris Company has proven to be an absurdly aggressive intellectual property monster, which is hardly surprising given the game’s licensing history1. The Tetris Company has done one positive thing, though: standardized the rules and the colors of blocks into something known as the Tetris Guideline. This means that any Tetris from the late ‘90s and newer is largely interchangeable2 – and if you can make out the color of the next piece from the corner of your eye, you know what shape it is. The consistency is valuable, and even though years of NES Tetris have left me rather untalented at T-spins, all of my favorite Tetris games are of the modern sort. This also largely means that the distinction really boils down to hardware, but that’s kind of important when some form of the game has been released for pretty much any given system. So on that note, the four I most often reach for are:


Get angry again (Unicode edition)

So, it’s a bit of a recurring theme that this administration makes some horrifying attack on some marginalized group and I feel the need to make some brief post here angrily tossing out organizations worth donating to. Of course, the topic this week is a series of actions threatening trans people1 and hearkening back to the 1933 burning of the archives of the Institut für Sexualwissenschaft. I’m personally feeling less and less in control of how I’m handling the erosion of civil liberties, and part of me right now needs to write, beyond a brief scream into the ether. So here’s what this post is: if anything on this site has ever had any value to you, please just roll 1D10 and donate to:

  1. Trans Lifeline
  2. National Center for Transgender Equality
  3. Transgender Law Center
  4. Transgender Legal Defense & Education Fund
  5. Sylvia Rivera Law Project
  6. Trans Justice Funding Project
  7. Trans Women of Color Collective
  8. Trans Student Educational Resources
  9. Lambda Legal
  10. Southern Poverty Law Center

…and with that out of the way, for the sake of my own mental health, I’m going to quasi-continue my last post with a bit of binary-level explanation of text file encodings, with emphasis on the Unicode Transformation Formats (UTFs).


Honey walnut, please

Apple recently stirred up a bit of controversy when they revealed that their bagel emoji lacked cream cheese. Which is a ridiculous thing to get salty over, but ultimately they relented and added cream cheese to their bagel. Which should be the end of this post, and then I should delete this post, because none of that matters. But it isn’t the end, because I saw a lot of comments pop up following the redesign that reminded me: people really don’t seem to get how emoji work. Specifically, I saw a lot of things like ‘Apple can fix the bagel, but we still don’t have a trans flag’ or ‘Great to see Apple put cream cheese on the bagel, now let’s get more disability emoji’. Both of those things would, in fact, be great1, but they have nothing to do with Apple’s bagel suddenly becoming more edible.

Unicode is, in its own words, “a single universal character encoding [with] extensive descriptions, and a vast amount of data about how characters function.” It maps out characters to code points, and allows me to look up the division sign on a table, find that its code point is 00F7, and insert this into my document: ÷. Transformation formats take on the job of mapping raw bytes into these standardized code points – this blog is written and rendered in the transformation format UTF-8. Emoji are not pictures sent back and forth any more than the letter ‘A’ or the division sign are – they are Unicode code points also, rendered out in a font2 like any other character. This is why if I go ahead and insert 1F9E5 (🧥), the resulting coat will be wildly different depending upon what system you’re on. If I didn’t specify a primary font for my site, the overall look of this place would be different for different users also, as the browser/OS would have its own idea of a default serif font.
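
For anyone who wants to poke at that distinction directly, a couple of lines of Python (just a quick illustration, nothing specific to any platform) show both halves – code points as plain numbers, and a transformation format turning them into bytes and back:

# Code points are just numbers; the font decides what the glyph looks like.
print(chr(0x00F7))    # ÷
print(chr(0x1F9E5))   # 🧥 – or whatever your system's emoji font makes of it

# A transformation format (here, UTF-8) maps those numbers to and from bytes.
print("÷".encode("utf-8"))                   # b'\xc3\xb7'
print("🧥".encode("utf-8"))                  # b'\xf0\x9f\xa7\xa5'
print(b"\xf0\x9f\xa7\xa5".decode("utf-8"))   # back to the coat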


JPEG Comments

A while back, floppy disk enthusiast/archivist @foone posted about a floppy find, the Alice JPEG Image Compression Software. I suggest reading the relevant posts about the floppy, but the gist is that @foone archived and examined the disk and was left with a bunch of mysterious .CMP files which appeared to have JPEG streams but did not actually function as JPEGs. Rather, they would load but only display an odd little placeholder1, identical for each file. I know a bit about JPEGs, and decided to try my hand at cracking this nut. The images that resulted were not particularly interesting – this was JPEG compression software from the early ‘90s, clearly targeted at industries that would be storing a lot of images2 and not home users. The trick to the files, however, was a fun discovery.


Americana

Americana was successfully funded on Kickstarter! Be sure to check out the Kickstarter campaign or the quick start rules.

A while back, I wrote lovingly of a sweet little tabletop RPG (TTRPG) called Mirror. Currently, I am in the middle of a campaign of an upcoming (to Kickstarter, October 1) TTRPG by the same author (a personal friend, it’s worth noting1), entitled Americana. I have no real desire to discuss the nitty-gritty mechanics of, say, where the dice go and how to use them, but as far as my experience is concerned this all works well. I don’t mean to be dismissive of the gears that make the clock tick – all the little details are incredibly important and difficult to make work. I just don’t think that writing about them is particularly expressive, and Americana has a lot of implementation facets that really make for a compelling experience. These experiential details are what I’d prefer to discuss.


Solo play: Cardventures: Stowaway 52

When I first wrote the ‘Solo play’ series, they were basically the top five solo board/card games that I was playing at the time, in order of preference. Adding to this series at this point is just adding more solo games that I love; the order isn’t particularly meaningful anymore.

Beyond nostalgia, I’ve enjoyed a lot of the modern takes on the Choose Your Own… errm… Narrative style of book. Recently, my fellow commuters and I have been laughing and stumbling our way through Ryan North’s 400-page Romeo and/or Juliet, which I highly recommend. There are great independent works up at chooseyourstory.com. It’s an art form that’s alive and well, and has grown beyond the exclusive realm of children. Does a book that you read out of order, and often fail to bring to a happy conclusion count as a game? Does it warrant a post in my ‘Solo play’ series?

Cardventures: Stowaway 52 by Gamewright is a card-based version of the choosable narrative. The premise is something along the lines of being stuck on an alien ship set to destroy Earth. The assumption is that you like Earth, and would therefore like to keep this plan from happening. My initial suspicion was that the thing should’ve just been a book, and that the card-based system was a cost-cutting measure or a gimmick. I was pleasantly surprised to find that I was quite wrong about this.


Sea Duel

HOW TO PLAY THE GAME:

  1. Slide ON/OFF switch to “ON” position. Listen to a few bars of the song “Anchors Away [sic]” and see a computer graphic of the American flag appear on the screen.

Thus begins the instruction booklet for the Microvision game, Sea Duel. A few days back, I wrote about the Microvision, and reviewed the handful of games I had at the time. I figured I’d acquire the handful of remaining games, and in several months or whenever, I’d sum them all up in one more post. But then Sea Duel came in the mail. This game is such a prime example of depth in a limited system that I feel compelled to discuss it on its own. Putting aside the hilarity of describing listening to a song and looking at a flag as one of the steps you must take to start playing, it highlights one of the immediate standout features of this game – despite having 256 pixels, a piezo buzzer, and ridiculously limited processing power and storage space, the game actually has an intro screen that shows something resembling an American flag and plays something that resembles “Anchors Aweigh”.


Oh-so-many colors

The app and website that I generally turn to for weather forecasts is Dark Sky. Recently they made some great changes to the app (like being able to save locations, bizarre that that took so long to implement and that you can still only save six). Alongside these other changes, they swapped out their old monochrome icons with new colorful ones from The Iconfactory. The icons are lovely, the artists who created them did a fantastic job. But when I see them all lined up on the screen, I get… something resembling anxiety.

I really freaked out a little bit when I first saw them, and I still find them very unsettling, and the whole thing made me reflect on my relationship with colorful things… I’ve always gravitated toward monochromatic photography; I spend as much of my computing time as possible in fairly monochromatic terminals; my blog looks like this; I had the same sort of disturbed feeling when Microsoft switched to color emoji (and still pine for the monochromatic ones); I miss laptops with monochromatic LCDs (and still play Game Boy DX games on the Game Boy Light); etc. Obviously this isn’t a universal issue in my life – I love a lot of colorful animation and other media1 – but even then… I definitely prefer muted palettes.

I’m not entirely sure why I felt the need to write about this, though if nothing else it’s something to mull over in design. I’m sure the muted palette of this blog is received negatively by some, just as very colorful things seem to cause my mind considerable unease. I think part of it is simply that more colors make for a busier presentation – it’s more visual data providing the same amount of information. And on that note, perhaps the constant bombardment of ‘eye-catching’ advertising – ubiquitous throughout the world, providing nothing but noise to compete with the signal of life – has taken its toll.


256 pixels

I’ve been restoring a Milton Bradley Microvision and am now happily at the point where I have a fully functional unit. Introduced in 1979, it’s known as the first portable game console with interchangeable cartridges. Anyone who has scoured eBay and yard sales for Game Boys knows that the monochrome LCDs of yore were fairly sensitive to heat and even just age. For a system ten years older than the Game Boy (and one that sold in far smaller numbers), functional units are fairly hard to come by. But for a while, I’ve been invested in patching one together, and I plan to enjoy it until it, too, gives up the ghost1.


Accessibility myths: The misguided war on merged cells

One of the stranger accessibility myths that I often run into is that merged cells in tables are to be avoided at all costs. This is entirely antithetical to semantic structuring of data and really points to a larger issue: often, folks who are doing and talking about accessibility have no concept of tabular structure, data relationships, and the importance of context. This goes both ways – often, folks that I receive documents from will have put multiple pieces of data in a single cell, either because they don’t know how to make the cell border invisible, or because they’re afraid to merge a cell that spans all the pieces of data.


Amplitude Modulation

I recently purchased a Sangean HDR-141 compact HD Radio receiver after the local station that broadcasts baseball decided to move their AM/MW2 station (and most of their FM stations) exclusively to digital HD Radio broadcasts. In their announcement, they established that the time was right now that 20% of their audience was equipped to listen. That’s… an astonishingly low percentage, especially given that the technology was approved as the U.S. digital radio broadcast format over fifteen years ago. I, myself, was able to find one acquaintance capable of receiving HD Radio (in their car), and this receiver only handled FM.

Adoption has seemingly been low in the other direction as well. Though the airwaves near me seem flooded with broadcasts, the only HD Radio content is coming from the aforementioned station. Part of this is almost certainly because the standard itself is patent-encumbered bullshit from iBiquity3 instead of an open standard. Transmitting requires not only the encoder, but licensing fees directly to iBiquity. The public-facing language is very vague on the HD Radio website, but receivers also need to license the tech and I imagine if this was free they’d make a point of it (and there’d be more than three portable HD Radio receivers on the market).


DuckDuckGo

A while back, I started testing two things to switch up my browsing habits (and partially free them from Google): I began using Firefox Quantum1, and I switched my default search provider to DuckDuckGo. I have been spending pretty much equal time with both Google and DuckDuckGo since (though, admittedly, I have many prior years of comfort with Google). This has been more than just a purposeless experiment. Google started out as a company that I liked that made a product that I liked. This slowly but surely morphed into a company that I was somewhat iffy about, but with several products that I liked. Nowadays, the company only increases in iffiness, but Google’s products are increasingly feeling bloated and clumsy. Meanwhile the once-laughable alternatives to said products have improved dramatically.

As far as results are concerned, Google (the search engine, from here on out) is still quite good. When it works, it’s pretty much unbeatable for result prioritization, that is, getting me the answer I’m seeking out with little-to-no poking around. But it’s not infrequent that I come across a query that simply doesn’t work – it’s too similar to a more common query, so Google thinks I must have wanted the common thing, or Google includes synonyms for query terms that completely throw off the results. The ads and sponsored results (aka different ads) are increasing to the point of distraction (particularly on mobile, where it can take multiple screens’ worth of scrolling to actually get to results). AMP content is prioritized, and AMP is a real thorn in the side of the open web (Kyle Schreiber sums up many of AMP’s problems succinctly). Finally, Google is obviously an advertising company, and we all know by now that everything we search for exists as a means to track us. This is not a huge complaint for me; it’s a known ‘price’ for the service. For as much as it leads to targeted advertising, it also helps tailor search results. Of course, this seems nice on the surface, but is a bit of a double-edged sword due to the filter bubble.

To be fair, some of these things are mitigated by using encrypted.google.com, but its behavior is seemingly undocumented and certainly nothing I would rely on2. This is where DuckDuckGo, which was designed from the ground up to avoid tracking, comes in. DuckDuckGo makes its money from ads, but these ads are based on the current search rather than anything persistent. They can also be turned off in settings. The settings panel also offers a lot of visual adjustments, many of which I’m sure are welcome for users with limited vision3. Anyway, my experiences thus far using DuckDuckGo as a serious contender to Google are probably best summed up as a list:

All in all, I have no qualms using DuckDuckGo as my primary search engine. I will not pretend that I do not occasionally need to revert to Google to get results on some of the weirder stuff that I’m trying to search for – although, as mentioned earlier, Google thinks it’s smarter than me and rewrites my obscure searches half the time anyway. DuckDuckGo isn’t entirely minimalist or anything, but its straightforward representation, its immediacy, and its clarity all remind me of how clean Google was when it first came to exist in a sea of Lycoses, AltaVistas, and Dogpiles. It returns decent results, and it’s honestly just far more pleasant to use than Google is these days.


HTTPS and categories

Meta-post time, as I’ve made a few site updates. Most notably, HTTPS works now. I wouldn’t say that Chrome 68 pushed me to finally do this, but hearing everyone talk about Chrome 68 was a good reminder that I was really running out of excuses. So, it’s only this site as of right now; I’ll get around to fenipulator, the archive, and a couple of other projects that aren’t actually tied to my name shortly. My hosting provider, NearlyFreeSpeech.NET, has a little shell script in place that makes setting up with Let’s Encrypt an entirely effortless affair, with full ACME tools available if necessary. I still need to edit my .htaccess to force the matter.
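
For the record, that bit of .htaccess should just be the usual mod_rewrite dance – assuming Apache with mod_rewrite available, something along these lines:

# Redirect any plain-HTTP request to its HTTPS equivalent
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]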

A while back I also did some category overhauls. There are still quite a few categories that only contain a single post, but that seems likely to change in the future. I got rid of any categories where I didn’t really see myself adding more. I do have a tag taxonomy in place, which I need to start making better use of, for more detailed keywords. I planned to use this (plus categories, plus titles) for a sort of half-baked keyword search implementation, which I may still do at some point. I also ‘fixed’ the problem of categories showing up out of order by just making them all lowercase for the time being. It’s ludicrous to me that Hugo has no case-insensitive sorting.


What pros?

When my Mac Pro recently slipped into a coma, I began thinking about what my next primary computer will be. Until this past week, Apple hadn’t updated any Macs in quite a while1, and the direction they’ve taken the Mac line continues to puzzle me. It all started (in my mind) with ubiquitous glossy screens, and has worked its way down to touchbars and near-zero travel keyboards. Last week’s update to (some) Macbook Pros is welcome, but underwhelming. Six cores and DDR4 is great, but that’s only in the large model. Meanwhile, if I wanted to suffer through a 15″ machine, HP’s ZBook 15 has either hexa-core Xeon or Core chips, inbuilt color calibration, a trackpoint, a keyboard that I feel safe assuming is superior to the MBP’s, and a user-upgradable design.

I remain consistently confused by what professionals Apple is targeting. As a creative user, I’d whole-heartedly prefer the aforementioned HP. Most illustrators I know rely on Surfaces or other Windows machines with inbuilt digitizers. I know plenty of professional coders on MBPs (and Apple seems to push this stance hard), but I don’t know why23 – that funky keyboard and lack of trackpoint don’t make for a good typist’s machine. The audio world makes sense, Logic is still a big deal and plenty of audio hardware targets the platform. But honestly, when I see people like John Gruber saying the updated MBP’s “are indisputably aimed at genuine ‘pro’ users”, I’m a bit baffled, as I simply can’t think of many professional use-cases for their hardware decisions as of late. They’re still extremely impressive machines, but they increasingly feel like high-end consumer devices rather than professional ones.


Another World

Is there a word for nostalgia, but bad? Kind of like how you can have a nightmare that is on one hand an objectively terrible experience, but on the other… fascinating, compelling even. When I was quite young, the household computer situation was a bit of a decentralized mess. I guess the Commodore 64 was the family computer, but it was essentially mine to learn 6510 ML and play Jumpman on. My sister had a Macintosh Quadra which I guess was largely for schoolwork, but it had a number of games on it that were positively unbelievable to my 8-bit trained eyes. Among these was the bane of my wee existence, Another World1.

I guess I’m about to give away a few spoilers, but they’re all from the first minute or so of punishment play. Another World begins with a cutscene where we learn that our protagonist is a physics professor named Lester who drives a Ferrari2. At this point, we realize we are dealing with a science fiction title. Lester starts doing some very professorly things on his computer, and then some lightning strikes his ARPANET wires or whatever and suddenly our protagonist is deep underwater! Some kind of sea monster grabs him, and… game over?! The cutscenes are rendered with the same beautifully polygonal rotoscoping as the rest of the game, so it’s entirely possible that you die several times watching this scene before grasping that you’re actually supposed to press buttons now.

This stressful memory came back hard upon recently purchasing a Switch and inexplicably making this year’s port of Another World my first purchase. Well, I guess it is explicable: ‘nostalgia, but bad.’ The frustrations of a game that will let you die if you simply do nothing within the first five seconds had not changed much from my childhood. This is a fundamental part of the experience; Another World is a game that wants you to die. It demands that you die. A lot. It’s a lovely game, and one that I’m sure a lot of folks remember (fondly or otherwise) from their Amigas and Macs, but I couldn’t help but think that this sort of trial-and-error experience really wouldn’t fly today if not for nostalgia3. Though I have to ask myself, how does this differ from, say, Limbo, another game that tricks you into death at every turn?

The next death in Another World is when little polygonal slug-looking things slip a claw into Lester’s leg, collapsing him. You have to kind of squish them just right, and it’s the first of many deadly puzzles that rely more on a very finicky sort of perfection than on a clever solution. Slightly further into the game, Lester faces a challenge that neatly sums up the whole problem: perfect positioning and perfect timing are required to dodge two screens worth of oddly-timed falling boulders. These moments are very reminiscent of the frustratingly exacting challenges in Dragon’s Lair, a point of inspiration for designer Éric Chahi4. I think this is where a modern take like Limbo feels less annoying in its murderous tendencies – you rarely die because you didn’t time something out to the nanosecond or position yourself on just the right pixel; you die because something crafty in the evil, evil environment outsmarted you.

This sort of thing seems to be a point of maturity for gaming in general. The aforementioned Jumpman was one of my favorite games back in the day, but it was painstakingly picky down to the pixel. Collision detection has eased up in modern times, and additional system resources give designers a lot more room to make challenges diverse and clever instead of simply difficult-by-any-means-necessary. Another World’s spiritual successor, Flashback5 definitely still had these moments, but by the time its 3D sequel, Fade to Black came out, things were much less picky.

I’m certain I beat both Flashback and Fade to Black, but I don’t think I ever had it in me to get through Another World. I guess this was part of why I jumped right on the Switch port. The game has won many battles, but I do intend to win the war. And the fact of the matter is that, for all my griping, it is still an incredibly enjoyable game. ‘Nostalgia, but bad’ certainly doesn’t mean that the game is bad; it means that the game forced all of my memories of it to be bad. The graphics have a unique quality about them6, and the sparse atmosphere feels very modern. The challenges are often interesting, even when they’re more technical than cerebral. It’s a game that I think is best experienced in short spurts, so as not to be consumed by the seemingly infinite tedium of frustrating deaths. It’s a product of its time, and must be treated as such. And while its demands certainly reveal its age, little else about it feels out of place on a portable console in 2018.


Part Time UFO

Somehow, I missed that HAL Laboratory (creators of the Kirby franchise) had broken into the mobile market earlier this year with the game Part Time UFO1. I tend to be oblivious to even these big mobile releases because I’m just generally not that into the mobile game scene2. Touch controls are limiting at best, and the market is saturated with free-to-play snares. If anybody is going to release a mobile gem, though, HAL is bound to, so I snatched this thing up as soon as I heard about it.

In Part Time UFO, you control a flying saucer (oddly reminiscent of UFO Kirby) with a claw-game-esque grabber attached to it. Every level has a bunch of objects, and a place to put them. Some of the objects are mandatory, others might net you extra points or help you meet a bonus goal. The primary goal is usually straightforward – put all of the important objects on the target, get five objects on the target, get the objects to fit a particular shape on the target, etc. Each stage additionally has three bonus goals. One is usually a timer, and the other two either involve stacking things perfectly, not dropping things, stacking more things than required, etc. The real trick comes from the fact that the target area is small, so you pretty much have to stack things. The physics of swinging something four times your size from a flaccid claw make this stacking less than simple.

The levels are adorably-themed, and the themes tend to influence the overall challenge. For instance, my least favorite are the ‘Lab’ levels, which require you to fit Tetris-like blocks into a precise shape – which feels like a bit much going on all at once. But this adds a nice bit of variety; I think there will be some themes that a given person really looks forward to unlocking more of, and some that are less captivating (though still enjoyable).

Points equate to money, and money can be used to buy new outfits for the UFO. Aside from being cute (and occasionally referential to other HAL properties – Kirby’s parasol comes to mind), these affect the control of the UFO in various ways. Certain challenges benefit more from some outfits than others, but generally it seems like you can pop one on that gives you a boost in control that makes you more comfortable, and just leave it. I made the mistake of buying a speedy outfit first, and became very quickly frustrated with the game.

Make no mistake, the game can be frustrating. But never to the point where it feels insurmountable or stops being fun. Part of it is probably just how charming and sweet the whole thing is. The challenges are goofy (stacking cheerleaders, balancing hamsters on a circus elephant, and of course placing cows onto a truck), and even when successfully completed, the end result is often uproarious. This is one thing I wish they had included – some kind of gallery feature of all your wacky stacks.

I haven’t completed the game yet, so I’m not sure how many levels there are. I definitely think it’s worth $43 – it’s just so joyful, well-polished, and fun – everything I expect from HAL. I do think the default controls – a fake analog stick and button type deal – are awful. That control scheme is bad enough for games in landscape orientation, but even with my tiny hands and Plus-sized phone, I could not figure out how to hold my phone so it would work. Fortunately there’s a one-handed control that’s a little bit awkward, but still streets ahead of the faux stick.


Kakoune

I’m not writing this post in vim, which is really a rather odd concept for me. I’ve written quite a bit about vim in the past; it has been my most faithful writing companion for many years now. Part of the reason is its portability and POSIX inclusion – it (or its predecessor, vi) is likely already on a given system I’m using, and if it isn’t, I can get it there easily enough. But just as important is the fact that it’s a modal editor, where text manipulation is handled via its own grammar and not a collection of finger-twisting chords. There aren’t really many other modal editors out there, likely because of that first point – if you’re going to put the effort into learning such a thing, you may as well learn the one that’s on every system (and the one with thousands of user-created scripts, and the one where essentially any question imaginable is just a Google away…). So, I was a bit surprised when I learned about Kakoune, a modal editor that simply isn’t vim1.

Now, I’ve actually written a couple of recent posts in Kakoune so that I could get a decent feel for it, but I have no intention of leaving vim. I don’t know that I would recommend people learn it over vim, for the reasons mentioned in the previous paragraph. Though if those things were inconsequential to a potential user, Kakoune has some very interesting design ideas that I think would be more approachable to a new user. Heck, it even has a Clippy:

~                                                          ╭──╮   ╭───┤nop├────╮
~                                                          │  │   │ do nothing │
~                                                          @  @  ╭╰────────────╯
~                                                          ││ ││ │
~                                                          ││ ││ ╯
~                                                          │╰─╯│
~                                                          ╰───╯
nop          unset-option                                                      █
:nop            content/post/2018-06/kakoune.md 17:1 [+] prompt - client0@[2968]

Here are a few of my takeaways:

I guess there are far more negative points in that list than positives, but the truth is that the positives are really positive. Kakoune has done an incredible job of changing vim paradigms in ways that actually make a lot of sense. It’s a more modern, accessible, streamlined approach to modal editing. Streamlining even justifies several of my complaints – certainly the lack of a file browser, and probably the lack of splitting fall squarely under the Unix philosophy of Do One Thing and Do It Well. I’m going to continue to try to grok Kakoune a bit better, because even in my vim-centric world, I can envision situations where the more direct (yet still modal) interaction model of Kakoune would be incredibly beneficial to my efficiency.


Solo play: Coffee Roaster

When I first wrote the ‘Solo play’ series, they were basically the top five solo board/card games that I was playing at the time, in order of preference. Adding to this series at this point is just adding more solo games that I love; the order isn’t particularly meaningful anymore.

Solo board games don’t seem to get a lot of distribution. Deep Space D-6 is still rather tricky to come by, SOS Titanic sells for triple digits on eBay, and it’s only recently that I managed to acquire a copy of Saashi and Saashi’s highly-regarded single-player bag-builder, Coffee Roaster. The game is accurately described by its title: you are roasting a batch of coffee beans over the course of however many turns you think you need, and then tasting the result to see how closely your roast matched the target.

Coffee Roaster is essentially played by pulling a handful of tokens out of a bag, potentially using some of them for some immediate and/or future benefits, increasing the roast level of any of the bean tokens that were pulled out, and then returning them to the bag. This is wonderfully thematic – the longer you take, the darker the overall roast becomes. Adding to this thematic element, useless moisture tokens evaporate (are pulled from the game) over time, before first and second crack phases occur yielding a more significant increase in roast level as well as adding harmful smoke tokens to the bag. The game is definitely on a timer, and while the effect-yielding flavor tokens allow you to play with time a bit by adjusting the roast, ultimately you need to be mindful of how dark your beans have gotten before you stop the roast and move on to the tasting (scoring) phase.

Scoring involves pulling tokens from the bag and placing them in a cup (which holds ten tokens) or on a tray (which holds either three or five, depending on whether or not you picked up the extra tray). You can stop at any time, but a major penalty is incurred for failing to fill the cup up to ten tokens. Whatever roast you’ve chosen has a target roast level, as well as flavor profile requirements. Again, all thematic to the point where my coffee-loving self was giddy over the little details.

The game has quite a few rules to get through; you absolutely want to read the rules start-to-finish before diving in. It can be a little bit easy to forget to do this or that, but for the most part the theme and artwork help guide you once you’re comfortable with the rules. There is one serious exception to this, however, and that relates to the aforementioned flavor profile tokens. Aside from leaving them in the bag to be used for scoring, these can be pulled out and played in order to achieve certain effects. As an example, I mentioned the extra tray, which you gain by sacrificing two flavor effect tokens while roasting. However, any time you give up a token in this way, there is an additional effect that controls the roast and must immediately be performed. One of the tokens turns (say) a single level two bean into two level one beans, one of them preserves the level of two beans, and the third turns (say) two level two beans into a single level four bean. The problem is that there’s no indication of this on the board or the player aid. No indication that the effect must be performed, nor which effect goes with which token. It is really easy to forget to do this, and even if you remember, you probably need that page of the rulebook open to remind you which does what. This is my biggest complaint about the game, and I’ll be making myself an improved player aid to remedy it.

I really do love Coffee Roaster, though I haven’t gotten particularly good at it yet. Fortunately, once I do, there are a ton of ways to control the difficulty: several levels of difficulty in beans, a three-round vs. single-round variant, and an on-board mechanism for tracking the roast that can be eschewed. There’s a lot of room to grow into this game, and I fully intend to do that.


Revisiting my Linux box

My Mac Pro gave up the ghost last week, so while I wait for that thing to be repaired, I’ve been spending more time on my Lenovo X220 running Ubuntu. While I do use it for writing fairly often, that doesn’t even require me to start X. Using it a bit more full-time essentially means firing up a web browser alongside whatever else I’m doing, which has led to some additional mucking around. For starters, I went ahead and updated the system to 16.04, which (touch wood) went very smoothly as has every Linux upgrade I’ve performed in the past couple of years. This used to be a terrifying prospect.

Updating things meant that the package list in apt also got refreshed, and I was a wee bit shocked to find that Hugo, the platform I use to generate this very blog, was horribly out of date. Onward to their website, and they recommend installing via Snapcraft, which feels like a completely inexplicable reinventing of the package management wheel1. Snapcraft is supposedly installed with Ubuntu 16.04, but not on a minimal system apparently, so I went and did that myself. Of course it has its own bin/ to track down and add to the ol’ $PATH, but whatever – Hugo was up to date. I think I sudoed a bit recklessly at one point, since some stuff ended up owned by root that shouldn’t have been, but that was an easy enough fix.

I run uzbl as a minimalist web browser, and have Chromium installed for something a bit more full-featured. I decided to install Firefox, since it is a far less miserable browser than it used to be, and its keyboard navigation is far better than Chromium’s. Firefox runs well, and definitely fits better into my keyboard-focused setup, but there is one snag: PulseAudio. At some point, the Firefox team decided not to support ALSA directly, and it now relies on PulseAudio exclusively for audio. I can see small projects using PulseAudio as a crutch, but for a major product like Firefox it just feels lazy. PulseAudio is too heavy and battery-hungry, and I will not install it, so for the time being I’m just not watching videos and the like in Firefox. I did stumble upon the apulse project, but so far haven’t had luck with it.

I use i3 as my window manager, and I love it so much – when I’m not using this laptop as a regular machine, I forget how wonderful tiling window managers are. When I move to my cluttered Windows workspace at the office, I miss i3. Of course, I tend to have far more tasks to manage at work, but there’s just something to be said for the minimalist, keyboard-centric approach.

I had some issues with uxterm reporting $TERM as xterm and not xterm-256color, which I sorted out. A nice reminder that fiddling with .Xresources is a colossal pain. I’m used to mounting and unmounting things on darwin, and it took me a while to remember that udisksctl was the utility I was looking for. Either I hadn’t hopped on wireless since upgrading my router2, or the Ubuntu upgrade wiped out some settings, but I had to reconnect. wicd-curses is really kind of an ideal manager for wireless, no regrets in having opted for that path. I never got around to getting bluetooth set up, and a cursory glance suggests that there isn’t a curses-based solution out there. What else… oh, SDL is still a miserable exercise.

All in all, this setup still suits a certain subset of my needs very well. Linux seems to be getting less fiddly over time, though I still can’t imagine that the ‘year of desktop Linux’ is any closer to the horizon. I wouldn’t mind living in this environment, though I would still need software that’s only available on Mac/Win (like CC), and the idea of my main computer being a dual-boot that largely keeps me stuck in Windows is a bit of a downer. Perhaps my next experiment will be virtualization under this minimal install.


Accessibility myths: The deceitful panacea of alt text

One of my favorite1 accessibility myths is this pervasive idea that alternate text is some kind of accessibility panacea. I get it – it’s theoretically2 a thing that content creators of any skill level can do to make their content more accessible. Because of these things (and because it is technically a required attribute on <img> tags in HTML), it seems to be one of the first things people learn about accessibility. For the uninitiated, alternate text (from here on out, alt text) is metadata attached to an image that assistive tech (such as a screen reader) will use to present a description of an image (since we don’t all have neural network coprocessors to do deep machine-learning and describe images for us).

This is all very good, if we have a raster-based image with no other information to work with. The problem is, we should almost never have that image to begin with. Very few accessibility problems are actually solved with alt text. For starters, raster images have a fixed resolution. And when users with limited vision (but not so limited as to warrant use of a screen reader) attempt to zoom in on these as they are wont to do, that ability is limited. Best case scenario, the image is at print resolution, 300dpi. This affords maybe a 300% zoom, and even then there may be artifacting. Another common pitfall is that images (particularly of charts and the like) are often used as a crutch when a user can’t figure out a clean way to present their information. Often this means color is used as a means of communicating information (explicitly prohibited by §508), or it means that the information is such a jumble that users with learning disabilities are going to have incredible difficulty navigating it.


decolletage.vim

The ‘screenshots’ in this post are just styled code blocks. There are likely some weird visual artifacts (like background colors not extending the whole width of the block), but the point is to show off the colors.

I’ve been using a hastily-thrown-together color scheme for vim, cleverly named ‘bcustom.vim’ for years now. It’s a dark scheme, peppered heavily with syntax highlighting. While slightly softer than most, it’s still a pretty typical, masculine scheme. I recently realized two things – I would like to use a more feminine, light scheme based on my general sense of pinkness1, and I actually find myself a lot more distracted by extensive syntax highlighting than I find myself aided by it. So I decided to start from the ground up, and build a minimalist, light pink colorscheme, ‘decolletage.vim’.

Again, part of the design decision was to keep the total number of colors used to a minimum. So, to start, here’s the basic scheme. You can see the line numbers, the basic scheme, a comment, an uncapitalized word (‘colors’), a misspelled word (‘matchTypo’), a fold, a search result (‘cterm’), an error (‘#123456’), a visual selection (‘Decolletage’), and matched parentheses:

193
194
195 "Adjust things re: markdown. colors only matchTypo if decolletage loads
196 if g:colors_name=="decolletage"
197 +--  5 lines: hi! link markdownBlockQuote CursorColumn----------------------
198
199 hi markdownBlockQuote ctermfg=none ctermbg=#123456
200 call DecolletageDiff(1)

It… looks a lot like this blog, I know. That truly wasn’t how I set out to do things, it’s just my aesthetic. Let’s examine a little -- More --, like that right there, which is how the more/mode message lines appear. Or these status lines:

2:~/.vim/colors/decolletage.vim [RO] [vim][utf-8] 74,1 71%

2:~/.vim/colors/decolletage.vim [RO] [vim][utf-8] 74,1 71%

2:~/.vim/colors/decolletage.vim [RO] [vim][utf-8] 74,1 71%

…Active, inactive, and insert, in that order. Yes, it may be weird, but I like having a blunt, obvious indication of which mode I’m in. And I associate blue with insertion, so that’s my choice for insert. This was a feature of my hacked-together ‘bcustom.vim’ as well – it’s pretty nice to have.

There are two variants for diffs in decolletage.vim. One is more traditional, very obvious with highlighted backgrounds and the like; and the other is fittingly minimal. Here’s the standard version (you also get to see a split line here; it’s predictable) (oh, and non-printing characters):

1 if this { │ 1 if this {
2 that │ 2 that
3 → the other↲ ----------------------------------
4 print "goodbye" │ 3 print "goodbye"
5 → return true↲ │ 4 → return false↲
6 } │ 5 }

…and here’s the more jarring, less obviously-a-diff minimal version:

1 if this { │ 1 if this {
2 that │ 2 that
3 → the other↲ ---------------------------------
4 print "goodbye" │ 3 print "goodbye"
5 → return true↲ │ 4 → return false↲
6 } │ 5 }

I’m fully on board with the minimal version, but it doesn’t seem right to have as a default, so it isn’t. Add call DecolletageDiff(1) to your .vimrc to use it. Alternatively, you can choose it as a default, and call DecolletageDiff(0) for filetypes that seem to desire a more blatant diff.
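
If you want it flipped automatically by filetype, something along these lines in a .vimrc should do it (a sketch using the functions described above; the filetype lists are just examples):

"minimal diffs for prose, blatant diffs for code
autocmd FileType markdown,text call DecolletageDiff(1)
autocmd FileType vim,c,sh call DecolletageDiff(0)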

:set cursorline in decolletage.vim looks like this:

254
255 this is the line that the cursor is on _
256

I’m not a huge fan of cursorline, but I do see value in being able to quickly find the current line, so for a more subtle cursorline, we can call DecolletageNecklace(0):

254
255 this is the line that the cursor is on _
256

Finally, there is an option to actually add some syntax highlighting, via call DecolletageFreckles(1). It’s rudimentary so far, and based on the default colors that vim would use in a 16-color terminal.

317 Constant
318 Identifier
319 Statement
320 PreProc
321 Type
322 Special
323 Number
324 Boolean

…this probably needs tweaking, but it is there if you want it. And again, implementing it as a function call means you can pop it on and off at will as you’re flipping through a file. So, that should be adjusted, I’d like to add some color for netrw, and I need to implement it as GUI colors as well2. But, for the time being (and particularly for my own specific needs), decolletage.vim looks pretty good, and is available for preliminary testing here.


Examining 'my .vimrc'

I realized the other day that, as much as I’ve read through the vim documentation and sought out solutions to specific problems, I’m still constantly learning things about it almost accidentally as I stumble across how person x or y approached some specific task. It occurred to me that a lot of people post their .vimrc files online1, and flipping through a bunch of these could prove insightful. So I googled ‘my vimrc,’ I searched github, I poked around… a lot. It’s worth noting that some of my observations here are biased in that my vim use is primarily prose (generally in markdown), followed by HTML/CSS/JS, followed by recreational code. I don’t deal in major coding projects consisting of tens of thousands of SLOC for production systems. What works for me is almost certainly atypical.

Something that I’ve been meaning to write about is my aversion to things that make any given setup of mine less portable – and that includes things like keyboard mappings that simply give me muscle memory that only works on my configuration. I see a lot of this sort of stuff in the .vimrc files of others, and for the most part it’s just stuff where I’d rather take the efficiency hit but know how to do it in a portable way. For example, a lot of people map something to the ‘oh shit I forgot to sudo vim’ sequence, :w !sudo tee % > /dev/null. I fully understand how that sequence works, but to me it’s such an oddball use of tee that I feel like if I got too accustomed to not typing it, I might accidentally do something really weird on a system that isn’t my own2. Similarly, I see a lot of mappings like Ctrl-H to jump left a window instead of Ctrl-W h. This sort of thing saves you one keystroke, while completely demolishing one of the key points of using vim – that of context and modality. Ctrl-W means ‘get ready to do some stuff to some windows’, be it moving, resizing, closing, whatever. It’s not a ‘mode’, per se, but it fits vim’s modal model.
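
For reference, the ‘forgot to sudo’ mapping I typically see floating around looks something like this (a common community snippet, not something from my own .vimrc):

"write the current file via sudo when you forgot to open vim with it
cmap w!! w !sudo tee % > /dev/null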

I know there’s a huge community of vim plugin development, but I was still a little surprised to see so much reliance on plugins (and plugin managers3). There are a few plugins that I consider rather essential, like surround.vim, but again I largely try to do things the native way when possible, so I try not to extend vim too heavily.

I don’t strictly adhere to the aforementioned policy, particularly for things that I know I won’t forget how to do in a portable way (like autocd in the shell), or things that are purely conveniences (like mapping Ctrl-L such that it works in insert mode). One clever idea that I saw along these lines was remapping Enter to clear the search highlight before doing its thing. Which, I don’t think I’ll adopt, but it is a handy idea – those highlights can get a little distracting.
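
As I understand it, that idea looks roughly like this (a sketch of the pattern, not a snippet lifted from anyone’s file):

"in normal mode, clear search highlighting, then pass Enter along as usual
nnoremap <silent> <CR> :nohlsearch<CR><CR>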

I see a lot of mappings of j to gj which again just feels so un-vimlike. Up/down movements corresponding to screen lines instead of content lines is something that actually bugs me in other editors. Worse, this mapping makes counts tricky to use in a . or macro situation, which is particularly weird when a lot of the same people use :set relativenumber. Another common mapping is to do gv after > or <, so that you can repeatedly hit it and have the same visual block selected. But… the vim way would be to use a count instead of mashing the button four or five times.

People remap <Leader> to , a lot, which to me feels even more awkward than \. I’ve seen weird insert-mode mappings to jump back to normal mode, like jj, which is fair – Esc is kind of a ways away. But the real trick to that is twofold: first, remap your useless Caps Lock to Ctrl system-wide, and then train yourself to use Ctrl-[ instead of Esc.

Doug Black’s post about his .vimrc has two good pieces of advice: don’t add things to your .vimrc that you don’t understand, and don’t use the abbreviated forms of settings/commands4. I see a lot of files that don’t seem to conform to this rather basic advice. Things like hardcoding t_Co without performing any checks – at best it’s merely not portable, but it reads like ‘a thing that I did that solved a problem’ vs. a setting that the user actually understands.
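
For contrast, a checked version might look something like this (my own sketch of the idea; the particular condition is just an example):

"only claim 256 colors when the terminal looks likely to support them
if &term =~ "xterm" && !has("gui_running")
	set t_Co=256
endif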

I did have some positive takeaways from this little journey. While I don’t use macros much (I opt for :normal more often), I learned about :set lazyredraw which speeds up macro execution by waiting until the end to redraw the screen. I had somehow forgotten that vim supports encryption, and that it defaults to the laughable pkzip scheme, so :set cryptmethod=blowfish2 is making its way into my .vimrc. Someone added syntax for two spaces after a period, which is a smart idea – I would link that right to Error. It would be better (perhaps) to add that as a wrong/bad spell, but I think a highlight would work.
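
My guess at how that two-space rule would look (the group name here is made up; link it to whatever you prefer):

"flag two or more spaces after a sentence-ending period
syntax match DoubleSpaceAfterPeriod /\. \{2,}/
highlight link DoubleSpaceAfterPeriod Error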

Curious to me was the number of people who put things in their .vimrc files that are specific to filetypes (etc.). This is stuff that I generally relegate to .vim/ftplugin/ and .vim/ftdetect/. For instance, I have some folding rules in place for markdown files used in my blog. I add the filetype hugo with an ftdetect script, and then lay out those folding rules in .vim/ftplugin/hugo_folds.vim. I don’t know if my approach is better or worse – it definitely makes for a big pile of files. Is this more or less organized than just maintaining it in a tidy .vimrc? Something to think about.
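
To illustrate the split (the pattern and settings here are simplified stand-ins, not my actual rules):

"~/.vim/ftdetect/hugo.vim: tag blog posts with the hugo filetype
autocmd BufRead,BufNewFile */content/post/*.md setfiletype hugo

"~/.vim/ftplugin/hugo_folds.vim: filetype-specific settings live here, not in .vimrc
setlocal foldmethod=marker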

This adventure down the dotfile rabbit hole taught me more than anything, I suppose, about how other vim users twist vim’s arm to make it less vimlike. Interestingly, I ran into a couple of files that were updated over the years, and the users seemingly adapted to more vimmy ways. I suspect a lot of these things come of a sort of feedback loop – a vim beginner sees a .vimrc file online with map <C-K> <C-W>k and thinks ‘why not shave off a keystroke?!’ They end up posting their .vimrc file a year down the road when they feel they’ve perfected it, and another novice stumbles across it, thinking ‘why not shave off…’ Regardless, it’s pretty neat just how many .vimrc files are floating around there, given how customizable and extensible vim is. Even approaches that are totally opposite one’s own likely have something previously unknown or unthought of.


Reversing Markdown

Most writing that I do, I do in vim using Markdown. Either for applications that support it natively (like Hugo, which powers this blog), via pandoc, or directly into Word via Writage. Going from Markdown is never really a problem, but converting from pretty much any other format to Markdown is almost always frustrating.

The reason for this is baked into the format — the format is designed to be flexible. It’s designed to be human-readable, and therefore most structural elements can be reached via several paths. For example, italics can be reached either by _this_ or *this*, and bold is achieved via either __this__ or **this**1. This allows a variety of personal styles. Since _this_ is a universal fallback for communications that don’t afford italics2, that is how I always do italics. And, on the rare chance that I use bold, I do it **this way** to readily set it off from the italics formatting. Ultimately, there are four combinations here, though, and software that is rendering to Markdown has to make its own style choices.

There are other decisions. Markdown, for some ungodly reason, promotes hard-wrapping (which I loathe). Markdown supports two different sorts of headers: ATX-style, which I find aesthetically pleasing, and Setext-style. But again, a renderer has to either support a ton of options or make these decisions for you. Writage, for example, makes pretty much every decision opposite how I’d like it. Which is ok, but it means I spend a lot of time in vim reprocessing things.

I’ve been considering writing about this for months now, mostly to complain about Writage. But, this isn’t Writage’s fault. And I’d hesitate to call it a fault at all, it’s just a tradeoff that comes with a flexible markup language. I don’t think I would have made a lot of the decisions that Gruber made in establishing this format… But those decisions have led to it being a de facto standard for human-readable markup. Rich text would be worse off had this gone any other way.


Accessibility myths: The delusion of accessibility checkers

There is a delusion that I deal with, professionally, day in and day out: that nearly any piece of authoring software, be it Microsoft Word or Adobe Acrobat, has some inbuilt mechanism for assessing the accessibility of a document. Before I get into the details, let me just come out and say that if you are not already an accessibility professional, these tools cannot help you. I understand the motivation, but in my experience, these things do more harm than good. They allow unversed consumers to gain a false sense of understanding about the output of their product. That sounds incredibly condescending, but that’s honestly how it should work when you’re talking about fields that require extensive training.


A few of my favorite: Slide rules

I link to photos hosted by the International Slide Rule Museum, a really great resource. Unfortunately, they don’t set IDs on their pages for specific rules, but luckily I only discuss two brands: Pickett and Faber-Castell.

I love slide rules nearly as much as I love HP calculators, and much like HP calculators, I have a humble collection of slide rules that is largely complete. While I keep them around more as beautiful engineering artifacts than anything, I do actually use them as well. These are a few of my favorites, from both a conceptual standpoint and from actual use.

Pickett 115 Basic Math Rule:
This is, by far, the simplest rule that I own. It lacks the K1 scale that even the cheap, student 160-ES/T2 has. Aside from the L scale, it is functionally equivalent to a TI-108. But, to be fair, the TI-108 has two functions that nearly all slide rules lack: addition and subtraction. And, true to the name ‘Basic Math Rule,’ the Pickett 115 has two linear scales, X and Y, for doing addition and subtraction. Additionally, it has one scale-worth of Pickett’s ‘Decimal Keeper’ function, which aids the user in keeping track of how many decimal places their result has. All in all, it’s not a particularly impressive rule, but it is quite unique. Faber-Castell made a version of the Castell-Mentor 52/80 (unfortunately ISRM’s photo is not that version) with linear scales as well, and I probably prefer it in practice to the 115. The 115 just has a wonderful sort of pure simplicity about it that I appreciate, however.
Pickett N200-ES Trig:
This is basically the next step up from the aforementioned 160-ES/T. The 160-ES/T is a simplex with K, A, B, C, CI, D, and L scales. The N200-ES/T is a duplex model that adds trig functions with a single set of S and T scales, and an ST scale. It’s a wee little pocket thing, the same size as the 160-ES/T, and it’s made of aluminum as opposed to plastic. It’s nothing fancy, but it handles a very useful number of functions in a very small package. The N600-ES/T does even more, but it becomes a little cluttery compared to the N200-ES/T’s lower information density. Good for playing with numbers in bed.
Faber-Castell 2/83N Novo-Duplex:
The 2/83N is, in my opinion, the ultimate slide rule. It has 31 scales, conveniently organized, and with explanations on the right-hand side. Its braces have rubberized strips on them, and are thick enough that the rule can be used while sitting on a table. The ends of the slide extend out past the ends of the stator so it’s always easy to manipulate (I don’t have any Keuffel & Esser rules on this list, but they had a clever design that combatted this problem as well, with the braces being more L-shaped than C-shaped). The range of C (and therefore everything else, but this is the easiest way to explain) goes beyond 1-10, starting at around 0.85 and ending around 11.5. The plastic operates incredibly smoothly (granted, I bought mine NOS from Faber’s German store a few years ago, that had to have helped), and the whole thing is just beautiful. Truly the grail slide rule.
Faber-Castell 62/83N Novo-Duplex:
This feels like a complete cop-out, because it is essentially identical to the 2/83N, except smashed into half of the width. You lose the nice braces, you get a slightly less-fancy cursor, and you lose precision when you condense the same scale down to half-width. But you end up with something ridiculously dense in functionality for a small package. Even though it’s essentially the same rule as the 2/83N, I think it deserves its own place on this list.
Pickett 108-ES:
This was the piece I’d been looking for to essentially wrap up my collection. It is a circular, or dial, slide rule, and it is tiny – 8cm in diameter. It’s much harder to come by than the larger circular Picketts, particularly the older 101-C. Circular rules have some distinct advantages – notably their compact size (the 108-ES is the only rule I own that I would truly call pocketable, and it cradles nicely in the palm of my hand), and the infinite nature of a circular slide. The latter advantage means there’s no point in adding folded scales, nor is there ever a need to back up and start from the other end of the slide because your result is off the edge.
The 108-ES, by my understanding, was a fairly late model, manufactured in Japan. It is mostly plastic, and incredibly smooth to operate – moreso than non-circular Picketts that I’ve used. The obverse has L, CI, and C on the slide; D, A, and K on the stator. The reverse has no slide, and has D, TS, three scales of T, and two of S. I can’t help but hear “I’m the operator / with my pocket calculator” in my mind when I play with this thing. It really packs a lot of punch for something so diminutive. The larger 111-ES, of the same sort of manufacture, is also quite impressive with (among other things) the addition of log-log scales.

netrw and invalid certificates

Don’t trust invalid certificates. Only do this sort of workaround if you really know what you’re dealing with is okay.

Sometimes I just need to reference the source of an HTML or CSS file online without writing to it. If I need to do this while I’m editing something else in vim, my best course of action is to open a split in vim and do it there. Even if I’m not working on said thing in vim, that is the way that I’m most comfortable moving around in documents, so there’s still a good chance I want to open my source file there.

netrw, the default1 file explorer for vim, handles HTTP and HTTPS. By default, it does this using whichever of the following it finds first: elinks, links, curl, wget, or fetch. At work, we’re going through an HTTPS transition, and at least for the time being, the certificates are… not quite right. Not sure what the discrepancy is (it’s not my problem), but strict clients are wary. This includes curl and wget. When I went to view files via HTTPS in vim, I was presented with errors. This obviously wasn’t vim’s fault, but it took a bit of doing to figure out exactly how these elements interacted and how to modify the behavior of what is (at least originally) perceived as netrw.

When netrw opens up a remote connection, it essentially just opens up a temporary file, and runs a command that uses that temporary file as input or output depending on whether the command is a read or write operation. As previously mentioned, netrw looks for elinks, links, curl, wget, and fetch. My cygwin install has curl and wget, but none of the others. It also has lynx, which I’ll briefly discuss at the end. I don’t know if elinks or links can be set to ignore certificate issues, but I don’t believe so. curl and wget can, however.

We set this up in vim by modifying netrw_HTTP_cmd, keeping in mind that netrw is going to spit out a temporary file name to read in. So we can’t output to STDOUT, we need to end with a file destination. For curl, we can very simply use :let g:netrw_HTTP_cmd="curl -k". For wget, we need to specify output, tell it not to verify certs, and otherwise run quietly: :let g:netrw_HTTP_cmd="wget --no-check-certificate -q -O".
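
Since the right value depends on which downloader is actually installed, I’d be inclined to wrap it in a check, something like this (a sketch using the option as above):

"relax certificate checking only for whichever tool is present
if executable("curl")
	let g:netrw_HTTP_cmd = "curl -k"
elseif executable("wget")
	let g:netrw_HTTP_cmd = "wget --no-check-certificate -q -O"
endif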

I don’t have an environment handy with links or elinks, but glancing over the manpages leads me to believe this isn’t an option with either. It isn’t with lynx either, but in playing with it, I still think this is useful: for a system with lynx but not any of the default HTTP(s) handlers, netrw can use lynx via :let g:netrw_HTTP_cmd="lynx -source >". Also interesting is that lynx (and presumably links and elinks via different flags) can be used to pull parsed content into vim: :let g:netrw_HTTP_cmd="lynx -dump >".


Nancy (ca. 2018)

Nancy is an 80-year-old1 syndicated comic strip, both maligned and studied for its simplicity in both artistic style and humor. Originally by Ernie Bushmiller, the strip has been drawn by six different people. The sixth, as of last week, is the pseudonymous Olivia Jaimes, the first woman to be in command of Nancy.

That a strip predominantly featuring its eponymous female character hasn’t, in 80 years, been drawn by a woman is… Not terribly notable in this world, and I’m glad that that has changed. Somehow the bigger aspect of the shift seems to be the fact that (so far, at least) the new strip is really good. It’s modern, quirky, and real. It’s hard to take the original Bushmiller strips decades out of context2, but the most recent incarnation by Guy Gilchrist was, to me, awful even by syndicated strip standards. The Jaimes strip, so far, feels like a lightweight web comic almost, far exceeding the quality that I expect out of syndicated strips. I haven’t actually been excited by a newspaper strip in a long time, but this is seriously fresh.


FOSTA-SESTA

Content warning: mentions/links to data on sex trafficking, and murders of women & sex workers.

I know this site gets zero traffic, but regardless I regret that I didn’t take the energy to write about FOSTA-SESTA before FOSTA passed. FOSTA-SESTA is anti-sex-worker legislation posing as anti-trafficking legislation. It’s a bipartisan pile of shit, and the party split among the two dissenting votes in the FOSTA passage was also bipartisan. Since the passage of FOSTA, Craigslist has shut down all personals1, reddit has shut down a number of subreddits, and today Backpage was seized. I would implore anyone who gives a shit about sex workers and/or the open internet to follow Melissa Gira Grant on Twitter.

If you don’t support sex workers, frankly I don’t want you reading my blog. But if you’re here anyway, it’s worth pointing out that the absurdity laid out by FOSTA is a threat to the open web at large, which almost certainly explains why Facebook supported it. It’s not just sex workers who oppose this thing, NearlyFreeSpeech.net, the host I use for all of my sites, had a pointed and clear blog post outlining how frightening it is.

Obviously, it’s worth listening to sex workers on this matter, something nobody did. But it’s also worth listening to law enforcement, the folks who are actually trying to prevent trafficking. And, who would have guessed, law enforcement actually works with sites like Craigslist and Backpage to crack down on the truly villainous aspects of the sex trade. Idaho, just last month, for instance. Meanwhile, having outlets where sex workers can openly communicate and vet their clients saves their lives — when Craigslist opened its erotic services section, female homicide dropped by over 17 percent. That is to say, so many sex workers are routinely murdered that helping them vet clients significantly reduces the overall female homicide rate.

This whole thing is misguided and cruel2, and I don’t really know what to do about it at this point. But listening to people who are closely following the impacts is a start. It’s a death sentence for sex workers, and a death sentence for the open web, and anyone who cares about either needs to keep abreast of the impact as it unfolds.


Mirror

You should immediately follow this link to the single-page tabletop RPG system, Mirror. There you will find my review, which is likely a more cohesive version of this post. You will also find a couple of other reviews from friends who playtested the game alongside me, and you will find the official description, and you will find the words ‘Pay what you want,’ to which I say… it’s worth a decent wad of cash.

Mirror does two things very well. First, it exists as a single-page ‘accelerated’ tabletop RPG system. Second, it breaks the tabletop mold in a meaningful way. It does the latter by basing character generation on real-world friendship. The former is aided by this, but is additionally accomplished by a simple dice-pool mechanic that drives interactions and health.

The dice pool mechanic is straightforward and covered by the rules, and not entirely worth expounding upon. CharGen is far more interesting, and is based upon the real human physically sitting across from you. I entered this rather nervous, and ended up playing across from people who I trust1 implicitly, but honestly have a hard time distilling to their core essence. You see, you play as an abstracted version of the person you sit across from, and during CharGen, you isolate four of that person’s strong suits, and two of their weaknesses. Without being an utter piece of shit, of course. I opted to play my weaknesses as counterpoints to my strengths — where my friend was absurdly creative, that creativity made her ideas occasionally impractical.

My best friend in the whole world games with me, and I am very grateful that in playtesting Mirror, I was not sat opposite her. Not for fear of insulting her during CharGen, but simply because I actually think I had to soak in what I love about other players in said group. A lack of closeness (let’s call it) made me feel a lot closer to the friends I played as. I guess Mirror has a way of doing that — it’s like a forced empathy, but since these are people you want to empathize with, it just makes you love them more.

And, this is important in the game, and brings me back to the first point — this is a single-pager. There are expectations for these things — quick, and simple to broach. I, personally, love Fate Accelerated Edition (FAE) as a quick, accessible tabletop system. But even FAE has barriers to entry… CharGen can theoretically be as long as a campaign, and for a new player, there’s no guarantee that they’ll be invested. Something about playing as one of your fellow gamers has a strange way of making you invested. And CharGen is quick and straightforward as you are simply… describing your buddy.

In my review on the DriveThruRPG page, I describe the friendship element and the one-page/one-off element as being intimately intertwined, and that’s really the magic of Mirror, I think. To non-gamers, even a quick system like FAE can be intimidating. But Mirror allows you to build a world, build a scenario and give your players an inherent motivation and set of character attributes — these are both dependent upon someone they care about IRL.

Mirror terrified me at first. Because I’m timid, and I’m bad at breaking even the people I know the most intimately into their prime components. But there’s enough of a balance between abstraction and familiarity that the whole thing is just… really comfortable. This is probably a first: I’m going to smash a redundant link here: go check out Mirror, it’s… special.


Trying Twitterrific

[N]ot to worry, for the full Twitter experience on your Mac, visit Twitter on web.

I could not stop laughing in disgust when I read the email in which Twitter, a company known primarily for taking user experience and ruining it, announced that they were shuttering their Mac client. The idea that Twitter in a browser is in any way a palatable experience is horrifying, and the only explanation I can offer is that the entire Twitter UX team is comprised of unpaid interns.

As part of our ongoing effort to streamline our apps and provide a more consistent and up-to-date Twitter experience across platforms, we are no longer supporting the Twitter for Mac app.

To be fair, the official Mac app was horribly neglected, and just… a bad experience. It didn’t support the latest changes to the Twitter service (like 280 chars), it was a buggy mess when you tried to do simple things like scrolling, and it crashed at least once a week on me. It was a bad app, yet still infinitely more manageable than using a full-fledged web browser for something as miniature-by-design as Twitter. Enter Twitterrific.

The idea of paying a third party so that I can access a service so rampantly overrun by TERFs and nazis that I feel the need to maintain a private account never really made sense to me. But, unlike the other great UX nightmare, Facebook, I don’t hate the company and the service with every atom of my body. I guess I’m kind of a sucker for the shithole that is Twitter. So, I have paid for Twitterrific. And, it’s pretty good.

Twitter clients were once this sort of UI/UX playground, and while I don’t entirely think that’s a good thing, some genuinely positive user interaction experiences were born of it. Twitterrific (speaking only of the MacOS edition for this post) feels largely native, but still has enough of these playground interactions to frustrate me. The biggest one is that threads (etc.) don’t expand naturally; they pop out in little impermanent window doodads, and if you want to ensure you don’t lose your place, you have to manually tear them off and turn them into windows.

There are some other little issues, like a lack of granular control over notification sounds, but all in all the thing is better than the official client has been for years. Mostly just in that it reliably updates, it knows how to scroll, and like any good MacOS app it does not freeze every other day. I’ve been using it since Twitter made their shitty announcement (mid-February), and it’s a solid product. I guess this post has been more rant than review, but the facts are simple: if you use a Mac and you use Twitter, your experience either has gone or will go to absolute shit. Unless you use a third-party Twitter client. And Twitterrific is a pretty good one.


Distant megaphones

I’m a big fan of cities. Whether I’m trying to settle in to sleep or just absorbing the ambience around me, I am an especially big fan of the sounds of cities. I’m not alone in this; Leonard Bernstein was inspired by the urban soundscape, Steve Reich composed New York Counterpoint and the even more blatant City Life, and the Konzerthausorchester Berlin paid homage to the sounds of Berlin with thirteen pieces that include such urban mundanities as the preparation of food and being snapped in a photo booth.

I live in an environment that could barely qualify as urban if you really squinted your ears at it. I do hear what I believe to be the world’s loudest street sweeper on a regular basis, but I’m more likely to hear the whistle of a factory or freight train than a bustling street performance. Working in DC, however, affords my ears a wonderful palette of sounds. There are a ton of police forces, all seemingly trying to outdo one another with their bizarre sirens. Bucket drummers abound, and for about a year I got to listen to the wonderful contrast provided by a stunning street harpist.

The District is also, by its inherently political nature, a hotbed for activism and protest1. Chanting forms its own unique rhythm, and the most confident and compelling protest emcees assert poetic lilts in their megaphone communiqués. And while I appreciate hearing the vocals of a revolution, there’s another magical sound that comes of this: that of distant megaphones.

Distant megaphones echo and blur. Distant megaphones are pronounced but inarticulate. Distant megaphones are at once familiar and alien. There’s almost an uncanniness about them, unquestionably human yet obscured and abstracted through the distortion of the machine, the reverberance of the city. The indecipherable lilt of the protest emcee now dances out of phase with itself.

This unwitting reduction of the voice of rebellion to little more than mechanized rhythmic moans is quite possibly my favorite of the city sounds. Unintelligible as it may be, there is a signal in the noise: We are here, and we need to be heard.


Dotfile highlights: .vimrc

I use zsh, and portability across Darwin, Ubuntu, Red Hat, cygwin, WSL, various gvims, etc. means I may have pasted something in that’s system-specific by accident.

New series time, I guess! I thought for the benefit of my future self, as well as anyone who might wander through these parts, there might be value in documenting some of the more interesting bits of my various dotfiles (or other config files). First up is .vimrc, and while I have plenty of important yet trivial things set in there (like set shell=zsh and mitigating a security risk with set modelines=0), I don’t intend to go into anything that’s that straightforward. But things like:

"uncomment this on a terminal that supports italic ctrl codes
"but doesn't have a termcap file that reports them
"set t_ZH=^[[3m
"set t_ZR=^[[23m

…are a bit more interesting. I do attempt to maintain fairly portable dotfiles, which means occasionally some of the more meaningful bits start their lives commented out.

Generally speaking, I leave word wrapping on, and I don’t hard wrap anything1. I genuinely do not understand the continuing practice of hard wrapping in 2018. Even notepad.exe soft wraps. I like my indicator to be an ellipsis, and I need to set some other things related to tab handling:

"wrap lines, wrap them at logical breaks, adjust the indicator
set wrap
if has("linebreak")
	set linebreak
	set showbreak=…\ \ 
	set breakindentopt=shift:1,sbr
endif

Note that there are two escaped spaces after the ellipsis in showbreak. I can easily see this trailing space because of set listchars=eol:↲,tab:→\ ,nbsp:·,trail:·,extends:…,precedes:…. I use a bent arrow in lieu of an actual LFCR symbol for the sake of portability. I use ellipses again for the ‘more stuff this way’ indicators on the rare occasions I turn wrapping off (set sidescroll=1 sidescrolloff=1 for basic unwrapped sanity). I use middots for both trailing and non-breaking spaces, either one shows me there’s something space-related happening. I also only set list if &t_Co==256, because that would get distracting quickly on a 16 color terminal.
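
Pieced together, that conditional bit looks roughly like this (a sketch reassembling the settings quoted above):

"only show invisibles on a 256-color terminal; on 16 colors they get distracting
if &t_Co==256
	set list
	set listchars=eol:↲,tab:→\ ,nbsp:·,trail:·,extends:…,precedes:…
endif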

Mouse handling isn’t necessarily a given:

if has("mouse") && (&ttymouse=="xterm" || &ttymouse=="xterm2")
	set mouse=a "all mouse reporting.
endif

I’m not entirely sure why I check for xterm/2. I would think it would be enough to check that it isn’t null. I may need to look into this. At any rate, the variable doesn’t exist if not compiled with +mouse, and compiling with +mouse obviously doesn’t guarantee the termcap is there, so two separate checks are necessary.

I like my cursors to be different in normal and insert modes, which doesn’t happen by default on cygwin/mintty. So,

"test for cygwin; not sure if we can test for mintty specifically
"set up block/i cursor
if has("win32unix")
	let &t_ti.="\e[1 q"
	let &t_SI.="\e[5 q"
	let &t_EI.="\e[1 q"
	let &t_te.="\e[0 q"
endif

Trivial, but very important to me:

"make ctrl-l & ctrl-z work in insert mode; these are crucial
imap <C-L> <C-O><C-L>
imap <C-Z> <C-O><C-Z>

I multitask w/ Unix job control constantly, and hugo server’s verbosity upon file write means I’m refreshing the display fairly often. Whacking Ctrl-O before Ctrl-L or Ctrl-Z is easy enough, but I do it so often that I’d prefer to simplify.

I have some stuff in for handling menus on the CLI, but I realize I basically never use it… So while it may be interesting, it’s probably not useful. Learning how to do things in vim the vim way is generally preferable. So, finally, here we have my status line:

if has("statusline")
	set laststatus=2
	set statusline=%{winnr()}%<:%f\ %h%m%r%=%y%{\"[\".(&fenc==\"\"?&enc:&fenc).((exists(\"+bomb\")\ &&\ &bomb)?\",B\":\"\").\"]\ \"}%k\ %-14.(%l,%c%V%)\ %P
endif

I don’t like my status line to be too fancy, or rely on anything nonstandard. But there are a few things here which are quite important to me. First, I start with the window number. This means when I have a bunch of splits, I can easily identify which I want to switch to with (say) 2Ctrl-W w. I forget what is shown by default, but toward my right side I show edited/not, detected filetype, file encoding, and the presence or absence of a BOM. Here’s a sample:

2<hfl.com/content/post/2018-02/vimrc.md [+][markdown][utf-8]  65,6           Bot

That’s about everything notable from my .vimrc. Obviously, I set my colorscheme, I set up some defaults for printing, I set a few system-dependent things, I set some things to pretty up folds. I set spell; display=lastline,uhex; syntax on; filetype on; undofile; backspace=indent,eol,start; confirm; timeoutlen=300. I would hesitantly recommend new users investigate Tim Pope’s sensible.vim, though I fundamentally disagree with some of his ideas on sensibility (incsearch?2 autoread? Madness).


Personal Log

For reasons that are not really relevant to this post, I am in search of a good solution for a personal log or journal type thing. Essentially, my goal is to be able to keep a record of certain events occurring, with a timestamp and brief explanation. Things like tags would be great, or fields for additional circumstances. Ideally, it’ll be portable and easily synced between several machines. Setting up an SQLite database would be a great solution, except that merge/sync issue sounds like a bit of a nightmare. jrnl looks really neat, but I haven’t been able to try it yet due to a borked homebrew on my home Mac1 and a lack of pip on my main Win machine. While syncing this could also be a chore, and there would be no mobile version, the interface itself is so straightforward that writing an entry could be done in any text editor, and then fed line-by-line into the command.

But I haven’t settled on anything yet, so I was just kind of maintaining a few separate text files using vim (Buffer Editor on iOS), and figuring I’d cat them together at some point. But I got to thinking about ways I can make my life easier (not having to manually insert entries at the right spot chronologically, etc.), and came to a few conclusions about ways to maintain a quick and dirty manual log file.

First thing was to standardize and always use a full date/timestamp. Originally I was doing something like:

2018-01-29:
    8.05a, sat on a chair.
    10.30p, ate a banana.

2018-01-30:
    [...]

…which is very hard to sort. So I decided to simply preface every single entry with an ISO-formatted stamp down to the minute, and then delimit the entry with a tab (2018-01-29T22:30 ate a banana.). As a matter of principle, I don’t like customizing my environment too much, or in ways that will lead to my forgetting how to do things without customization. I don’t, therefore, have many custom mappings in vim, but why not simplify adding the current date and/or time:

"insert time and date in insert mode
imap <Leader>d <C-R>=strftime('%F')<CR>
imap <Leader>t <C-R>=strftime('%R')<CR>
imap <Leader>dt <C-R>=strftime('%FT%R')<CR>

Leader-d for ISO-formatted date, Leader-t for ISO-formatted time (down to minutes), Leader-dt for both, separated by a ’T’. If everything is formatted the same, with lines beginning in ISO time formats, then every entry can readily be sorted. Sorting is simple in vim: :sort or, to strip duplicate lines, :sort u. I think that it’s more likely that the merge operation would happen outside of vim, in which case we’d use the sort command in the same way: cat log1.tsv log2.tsv >> log.tsv && sort -u -o log.tsv log.tsv. Sorting in place like this was a new discovery for me; I had always used temporary files before, but apparently if the output file to sort (specified by -o) is the same as the input file, it handles all the temporary file stuff on its own.

I think it’d be neat to establish a file format for this in vim. Sort upon opening (with optional uniqueness filtering). Automatically timestamp each newline. Perhaps have some settings to visually reformat the timestamp to something friendlier while retaining ISO in the background (using conceal?). The whole thing is admittedly very simple and straightforward, but it’s a process that seems worthwhile to think about and optimize. While most journaling solutions are much more complicated, I think there is a lot of value in a simple timestamped list of events for tracking certain behaviors, etc. I’m a bit surprised there isn’t more out there for it.
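
A rough sketch of where such an ftplugin could start (the filetype name and the mapping are hypothetical, just to give the idea some shape):

"~/.vim/ftplugin/tslog.vim: a hypothetical timestamped-log filetype plugin
"sort entries chronologically and drop exact duplicates when the file is opened
silent! %sort u
"start a new entry on its own line: ISO date/time stamp, then a tab
nnoremap <buffer> <Leader>n o<C-R>=strftime('%FT%R')<CR><Tab>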


The death of Miitomo

Well, damn. Come May 9, Nintendo is shuttering Miitomo. I don’t know that it was ever terribly popular – it was Nintendo’s earliest venture onto mobile, but it wasn’t really a game. There were some game-like elements, primarily throwing your body into a pachinko machine to win clothes, but ultimately it was a dollhouse. A game of dress-up.

Entertainment, in all forms and across all media, is often a tool for escape. Some wish to lose themselves in a setting, others as a passive bystander in a plot, still others seeing pieces of themselves in fictional characters. A dollhouse experience is largely concentrated on this third aspect – expressing yourself, consequence-free, as this blank canvas of a person. While certainly a valid means of escape for anyone, this seems especially valuable to trans folks and people questioning their gender identity. The answers and comments on in-game questions revealed a staggering number of trans Miitomo users. I don’t really know of another game of dress-up that will serve as a viable replacement to Miitomo, and this is heartbreaking.

The May 9 date will put Miitomo’s lifespan at just over two years. Unfortunately, the app is entirely dependent upon the service, and assets users have acquired will not be retained locally, etc. While it seems plausible that local copies could be downloaded so that users could still fire up the app and change into any number of outfits they had previously purchased1, this will not be the case2. This is not a matter of ‘no more updates’, this is ‘no more app’. And that’s… a fairly short lifespan, even for a niche non-game. This absolute dependence on hosted assets makes me wonder about some of Nintendo’s other mobile forays. When Super Mario Run stops being worth the upkeep, will there be no more updates, or will the game cease to function altogether? Nintendo is in a weird spot where a lot of their casual gaming market has been overtaken by mobile. Obviously they want to get in on that and reclaim some market, but they just haven’t proven that they quite ‘get it’ yet. Or perhaps rendering a game entirely ephemeral is meant to prove to us the value of a cartridge. I… doubt it.

On January 24, Nintendo stopped selling in-game coins and tickets3 for real-world money. Daily bonuses, which used to be a handful of coins or a single ticket, are now 2,000 coins and 5 tickets every day. That’s a lot of in-game purchasing power for the next few months, and I’m glad that Nintendo is saying ‘here, just go nuts and have fun while it lasts’. Better than making this announcement on May 1, and operating as usual (including in-app purchases) until then.

I am truly sad about this; Miitomo has been oddly important to me. There is a lot of sadness and anger in the answers to the public in-game question running until May 9, ‘What was your favorite outfit in Miitomo? Show it off when you answer!’ Users are elaborately staging Miifotos with dead-looking Miis stamped ‘DELETED’, Miis crying on their knees, demonic-looking Miis labeled ‘Nintendo’ standing over innocent-looking Miis labeled ‘Miitomo’ with table knives sticking out of them. Ouch. We have #savemiitomo, #longlivemiitomo, #justice4miitomo (bit extreme, that) hashtags popping up. Suffice it to say, there is a frustrated community. I’ll be the first to admit that it never would have had the prominence of a Super Mario Bros. or Animal Crossing game, but Miitomo has been very meaningful to a lot of people.


Firefox mobile

Well, I finally downgraded upgraded to iOS 11, which means trying out the mobile version of Firefox1 and revisiting the Firefox experience as a whole. While Quantum on the desktop did show effort from the UI team to modernize, my biggest takeaway is that both the mobile and desktop UIs still have a lot of catching up to do. I mentioned previously how the inferiority of Firefox’s URL bar might keep me on Chrome, and the reality is that this is not an outlier. Both the desktop and mobile UI teams seem to be grasping desperately at some outdated user paradigms, and the result is software that simply feels clumsy. While I have always been a proponent of adhering to OS widgets and behaviors as much as possible, this is only strengthened on mobile where certain interaction models feel inextricable from the platform.

All of this to bring me to my first and most serious complaint about Firefox Mobile: no pull-to-refresh. I believe this was a UI mechanism introduced by Twitter, but it’s so ingrained into the mobile experience at this point that I get extremely frustrated when it doesn’t work. This may seem petty, but to me it feels as broken as the URL bar on desktop.

A UI decision that I thought I would hate, but am actually fairly ambivalent on, is the placement of navigation buttons. Mobile Chrome puts the back button with the URL bar, hiding it during text entry, and hides stop/refresh in a hamburger menu (also by the URL bar). Firefox Mobile has an additional bar at the bottom with navigation buttons and a menu (much like mobile Safari). I don’t like this UI, it feels antiquated and wasteful, but I don’t hate it as much as I expected to. One thing that I do find grating is the menu in this bar. I have a very difficult time remembering what is in this menu vs. the menu in the URL bar. The answer often feels counterintuitive.

In my previous post about desktop Firefox, I was ecstatic about the ability to push links across devices, something I’ve long desired from Chrome. It worked well from desktop to desktop, and it works just as well on mobile. This is absolutely a killer feature for folks who use multiple devices. Far superior to syncing all tabs, or searching another device’s history. On the subject of sync, mobile Firefox has a reader mode with a save-for-later feature, but this doesn’t seem to integrate with Pocket (desktop Firefox’s solution), which makes for a broken sync experience.

Both Chrome and Firefox have QR code detection on iOS, and both are quick and reliable (much quicker and more reliable than the detection built into the iOS 11 camera app). Chrome pastes the text from a read QR code into the URL bar; Firefox navigates to the text contained in the code immediately. That’s a terrifyingly bad idea.

A few additional little things:

Finally, a few additional thoughts on desktop Firefox (Quantum), now that I’ve gotten a bit of additional use in:


Interpreting 69lang (a ;# dialect) in dc

PPCG user caird coinheringaahing came up with a language, ;#, and a code golf challenge to implement an interpreter thereof. The language has two commands: ; adds one to the accumulator, # mods the accumulator by 127, outputs its ASCII counterpart, and resets the accumulator. It is, quite clearly, a language that only exists for the sake of this challenge. I mostly do challenges that I can do in dc, which this challenge is not (dc simply cannot do anything with strings).

What I can do is establish a dialect of ;# wherein the commands are transliterated to digits1. I’ve opted for 6 and 9, because I am a mature adult. So, assuming the input is a gigantic number, the challenge is fairly trivial in dc: 0sa[10~rdZ1<P]dsPx[6=Ala127%P0sa]sR[la1+saq]sA[lRxz0<M]dsMx

0sa initializes our accumulator, a, to zero. Our first macro, [10~rdZ1<P]dsPx breaks a (presumably very large) number into a ton of single-digit entries on the stack. ~ yields quotient and remainder, which makes the task quite simple – we keep pushing 10 and applying ~, reversing the top of the stack, and checking the length of our number. Once it’s down to a single digit, our stack is populated with commands.

The main macro, [lRxz0<M]dsMx runs macro R, makes sure there are still commands left on the stack, and loops until that is no longer true. R, that is, [6=Ala127%P0sa]sR tests if our command is a 6 (nee ;), and runs A if so. A has a q command in it that exits a calling macro, which means everything else in R is essentially an else statement. So, if the command is a 9 (or, frankly, anything but a 6), it does the mod 127, Print ASCII, and reset a to zero stuff. All we have left is [la1+saq]sA, which is macro A, doing nothing but incrementing a.

66666666666666666666666666666666666666666666666666666666666666666666666696666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666696666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666669666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666966666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666696666666666666666666666666666666666666666666696666666666666666666666666666666696666666666666666666666666666666666666666666666666666666666666666666666666666666666666669666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666966666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666696666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666669666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666696666666666666666666666666666666669
0sa[10~rdZ1<P]dsPx[6=Ala127%P0sa]sR[la1+saq]sA[lRxz0<M]dsMx

Hello, World!

As Queen, I keep dying

This post might contain spoilers for the games Reigns and/or Reigns: Her Majesty.

Reigns was a game that really kind of blew my mind when it came out. I guess the idea was to sort of frame a narrative around Tinder-esque interactions, which I didn’t really grasp (Tinder seems like the polar opposite of how I wish to find a mate). To me it was just this story, played over a whole bunch of games (some of which you had to fail), each game potentially affecting future games, and all handled via this incredibly simple decision tree mechanic. For the most part, you have two decisions at any given time (swipe left or right, that’s the Tinder-y bit). It was an oddly engaging game.

Now, in Reigns, you played as a king. So if they were to make a sequel, it would only be fitting that you would play as a queen. This is Reigns: Her Majesty. I don’t really make a habit of reviewing mobile games1 on this blog, but Her Majesty is fucking phenomenal. I don’t know if Leigh Alexander was involved in the first game, but she definitely has a writing credit on this one, and it shows. Reigns was clever, but Her Majesty is ridiculously tight, smart, and progressive.

Part of my draw to the game is likely bias — you play as a woman, a woman who I deeply respect wrote the thing, and the entire game just oozes with femininity and feminism. This has always been a sticking point for me, I will become far more invested in a game where I can play as a woman vs. one where I’m stuck as a man. That’s not necessarily a knock on any given game (or unwarranted praise on any other given game), it’s just my bias. But, trying to look past that bias, this Queen’s world undeniably gives Her Majesty far more depth than its predecessor.

If you never played the first game, it’s worth briefly describing what swiping left or right accomplishes. For any given scenario, swiping either direction may raise or lower one or more of your piety, popular favor, might, or financial2 stats. If any given stat maxes out or reaches zero, you die. This is the same in Her Majesty, but there’s a much bigger struggle (at least, how I’ve played it) with the church. Part of this is that a major aspect of the plot involves astrology and the occult, and diving into that essentially requires you to defy the church. Part of it is that you’re constantly given the opportunity to flirt with all the other women in the game and I mean, how could you not!? Oh, and occasionally the Cardinal asks you to conceal your pendulous melons (or something), which… no, I dress how I want.

And this is why I think the feminine aspect really gives the game depth. Personally, I find it hard to play in a way that defies my feminist sensibilities (and, in fact, a random owl occasionally pops up to tell you how feminist you are or situate you in various fandoms3), but this is often detrimental to my score – you are, after all, ‘just’ the Queen, and in a sense must maintain your place. But beyond my personal hangups, this still adds a great depth to the game. Choices aren’t as clear-cut, and your level of control isn’t always what it seems. Layer the whole astrological woman magic icing on top, and it’s an even more impossibly complex swipe-left-or-right game than Reigns.

This complexity and my desire to be an empowered Queen means that I have been losing very quickly, very often. Which might be grating in a lesser game, but somehow losing Her Majesty usually feels pretty damned virtuous.


Animal Crossing: Pocket Camp

Animal Crossing: Pocket Camp has been available stateside for about a week now, and it is… strange. This post on ‘Every Game I’ve Finished’ (written by Mathew Kumar) mirrors a lot of my thoughts – I would recommend reading it before reading this. I haven’t really played a lot of Animal Crossing games before, and I tend to avoid free-to-play1 games. The aforementioned post is largely predicated on the fact that Pocket Camp doesn’t fully deliver on either experience. Which, I guess I wouldn’t really know, but something definitely feels odd about the game to me.

Early in his post, Kumar states that ‘[Pocket Camp] makes every single aspect of it an obvious transaction’, which is comically true. My socialist mind has a hard time seeing the game as anything but a vicious parody of capitalism. My rational mind, of course, knows this is not true because the sort of exploitative mundaneness that coats every aspect of the game is the norm in real life.

This becomes even more entertaining when you observe how players set prices in their Markets. For the uninitiated, when your character has a surplus of a thing, they can offer that thing for sale to other players. The default price is its base value, but you can adjust the sale price down a small amount or up a large amount. Eventually you’ll likely just max out your inventory and be forced to put things up for sale in this Market. More eventually, you’ll max out the Market and be forced to just throw stuff away without getting money for it. But in the meantime, people (strangers and friends) will see what you have to offer and be given the opportunity to buy it.

For the most part, if you need an item (I use the term ‘need’ loosely), it is common, and either hopping around or waiting a couple of hours will get you that item. So there should be no reason to charge a 1000% markup on a couple of apples. But (in my experience thus far) that is far more common than to see items being sold for the minimum (or even their nominal value). I don’t know if it’s just players latching on to the predatory nature of free-to-play games or what, and I’m really curious to know if it works. I’ve been listing things in small quantities (akin to what an animal requests) for the minimum price, and while I’ve sold quite a few items, most still go to waste – I can’t imagine anything selling at ridiculous markups.

So far this description of a capitalist hellscape has probably come off as though I feel negatively toward the game, which I really don’t. To return to Kumar, he leaves his post stating that he hasn’t given up on the game yet, but ‘like Miitomo, the first time I miss a day it’s all over.’ This comparison to Miitomo is apt, and a perfect segue into why I’m invested in this minor dystopia.

Miitomo (another Nintendo mobile thing) is really just a game where you… decorate a room and try on clothes. You answer questions and play some pachinko-esque minigames in order to win decorations and clothes, but it’s basically glorified dress-up. It seems like mostly young people playing it, but it’s also just a wonderful outlet for baby trans folks, people questioning gender, and any number of people seeking a little escape. I find Miitomo to be very valuable and underrated, and a lot of the joy Miitomo brings me is echoed by Pocket Camp.

While the underlying concept behind Pocket Camp is that you’re a black market butterfly dealer or whatever, there’s also a major ‘dollhouse’ component to it. You buy and receive cute clothes and change your outfits, which has no bearing on the game. You buy things to decorate your campsite, which (effectively2) has no bearing on the game. You can drop 10,000 bells on a purse that does nothing but sit in the dirt looking pretty. I guess it’s hypocritical to praise this meaningless materialism, but it’s a nice escape. A little world to mess around in and make your own.

I don’t know how long I’ll obsessively island-hop the world of Pocket Camp, but I think that (like Miitomo) once the novelty wears off, I’ll still pop in to play around with my little world when it occurs to me to do so. And the whole time, in my mind, it will remain a perfectly barbed satire on capitalism.


Firefox Quantum

There was once a time when the internet was just beginning to overcome its wild wild west nature, and sites were leaning toward HTML spec compliance in lieu of (or, more accurately, I suppose, in addition to) Internet Explorer’s way of doing things. Windows users in the know turned to Firefox; Mac users were okay sticking with Safari, but they were still few and far between. Firefox was like the saving grace of the browser world. It was known for leaking memory like a sieve, but it was still safer and more standards-compliant than IE. Time went on, and Chrome happened. Compared to Chrome, Firefox was slow, ugly, and lacking in convenience features; it had a lackluster search bar, and that damn memory leak never went away. Firefox largely became relegated to serious FOSS nerds and non-techies whose IT friends told them it was the only real browser a decade ago.

I occasionally installed/updated Firefox for the sake of testing, and these past few years it only got worse. The focus seemed to be goofy UI elements over performance. It got uglier, less pleasant to use, and more sluggish. I assumed it was destined to become relegated to Linux installs. It just… was not palatable. I honestly never expected to recommend Firefox again, and in fact when I did just that to a fellow IT type he assumed that I was drunk on cheap-ass rum.

Firefox 57 introduces a new, clean UI (Photon) and a new, incredibly quick rendering engine. I can’t tell if the rendering engine is just a new version of Gecko, or if the engine itself is called Quantum (the overall new iteration of the browser is known as Quantum), but I do know it’s very snappy. I’m not sure if it actually is, but it feels faster than Chrome on all but the lowest-end Windows and macOS machines that I’ve been testing it on. It still consumes more memory than other browsers I’ve pitted it against, and its sandboxing and multiprocess support are a work in progress. The UI looks more at home on Win 10 than macOS, but in either case it looks a hell of a lot better than the old UI, and it fades into the background well enough. On very low-end machines (like a Celeron N2840 2.16GHz 2GB Win 8 HP Stream), Firefox feels more sluggish than Chrome – and this sluggishness seems related to the UI rather than the rendering engine.

I’ve been using Quantum (in beta) for a while, alongside Chrome, and that’s really what I want to attempt to get at here. Both have capable UIs, excellent renderers, and excellent multi-device experiences. I don’t particularly like Safari’s UI, but even if I did the UX doesn’t live up to my needs simply because it’s vendor-dependent (while not platform-dependent, the only platforms are Apple’s), and I want to be able to sync things across my Windows, macOS, iOS, and Linux environments. Chrome historically had the most impressive multi-device experience, but I think Firefox has surpassed it – though both are functional. So it’s starting to come down to the small implementation details that really make a user experience pleasant.

As a keyboard user, Firefox wins. Firefox and Chrome1 both have keyboard cursor modes, where one can navigate a page entirely via cursor keys and a visible cursor. This is an accessibility win, but very inefficient compared to a pointing device. Firefox, however, has another good trick – ‘Search for text when you type’, previously known as Type Ahead Find (I think; I know it was grammatically mysterious like that). So long as the focus is on the body, and not a textbox, typing anything begins a search. Ctrl- or Cmd-G goes to the next hit, and Enter ‘clicks’ it. Prefacing the search with ' (an apostrophe) restricts it to links. It makes for an incredibly efficient navigation method. Chrome has some extensions that work similarly, but I never got on with them and I definitely prefer an inbuilt solution.

Chrome’s search/URL bar is way better2. It seems to automatically pick up new search agents, and they are automatically available when you start typing the respective URL. One hits tab to switch from URL entry to searching the respective site, and it works seamlessly and effortlessly. All custom search agents in Firefox, by contrast, must be set up in preferences. You don’t get a seamless switch from URL to search, but instead must set up search prefixes. So, on Chrome, I start typing ‘amazon.com’, and at any point in the process, I hit tab and start searching Amazon. With Firefox, I have to have set up a prefix like ‘am’, and remember to do a search like ‘am hello kitty mug’ to get the search results I want. It is not user-friendly, it is not seamless, and it just feels… ancient. Chrome’s method also allows for autocomplete/instant search for these providers, a feature Firefox only offers for your main search engine. It is actually far better to simply skip this feature in Firefox and use DuckDuckGo bangs instead. The horribly weak search box alone could drive me back to Chrome.

Chrome used to go back or forward (history-wise) if you overscrolled far enough left or right – much like how Chrome mobile works. This no longer seems to work on Chrome desktop, and it doesn’t work on Firefox either. I guess I’m grumpier at Google for teasing and taking away. I know it was a nearly-undiscoverable UI feature, and probably frustrated users who didn’t know why they were jumping around, but it freed up mouse buttons.

I don’t know how to feel about Pocket vs. Google’s ‘save for later’ type solution. Google’s only seems to come up on mobile. Pocket is a separate service, and without doing additional research, it’s unclear how Mozilla ties into it (they bought the service at some point). At least with Google you know you’re the product.

I have had basically no luck streaming on Firefox. Audio streams simply don’t start playing; YouTube and Hulu play for a few seconds and then blank and stop. I assume this will be fixed fairly quickly, but it’s bad right now.

Live Bookmarks are a thing that I think Safari used to do, too? Basically you can have an RSS feed turn into a bookmark folder, and it’s pretty handy. Firefox does this; Chrome has no inbuilt RSS capability. Firefox doesn’t register JSON Feed, though, which makes it a half-solution to me – and therefore, really, a non-solution. But, it’s a cool feature. I would love to see a more full-featured feed reader built in.

Firefox can push URLs to another device. This is something that I have long wished Chrome would do. Having shared history and being able to pull a URL from another device is nice, but if I’m at work and know I want to read something later, pushing it to my home computer is far superior.

I’ll need to revisit this once I test out Firefox on mobile (my iOS is too far out of date, and I’m not ready to make the leap to 11 yet). As far as the desktop experience is concerned, though, Quantum is a really, really good browser. I’m increasingly using it over Chrome. The UI leaves a bit to be desired, and the URL/search bar is terrible, but the snappiness and keyboard-friendliness are huge wins.


SVG d6

I’ve posted a few games-in-posts and other toys that involve rolls of dice, and my strategy is to use Unicode die-face symbols. I think, for the foreseeable future, this is how I will continue to handle such matters – it’s clean, compact, and rather portable. For whatever reason, I was wondering how best to achieve this in an SVG containing all of the pips, with the face selected via class and modified via CSS. So, below is an SVG die that contains seven pips, with its class set to .die1. If we set it to .die2, it hides the (0-indexed, left to right, top to bottom) pips 1, 2, 3, 4, and 5. If we set it to .die4, it hides pips 2, 3, and 4. This works for .die3, .die5, and .die6 too, of course. Since pips 0 and 6 will always be (in)visible together, as will pips 1 and 5, we can combine each pair into a single class – .pip06 and .pip15 – to simplify the .die classes that hide them.
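
To make that concrete, here’s a rough Python generator for such a die (the pip coordinates, class names, and markup details are mine, purely for illustration – the real thing would presumably be spat out by whatever JS runs the post):

CSS = """
.die1 .pip06, .die1 .pip15, .die1 .pip2, .die1 .pip4,
.die2 .pip15, .die2 .pip2, .die2 .pip3, .die2 .pip4,
.die3 .pip15, .die3 .pip2, .die3 .pip4,
.die4 .pip2, .die4 .pip3, .die4 .pip4,
.die5 .pip2, .die5 .pip4,
.die6 .pip3 { visibility: hidden; }
"""

# pip centres on a 100x100 viewBox, 0-indexed left to right, top to bottom;
# pips 0/6 and 1/5 share a class since they always show or hide together
PIPS = [(30, 30, "pip06"), (70, 30, "pip15"), (30, 50, "pip2"), (50, 50, "pip3"),
        (70, 50, "pip4"), (30, 70, "pip15"), (70, 70, "pip06")]

def die_svg(face):
    circles = "".join('<circle class="%s" cx="%d" cy="%d" r="9"/>' % (c, x, y)
                      for x, y, c in PIPS)
    return ('<svg class="die%d" viewBox="0 0 100 100"><style>%s</style>'
            '<rect width="100" height="100" rx="12" fill="none" stroke="currentColor"/>'
            '%s</svg>') % (face, CSS, circles)

print(die_svg(4))   # only the corner pips (0, 1, 5, and 6) stay visible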

Pros include the ability to customize dice (regular D6s and fudge dice, say, or simply multicolored pips), the potential to mix in other-sided dice, and likely superior accessibility. Cons are complexity and file-size (SVGs must be embedded into posts as SVG elements). The latter can be mitigated by generation of the SVGs from whatever JS would be running the show, but it’s still a bit clumsy. An interesting experiment, regardless of whether or not I ever use it.


Antiquine

A quine is a program that does nothing but output its own source code. In various programming challenges (notably, on PPCG) the word is often generalized to refer to those which are somehow related to this behavior – print the source code backward, input n and print n copies of the source, etc. An interesting challenge from 2013 floated to the top of PPCG a few weeks ago (I’ve been sitting on this post for a while), Print every character your program doesn’t have. While I don’t particularly feel the need to dive into everything I attempt on PPCG, this was a very interesting challenge in how seemingly trivial decisions that appear to shrink the code could very well end up growing it.

The premise was, for the printable ASCII set, print every character not present in your source code. In dc, the sort of baseline solution to beat is one which simply contains and comments out every single printable ASCII character, # !"$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ – 95 bytes. So that’s the thing to beat. Even if I couldn’t beat it, I’m not sure I would have submitted that – it’s just so boring1. My strategy was to keep a list of code points to exclude on the stack, walk a counter through the printable range, and print anything that didn’t match – building that exclusion list as cheaply as possible.

The program, then, starts by pushing a list of ASCII code points that we need to exclude onto the stack, which I’ll get to after the fact. The meat of the program is [l.dP]s;34s.[dl.!=;s,l.1+ds.127>/]ds/x. This defines two macros, ; and / and runs / (as main essentially). Originally I had thought I would name all of my registers (macros included) with characters otherwise already used in the code, so as to not add bytes to the exclusion list. But because I (probably) needed every digit between 0 and 9, my list was already going to be partially incrementally generated, and since I needed > and < which are just a few code points away from 9, it was actually beneficial to introduce the ‘missing’ characters into the code to factor more into the incremental generation process. Theoretically, I could comment out some of these characters, but then I’d have to add the comment character, # to the list as well.

The macro / is rather straightforward. It duplicates top of stack, compares it to the counter ., runs the printing macro ; if they don’t match, ‘drops’2 the top of stack (which has an extra value on it if ; ran), increments the counter ., and continues running as long as we’re still below 127. The macro ; does nothing save for printing the ASCII value of . and leaving some padding on the stack – since / always deletes an item from the stack, we want it to be unimportant unless it’s removing an item from the list of exclusions.

Our list of exclusions is 120d115 112 108 100 93 91 80 62[d1-d48<:]ds:x45 43. We’re printing from low to high, so we make sure that from top to bottom, the stack is also arranged low to high. Since we compare our current ASCII value to this list of exclusions every time, we need to make sure there’s always something on the stack to compare to, so our largest/last value is duplicated. There’s a little decrementing macro in there to automatically push every value from 62 to 48 – the digits 0-9 as well as the aforementioned nearby characters that I used for macro names, etc. I tried generating the list in a handful of different bases – either to reduce bytes with fewer digits or to reduce exclusions with fewer possible numerals. In every case, the code to switch bases plus the overhead of adding that command to the exclusions list made it longer than simply using decimal.

This was an interesting challenge, one that I didn’t readily think I could beat the straightforward solution to. Many of the things that worked in my favor were counterintuitive – like introducing more characters that I had to exclude.
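
If you ever want to sanity-check a submission to this challenge, a couple of lines of Python will do it – this is just my own scratch checker, not part of the golf:

def expected_output(source):
    # every printable ASCII character (codes 32-126) not present in the source
    return "".join(chr(c) for c in range(32, 127) if chr(c) not in source)

# an empty program should print the entire printable set...
print(expected_output(""))
# ...while the 95-byte baseline above, containing every character, prints nothing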


Firefox fixes (et cetera)

I’ve been testing out Firefox Quantum recently, which is a post for another day, but it made me realize one thing: this site right here barely functioned for anyone using Firefox – either Quantum or the old engine (Gecko? Is Quantum a replacement for Gecko or a version of it?). Frankly, it’s much stricter than I would have imagined, and assuming that something that functions fine in IE/Edge and Chrome/Safari would also function fine in Firefox was… not a safe assumption, apparently. Here are a few things that I’ve fixed over the past few days, some related to Firefox and others not.


QR codes from box-drawing characters


▛▀▀▌▟▐▘▛▀▀▌
▌▓▌▌▜▗▘▌▓▌▌
▌▀▘▌▌▗▌▌▀▘▌
▀▀▀▘▙▙▌▀▀▀▘
▓▟▖▀▐▌ ▓▙▀▖
▚▀▖▘▄▗▘▞▄▗▘
▘▝▘▘▓▙▚▄▚▖▌
▛▀▀▌▀▚▟▐▛▀
▌▓▌▌▖▟▝▜▙▐▖
▌▀▘▌▞▌▚▄▌ ▘
▀▀▀▘▘▝▘▝▝▝

I’ve been increasingly interested in QR codes as of late, for reasons that Glenn Fleishman articulated far better than I could. I also just find them rather fascinating as a format. There’s a lot of redundancy to account for errors and damage (wonderfully demonstrated here), and a handful of possible masks that overlay all of the data means that the exact same data will have myriad possible representations. I’ve also been curious as to the most efficient ways to store and present the data (GIFs and 1 bit/pixel PNGs done at the pixel-level and then scaled up seem pretty good1), and got to wondering if Unicode Block Elements (2580-259F) would work. As above, they seem to, albeit with the entire block scaled to 60% vertical height and the line-height condensed. Also, Hack (the monospace font I use on this site) seems to render 2588, FULL BLOCK, as a quarter-sized block centered in the space that it should be filling up all of. So I substituted 2593, DARK SHADE, which works. Also, the squareness and contiguousness of the thing seems to be crucial for recognition, far more so than the integrity of the actual data within.
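
For the curious, here’s roughly how the packing works – two modules per character in each direction, with a sixteen-entry lookup from quadrant bits to characters. This is my own sketch: the matrix can come from any QR encoder that hands back a 2-D list of 0s and 1s, and U+2593 stands in for FULL BLOCK as noted above.

BLOCKS = " ▗▖▄▝▐▞▟▘▚▌▙▀▜▛▓"   # indexed by the bitmask UL*8 + UR*4 + LL*2 + LR

def blockify(matrix):
    lines = []
    for y in range(0, len(matrix), 2):
        upper = matrix[y]
        lower = matrix[y + 1] if y + 1 < len(matrix) else [0] * len(upper)
        line = ""
        for x in range(0, len(upper), 2):
            ul = upper[x]
            ur = upper[x + 1] if x + 1 < len(upper) else 0
            ll = lower[x]
            lr = lower[x + 1] if x + 1 < len(lower) else 0
            line += BLOCKS[ul * 8 + ur * 4 + ll * 2 + lr]
        lines.append(line)
    return "\n".join(lines)

# the top-left corner of a finder pattern comes out as ▛▀
print(blockify([[1, 1, 1, 1], [1, 0, 0, 0]]))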

This was a truly pointless exercise, but hey, it’s a thing that can be done.


A billion points: an SVG bomb

SVGs, via the <use> tag, are capable of symbolic references. If I know I’m going to have ten identical trees in my image, I can simply create one tree with an id="tree" inside of an undrawn <defs> block, and then reference it ten times inside the image along the lines of <use xlink:href="#tree" x="50" y="50"/>.

A billion laughs is a bomb-style attack in which an XML document makes a symbolic reference to an element ten times, then references that symbol ten times in a new symbol, and again, and again, until a billion (10^9) of these elements are being created. It creates a tremendous amount of resource consumption from a few kilobytes of code. Will symbolic references in an SVG behave similarly?

I briefly searched for SVG bombs, and as expected mostly came up with clipart. I did find one Python script for generating SVG bombs, but it relied on the same XML strategy as the classic billion laughs attack1. The answer is that yes, in about 2.3kB we can make a billion points and one very grumpy web browser:

<svg version="1.2" baseProfile="tiny" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" xml:space="preserve">
<path id="a" d="M0,0"/>
<g id="b"><use xlink:href="#a"/><use xlink:href="#a"/><use xlink:href="#a"/><use xlink:href="#a"/><use xlink:href="#a"/><use xlink:href="#a"/><use xlink:href="#a"/><use xlink:href="#a"/><use xlink:href="#a"/><use xlink:href="#a"/></g>
<g id="c"><use xlink:href="#b"/><use xlink:href="#b"/><use xlink:href="#b"/><use xlink:href="#b"/><use xlink:href="#b"/><use xlink:href="#b"/><use xlink:href="#b"/><use xlink:href="#b"/><use xlink:href="#b"/><use xlink:href="#b"/></g>
<g id="d"><use xlink:href="#c"/><use xlink:href="#c"/><use xlink:href="#c"/><use xlink:href="#c"/><use xlink:href="#c"/><use xlink:href="#c"/><use xlink:href="#c"/><use xlink:href="#c"/><use xlink:href="#c"/><use xlink:href="#c"/></g>
<g id="e"><use xlink:href="#d"/><use xlink:href="#d"/><use xlink:href="#d"/><use xlink:href="#d"/><use xlink:href="#d"/><use xlink:href="#d"/><use xlink:href="#d"/><use xlink:href="#d"/><use xlink:href="#d"/><use xlink:href="#d"/></g>
<g id="f"><use xlink:href="#e"/><use xlink:href="#e"/><use xlink:href="#e"/><use xlink:href="#e"/><use xlink:href="#e"/><use xlink:href="#e"/><use xlink:href="#e"/><use xlink:href="#e"/><use xlink:href="#e"/><use xlink:href="#e"/></g>
<g id="g"><use xlink:href="#f"/><use xlink:href="#f"/><use xlink:href="#f"/><use xlink:href="#f"/><use xlink:href="#f"/><use xlink:href="#f"/><use xlink:href="#f"/><use xlink:href="#f"/><use xlink:href="#f"/><use xlink:href="#f"/></g>
<g id="h"><use xlink:href="#g"/><use xlink:href="#g"/><use xlink:href="#g"/><use xlink:href="#g"/><use xlink:href="#g"/><use xlink:href="#g"/><use xlink:href="#g"/><use xlink:href="#g"/><use xlink:href="#g"/><use xlink:href="#g"/></g>
<g id="i"><use xlink:href="#h"/><use xlink:href="#h"/><use xlink:href="#h"/><use xlink:href="#h"/><use xlink:href="#h"/><use xlink:href="#h"/><use xlink:href="#h"/><use xlink:href="#h"/><use xlink:href="#h"/><use xlink:href="#h"/></g>
<g id="j"><use xlink:href="#i"/><use xlink:href="#i"/><use xlink:href="#i"/><use xlink:href="#i"/><use xlink:href="#i"/><use xlink:href="#i"/><use xlink:href="#i"/><use xlink:href="#i"/><use xlink:href="#i"/><use xlink:href="#i"/></g>
</svg>

It works precisely the same way as a billion laughs: it creates one point, a, at 0,0; then it creates a group, b with ten instances of a; then group c with ten instances of b; and so on until we have 10^9 (+1, I suppose) instances of our point, a. I’m not entirely sure how a renderer handles ‘drawing’ a single point with no stroke, etc. (essentially a nonexistent object), but it is interesting to note that if we wrap the whole thing in a <defs> block (which would define the objects but not draw them), the bomb still works. Browsers respond a few different ways…


SVGs

For someone rooted in graphic design and illustration, I typically hate running across visuals on the internet. Aside from being numbed by ads, the fact of the matter is that a large percentage of the graphical presentation on the web is just bandwidth-stealing window dressing with little impact on the surrounding content. Part of my plan with this blog was to avoid graphics almost entirely, and yet over the past month or so, I have littered this space with a handful of SVGs. I think, for the most part, they have added meaningful visual aids to the surrounding content, but I still don’t want to make too much of a habit of it.

I’m far more comfortable with SVGs (or, vector graphics in general) because I find it easier to have them settle onto the page naturally without becoming jarring. I could obviously restrict the palette of a raster image to the palette of my site, and render a high resolution PNG with manageable file size, but scaling will still come into play, type may be mismatched… aside from being accessibility issues, these things have subtle effects on visual flow. I’m thankful that SVG has been adopted as well as it has, and that it’s relatively simple to write or manipulate by hand. Following is the process I go through to make my graphics as seamless as possible.

Generally speaking, the first step is going to be to get my graphic into Illustrator. Inside Illustrator, I have a palette corresponding to my site’s colors. Making CSS classes for primary, secondary, tertiary colors is on my to-do list, but I need to ensure nothing will break with a class defining both color and fill. Groups and layers (mostly) carry over when Illustrator renders out an SVG, so I make a point of going through the layer tree to organize content. Appearances applied to groups cascade down in the output process, so (as far as SVG output is concerned) there’s no point in, say, applying a fill to a group – each individual item will get that fill in the end anyway. I use Gentium for all of the type, as that is ideally how it will be rendered in the end, though it’s worth quickly checking how it all looks in Times New Roman as well.

Once I get things colored and grouped as I need them, I crop the artboard to the artwork boundaries. This directly affects the SVG viewbox, and unless I need extra whitespace for the sake of visually centering a graphic, I can rely instead on padding or the like for spacing.

Once in the SVG Save dialog, I ensure that ‘Type’ is set to ‘SVG’. I don’t want anything converted to an outline, because I want the type to visually fall back with the rest of my page. I never actually save an SVG file from Illustrator, I just go to ‘SVG Code…’ from the Save dialog, and copypaste it elsewhere for further massaging. This involves:

Illustrator seemingly outputs SVG with structural accuracy in mind, so that the file can be read back in for editing – often counterproductive for web use, which prioritizes small file size without sacrificing selection ordering or visual accuracy. To be fair, I just installed 2018 and haven’t tested its SVG waters yet, so we’ll see how Adobe handles (or manages to mess up) that.

Finally, it’s worth mentioning SVGO (and the web-based SVGOMG). Very customizable optimization, definitely more useful once one starts dealing with more intricate, complicated SVGs. I’m happy to optimize mine down by hand, and stop there – but I’m keeping them to a handful of kilobytes anyway.


Aztec diamonds: Shifted like tangrams

Well, I was right about one thing – there was a straightforward solution to this whole Aztec diamond problem. To be fair to myself, my original solution holds up – I neglected to add one somewhere in my equation (we’ll get to that later), an error that was insignificant at the start, and less significant as the Aztec numbers increased. To get to the reveal, it’s worth backing up to our series, OEIS: A046092 again. Somehow, in my haste, I kind of glossed over the fact that these are ‘4 times triangular numbers,’ a fact that became readily apparent to me when I was coming up with the diagrams for the last post. You see…

An Aztec diamond split into four triangles.

…our Aztec diamond is made up of four triangles; and it is in fact true that each of our Aztec numbers is 4× the corresponding triangle number. A funny thing about triangle numbers is that if you multiply them by eight and add one, they become perfect squares. This can be demonstrated visually:

Nine triangles of leg size three tile into a seven by seven square with one unit ‘missing’.

This visual only proves that it’s true for the triangle number, 6, but it is universally true and readily proven – this ‘Cool Math Stuff’ post shows it succinctly, and John Conway and Richard Guy discuss it in The Book of Numbers. Rehashing the proof here seems rather pointless. Fascinatingly, I did almost figure this out last time, with the extra unit hypotenuse theory.

So, we know that for our Aztec number, x, x/4 is a triangle number, and for this triangle number, y, 8y+1 is a perfect square. We know that any given side of this square is made up of 2× the length of a side of triangle y plus one, which is ultimately the value that we need to recreate our square from our grid. We can see this rather clearly in the first diagram with the triangle highlighted – the three dots forming the outer side correspond directly to segments of our grid.

Thus, given a triangle number, y, the length of any of its sides is (sqrt(8y+1)-1)/2. Which then leads us to the same thing for our Aztec number, x, (sqrt(2x+1)-1)/2. Now, to solve my problem, I actually need to add one to this. Given that we’re dealing with integers, this can be simplified to ceil(sqrt(2x+1)/2) – precisely what I originally came up with, aside from the +1.
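
A quick throwaway check of the corrected formula (the variable names here are mine):

from math import ceil, sqrt

for s in range(2, 100000):
    x = s**2 + s*(s - 2)                    # segments in an s-by-s square
    assert ceil(sqrt(2*x + 1)/2) == s       # the corrected reverse formula
print("holds for every square up to 99999x99999")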

So, my equation was wrong, but it provably works – the off-by-one error is clearly insignificant for a 3×3 square, and, given x, sqrt(x^2)-sqrt(x^2-1) converges toward zero:

Chart shows that the aforementioned equation quickly approaches zero.


Aztec diamonds: Testing the reversed Aztec numbers

My initial test of x=ceil(sqrt(2y)/2) – which, given a value y in A046092 (the number of line segments that remain when the grid formed internally at every unit of a square’s height and width is broken at every intersection), should recover the square’s size x – showed it to be valid for the first 1000 numbers in the sequence. I did this haphazardly in Excel, just to make sure it wasn’t going to fail quickly.

I have since scripted a test in python, which has thus far shown my method viable up to a 50907428697×50907428697 square. The script is called by passing it an argument containing the initial integer for the square’s size (default 2), and it loops infinitely until it either fails or is halted with SIGINT. Upon halting, it returns something like Last success: Square 39033633; azNum 3047249088424644; urNum 39033633.0, where Square is the height/width of the square being tested, azNum is the number of line segments (or squares in the equivalent Aztec diamond), and urNum is the calculation which (hopefully) is equal to Square. Revealing the last success in this way tells me where to start next time. The code:

import signal
import sys
from math import ceil,sqrt
def sigintHdl(signal,f):
    print "Last success: Square %s; azNum %s; urNum %s" % (tNum-1,azNum,urNum)
    sys.exit(0)
signal.signal(signal.SIGINT,sigintHdl)
tNum = 2 if len(sys.argv)<2 or int(sys.argv[1])<2 else int(sys.argv[1])
result=0
while result==0:
    azNum=tNum**2+tNum*(tNum-2)
    urNum=ceil(((2*azNum)**.5)/2)
    result=urNum-tNum
    tNum+=1
print "FAILURE: Square %s; azNum %s; urNum %s" % (tNum-1,azNum,urNum)

There’s another way of looking at this whole thing. If we consider an isosceles right triangle with hypotenuse h, we know the length of either of the legs is equal to sqrt(h^2/2). Interestingly enough, if we work with a hypotenuse of one unit larger (which should never exist as a halved Aztec diamond), h^2/2 is equal to our A046092 value +0.5.

Adding one unit to our central line yields seven units; squaring seven and then halving the result yields 24.5.

Ultimately, the problem seems to be one of dealing with a not-quite-proper triangle. It’s easy to imagine additional nodes that make the triangle more… triangular. Doing so leads to more funny math, but it all sort of, kind of makes sense. I guarantee there’s an off-the-shelf solution out there, and it’s likely quite straightforward and, in hindsight, obvious. But this sort of math isn’t necessarily my forté, so I’ll just fidget around until I come up with something conclusive. At this point, it’s all for fun – I have far more known-valid values than I could ever imagine needing. My little python test snippet will easily be reused for other things as well, so I’ll call that a win.


Aztec diamonds: How I came to learn of them

A little project I’m working on requires me to suss out the size of a square given the following: Imagine a square of width and height x units. The square is gridded by unit such that x^2 squares are visible inside it. How many of these squares’ perimeter lines exist inside the outer square? Or, put another way, if you then erase the border of the original square and break every line segment at the intersections, how many line segments do you have left?

It seems to work out to x^2+x*(x-2). For a 2×2 square, there are 4 segments left. 3×3 yields 12, 4×4 yields 24, 5×5 yields 40, 6×6 yields 60, and so on. I confirmed these by manually counting the segments, and everything seemed good. This wasn’t what I needed, however; I needed to be able to do it in reverse – given 24 segments, I needed to come up with my outer square’s width and height of 4. This was not an obvious solution.
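
For anyone who’d rather not take my manual counting on faith, a quick sketch (mine) that counts the interior grid lines directly agrees with the formula:

def inner_segments(x):
    # x-1 interior horizontal grid lines, each broken into x unit segments,
    # plus the same again for the vertical lines
    return (x - 1) * x + (x - 1) * x

print([inner_segments(x) for x in range(2, 7)])     # [4, 12, 24, 40, 60]
print([x**2 + x*(x - 2) for x in range(2, 7)])      # the formula above, same values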

I tried searching for things like ‘size of square from inner grid segments’, but I couldn’t really articulate the thing in a way that got me anywhere. Or, perhaps, nobody has ever really needed this version of this problem solved before (though I find that doubtful). I needed a new angle. I searched OEIS for my sequence, 4, 12, 24, 40, 60, and came up with A046092, ‘4 times triangular numbers’. Now, OEIS is great, but it has a way of presenting a lot of information very tersely, which can be overwhelming. So I googled A046092, and nearly every hit came back to one thing – Aztec diamonds.

Further searching revealed that Aztec diamonds are popular because of the Aztec diamond theorem and the Arctic Circle theorem, both related to domino tilings. This is all very fascinating, but unfortunately presented me with a dead end. Fortunately, I did also discover that Aztec Diamond Equestrian is a company that makes leggings with very functional looking pockets, so that was a win. But on the math side of things, I wasn’t coming up with much. I did, at least, realize that if I rotated my grid and treated each of those segments as the diagonal of a square, I was in fact dealing with an Aztec diamond. If nothing else, this allowed me to confirm that A046092 was the sequence I was dealing with and that my original equation held, which meant I could test any arbitrary case without manually counting.

I started noticing other patterns, too. The problem is complicated because it’s like a square, but it’s missing the corners. This is where the narrative begins to fall apart. All these thoughts of squares and right triangles, hypotenuses… I established a formula that I have tested against the first 1000 integers in A046092; they all pass. But I cannot come up with a proof that explains it. Furthermore, I am not confident that it actually holds up despite the first 1000 integers validating. The next step is scripting the formulae out to test well beyond 1000.

Now, for my project, I don’t need anywhere near 1000, so the equation is Good Enough. Hell, I could probably hardcode the sequence as far as I need it, and a lookup table would likely be faster than a bunch of math. But now I’m really curious. So for a square x by x units, and y as the xth integer in A046092, I’d love to prove that x=ceil(sqrt(2y)/2). I don’t know that I can, but I would really like to. Ultimately I know I’m looking at a hypotenuse, or an approximation of a hypotenuse, with the Pythagorean theorem being beautifully simple given the area of a square.

I guess, in the end, I have an equation that more than meets my needs. I learned about Aztec diamonds, and I figured out why this problem is not as simple as it originally seems. I also learned about some pretty bangin’ equestrian leggings. I’m going to keep at this, though, because it fascinates me, and I haven’t found a solution in the wild. I’ll report back, but before I do, there will likely be a game-in-a-post to explain why I was trying to solve it in the first place.


Golfing in Eukleides

Eukleides is decidedly not a golfing language, but when a geometry-related question came up on PPCG, I had to give it a shot. Eukleides can be quite rigid; for starters it is very strongly typed. Functions are declared by what they intend to return, so while set would be the shortest way to declare a function, it can’t really be exploited (unless returning a set is, in fact, desired). Speaking of sets, one thing that is potentially golfable is ‘casting’ a point to a set. Given a point, p, attempting a setwise operation on p will fail because a point does not automatically cast to a set (strict typing). p=set(p) will overwrite point p with a single-item set containing the point that was p. If, however, it is okay to have two copies of the point in the set, p=p.p is three bytes shorter.

If user input is required, the command number("prompt") reads a number from STDIN. The string for the prompt is required, though it can be empty (""). Thus, if more than four such inputs (or empty strings for other purposes) are required, it saves bytes to assign a variable with a one-letter name to an empty string.

Whitespace is generally unavoidable, but I did come to realize that boolean operators do not need to be preceded by whitespace if preceded by a number. So, if a==7or a==6 is perfectly valid. aor a, 7ora, 7 or6 are all invalid, however. This may be an interpreter bug, but for the time being it is an exploitable byte.

Finally, loci. Loci are akin to for-loops that automagically make sets. Unfortunately, they don’t seem to care much for referencing themselves mid-loop, which meant that I couldn’t exploit how short of a construction they are compared to manually creating a set in a for-loop.

This was a fun exercise, and just goes to show that if you poke around at any language enough, you’ll find various quirks that may prove useful given some ridiculous situation or another.


Sinclair Scientific Programmable

The Sinclair Scientific is one of my favorite calculators, though certainly not for its speed, accuracy, or feature set. In fact, in an era where full-featured scientific calculators can be had for under ten bucks, it’s a downright laughably bad machine. But it’s evidence of the ingenuity of Sinclair in their race to make tech accessible to those with slimmer wallets. The Scientific may well be a post for another day, but recently I fell into another ridiculously quirky Sinclair calculator, the Scientific Programmable. Its manual describes it as ‘the first mains/battery calculator in the world to offer a self-contained programming facility combined with true scientific functions at a price within the reach of the general public’.

As with the Scientific, that last bit is key – this machine was engineered to meet a price point. HPs of the day were engineered for speed and accuracy, and were beautiful, easily operated machines to boot. Sinclairs were affordable, period. To start investigating this thing’s quirks, let’s address the ‘true scientific functions’ that the calculator includes. Sine, cosine, arctan, log, and antilog. No arcsine, arccos, or tangent – instead the manual tells you how to derive them yourself using the included functions. Precisely what I expect from a Sinclair (though the aforementioned Scientific did include all standard trig functions).

The highlight (if you will) of this calculator is, of course, its ‘self-contained programming facility’, which is really what I’d like to discuss. While the terms are oft indistinguishable nowadays, I would really consider the Scientific Programmable’s functionality more of a macro recording system than anything resembling programming. There are no conditionals, there is no branching, and a program can only contain 24 keystrokes. The keyboard is shifted for program entry, and integers thus require two extra keystrokes as they are delimited. I say integers because that is all one can enter during program entry – if your program requires you to multiply by .125, you would need to calculate that with integer math first.

My go-to demo program is Viète’s formula for pi. It’s simple, requiring very little in the way of scientific functions, stack manipulation, memory registers, or instructions; yet it’s fun and rewarding. Unfortunately, I don’t actually think it’s possible on the Scientific Programmable, primarily due to the lack of a stack and the single memory register. I just need one more place to stick a value, and a stack would be ideal – it would contain the previous result ready to be multiplied by the next iteration.
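
For reference, here is the iteration I have in mind, sketched in Python (obviously not something the Sinclair can run – it just shows why one register isn’t enough; you need one slot for the nested radical and another for the running product):

from math import sqrt

radical, product = 0.0, 1.0
for _ in range(30):
    radical = sqrt(2 + radical)     # the ever-deepening nested radical
    product *= radical / 2          # the running product – the 'second register'
print(2 / product)                  # 3.141592653589793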

We could try pi the Leibniz (et al.) way, 1 – 1/3 + 1/5 – 1/7 + 1/9, and so on. But we still need to store two variables – the result of every go, and the counter. I still don’t think it can be done.

How about we eschew pi and just make 3. Easy enough, just type 3 or, perhaps 69 enter 23 ÷. But what if I want to do it Ramanujan-style with more nested radicals? I… still don’t think I can, because again I essentially need a decrement counter. Bit of the inverse of the problem as above, one place for storage just isn’t enough. Sorry, Ramanujan.

So what can we do? I guess the golden ratio is simple enough: ' 1 ' + √ and then just mash EXEC repeatedly until we have something resembling 1.618. Not terribly satisfying. Also, the calculator lacks the precision to ever actually make it beyond 1.6179.

To be fair, the calculator (well, not mine, but a new one) comes with a program library in addition to the manual. Katie Wasserman’s site has them, fortunately. And while none of the programs are particularly interesting in any sort of technical way, they do give a good overview of how this macro mentality would cut down on repetitive calculations. One thing that I do find technically interesting, from a small systems/low level perspective, is Sinclair’s advice on dealing with the limitations. For instance, they mention that pi is 355/113, which yields 3.1416, as accurate as is possible. But if one is willing to deal with less accuracy, they suggest 4*(arctan 1) for 3.1408 (~.02%) or 22/7 for 3.1428 (~.04%). Determine needs and spend memory accordingly.

All in all, I don’t know what I’ll do with the Scientific Programmable beyond occasionally pulling it out to mess with. It’s not really fun to program like an old HP, because it’s just too limited. I guess if I come up with any other simple, iterative formulas that I can plug into it, I may revisit. But, much like the Sinclair Scientific, it will largely stay in my collection as a quirk, a demonstration of what was ‘good enough’ alongside the cutting edge.


A few of my favorite: Pink pencils

I write a lot. I carry many bags. I’m untidy. I own and use many mechanical pencils. Some of them good, some of them bad, most of them pink. Here are my favorites.

Pentel Sharp Kerry:
The Kerry is hands-down the best pencil I own, in a very practical sense. It’s not beautiful; its oddly shiny-and-gridded midsection that looks grippy but is too high to function as such is just gaudy. The pencil was introduced at least 30 years ago, but I can’t imagine that ever looked good. But it’s such a well-engineered, functional pencil that it’s hard to rail on too much. It’s kind of the pencil version of 70s Japanese ‘pocket pens’ in that it has a cap, and it ‘grows’ to a usable size when said cap is posted. A pencil introduces its own challenges, of course, so the cap actually has its own lead advance button (which contains the eraser) that interfaces with the main advance. The cap obviates the need for a retraction method, so the Kerry lives comfortably in a pocket or a bag. I don’t have a bag without a Kerry in it.
Uni Kuru Toga High Grade:
Uni’s Kuru Toga is one of the most meaningful innovations in mechanical pencil mechanisms as far as I’m concerned. With a 2.0mm lead, or possibly even down to a 1.3mm, one might sharpen their lead with a purpose-built rasp or a lead pointer. Get much narrower, and your leads are all over the place. I use 4B whenever possible, and 2B otherwise, so this is less of a problem for me, but it’s still nice to find ways to mitigate problems. The Kuru Toga mechanism rotates the lead by a tiny amount every time you press it to paper. This mechanism is a bit ‘spongy’, perhaps, almost like a pencil with a suspension mechanism. I found it very easy to adjust to, but I’ve heard of others taking issue with it. The High Grade has an aluminum grip, which I like the weight and feel of, but others may prefer the rubber grip model.
Pilot Clutch Point:
This pencil is fairly maligned, and I understand why. The aforementioned pencils (and the following pencils) all use an internal clutch mechanism, and have a straight lead sleeve. Straight lead sleeves are great for draughting; they slide perfectly along a straightedge. I, personally, don’t like them all that much for writing, however. The internal mechanism brings with it other advantages. Keeping the mechanism protected, and having the lead held straight by the sleeve before the clutch means far less breakage. The Pilot Clutch Point exposes the clutch right at the front of the pencil, and if you don’t treat it with respect, it will jam. Badly. But when it works, it has a nice, pointy mechanism that will hold the shortest bits of lead known to humankind. It may very well be my favorite pencil for everyday writing.
Pentel Sharp Pocket:
The baby sister of the Kerry, I suppose? Much thinner, slightly shorter, and with a much more Biro-like cap that doesn’t really extend the pencil whatsoever. Great for attaching to a diary or the like. Works as well as any Pentel does. Very light.
Staedtler 775:
I guess the pink one is only available in Korea, but it does exist, and one can find it on eBay, so I say it counts. The 775 is a classic draughting Staedtler. It has a retracting point, but you have to ram it in to get it to do so. This has its ups and downs: being a simple retraction system makes it incredibly steady when engaged. But it’s also hard to disengage, and you risk breaking the lead or bending the sleeve.
Zebra Color Flight:
A really cheap pencil, but bonus points for coming in three shades of pink. It also has one other neat trick up its sleeve – much more eraser than the typical mechanical pencil, extended by rotating the advance. The plastic feels cheap, nothing to write home about. But considering how cheap they can be, the Zebra Color Flight pencils are actually pretty nice.

Bubble sort in dc

dc offers essentially nothing in the way of stack manipulation. Part of the power behind a lot of RPN/RPL HP calculators was that if you were being tidy with your stack, you rarely had to use registers. This is not the case with dc. Of course there’s also nothing resembling array sorting in dc, and I was curious how difficult it would be to implement a basic bubble sort.

As mentioned, we can’t directly manipulate stack elements (beyond swapping the two topmost values), but we can directly address elements of an array. This means that our bubble sort needs to dump the contents of the stack into an array, and potentially dump this array back onto the stack at the end (depending on what we need the sort for).

zsa0si[li:sli1+dsila>A]dsAx

This takes care of putting everything on the stack into array s. It does this by storing the number of items on the stack into register a (which we will need later – dc does not have a means of counting the number of items in an array). We start counter i at zero, and just go one at a time putting the top-of-stack into array s at index i until i is equal to a.

[1si0sclSxlc1=M]sM

The bubble sort itself requires three macros. This first one, M is our main macro, which we will ultimately begin executing to start the sort process. It sets a counter, i to one and a check register, c, to zero. Then it runs our next macro, S, which does one pass through the array. If S has changed anything by running macro R, the check register c will no longer be zeroed out, so we test for this and restart M until no changes have been made.

[lidd1-;sr;s<R1+dsila>S]sS

As mentioned, macro S does one pass of the array s. We fetch our counter, i, duplicate it twice, and decrement the top copy by one. While we work on our array, we’re always looking at i and i-1 essentially, which just makes the comparison a bit tidier at the end vs. i+1 and i. We load the values with these indices from s, compare them, and if they aren’t in order, we run macro R. We still have a copy of i on the stack at this point, so we increment it, duplicate it, store it, and compare with a to see if we’ve hit the end of the array.

[1scddd;sr1-;sli:sr1-:s]sR

Macro R does one instance of reversal. First it puts a one in the check register, c, so that M knows changes have been made to the array. At this point there is still a copy of i on the stack, which we duplicate three times. We load the i-indexed value from s, swap with a lower copy of i which we decrement by one before loading this i-1-indexed value from s. We load i again, and store the old i-1 value (our previous top-of-stack) at this index. This leaves our old i value atop the stack, with i one spot below it. We swap them, subtract one from i, and store the old i value at this new i-1 index in array s.

And that’s actually it. If we populate the stack and run our initial array-populating code before executing lMx, we’ll end up with our values sorted in array s. From here, we can [li1-d;srdsi0<U]dsUx to dump the array onto the stack such that top-of-stack is low, or 0si[lid;sr1+dsila>V]dsVx to dump the array onto the stack such that top-of-stack is high. If, say, we only need min (0;s) or max (la1-;s), these are simple tasks.
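
If it’s easier to see the shape of this in a conventional language, here’s a rough Python analogue of the three macros (the names and structure are mine, just mirroring the dc above):

def bubble_sort(s):
    a = len(s)
    changed = True                  # register c
    while changed:                  # macro M: keep passing until nothing moves
        changed = False
        for i in range(1, a):       # macro S: one pass over the array
            if s[i - 1] > s[i]:     # out of order?
                s[i - 1], s[i] = s[i], s[i - 1]     # macro R: swap the pair
                changed = True
    return s

print(bubble_sort([5, 1, 4, 2, 8]))     # [1, 2, 4, 5, 8]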

Additionally, if we wanted to get the median of these values, we can exploit the fact that dc just uses the floor of a non-whole index into an array. This allows us to avoid an evenness test by always taking ‘two’ values from the middle of the array and calculating their mean. If a is odd, then half of a will end in .5. Conversely, if a is even, half of a will end with no remainder. So we can subtract .5 from half of a to get the lower index, subtract (half of a mod one – either .5 or 0) from half of a to get the upper, and average the values at the resulting two indices: la2/d1~-;sr.5-;s+2/p.
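
The same index trick in Python, with int() standing in for dc’s floored array indexing (again, just my sketch):

vals = sorted([3, 1, 4, 1, 5, 9])           # our sorted array s; a = 6
a = len(vals)
lo = int(a / 2.0 - 0.5)                     # lower of the two middle indices
hi = int(a / 2.0 - (a / 2.0) % 1)           # the same index when a is odd
print((vals[lo] + vals[hi]) / 2.0)          # 3.5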


Sieve of Eratosthenes

This post contains APL code, and assumes that your system will be capable of rendering the relevant unicode code points. If the code below looks like gibberish, either you don’t understand APL or your computer doesn’t ☺. I use the APL standard of six spaces indentation to show my input, and zero indentation for output. As for the code itself, it assumes one-indexing, that is, ⎕IO←1.

I was messing around with some primality and coprimality testing in dc the other day when I got to wondering about inefficient methods for checking primality (specifically, the thought of testing primality of n by testing coprimality of n and m where m is every integer < n). This reminded me of the sieve of Eratosthenes, an ancient means of narrowing down a list of integers to a list of primes by eliminating multiples of primes (which must be composites). My APL is getting very rusty, unfortunately, but this seemed like a fun exercise since APL is a language heavily invested in arrays. We may start by assigning s to the integers 2-26, and confirm this by printing s as a 5x5 matrix:

      s←1↓⍳26
      5 5 ⍴s
 2  3  4  5  6
 7  8  9 10 11
12 13 14 15 16
17 18 19 20 21
22 23 24 25 26

Then we can start sieving. I’m going to skip ahead to the threes to avoid a potential spot of confusion, so what we need to do is reassign the elements after 3 to either a zero if there’s no remainder (upon dividing by 3), or the original value if there is. 2↓s narrows us down to the appropriate elements, and we have to make sure that we do that everywhere so that our 1x25 shape stays intact. The first step is getting our remainders. 3|2↓s performs modulus 3 on the values 4 through 26, resulting in 1 2 0 1 2 0 1 2 0 1 2 0 1 2 0 1 2 0 1 2 0 1 2. This doesn’t fit neatly into a 5x5 grid, however, so I’m going to temporarily write it over the corresponding elements in s:

      (2↓s)←(3|2↓s)
      5 5 ⍴s
2 3 1 2 0
1 2 0 1 2
0 1 2 0 1
2 0 1 2 0
1 2 0 1 2

This gives us zeroes where we need them – at every multiple of three (excluding three itself, of course). We can then turn this into a series of ones and zeroes by comparing each value to zero:

      5 5 ⍴ 0<s
1 1 1 1 0
1 1 0 1 1
0 1 1 0 1
1 0 1 1 0
1 1 0 1 1

Which we could then multiply by our original set of 25 integers, if we still had them. So let’s reassign s and put it all together. And since we skipped over two, we should probably do that as well.

      s←1↓⍳26
      (2↓s)←((3|2↓s)>0)×2↓s
      5 5 ⍴s
 2  3  4  5  0
 7  8  0 10 11
 0 13 14  0 16
17  0 19 20  0
22 23  0 25 26

      (1↓s)←((2|1↓s)>0)×1↓s
      5 5 ⍴s
 2  3  0  5  0
 7  0  0  0 11
 0 13  0  0  0
17  0 19  0  0
 0 23  0 25  0

So now we’ve sieved out all of our multiples of 2 and 3, and the only thing left to sieve is 5. Of course, if we didn’t already know the primes ≤25, we’d want to keep trying every nonzero value in the list, but we do know that 25 is the only outstanding composite in the list, and (4↓s)←((5|4↓s)>0)×4↓s does, in fact, take care of it, as can be seen here, on TryAPL.org.

I mentioned that my APL skills are a bit rusty. I’m not sure I’ve even mentioned APL on this blog before, but it is probably tied with Forth for the coveted title of Bri’s favorite language. I’ve never been great at it, though. It’s a relic, with lineage dating before computers were capable of running it. I either don’t do enough array-based math, or I don’t think of enough of the math I do in array-based terms to use APL regularly. Where I get really rather rusty is in thinking outside of imperative terms or using APL’s imperative programming behaviors. Which was why my little demo here was very REPL-esque. Hopefully a future post will bundle this thing up into an actual, runnable function that knocks the whole process out in one go. But for now, have fun typing in those iotas.


Post updates

I’ve realized that a few of the things I’ve written over the past year may have contained a nugget or two of information that was either ill-informed, or otherwise deserving of an update. For starters, in the posts Darwin image conversion via sips and Semaphore and sips redux, I talk about using Darwin’s sips command to convert a TIFF to PNG before running it through optipng. While this is a fine exercise, I have since come to learn that optipng will handle uncompressed TIFFs just fine, converting them to PNG and optimizing accordingly. sips is unnecessary, so long as I’m willing to temporarily use up the SSD space for uncompressed TIFFs.

Before those posts was Of lynx and curl, describing a few uses of those tools in tandem. Toward the end I mention going through a list of URLs, and writing out a CSV by using printf to output the URL before using curl to fetch the status code. For some reason, I had long overlooked that curl has a variable for the current URL as well, so the printf step is largely unnecessary. Going through a list of links to output a CSV of the form URL,Status Code can be achieved as for i in $(< links.txt); do curl -o /dev/null --location --silent --head --write-out '%{url_effective},%{http_code}\n' "$i" >> status.csv; done (given that links.txt is a line-by-line list of URLs to check).

In other news, the first post I made on this version of this blog was the meta post, Licensing – dated 2016-07-30. So I’ve stuck with this thing for a year now, 75 posts, so far so good. I did recently upgrade from Hugo 0.16 to 0.24.1, which I have a post-in-progress about, but all in all the upgrade went shockingly smoothly. I definitely have no regrets about the move to a static site generator, and I would whole-heartedly recommend Hugo for anyone whose needs it meets. I have made a few other minor changes, like setting my Inline audio player to not autoload, and customizing the selection highlight color, but nothing major since my last round of Template updates. Next up is hopefully building a better archive page using new Hugo features, having Hugo generate the CSS files, and possibly a JSON feed.


Eukleides

Despite having failed out of geometry in my younger days, it has become my favorite sort of recreational math. I think, back in elhi geometry, too much emphasis was placed on potential practical applications instead of just distilling it down to the reality that this was what math was before there was any sort of formal mathematical system. Geometry is hands-on, it’s playful, and as such I have come to really enjoy playing with it. As much as I enjoy doing constructions with a straightedge and compass, I occasionally poke around to see what tools exist on the computer as well. Recently, I stumbled across a very neat thing: Eukleides.

I’m drawn to Eukleides because it is a language for geometry, and not a mouse-flinging WYSIWYG virtual compass. This seems contradictory given my gushing about geometry being hands-on, and don’t get me wrong – I love a hands-on GUI circle-canvas too1. But sometimes (often times) my brain is in code-mode and it’s easier to express what I’m trying to do in words than to fiddle around with a mouse. And a task like ‘intersecting a couple of circles’ is far more conducive to writing out than, say, laying down an SVG illustration from scratch.

[Figure: the angle-bisection construction, with points a, b, c, d, e, and bi labeled]

There you have one of the first constructions learned by anyone studying geometry – bisecting an angle with three arcs (or, full-blown circles in this case). Angle ∠abc is bisected by segment bbi. Here’s the code:

% Percent signs are comments
a=point(7,0); b=point(0,0) % Semicolons and newlines separate commands
a b c triangle 5,-40°
g=circle(b,3)
d=intersection(g,a.b); e=intersection(g,b.c)
d=d[0]; e=e[0] % Intersections return sets (arrays), extract the points
h=circle(d,3); i=circle(e,3)
bi=intersection(h,i)
bi=bi[1]
label
  a 0°; b 180°; c 40° % Label the points
  d -40° gray; e 90° gray
  bi 20°
  a,b,bi; bi,b,c 1.5 % Make the angle markers
end
draw
  c.b.a
  g lightgray; h gray
  i gray
  bi
  b.bi
end

Note that the code originally used shades of grey; I shifted these around to match my site’s colors when I converted the resulting EPS file to SVG. The code is pretty straightforward: define some points, make an angle of them, draw a circle, intersect segments ab and bc, make some more circles, intersect where the circles meet, and boom – a bisected angle. The language reminds me a bit of GraphViz/DOT – purpose-built for naturally expressing how things will be drawn.
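
As for the EPS-to-SVG hop, one workable route among several is ghostscript plus pdf2svg – the file names here are just stand-ins:

ps2pdf -dEPSCrop bisect.eps bisect.pdf
pdf2svg bisect.pdf bisect.svg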

We can actually prove that the construction works without even generating an image file. Removing the draw and label sections, and replacing them with some print (to stdout) statements calling angle measurement functions:

a=point(7,0); b=point(0,0)
a b c triangle 5,-40°
g=circle(b,3)
d=intersection(g,a.b); e=intersection(g,b.c)
d=d[0];e=e[0]
h=circle(d,3); i=circle(e,3)
bi=intersection(h,i)
bi=bi[1]
%%%%%%%% New content starts here:
print angle(a,b,c)
print angle(a,b,c)/2
print angle(a,b,bi)
print angle(bi,b,c)

We get ∠abc, ∠abc/2, ∠abbi, and ∠bibc. The last three of these should be equal, and:

40
20
20
20

…they are! We can also do fun things like dividing the circumference of a circle by its diameter:

print (perimeter(circle(point(0,0),4))/8)

…to get 3.14159. Very cool. There are a few improvements I would like to see. Notably, while you can label points etc. with their names, I can’t find a way to add arbitrary labels, or even labels based on functions. So you can’t (seemingly) label a line segment with its length, or an angle with its measure. Also, the interpreter wants ISO 8859-1 encoding, and it uses the degree symbol2 (°) to represent angle measures. This gets all flippy-floppy when moving to UTF-8, and in fact if I forget to convert a file to ISO 8859-1, I’ll get syntax errors if I use degree symbols. Finally, it only outputs to EPS; native SVG would be incredibly welcome.

Eukleides is a lot of fun to play with, and it’s worth mentioning that it has looping and conditionals and the like, so it is programmable and not just computational or presentational. Presumably some pretty interesting opportunities are thus opened up.


Nth-order Fibonacci sequence in dc

After working on the nearest Fibonacci number problem in dc, I got to thinking about how one might implement other generalized Fibonacci sequences in the language. Nth-order sequences seemed like a fun little challenge, and since I was simply doing it for my own pleasure I went ahead and gave it prompts and user input:

% dc -e '[_nacci? ]n?1-sg[terms? ]n?2-snlgsi[0lid1-si1<Z]dsZx1[dls+ssli:flid1+silg>G]sG[li;flid1-si0<R]sRlgsz[0si0sslGxlgsilRxlslzd1+szln>M]dsMxf'
_nacci? 4
terms? 20
20569
10671
5536
2872
1490
773
401
208
108
56
29
15
8
4
2
1
1
0
0
0

…gives us the first 20 terms of the tetranacci sequence. This interested me because unlike a simple Fibonacci iteration that can be handled on the stack with only rudimentary stack manipulation (dsf+lfr), higher order ‘naccis need summation of more terms. For a defined number, I could simply use registers, but dc does support arrays, so setting the order at runtime is plausible. There’s a bit going on here, so I’m going to start by rattling off all of the registers I use:

g: The order (so, a g-nacci sequence)
n: Number of terms to run through
s: Sum
i: General counter
f: Array used to temporarily hold the last sequence as it’s being summed
G: Generalized Fibonacci generating macro
R: Macro to retrieve previous sequence from f
Z: Zero-seed macro
z: Counter for iterations (compares to n)
M: Main macro

Now to break down what’s going on. [_nacci? ]n?1-sg[terms? ]n?2-sn just prompts the user and inputs g and n. We reduce each of these in the process to appease the loops. After doing this, we need to seed the sequence with g-1 zeroes, and one one. lgsi sets counter i to g, and then the macro Z, does nothing but put a zero on the stack and loop: [0lid1-si1<Z]1. dsZx stores the macro and executes it; then 1 pushes the necessary one onto the stack such that we can begin.

Our first working macro, G, is where most of the action happens: [dls+ssli:flid1+silg>G]sG. It starts with a simple summer for register s, dls+ss: this duplicates the top of the stack, recalls s, adds the two together, and writes the sum back to s, leaving the stack in its original state. The next thing we need to do is move the top of the stack into our holding array, f. We’ll use our counter i as an index, so we load i and then do the array operation, li:f. Every time we do these two things, our summation (the next number in the sequence) nears its next value, and our stack shrinks. The rest of the macro, lid1+silg>G, just handles incrementing i and comparing it to our order, g, determining whether or not to continue the loop.

Macro R, [li;flid1-si0<R]sR repopulates the stack from array f. Before calling R, i is set to g, and we use that as our array index to pull from, thus treating the array as a LIFO2. li;f does this, and then the rest of the macro is (surprise, surprise) counter/loop handling.

Before we run macro M, which is our main macro essentially, we set counter z to our order number g, which accounts for the fact that we already have our first few terms in assorted zeroes and a one. M, [0si0sslGxlgsilRxlslzd1+szln>M]dsMx, starts out by resetting counter i and the sum register s to zero: 0si0ss. lGx runs macro G, lgsi sets counter i to our order g, and then lRx runs macro R. ls puts our sum (the new Fibonacci value) at the top of the stack, and then the rest of the thing is counter/loop handling. dsMx saves the macro as M and also sets it running, while our last command, f prints the entire stack.
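
Since ? reads its line from standard input, the whole thing can also be fed its answers non-interactively, which is handy for scripting:

printf '4\n20\n' | dc -e '[_nacci? ]n?1-sg[terms? ]n?2-snlgsi[0lid1-si1<Z]dsZx1[dls+ssli:flid1+silg>G]sG[li;flid1-si0<R]sRlgsz[0si0sslGxlgsilRxlslzd1+szln>M]dsMxf'

…should reproduce the tetranacci run above, prompts and all.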


Nearest Fibonacci number in dc

I hadn’t really exercised my code golf skills in a while, but saw a fun challenge today that seemed like an easy solve, well-suited for dc. The challenge was, given a positive integer, return the nearest Fibonacci number. The Fibonacci sequence is fun on its own in a stack language; in dc one step can be performed with something like dsf+lfr. Given that the stack is prepared with two integers, this will duplicate the top, store it in an intermediate register f, add the two previous values, push f back onto the stack, and then swap them such that they’re in the correct order. It doesn’t pollute the stack, either, these two values are all that remain.

For this challenge, those two values are all I ever need – my input (which I will refer to as i) is always going to fall between two consecutive numbers in the Fibonacci sequence (or on one of them, in which case I would only need that value to test against). Keeping as much work on the stack as possible is ideal when golfing in dc because the byte cost of working with registers adds up quickly. So my strategy is to seed the Fibonacci generator with two 1s, and run it until the larger of the two Fibonacci numbers is greater than i. One of those two Fibonacci numbers will be the right one, and if i happened to be a Fibonacci number, I’ve just generated an extra one for no reason. I convert both of the Fibonacci numbers to their respective differences from i. Since I know for a fact that the top of the stack is greater than i, and the second value on the stack is either less than or equal to i, I don’t have to worry about dc’s lack of an absolute value mechanism; I simply subtract i from the big one and subtract the small one from i. Since I know which difference is which, I have no need to retain the Fibonacci numbers. I simply compare the differences, and then reconstruct the Fibonacci number by adding or subtracting the difference to i depending on which difference ‘won’. The code:

?si1d[dsf+lfrdli>F]dsFxli-rlir-sd[lild-pq]sDdld<Dli+p

…and the explanation:

?si                  Store user input in register i
1d                   Push 1 to stack, duplicate it to seed Fibonacci sequence
[dsf+lfrdli>F]dsFx   Macro F: Fibonacci generator as described above, with the
                       addition of loading register i and continuing to run F
                       until the top-of-stack Fibonacci number is larger than i
li-                  Turn our high Fibonacci number into its difference
rlir-                Swap the stack, and turn our low Fibonacci number into its
                       difference. The stack stays swapped, but it doesn't 
                       matter
sd                   Store our low Fibonacci difference in register d
[lild-pq]sD          Macro D: reconstruct the low Fibonacci number by 
                       subtracting d (the difference) from i; print the result
                       and quit. Macro is stored for later use.
dld<D                Duplicate the high Fibonacci difference and push the low 
                       Fibonacci difference onto the stack. Compare them and run
                       D if the low difference is less than the high difference
li+p                 Else, add i to the high difference and print the result
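
And a quick sanity check – ? reads from standard input, so feeding the whole thing 50 should print 55, 50 sitting between 34 and 55 and being nearer the latter:

echo 50 | dc -e '?si1d[dsf+lfrdli>F]dsFxli-rlir-sd[lild-pq]sDdld<Dli+p'
55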

Scaling visualized data using common multiples

How we’re presented with data skews how we interpret that data. Anyone who has read a lick of Tufte (or simply tried to make sense of a chart in USA Today) knows this. Many such commonly-encountered misrepresentations tend to relate to scaling. Volatility in a trend line minimized by a reduction in scale, perspective distortion in a 3D pie chart, a pictograph being scaled in two dimensions such that its visual weight becomes disproportionate – technically, the graphic may accurately line up with the values to be conveyed, but visually the message is lost.

In taking mandatory domestic violence training at work recently, I was thrown completely off by a statistic and accompanying graphic. The graphic (albeit using masculine and feminine silhouette images) resembled:

♀♀
♂♂♂♂♂

…with the corresponding statistic that one in every four women and one in every seven men has experienced domestic violence. While the numbers made sense to me, it took me a while to grasp them, because the graphic was simply showing more men than women. It’s something akin to a scale problem, our brains are going to see the obvious numbers – four and seven – instead of readily converting the graphics into their respective percentages. How to fix this, then?

♀♀♀♀♀♀♀♀♀♀♀♀♀♀
♂♂♂♂♂♂♂♂♂♂♂♂♂♂♂♂♂♂♂♂

If we simply repeat our graphics such that the total number in either set is a common multiple, now it’s much simpler to process the information that we’re supposed to process. We might not immediately recognize that seven in twenty-eight is the same as one in four, nor that four in twenty-eight is the same as one in seven, but we do know that seven is greater than four (which is what caused the problem in the first place), and now we’re not dealing with mentally constructing percentages from a simple visual.


Anchors, away!

My line of work involves a lot of mushing around of the requirements of customers who don’t know the Agency’s style requirements, best practices for web, and CMS limitations into something that comfortably adheres to the preceding rules while still pleasing (or at least appeasing) said customer. Often, folks inexplicably want a handful of tiny, near-zero content pages, when typically we try to present the content together, broken up by headers with IDs. I have to go through quite a few layers to communicate how their requirements will actually manifest, and the technical knowledge is reduced each step of the way.

The term ‘anchors’ comes up in these instances, a lot. I understand that the term makes sense as far as relating something concrete to a structural abstraction – the ‘anchor’ is cast somewhere on the page, and you can just follow the corresponding line (link) to it. I also understand how this came to be from a historical tech perspective – as I understand it, Tim Berners-Lee envisioned a far more bidirectional system of linking, so our <a> (anchor) element would represent something more nodelike, an exit point and an entrance point.

But links don’t work that way on the internet as we know it. The <a> element is probably the least logically named element in modern HTML. But for a while, even though <a> elements still didn’t work that way, we had kind of a hack in place that accomplished the same goal. The HTML 3.2 specification tells us that “[Anchors] are used to define hypertext links and also to define named locations for use as targets for hypertext links”, with the distinction coming from whether an <a> element has an href or a name attribute. It wasn’t until HTML 4.0 that we even had an id attribute to use.

The two uses of the anchor element, while compatible in conjunction with one another (<a href="www.example.com" name="here">) in line with Berners-Lee’s vision, are still semantically very different functions. HTML 4.0 still encouraged the dual-usage, but at least acknowledged these were fundamentally different things, “The A element may define an anchor, a link, or both.” It no longer actually calls <a> anchor, and instead states that the element has two distinct usages. Obviously this is not great as far as semantics are concerned, but more trouble comes when one starts to introduce styling.

HTML 4.0 brought with it the big push for stylesheets, the separation of structure and content. Of course, if you’re styling <a> to look like a link, all of your <a> elements being used as anchors now just appear to be nonfunctional links. The solution wrecked any sense of structural connection between the anchor and the text it represents – simply use an empty <a name="headline"> element in front of your text. This is clearly awful, and with the id attribute now present and sharing namespace with name, entirely unnecessary.

HTML5 still supports this behavior, though recommends against it. Anyone who cares at all about semantics, about accessibility should recommend against it. The CMS I use at work has finally done away with it. And I think that as we slowly come to our senses about this, we should probably just do away with the term ‘anchor’ as well. The attribute is id, the hash in the URL denotes a ‘fragment identifier’. They’re a bit more jargonistic, but these are the terms I always try to use. There’s still a legacy connection between the word ‘anchor’ and the <a> element. And when dealing with folks who occasionally wind up changing things that they don’t really have the background to be changing, legacy language can lead to legacy behavior, as well as making it more difficult to search for help they may need.


How not to write about trans folks

While looking for something entirely unrelated, I came across this article from the San Diego Union-Tribune about a new book on Wendy Carlos. I wouldn’t actually recommend reading the article, as it comes off rather vapid and hollow, and the Union-Tribune’s website is extremely user-hostile. A couple of things stood out to me, however. First of all, I do think journalists and their publications are generally getting a bit better about talking about trans folks. This article could have been much worse. But there’s still a lot of work to be done.

For one thing, the article casually deadnames Carlos for no reason. Throughout the article, they do refer to her as Wendy (and they never misgender her), but just once there’s a pointless little parenthetical with her deadname. I get that she released albums under that name, and I suppose this could be a point of confusion for some casual listener, but any further research by an interested reader would clear this up. In my opinion, there needs to be a very good reason to deadname a trans person, and this article sorely lacked one.

As I mentioned, the article is actually about a new book about Carlos (or, I suppose, about Switched on Bach). The author, Roshanak Kheshti, is a UC San Diego professor, and talks a bit in the article about diversity and intersectionality in the music scene, apparently a running theme in her coursework as well. What fascinates me is that, in an article that is simultaneously about a prominent transgender musician and a professor who teaches about the impact of marginalized groups on culture, gender identity is not mentioned once. I don’t know if this was a matter of Kheshti not bringing up (say) the struggle of transgender musicians, or if the paper simply plucked their quotations around it, but it’s a really strange omission.

I look forward to checking out Kheshti’s book once it comes out, and I have this article to thank for that. But as I mentioned above, it was a rather hollow read and could have done so much more to acknowledge the realities that trans musicians are dealing with. Now more than ever, the media should feel an obligation to lift up marginalized groups, and subtle deadnaming and inattention to their subjects’ realities does not an obligation fulfill.


Field recording with the Tascam GT-R1

Field recording of found sounds is a rather crucial aspect of the sort of sound design that interests me. Diving deep enough into this area, one will inevitably wish to experiment with contact microphones. Contact microphones are unlike ‘normal’ microphones in that they don’t really respond to air vibrations. But they are quite good at picking up the vibrations of solids (or, in the case of sealed hydrophones, liquids) that they’re attached to. This is a lot of fun, but there’s one problem – due to an impedance mismatch, they aren’t going to sound very good when connected to a normal microphone input. Compare this matched recording with this recording from a standard mic input.

The typical solution will be to go through a DI box or dedicated preamp. For a portable, minimal setup, this is far from ideal. I figured at some point, someone would have had to have come up with a portable recorder designed with sound design in mind, and containing inputs suitable for a range of microphones. I came up empty-handed. Then it occurred to me, this is really the same problem that guitar pickups have – they need a high impedance (Hi-Z) input for proper frequency range reproduction. Perhaps a portable recorder for guitarists exists. It does, and let me just say that the Tascam GT-R1 makes an awesome little field recorder.


Inline audio player

For the purposes of an upcoming post or more, and some other upcoming projects, it occurred to me that I might need to come up with a UI for incorporating audio samples in with posts, and I had a short list of requirements for it.

Little snippets of audio have different requirements from, say, video. In keeping with my requirements, for example, I opted to omit a mute button. The snippets are short and trivial enough that pause should suffice. In fact, I opted for only five possible actions: play, pause, ⅓ volume, ⅔ volume, and full volume. This boils down to two controls: one play/pause control and one three-position volume control. The result looks something like this weird ringing sound.

Audio is just linked from inside a certain class of span1. The link remains – so users who want to or who don’t have JS enabled or who don’t have a modern browser can simply download the file. Each control is inline SVG. The play/pause button is one SVG, with either button being shown or hidden via CSS. Likewise the volume control is one SVG element, and each of the three bars defaults to the ‘off’ state. Any given bar will have the class active, and the CSS darkens the active class plus the next bar plus the next bar. Each bar has an invisible rectangle atop it that spans that entire third of the SVG, to make for an easier target.

The code is obviously snatchable, and I may release it at some point, but it’s definitely not pretty. I… don’t code pretty. I have some other reservations as well, namely accessibility. I haven’t really used SVGs quite like this before, and I don’t really know how to make AT handle it sensibly. I guess if nothing else, the link is a guaranteed fallback. Unrelated, but I was pleasantly surprised to see it working in IE11.


Compromised

Recently, a financial account of mine was compromised. As a person who, while entirely fallible, is pretty well-versed in infosec, I have a lot of thoughts on the matter. Honestly the whole thing has been more fascinating to me than anything. Maybe it’s because my bank has been very accommodating so far, maybe it’s because (relatively speaking) trivial amounts of money have been sucked from my accounts, or maybe it’s because I’m petty and vengeful, and when someone makes a direct bank transfer, the recipient’s name is revealed to the sender1.

I’m curious about the vector of attack. My primary assumption is that my card was physically compromised, but I’m not sure. The timeline began with the reception of notifications that my online banking password had been reset. I assumed – or rather, hoped – it was a glitch, and reset it. Then it reset again. And again. Then a transfer account was added. Then, while I was dialing in to the bank, $100 had been transferred out. This is when it gets a little panicky, but having that information, having a number of controls in front of me to mitigate the situation, and having quick response from the bank’s customer service all led to a fairly painless resolution.

The means of ingress was not the internet, it was not ‘hacking’. When you start telling people about an attack like this, the overwhelmingly rudimentary understanding of security lends itself to responses like ‘ah, well you have this account and now that account was hacked! The hackers hacked it!’ The term ‘hacking’ evokes some real man-vs.-machine WarGames type shit, but the sort of attacks that tend to affect most of us are far less sexy. Things like malware and card skimmers meticulously mining data to then be sold off in batches to lesser criminals.

So that was the first breach, and then several days later it was followed by fraudulent card purchases. I was able to temporarily mitigate this by disabling the card, before ultimately contacting the issuer and having the card entirely deactivated and a new one issued. In between these two things happening, I received a call from ‘my bank’ enquiring about card fraud (which had not yet occurred). The incoming number (which is trivially spoofed) did appear to resolve to the bank’s fraud department, but the callback number was unknown to the internet. I assume this was an attempt by attackers to phish more information while I was at my most vulnerable.

When I mention that the vector of attack likely began with the card, this is because there are some safeguards in place for doing the password reset over the phone. Some, like driver’s license numbers in many states, are completely trivial to reproduce, and financial institutions really need to stop relying on faux secret information. The card number is another potential identifier, and I think these two things with a dash of good old-fashioned social engineering thrown in probably led to multiple over-the-phone password resets being granted in a fifteen-minute window. Just the handful of dealings I had with the bank gave a lot of insight into how one could pull off such an attack, which itself is a little concerning.


Brief thoughts on the iMac Pro

Yesterday, Apple announced the iMac Pro, an all-in-one machine purchasable with up to an 18-core Xeon processor. I can’t tell if this is a machine for me or not (I love Xeon Macs but not iMacs so much), but I also have no real reason to think about that beyond fantasy – I’m only on my 2nd Xeon Mac, and I expect to get a few more years out of it. They age well. The current, oft-maligned Mac Pro smashed an impressive amount of tech into a rather small, highly optimized space. It may lack the expansion necessary for typical Pro users, but it is a technological masterpiece. The new iMac, however, seems like an impossible feat1.

What truly excites me is the reinforcement that Apple is committed to its Xeon machines. The iMac Pro is not the mysterious upcoming Mac Pro. So while tech pundits have lamented the inevitable death of the Mac Pro in recent years, Apple has instead doubled down and will be offering two Xeon Macs rather than zero.

One final thought that is more dream than anything – Apple prides itself on its displays, and on its Pencil/digitizer in the iPad Pro. A lot of artists use pro software on iMacs with Cintiq digitizers. Cintiqs are top-of-the-line, but that doesn’t make them great. The digitizers are decent, the displays themselves are alright, but they aren’t spectacular devices – they’re just the best thing out there. I don’t expect Apple to move to a touch-friendly macOS, their deliberate UI choices show that this is a clear delineation between macOS and iOS. But I think working the iPad Pro’s Pencil/digitizer into an iMac2 could very well prove to be a Cintiq killer for illustrators, photographers, and other visual artists.


Discoveries

‘Timeline’ is a game that I’ve been pushing to non-gamers lately. The premise is very simple – everyone has a (public) hand of several historical events, inventions, artistic creations, discoveries, etc.; anything notable and dated. The flip-side of every card has the corresponding date. One event starts the timeline date-side up. Players must then choose one of their cards and make an educated (or not, I suppose) guess as to where it goes in the timeline relative to the other events. Place it, flip it, leave it in place if correct or pull a new card if not. Gameplay is simple, fast, and almost educational. There are a whole bunch of sets, and they can be freely mixed-and-matched.

One of these sets is ‘Science and Discoveries’. Something always felt a little off about this set, and the last time I played it, I think I figured it out. There are 110 cards in a given set, and I have (to the best of my ability) narrowed this one down to a handful of categories.

I had to make a few executive decisions so that I could neatly categorize things, and if I did this categorical exercise again right now, everything would likely be give or take a couple cards. But the heart of the matter is that the creators (rightfully) marked 22% of the cards as having been discovered (by Europeans). If my categorization is even remotely accurate, that’s 40% of the physical/corporeal ‘discovery’ cards.

Now, that ‘rightfully’ up there is important – I am glad that Asmodee opted to point out that these peoples and places were only ‘discovered’ in a very surface manner – the pygmies already knew that the pygmies existed. And this isn’t a very deep thought, hopefully it’s immediately obvious to any given American or European that their history textbooks are written with a bias and to a purpose. But I guess what fascinated me were those percentages.

This is by no means representative of a history textbook, nor the average person’s understanding of history. But I can’t imagine it’s terribly far off, either. Coming from a colonialist sort of viewpoint, a lot of our ‘big moments in history’ come from finding this or that ‘savage’ population and treating them not as humankind, but as a scientific subject. And here we have a truly trivial history game telling us that >20% of the notable achievements the creators could come up with are, in fact, just stuff we’ve decided we can claim as having discovered. Despite either it (for lack of a better phrasing) having discovered itself, or other (‘lesser’) civilizations having beaten us to the punch. I suppose there is far more important stuff to worry about right now, even in the context of colonialism, but I still find it to be an intriguing glimpse into our historical ownership.


A chessboard for pebbling

Another post inspired by Numberphile. This one, specifically, is in response to a game presented by Zvezdelina Stankova in this introductory video on ‘Pebbling a Chessboard’, and three others that I’ll link to in a bit. Not right away, though, because she explains the game and then pauses for a bit so that the viewer can try to figure it out for themself. My way of doing this was to whip together a bit of a web game.

I’ll do a few explanations of my web version here before the reveal. The core game is one of checkers on an infinite checkerboard; pull a checker and place two in its stead – one directly above it and one to its right. Can’t pull one if you can’t do both placements. In my web version here, we’re reflected on the horizontal, so we’re going down and to the right. Much simpler to implement. The initial grid is 20x20, but it expands infinitely whenever a checker nears a border. Game is after the jump, I recommend watching the beginning of the previously linked video, and then playing around. That was why I made this. If you play around a bit, then eventually you can scroll down for more thoughts, I suppose. I think the board is tall enough so as to not accidentally spoil anything.


Speech synthesis

When I was in elementary school, I learned much of my foundation in computing on the Commodore 64. It was a great system to learn on, with lots of tools available and easy ways to get ‘down to the wire’, so to speak. Though it was hard to see just how limited the machines were compared with what the future held, some programs really stood out for how completely impossible they seemed1. One such program was S.A.M. – the Software Automated Mouth, my first experience with synthesized speech2.

Speech synthesis has come a long way since. It’s built into current operating systems, it can be had in IC form for under $9, and it’s becoming increasingly present in day-to-day life. I routinely use Windows’ built in speech synthesizer along with NVDA as part of my accessibility checking regimen. But I’m also increasingly becoming dismayed by the egregious use of speech synthesis when natural human speech would not only suffice but be better in every regard. Synthesis has the advantage of being able to (theoretically) say anything while not paying a person to do the job. I’m seeing more and more instances where this doesn’t pan out, and the robot is truly bad at its job to boot.

Three examples, all train-related (I suppose I spend a lot of time on trains): the new 7000 series DC Metro cars, the new MARC IV series coach cars, and the announcements at DC’s Union Station. None of these need to be synthesized. They’re all essentially announcing destinations – they have very limited vocabularies and don’t make use of the theoretical ability to say anything. Union Station’s robot occasionally announces delays and the like, but often announcements beyond the norm revert to a human. Metro and MARC trains only announce stops and have demonstrated no capacity for supplemental speech. Where old and new cars are paired, conductors/operators still need to make their own station stop announcements.

So these synthesizers don’t seem to have a compelling reason to exist. It could be argued that human labor is now potentially freed up, but given the robots’ limited vocabularies and grammars, the same thing could be accomplished with human voice recordings. I can’t imagine that the cost of hiring a voice actor with software to patch the speech together into meaningful grammar would be appreciably more expensive than the robot. In fact, before the 7000 series Metro cars, WMATA used recordings to announce door openings and closings; they replaced these recordings in 2006, and the voice actor was rewarded with a $10 fare card3.

Aside from simply not being necessary, the robots aren’t good at their job. This is, of course, bad programming – human error. But it feels like the people in charge of the voices are so far detached from the final product that they don’t realize how much they’re failing. The MARC IV coaches are acceptable, but their grammar is bizarre. When the train is coming to a station stop, an acceptable thing to announce might be ‘arriving at Dickerson’, which is in fact what the conductors tend to say. The train, instead, says ‘this train stops at Dickerson’, which at face value says nothing beyond that the train will in fact stop there at some point. It’s bad information, communicated poorly. Union Station’s robot has acceptable grammar, but she pronounces the names of stations completely wrong. Speech synthesizers generally have two components: the synthesizer that knows how to make phonemes (the sounds that make up our speech), and a layer that translates the words in a given language to these phonemes. My old buddy S.A.M. had the S.A.M. speech core, and Reciter which looked up word parts in a table to convert to phonemes. This all had to fit into considerably less than 64K, so it wasn’t perfect, and (if memory serves), one could override Reciter with direct phonemes for mispronounced words. Apple’s say command (well, their Speech Synthesis API) allows on-the-fly switching between text and phoneme input using [[inpt TEXT]] and [[inpt PHON]] within a speech string4. So again, given just how limited the robot’s vocabulary is (none of these trains are adding station stops with any regularity), someone should have been able to review what the robot says and suggest overrides. Half the time, this robot gets so confused that she sounds like GLaDOS in her death throes.

Which brings me to my final point – the robots simply aren’t human. Even when they are pronouncing things well, they can be hard to understand. On the flipside, the DC Metro robot sounds realistic enough that she creeps me the hell out, which I can only assume is the auditory equivalent of the uncanny valley. I suppose a synthesized voice could have neutrality as an advantage – a grumpy human is probably more off-putting than a lifeless machine. But again, this is solvable with human recordings. I cannot imagine any robot being more comforting than a reasonably calm human.

Generally speaking, we’re reducing the workforce more and more, replacing the workforce with automation, machinery. It’s a necessary progression, though I’m not sure we’re prepared to deal with the unemployment consequences. It’s easy to imagine speech synthesis as a readily available extension of this concept – is talking a necessary job? But human speech is seemingly being replaced in instances where the speaking does not actually replace a human’s job and/or a human recording would easily suffice. In some instances, speaking being replaced is a mere component of another job being replaced – take self-checkout machines (which tend to be human recordings despite the fact that grocery store inventories are far more volatile than train routes, hence ‘place your… object… in the bag’). But I feel like I’m seeing more and more instances that seem to use speech synthesis which is demonstrably worse than a human voice, and seemingly serves no purpose (presumably beyond lining someone’s pockets).


Arbitrary precision

I use dc as my day-to-day computer calculator because it’s RPN, it’s there, and I know its language. But as I was watching this Numberphile video from a few years back on the 1/998001 phenomenon, I remembered that dc is capable of arbitrary precision. I don’t think about this much, because it’s rare that I actually need 3000 digits of precision, generally I just sort of start my session with 4k so I know I’m getting something past the point. It was fun, however, to run dc -e '3000k1 998001/p' and see a full cycle of its repeating decimal instead of something like 1.002003004×10⁻⁶.


Separating cd and pushd

While much of this post applies to bash, I am a zsh user and this was written from that standpoint.

One piece of advice that I’ve seen a lot in discussions on really tricking out one’s UNIX (&c.) shell is either setting an alias from cd to pushd or turning on a shell option that accomplishes this1. Sometimes the plan includes other aliases or functions to move around on the directory stack, and the general sell is that now you have something akin to back/forward buttons in a web browser. This all seems to be based on the false premise that pushd is better than cd, when the reality is that they simply serve different purposes. I think that taking cd out of the picture and throwing everything onto the directory stack greatly reduces the stack’s usefulness. So this strategy simultaneously restricts the user to one paradigm and then makes that paradigm worse.
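
For reference, the sort of setup that advice amounts to looks something like this in zsh (AUTO_PUSHD being the relevant option; the alias is the blunter variant):

setopt AUTO_PUSHD    # every cd quietly becomes a pushd
alias cd='pushd'     # the heavy-handed version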

It’s worth starting from the beginning here. cd changes directories and that’s about it. You start here, tell it to go there, now you’re there. pushd does the same thing, but instead of just launching the previous directory into the ether, it pushes it onto a last in, first out directory stack. pushd is helped by two other commands – popd to pop a directory from the stack, and dirs to view the stack.

% mkdir foo bar baz
% for i in ./*; pushd $i && pushd
% dirs -v
0       ~/test
1       ~/test/foo
2       ~/test/baz
3       ~/test/bar

dirs is a useful enough command, but its -v option makes its output considerably better. The leading number is where a given entry is on the stack; that number is what gets globbed with a tilde. ~0 is always the present working directory ($PWD). You’ll see in my little snippet above that in addition to pushding my way into the three directories, I also call pushd on its own, with no arguments. This basically just instructs pushd to flip between ~0 and ~1:

% pushd; dirs -v
0       ~/test/foo
1       ~/test
2       ~/test/baz
3       ~/test/bar

This is very handy when working between two directories, and one reason why I think having a deliberate and curated directory stack is far more useful than every directory you’ve ever cded into. The other big reason is the tilde glob:

% touch ~3/xyzzy
% find .. -name xyzzy
../bar/xyzzy

So the directory stack allows you to do two important things: easily jump between predetermined directories, and easily access predetermined directories. This feels much more like a bookmark situation than a history situation. And while zsh (and presumably bash) has other tricks up its sleeves that let users make easy use of predetermined directories, the directory stack does this very well in a temporary, ad hoc fashion. cd actually gives us one level of history as well, via the variable $OLDPWD, which is set whenever $PWD changes. One can do cd - to jump back to $OLDPWD.
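
A quick sketch of that one level of history (cd - announces where it lands, and drops you back in ~/test/foo):

% cd ~/test/foo
% cd /tmp
% cd -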

zsh has one more trick up its sleeve when it comes to the directory stack. Using the tilde notation, we can easily change into directories from our stack. But since this is basically just a glob, the shell just evaluates it and says ‘okay, we’re going here now’:

% pushd ~1; dirs -v
0       ~/test
1       ~/test/foo
2       ~/test
3       ~/test/baz
4       ~/test/bar

Doing this can create a lot of redundant entries on the stack, and then we start to get back to the cluttered stack problem that started this whole thing. But the cd and pushd builtins in zsh know another special sort of notation, plus and minus. Plus counts up from zero (and therefore lines up with the numbers used in tilde notation and displayed using dirs -v), whereas minus counts backward from the bottom of the stack. Using this notation with either cd or pushd (it is a feature of these builtins and not a true glob) essentially pops the selected item off of the stack before evaluating it.

% cd +3; dirs -v
0       ~/test/baz
1       ~/test/foo
2       ~/test
3       ~/test/bar
% pushd -0; dirs -v
0       ~/test/bar
1       ~/test/baz
2       ~/test/foo
3       ~/test

…and this pretty much brings the stack concept full circle, and hopefully hits home why it makes far more sense to curate this stack versus automatically populating it whenever you change directories.


Tagging in Acrobat from the keyboard

December 2023 update (via a prior May 2020 update): at some point within the Acrobat DC lifecycle, the behavior of F6 has changed.

Since much of my work revolves around §508 compliance, I spend a lot of time restructuring tags in Acrobat. Unfortunately you can’t just handwrite out these tags à la HTML, you have to physically manipulate a tree structure. The Tags panel is very conducive to mouse use, and because Adobe is Adobe, not very conducive to keyboard use. Many important tasks are missing readily available keyboard shortcuts, and it has taken me a while to be able to largely ditch the mouse1 and instead use the keyboard to very quickly restructure the tags on very long, very poorly tagged documents.

A couple of notes – this assumes a Windows machine, and one with a Menu key2. While I generally prefer working on MacOS, I’m stuck with Windows at work, so these are my efficiencies. Windows may actually have the leg up here, since the Acrobat keyboard support is so poor, and MacOS does not have a Menu key equivalent. Additionally, this applies to Acrobat XI, it may or may not apply to current DC versions. Finally, all of this information is discoverable, but I haven’t really seen a primer laid out on it. If nothing else perhaps it will help a future version of myself who forgets all of this.


Extracting JPEGs from PDFs

I’m not really making a series of ‘things your hex editor is good for’, I swear, but one more use case that comes up frequently enough in my day-to-day life is extracting JPEGs from PDF files. This can be scripted simply enough, but I find doing these things manually from time to time to be a valuable learning experience.

PDF is a heck of a file format, but we really only need to know a few things right now. PDFs are made up of objects, and some of these objects (JPEGs included) are stream objects. Stream objects always have some relevant data stored in a thing called a dictionary, and this includes two bits of data we need to get our JPEG: the Filter tells the viewer how to interpret the stream, and the Length tells us how long, in bytes, the data is. The filter for JPEGs is ‘DCTDecode’, so we can open up a PDF in a hex editor (I’ll be using bvi again) and search for this string to find a JPEG. Before we do, one final thing we should know is that streams begin immediately after an End Of Line (EOL) marker following the word ‘stream’. That EOL is either the two bytes 0D 0A (CR LF) or a lone 0A (LF) – never a bare CR – and in this file it’s the two-byte CR LF.

/DCTDecodeEnter

00002E80  6C 74 65 72 2F 44 43 54 44 65 63 6F 64 65 2F 48 lter/DCTDecode/H
00002E90  65 69 67 68 74 20 31 31 39 2F 4C 65 6E 67 74 68 eight 119/Length
00002EA0  20 35 35 33 33 2F 4E 61 6D 65 2F 58 2F 53 75 62  5533/Name/X/Sub
00002EB0  74 79 70 65 2F 49 6D 61 67 65 2F 54 79 70 65 2F type/Image/Type/
00002EC0  58 4F 62 6A 65 63 74 2F 57 69 64 74 68 20 31 32 XObject/Width 12
00002ED0  31 3E 3E 73 74 72 65 61 6D 0D 0A FF D8 FF EE 00 1>>stream.......
/DCTDecode                                     00002E85  \104 0x44  68 'D'

This finds the next ‘DCTDecode’ stream object and puts us on that leading ’D’, byte offset 2E85 (decimal 11909) in this instance. Glancing ahead a bit, we can see that the Length is 5533 bytes. If we then search for ‘stream’, (/streamEnter), we’ll be placed at byte offset 2ED3 (decimal 11987). The word ‘stream’ is 6 bytes, and we need to add an additional 2 bytes for the EOL. This means our JPEG data starts at byte offset 11995 and is 5533 bytes long.

How, then, to extract this data? It may not be everyone’s favorite tool, but dd fits the bill perfectly. It allows us to input a file, start at a byte offset, go to a byte offset, and output the resulting chunk of file – just what we want. Assuming our file is ‘test.pdf,’ we can output ‘test.jpg’ like…

dd bs=1 skip=11995 count=5533 if=test.pdf of=test.jpg

bs=1 sets our block size to 1 byte (which is important, dd is largely used for volume-level operations where blocks are larger). skip skips ahead however many bytes, essentially the initial offset. count tells it how many bytes to read. if and of are input and output files respectively. dd doesn’t follow normal Unix flag conventions, there are no prefixing dashes and those equal signs are quite atypical, and dd is quite powerful, so it’s always worth reading the manpage.
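
A couple of quick sanity checks on the result don’t hurt – file should report JPEG image data, and the first bytes should be the FF D8 FF start-of-image marker we saw right after ‘stream’ above (assuming xxd is around):

file test.jpg
xxd -l 4 test.jpg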


Fireworks, and its bloated PNGs

The motive behind my last post on binary editors was a rather peculiar PNG I was asked to post as part of my job. It was a banner, 580x260px, and it was 14MB. Now this should have set off alarms from higher up the web chain: 580(px)×260(px)×3(bytes of R,G,B) is only about 450KB of raw pixel data, and even the unnecessary alpha channel only pushes that to roughly 600KB. A very basic knowledge of how information is stored is always helpful – file sizes that defy this sort of back-of-the-envelope estimate usually come down to compression or encryption, neither of which applies here.

So what happened? Adobe Fireworks, which is completely unsurprising. Fireworks was a Macromedia project, and while Macromedia obviously shaped a large chunk of the web in their heyday and also into the Adobe years, Macromedia projects were shit. The very definition of hack. I’m certain Adobe learned all of their terrible nonstandard UI habits from their Macromedia acquisition. I never thought Fireworks was terrible, but nor did I find it impressive. It was often used for wireframing websites, which feels wrong to me in every single way. But, to get ahead of myself, it had one other miserable trick: saving layers and other extended data in PNG files. Theoretically, this is great: layer support in an easily-read compressed lossless free image format. Awesome! But in Adobe’s reality, it’s terrible: not even any current Adobe software can recover these layers.

As mentioned in my previous post, PNGs are pretty easy to parse; data comes in chunks: the first 4 bytes state the chunk length, then 4 bytes of (ASCII) chunk type descriptor, then the chunk data, then a 4 byte CRC checksum. Some chunks are necessary: IHDR is the header that states the file’s pixel dimensions, bit depth, color type, interlace method, etc; IDATs contain the actual image data. Other chunks are described by the format but not necessary. Finally, there are unreserved chunks that anyone can use, and that this or that reader can theoretically read. The chunk type is 4 ASCII bytes, and is both a (potentially) clever descriptor of the chunk, and 4 bits worth of information – each character’s case means something.

So my image should have had a few things: the PNG magic number, 25 bytes worth of IHDR chunk explaining itself, ~460KB worth of IDAT chunk, and then an IEND chunk to seal the deal. Those were definitely present in my terrible file. Additionally, there were a handful of proprietary chunks including several hundred mkBT chunks. I don’t know much about these chunks aside from the fact that they start with a FACECAFE magic number and then 72 bytes of… something… And I also know there are a lot of them. Some cursory googling shows that nobody else really knows what to make of them either, so I’m not sure I’m going to put more effort into it. Suffice it to say: Fireworks, by default, saves layers in PNG files, and this made a ~460KB file 14MB.

So why do the files even work? Well, remember I mentioned that case in a chunk descriptor is important – it provides 4 bits of information. Note the difference between the utterly important IDAT and the utterly bullshit mkBT. From left to right, lower vs. uppercase means: ancillary/critical; private/public; (reserved for future use, these should all be uppercase for now); safe/unsafe to copy. The important thing to glean here is that mkBT is ancillary — not critical. We do not need it to render an image.

So, when we load our 14MB PNG in a web browser, the browser sees the IHDR, the IDATs, and renders an image. It ignores all the garbage it can’t understand. This is perfectly valid PNG, because all of those extra chunks are ancillary, the browser can ignore them. PNG requires a valid IDAT, so Fireworks must put the flat image there. So, it works, but we’re still stuck with a humongous PNG. Most image editors will discard all of this stuff because it’s self-described as unsafe-to-copy (meaning any editing of the file may render this chunk useless). But for reference, pngcrush will eliminate these ancillary chunks by default, and optipng will with the -strip all flag.
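
In practical terms, inspecting and then de-bloating one of these files can look something like the following – pngcheck, if it’s installed, will happily list every chunk including the mkBT pile, and the file name is just a stand-in:

pngcheck -v bloated_banner.png
optipng -strip all bloated_banner.png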

Takeaways? Know enough about raw data to see that your files are unreasonably large, I suppose. Or automatically assume that a 14MB file on your homepage is unreasonably large, regardless. Maybe that takeaway is just ‘perform a cursory glance at your filesizes’. Maybe it’s flatten your images in Fireworks before exporting them to PNG. Maybe instead of just performing lazy exports, web folks should be putting the time in to optimizing the crap out of any assets that are being widely downloaded. Maybe I’m off track now, so my final thought is just — if it looks wrong, save your audience some frustration and attempt to figure out why.


Binaries and hex editors

Talking about certain files as ‘binaries’ is a funny thing. All files are ultimately binary, after all, it’s just a matter of whether or not a file is encoded as text. Even in the world of text, an editor or viewer needs to know how the text is encoded, what bytes map to what characters. Is a file ASCII, UTF-8, PostScript? Once we know something is text or not text, it’s still likely to be made to the standards of a specific format, lest it be nothing but plain text. Markdown, HTML, even PDF1 are human-readable text to an extent, with rules about how their content is interpreted. A human as well as a web browser knows that a <p> starts a paragraph, and this paragraph continues until a matching </p> is found.

If we open a binary in a text editor, we’ll see a lot of familiar characters, where data happens to coincide with printable ASCII. We’ll also see a lot of gibberish, and in fact some of the characters may cause a terminal to behave erratically. Opening a binary in a hex editor makes a little more sense of it, but still leaves a lot to be answered. In one column, we’ll see a lot of hexadecimal values; in another we’ll see the same sort of gibberish we would have seen in our text editor. In some sort of status display, we’ll also generally see a few more bits of information – what byte we’re on, its hex value, its decimal value, etc. Why would we ever want to do this? Well, among other things, binary file formats have rules as well, and if we know these rules, we can inspect and navigate them much like an HTML file. Take this piece of a PNG file, as it would appear in bvi (my hex editor of choice).

00000000  89 50 4E 47 0D 0A 1A 0A 00 00 00 0D 49 48 44 52 .PNG........IHDR
00000010  00 00 02 44 00 00 01 04 08 06 00 00 00 C9 50 2B ...D..........P+
00000020  AB 00 00 00 04 73 42 49 54 08 08 08 08 7C 08 64 .....sBIT....|.d
00000030  88 00 00 00 09 70 48 59 73 00 00 0B 12 00 00 0B .....pHYs.......
00000040  12 01 D2 DD 7E FC 00 00 00 1C 74 45 58 74 53 6F ....~.....tEXtSo
"ban_ln_560_NLW.png" 14498451 bytes    00000000 10001001 \211 0x89 137 NUL

Playlist for a new turntable

After a bit of a hiatus, I’m re-entering the analog domain, though hopefully with a more manageable, pared-down collection exclusive to albums that I would actually want to sit through in their entireties. With that said, I know my first session or two will consist of sporadic playlists, selected tracks that either mean a lot to me, make me happy for whatever reason, or challenge an audio system. Ten ideas for the inaugural spin-up:


Template updates

I’ve made a handful of updates to my Hugo template over the past few weeks. Hopefully the next step is to confirm compatibility with Hugo v0.19, genericize the template, and then release it — it’s at a point where it does enough of the things I want it to do right that I’m proud of it.

A while back (I suppose when I started working on wo), I added a new sort of Hugo taxonomy – series. Very tightly connected posts, hopefully only a large handful of posts to a given series, and any given post can only potentially belong to a single series (most won’t). This taxonomic declaration does a few things – namely adding the series name to the beginning of the post’s title wherever it shows up, and adding information about the series to the bottom of the post. Since I added this taxonomy, that information was just a link to the series page, listing its posts. Now, it’s an ordered list of all the items in the series, each one a link except for the current post. I’m pretty happy with it.

A minor update is that dates in post bylines now link to the respective month in the ‘archive’. Now, Hugo doesn’t really support pagination by date. There are a few hacked-together solutions out there for an archive page, and I won’t pretend mine is great. Really it’s always a full list of posts that hides everything but the month identified by the fragment specified in the URL. Some other changes related to this came out of the third recent change, regarding my drawers.

My panties drawers are how I refer to the little slide in/out menus in my top-level menu — categories, archive, and et cetera. The problem with these things is that I need to render each one in full to grab its unfolded height, but I obviously can’t do this in view of the user or else all sorts of nasty flickering will go on. I was hiding the entire body in CSS, doing these calculations, and then showing the body afterward via JS, falling back with a <noscript> to reveal anything. There’s no way to do this that isn’t a hack, but a hack that involves not having a page unless JS or <noscript> works is a pretty weak hack, and I’ve been meaning to bite the bullet and replace it with some CSS that shoves it way off screen (I already do this with my ‘Skip to main content’ link, hit tab).

In doing this, I tidied up a few classes and such that should make other drawers simpler — I had considered making the series list below the post a drawer, which I don’t think I’m still considering, but it’s nice that the drawer solution is more portable. Back to the archive page, I used to be able to do all of that while the body was already invisible and the drawers were doing their thing. I can’t do that now, so I’m using the same basic hack just on that page. It’s not ideal, but it’ll be fine until (hint, hint, dev team) Hugo can generate real date-based archive pages.


Semaphore and sips redux

In this article, I do sem -j +5, which allows five more jobs than there are CPU cores to run at a time. -j can be used with integers, percentages, and +/– values, such that one can say -j +0 to run exactly as many jobs as there are cores, -j -1 to run one fewer than that, etc.

I was going to simply edit my last post, but this might warrant its own, as it’s really more about sem and parallel than it is sips. parallel’s manpage describes it as ‘a shell tool for executing jobs in parallel using one or more computers’. It’s kind of a better version of xargs, and it is super powerful. The manpage starts early with a recommendation to watch a series of tutorials on YouTube and continues on to example after example after example. It’s intense.

In my previous post, I suggested using sem for easy parallel execution of sips conversions. sem is really just an alias for parallel --semaphore, described by its manpage (yes, it gets its own manpage) as a ‘counting semaphore [that] simply waits for a semaphore to become available and then runs the command given’. It’s a convenient and fairly accessible way to parallelize tasks. Backing up for a second, it does have its own manpage, which focuses on some of the specifics about how it queues things up, how it waits to execute tasks, etc. It does this using toilet metaphors, which is a whole other conversation, but for the most part it’s fairly clear, and it’s what I tend to reference when I’m figuring something out using sem.

In my last post (and in years of converting things this way), I had to decide between automating the cleanup/rm process or parallelizing the sips calls. The problem is, if you do this:

for i in ./*.tif; sem -j +5 sips -s format png "$i" --out "${i/.tif/.png}" && rm "$i"

…the parallelism gets all thrown off. sem executes, queues up sips, presumably exits 0, and then rm destroys the file before sem even gets the chance to spawn sips. None of the files exist, and sips has nothing to convert. The sem manpage doesn't really address chaining commands in this manner – presumably it would be too difficult to fit into a toilet metaphor. But it occurred to me that I might come up with the answer if I just looked through enough of the examples in the parallel manpage (worth noting that a lot of the parallel syntax is specific to not being run in semaphore mode). The solution is facepalmingly simple: wrap the && in double quotes:

for i in ./*.tif; sem -j +5 sips -s format png "$i" --out "${i/.tif/.png}" "&&" rm "$i"

…which works a charm. We could take this even further and feed the PNGs directly into optipng:

for i in ./*.tif; sem -j +5 sips -s format png "$i" --out "${i/.tif/.png}" "&&" rm "$i" "&&" optipng "${i/.tif/.png}"

…or potentially adding optipng to the sem queue instead:

for i in ./*.tif; sem -j +5 sips -s format png "$i" --out "${i/.tif/.png}" "&&" rm "$i" "&&" sem -j +5 optipng "${i/.tif/.png}"

…I’m really not sure which is better (and I don’t think time will help me since sem technically exits pretty quickly).


Darwin image conversion via sips

I use Lightroom for all of my photo ‘development’ and library management needs. Generally speaking, it is great software. Despite being horribly nonstandard (that is, using nonnative widgets), it is the only example of good UI/UX that I’ve seen out of Adobe in… at least a decade. I’ll be perfectly honest right now: I hate Adobe with a passion otherwise entirely unknown to me. About 85-90% of my professional life is spent in Acrobat Pro, which gets substantially worse every major release. I would guess that around 40% of my be-creative-just-to-keep-my-head-screwed-on time is spent in various pieces of CC (which, subscription model is just one more fuck-you, Adobe). But Lightroom has always been special. I beta tested the first release, and even then I knew… this was the rare excuse for violating so many native UI conventions. This made sense.

Okay, from that rant we come up with: thumbs-down to Adobe, but thumbs-up to Lightroom. But there’s one thing that Lightroom has never opted to solve, despite so many cries, and that is PNG export. Especially with so many photographers (myself included) using flickr, which reencodes TIFFs to JPEGs, but leaves the equally lossless PNG files alone, it is ridiculous that the Lightroom team refuses to incorporate a PNG export plugin. Just one more ’RE: stop making garbage’ memo that I need to forward to the clowns at Adobe.

All of this to just come to my one-liner solution for Mac users… sips is the CLI/Darwin equivalent of the image conversion software that MacOS uses for conversion in Preview, etc. The manpage is available online, conveniently. But my use is very simple – make a bunch of stupid TIFFs into PNGs.

for i in ./*.tif ; sips -s format png "$i" --out "${i/tif/png}" && rm "$i"

…is the basic line that I use on a directory full of TIFFs output from Lightroom. Note that this is zsh, and I’m not 100% positive that the variable substitution is valid bash. Lightroom seemingly outputs some gross TIFFs, and sips throws up an error for every file, but still exits 0, and spits out a valid PNG. sips does not do parallelism, so a better way to handle this may be (using semaphore):

for i in ./*.tif; sem -j +5 sips -s format png "$i" --out "${i/tif/png}"

…and then cleaning up the TIFFs afterward (rm ./*.tif). Either way. There’s probably a way to do both using flocks or some such, but I haven’t put much time into that race condition.
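
For what it's worth, bash does support the same style of substitution; it's zsh's short-form for loop that won't fly. A rough bash-friendly equivalent (a sketch, using the suffix-anchored ${i%.tif}.png so a name like artifact.tif can't get mangled by the looser tif→png replacement) would be:

for i in ./*.tif; do
  sips -s format png "$i" --out "${i%.tif}.png" && rm "$i"
done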

At the end of the day, there are plenty of image conversion packages out there (ImageMagick comes to mind), but if you’re on MacOS/Darwin… why not use the builtins if they function? And sips does, in a clean and simple way. While it certainly isn’t a portable solution, it’s worth knowing about for anyone who does image work on a Mac and feels comfortable in the CLI.


Of lynx and curl

I use zsh, and some aspects of this article may be zsh specific, particularly the substitution trick. bash has similar ways to achieve these goals, but I won’t be going into anything bash-specific here.

At work, I was recently tasked with archiving several thousand records from a soon-to-be-mercifully-destroyed Lotus Notes database. Why they didn't simply ask the DBA to do this is beyond me (just kidding, it almost certainly has to do with my time being less valuable, results be damned). No matter, though, as the puzzle was a welcome one, as was the opportunity to exercise my Unix (well, cygwin in this case) chops a bit. The exercise became a simple one once I realized the database had a web server available to me, and that copies of the individual record web views would suffice. A simple pairing of lynx and curl easily got me what I needed, and I realized that I use these two in tandem quite often. Here's the breakdown:

There are two basic steps to this process: use lynx to generate a list of links, and use curl to download them. There are other means of doing this, particularly when multiple depths need to be spidered. I like the control and safety afforded to me by this two-step process, however, so for situations where it works, it tends to be my go-to. To start, lynx --dump 'http://brhfl.com' will print out a clean, human-readable version of my homepage, with a list of all the links at the bottom, formatted like

1. http://brhfl.com/#content
2. http://brhfl.com/
3. http://brhfl.com/./about/
4. http://brhfl.com/./categories/
5. http://brhfl.com/./post/

…and so on (note to self: those ./ URLs function fine, and web browsers seem to transparently ignore them, but… maybe fix that?). For our purposes, we don't want the formatted page, nor do we want the reference numbers. awk helps us here: lynx --dump 'http://brhfl.com' | awk '/http/{print $2}' looks for lines containing 'http', and only prints the second element in the line (the default field separator being whitespace).

http://brhfl.com/#content
http://brhfl.com/
http://brhfl.com/./about/
http://brhfl.com/./categories/
http://brhfl.com/./post/

…et cetera. For my purposes, I was able to single out only the links to records in my database by matching a second pattern. If we only wanted to return links to my ‘categories’ pages, we could do lynx --dump 'http://brhfl.com' | awk '/http/&&/categories/{print $2}', using a boolean AND to match both patterns.

http://brhfl.com/./categories/
http://brhfl.com/./categories/apple/
http://brhfl.com/./categories/board-games/
http://brhfl.com/./categories/calculator/
http://brhfl.com/./categories/card-games/

…and so on. Belaboring this any further would be more a primer on awk than anything, but it is necessary1 for turning lynx --dump into a viable list of URLs. While this seems like a clumsy first step, it’s part of the reason I like this two-step approach: my list of URLs is a very real thing that can be reviewed, modified, filtered, &c. before curl ever downloads a byte. All of the above examples print to stdout, so something more like lynx --dump 'http://brhfl.com' | awk '/http/&&/categories/{print $2}' >> categories-urls would (appending to and not clobbering) store my URLs in a file. Then it’s on to curl. for i in $(< categories-urls); curl -O "$i" worked just fine2 for my database capture, but our example here would be less than ideal because of the pretty URLs. curl will, in fact, return

curl: Remote file name has no length!

…and stop right there. This is because the -O option simplifies things by saving the local copy of the file with the remote file's name. If we want to (or need to) name the files ourselves, we use the lowercase -o filename instead. While this would be a great place to learn more about awk3, we can actually cheat a bit here and let the shell help us. zsh has a tail modifier, :t, built in, used much like basename to get the tail end of a path. Since URLs are just paths, we can do the same thing here. To test this, we can for i in $(< categories-urls); echo ${i:t}.html and get

categories.html
apple.html
board-games.html
calculator.html
card-games.html

…blah, blah, blah. This seems to work, so all we need to do is plug it in to our curl command, for i in $(< categories-urls); (curl -o "${i:t}".html "$i"; sleep 2). I added the two seconds of sleep when I did my db crawl so that I wasn’t hammering the aging server. I doubt it would have made a difference so long as I wasn’t making all of these requests in parallel, but I had other things to work on while it did its thing anyway.

One more reason I like this approach to grabbing URLs – as we’re pulling things, we can very easily sort out the failed requests using curl -f, which returns a nonzero exit status upon failure. We can use this in tandem with the shell’s boolean OR to build a new list of URLs that have failed: (i="http://brhfl.com/fail"; curl -fo "${i:t}".html "$i" || echo "$i" >> failed-category-urls) gives us…

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (22) The requested URL returned error: 404 Not Found
~% < fail.html
zsh: no such file or directory: fail.html
zsh: exit 1      < fail.html
~% < failed-category-urls
http://brhfl.com/fail

…which we can then run through curl again, if we’d like, to get the resulting status codes of these URLs: for i in $(< failed-category-urls); (printf "$i", >> failed-category-status-codes.csv; curl -o /dev/null --location --silent --head --write-out '%{http_code}\n' "$i" >> failed-category-status-codes.csv)4. < failed-category-status-codes.csv in this case gives us

http://brhfl.com/fail,404

…which we’re free to do what we want with. Which, in this case, is probably nothing. But it’s a good one-liner anyway.


Game-in-a-post: Rolling Market

Here it is! Game-in-a-post of 'Rolling Market', which I'm still pretty happy with, truth be told. Rules are here. This JS implementation has one bug that I'm aware of, which lets you cheat during the endgame, so just… don't do that until I fix it.

A few additional tips/thoughts on the game:


Rolling Market introduction & rules

I did end up implementing this as a game-in-a-post.

I’ve been testing out a little solo game design lately that’s somewhat inspired by Sackson’s Solitaire Dice. Inspired in the sense that I was looking to come up with something that has that same lack of Yahtzee-esque luck mitigation, instead relying on intuition, probabilities, and risk management. Much like Sackson’s game, this can backfire, and the dice can utterly screw you. But even when that happens, there’s enough going on to where the game is still enjoyable (in my humble opinion).

Full rules are listed after the jump, and will repeat some of this brief overview, but here’s the idea: players have four companies they can buy and sell stock shares from. On every turn, the player rolls dice which influence the current value of a given company’s shares. Buying and selling also affects values. On some turns, the market is closed, but when it is open the player can buy shares of one company and/or sell shares of a different company. The player goes through 12 of these buying/selling turns, and scores based on their final pile of cash.

I have a JS game-in-a-post implementation nearly ready to go, so that will appear shortly, along with a PDF of these rules, and potentially a few more strategic thoughts. Until then…


Lenovo Yoga Book

The Lenovo Yoga Book is a bizarre little machine. It’s unbelievably thin, and hosts a wee 10” display, netbookish almost. Unfold it, like a laptop, to reveal the secret to its thinness – a blank slate where the keyboard should be. Powering the device on, the ‘halo keyboard’, as it is known, glows. It is what it sounds like – a glowing, flat, touch-sensitive keyboard. It is the price paid for a 9.6mm thick, 1.5lb device that still manages a laptop form factor.

Now, I value a good keyboard. My primary keyboard uses Alps switches; my primary laptop is a Lenovo X220 that types rather well. This is neither of those. This is not a good keyboard, it's a flat slab. But I've spent enough time typing on tablets that a strategy of muscle memory, combined with occasional glances down to reorient my fingers, means I can type reasonably quickly and with reasonable accuracy.

My use case is pretty simple – I spend nearly four hours every day on a train, but I still don’t like carrying a lot with me. I had been taking a Microsoft keyboard which I could sort of, kind of rig up with my iPhone and type nicely into Buffer Editor with. It works well when seated at a desk, but the unadjustable angle and possibility of the phone just flopping out made it suboptimal for the train. Physical keyboards take up space – the Yoga Book manages to be thinner than just that Microsoft keyboard, though obviously larger in the other two dimensions. But it opens like a compact, it can unfold to any angle (including all the way back to just be a really thick tablet), and just has a much more lappable presence. Also, since it’s running Windows 10, I get a real operating system and filesystem (by this I mean WSL or cygwin), I get real USB (OTG), and a solid software selection. An Android version is also available (for $50 less), but even if I didn’t hate Android, that just seems like a bad plan for as much as they have customized it. It does have an autocorrect feature, however, which the Windows version lacks.

I’ll continue using this thing on the train and finding out its compromises. It is obviously compromised. It’s doing strange new things, and it’s really positioned more for people who want to use the digitizer. Which, yeah, the whole keyboard area can be dimmed and turned into a pressure-sensitive digitizer, either with a typical stylus nib or with an actual ballpoint pen on a paper tablet set atop the surface. I guess I should mess with that more. But for my use-case, so far so good. It’s no Matias Alps keyboard, but it’s very typable, it’s very light, and very compact. I wrote this entire post on the Yoga Book, and didn’t feel like I was suffering1. It’s like a tablet where I can type without obstructing half of my screen.


Game-in-a-post: Sid Sackson's Solitaire dice

Sid Sackson, in his book A Gamut of Games1, describes a solitaire dice game that I have grown very fond of. Fond enough that I decided to whip up a little JS version of it, found below. I won't go into the rules here; others have done that well enough. I will just put a couple of thoughts out there on why I find the game so compelling. Dice are obviously the epitome of randomness; roll-and-move mechanics are universally bemoaned for this. Games that try not to be awful while still using dice generally do so with some sort of randomness mitigation technique. Yahtzee is an easy example – a player gets three rolls to a turn to mitigate luck. Sackson's Solitaire Dice does not offer any mitigation, and in fact it can be brutal. You could theoretically lose 400 points on your last turn. And while this sounds objectively terrible, it really isn't. Occasionally you will have a game where the dice just torture you, but for the most part the game forces you to think about probabilities, and attempt to control pacing. If the game is going really well, you may want to try to blow one of your scratch piles up toward the game-ending 8 marks. Similarly, if things aren't going great, it would probably be in your best interest to take poorly-scoring pairs in order to scratch dice evenly. In my plays thus far, I'd say that a meh game lands somewhere in the ⁻100 to 100 point range, with a successful game being 350+.


Solo play: One Deck Dungeon

On to my number one solo game at the moment: Chris Cieslik's One Deck Dungeon, released by Asmadi Games. This game takes all the uncertainty and the brutality of a roguelike, and packs it into a small deck of cards and a pile of dice. One's character has attributes which indicate the number and color of dice that can be rolled in resolving a conflict. A section of dungeon, so to speak, is entered by spending time (discarding cards). This fills the player's row up to four face-down dungeon cards, which can then be encountered on a turn by flipping them up. One can either attempt to defeat the card or leave it for later, wasting time and available space to fill with new dungeon cards. Defeating these cards involves rolling the dice allowed by the player's character attributes and placing them to beat numbers on the card. These can be color-specific or not, and spaces can either require the placement of one die or allow multiple dice. Unfilled slots are what ultimately cause damage – to either health, time, or both. Assuming the player lives, resolving a conflict allows them three choices – the card can be taken as an item (additional dice and/or health), a skill or potion, or experience.


Solo play: Deep Space D-6

My (probably, maybe) second most-played solo game currently is one of dice, cards, and worker placement. Designed by Tony Go and released by Tau Leader (in very small print runs, it seems, though one can print-and-play), Deep Space D-6 packs a lot of game into a very small package. One of several tiny boards with illustrated ships, explanations of their features, countdown tracks for hull and shield health, and placement areas for worker dice sits in front of the player. To the right of the player's ship, tiny threat cards are added every turn, and positioned to indicate their health. The player rolls their crew dice for the turn and assigns them to various attack and defense roles. Worker actions are taken, then a die is rolled to see which, if any, enemies activate and attack on that turn.


Solo play: Friday

Friday is third up in my list of top solo games, and routinely comes up whenever solo board/card games are being discussed. Designed by Friedemann Friese and released by Rio Grande, Friday is a card game in which the player takes on the role of the titular character, helping Robinson Crusoe survive his time on the island. The theme is not one that has been beaten to the ground, and while the game by no means drips with theme, it makes sense and the art supporting it is goofy fun. Even if the theme does nothing for you, the gameplay shines so much that it’s easy to get lost in it.

For as small as the game is, as few cards as there are, Friday is just loaded with decisions. Essentially, every turn involves pulling a hazard from the hazard deck (actually, decision number one: you pull two and choose one to take on), then pulling a series of counterattacks from your fighting deck. You get so many fighting cards for free, and then pay life points to keep drawing. Additionally, if you opt to simply lose the fight instead, you will lose life points. While the primary goal here is to obviously not run out of life, you're also essentially building your deck for the future – defeated hazards become fighting cards, and a lost fight gives you the opportunity to get rid of poor fighting cards that you may have drawn that round. When your fighting deck runs out, you get to shuffle it anew, including the new cards you've gotten from defeating hazards, but you also end up throwing one aging card in with negative effects every time this happens.


Solo play: Onirim

Beyond Dungeon Roll, this list is a real struggle to rank. Do I push games with creative mechanics higher, or games that ultimately speak to me more? I'm inclined to go with the former, in this case, only because the things that make my no. 3 work as a solo game make for such a tight, decision-addled game. But Onirim (Shadi Torbey, Z-Man Games), my no. 4, may very well get more play for its relative lightness, small footprint, and fascinating artwork/theming.


Solo play: Intro and Dungeon Roll

As someone who is far more into board (and card) than video games, as someone who spends a lot of time alone, and as someone who has immense insomnia (compounded by the ridiculous anxiety brought on by recent politics), the volume and quality of modern board/card games continue to impress me. While I know I'm not alone in seeking these out, I do think they get pushed to the side a bit, and I've been meaning to get a few write-ups out there about the games I've been enjoying as of late. Initially, I'm going to present this series as my current top five, but in the future I'll be tacking others on in no particular order. With that…

First up is Dungeon Roll from Tasty Minstrel Games and designer Chris Darden. Its appeal is pretty clear: it’s cheap, has fun dice, and comes in a tiny cardboard chest that you pull treasure from during the game. They bill it as playing 1-4, but multiplayer is essentially just every individual playing a solo game while others watch. All of the encounters are based on dice, with no automatic rerolls (some character abilities grant rerolls), so it is very much a game of randomness and of pushing one’s luck. There are a handful of expansions out there (all bundled together in a cheap package at CoolStuff Inc., conveniently), which are largely just new player characters, though the winter one also adds some interesting new treasure.


Brains: Japanese Garden

Brains: Japanischer Garten (Japanese Garden) is a single-player game – a brain-teaser, if we're being honest – from Reiner Knizia. With Knizia's name on it, it'd be easy to assume that it's actually some sort of solo game, but really it's a simple set of puzzles based on this theme of a Japanese garden. If anything, it reminds me of those ThinkFun puzzles with the chunky plastic pieces, except this uses seven cardboard tiles and a stack of paper containing the puzzles. Alternatively, there is a mobile app, which I think I would recommend over the physical edition as a simple value proposition. I'm assuming, since 'Brains' is so much more prominent than the 'Japanese Garden' title, that more of these puzzlers are coming down the line from Knizia.

Ruleswise, the puzzle itself is quite simple. The theme is utterly unimportant (though it means we get the lovely art, so that's something). It's a well-designed puzzle despite not being particularly unique or groundbreaking. What fascinates me is that the whole idea of tile-laying with placement rules as a solo puzzle is actually rather clever, and it opens up some thought processes on how one could make puzzles of, say, Carcassonne. I mull from time to time over ways to implement solo Carcassonne play, particularly using the limited tile set of the Demo-spiel. One way that I've played has been to use one meeple, and allow her to move a tile per turn in lieu of placement. Moving off of a feature scores it as is, and a meeple is placed on the feature on her side to indicate that the feature has been scored and cannot be scored again. This may or may not warrant its own post (likely not, as I think I just covered everything), but my point is that I'm always looking for a way to throw down tiles by myself. This puzzle-like concept in Brains: Japanese Garden certainly has potential with other tilesets.


This is not crazy

Content warning: ableism.

A lot of inexplicable, or at least difficult-to-comprehend, things have been happening in the world lately. My social circles are full of folks in various states of befuddlement, and the news does not cease to surprise and disgust. Things are so far beyond reason, so infuriating, so mystifying that it can be hard to expound upon the resultant emotions and articulate them cleanly. Often, things feel nothing short of crazy, like the world has lost all sanity.

There’s a problem with this. When I was younger, it was trendy to describe the inexplicable and foolish as (apologies) retarded. Even without judging rationality or logic, the word was a simple stand-in for basic denigration. Some time around high school, it would become clear to us what we were actually saying, what the implications were. Then we had a decision to make – do we live with those implications out of some lazy dedication to our extant lexicon, or do we grow and find better and less actively harmful ways to express ourselves? Can we find the empathy to recognize how dehumanizing it is to use our differences as terms of denigration.


Position Descriptions

While job-hunting as of late, I’ve been seeing a lot of poorly-written lists of IT skill sets in position descriptions, such as:

Familiarity with HTML, CSS, JavaScript, JQuery, Salsa, Drupal, Adobe, or similar databases and technologies

None of these things are databases. Sure, Drupal and Salsa rely on databases, but databases they are not. Adobe is, of course, a company and not a product. People still seem to conflate the company with Acrobat or the (ISO-standard) PDF file format, but even that is entirely vague. This has been a problem I've seen with contract work, and is a problem I continue to see in position descriptions – a lack of input from people who understand the technical needs of a position leads to an application process that is confusing at best and more commonly inaccurate and misleading. I'm finding it increasingly difficult to gauge my qualifications for positions that theoretically involve work within my realm of expertise, as well as to tailor CVs for these applications.


The Lazy He

While searching through the rule book for ‘Raptor’ (an admittedly great game by everyone’s favorite Brunos) for a bit of errata this weekend, I came across a grossly irritating footnote early on:

Note: throughout this document male pronouns are used for the sake of simplicity and readability. It should be clearly understood that in each instance, we mean to include female players as well.

This is bullshit on so many levels. The most inclusive choice would, of course, be to use the singular they. The most sorry-gaming-is-horribly-patriarchal choice would be to use female pronouns throughout1. And while I hate enforcing the gender binary, the most ‘readable’ choice would be to use male pronouns for Player A and female for Player B or vice versa. ‘Raptor’ is exclusively a two-player game, so all of the included examples rightfully include two players. Switching between two people with a shared set of pronouns is far less readable than unique pronouns for either. Ambiguity is always a potential pitfall of pronoun usage, easily avoidable when you’re dealing with two purely hypothetical humans.

Failing all of the above, however, I’d almost prefer they just used male pronouns throughout and cut out the nonsensical and condescending footnote. The footnote reads as though ‘some woman complained that we did this once and rather than adapt we’re just going to make up a bunch of excuses.’ Whose ‘simplicity’ is this for the sake of? The reader’s? Are we to assume that they are so caught up in the masculine gamer trope that a single female pronoun would cause their brains to shut down, eternally paralyzing them, rulebook still in hand? Or is it for the sake of simplicity on the part of the writers and editors, so lazy and consumed by male hegemony that they can’t even bother to do a find-and-replace on their masculine-as-default pronouns? The message put forward by the footnote is a brutally honest display of privilege: ‘we know we should be more inclusive, but we think it’s simpler not to.’ The footnote does not read as a statement of inclusiveness, rather an outright denial of it and a mockery of the very idea.


Yamzod

I’ve been thinking a lot lately about new game concepts and designs using existing bits – dice, playing cards, checkers, &c. One such recurring thought is expanding on the Yahtzee sort of theme – solitaire dice-chucking games with poker-like scoring. It’s easy to pan Yahtzee as a garbage game, but as a quick solo activity it isn’t terrible. It isn’t great, but nor is it terrible. Over the past couple of weeks, I’ve been playing around with an idea for a dice game that offers a tiny bit extra in the decision-making category. I call it Yamzod, which is a name I came up with while on the brink of sleep, and have stuck with because it makes me laugh.


Super Mario Run

So, Super Mario Run has been out for half a day or so now, and I’m sure more meaningful opinions than mine are bouncing around all over the internet. It’s just too juicy to not set my own uninspired thoughts in pink internet stone, however. I’ve always been a Nintendo fan. These days I really don’t game much at all. The occasional weird indie, a nostalgic retro re-release here and there, but mostly if I’m gaming on a screen it’s either a roguelike on the computer or a board game adaptation or point-and-click (point-and-tap?) adventure on the phone. The last consoles I’ve owned were the original Wii and DS Lite. All this to say, having a Nintendo side-scroller on my phone is ridiculously exciting. The game is a ton of fun, well worth the cost of entry, and generally feels very much like a Super Mario Bros. game. A few thoughts:


Karuba: Solo

Karuba is, essentially, a solo game that two to four players play simultaneously. One player could theoretically play for a high score, but the randomness of the draw makes that a little problematic – and playing to beat a high score outside of the arcade isn’t terribly fun anyway. But as I was playing with the pieces and the tiles and thinking of a simple notation for my aforementioned hypothetical correspondence game1, I accidentally came up with what seems to be a decent solo variant for this game.


Karuba

I recently received a copy of the 2016 Spiel des Jahres nominee, 'Karuba'. It's a tile-laying game of sorts, albeit less free-form and less interactive than Carcassonne. It's really about solving a puzzle more efficiently than everyone else at the table. I don't really aim to explain or review the game, however, as plenty of such explanations and reviews are already out there. There is one interesting angle that I would like to touch on, though.

I have always been a fan of correspondence chess1, the idea that the game is open, all information is public, and moves are simple enough to easily notate, pass back and forth, and replicate. It was immediately obvious to me that Karuba has great potential as a correspondence game. Due to the lack of interaction, it will certainly be nothing like chess. But as a casual puzzler, all the pieces are there for correspondence. All information in the game is public. All players start with the same board configuration. All players place the same tile on a given turn. Because of the way this mechanic works, tiles have unique numbers and are just as easily described in correspondence. To ease initial setup, rows and columns of the board already have labels. The four explorers every player controls are all unique colors, and can therefore easily be described in notation.

I don’t expect a huge community to explode around correspondence Karuba, but this possible means of play immediately struck me as such a perfect fit. Kind of the icing on an already rather impressive cake.


Automatic excitement: video as default

By now we all know that Twitter has killed off Vine, or is slowly killing off Vine, or has killed off part of Vine and will kill off the rest of it in the future. My initial reaction to this was pure joy, for I have long hated Vine. That enthusiasm was tempered by promptly hearing from source after source how Vine was a huge creative outlet for oft-ignored black youth. That my experiences never crossed paths with this version of Vine is purely a failing on my part, plain and simple. A wake-up call to attempt to be less complacent and lazy in my media consumption.

If I were left to my own personal experiences with Vine, however, I would still be delighted with the news. This is because, put simply, I have never watched a Vine and felt like I got anything out of the video that I did not get out of the screencap. This is not a problem unique to Vine by any means; it seems that increasingly we live in a world where video is considered the most captivating medium, and thus all content should be video. Rather than letting a creative work dictate its own medium and leaving the excitement factor as the responsibility of the creator, video checks off that box from the get-go. I guess if audiences are largely eating it up, then that's true enough and fair enough. But I wonder just how many people clicked Vine after Vine and felt that they weren't getting an appropriate return on their time investment.


No escape

Assuming the leaked images of the new MacBook Pro are to be believed (and there seems to be no reason to think otherwise), tomorrow will bring MacBook Pros with a tiny touch strip display above the number row instead of a set of physical keys. It looks like a more practical version of the much-maligned Lenovo Carbon X1 concept. Yet, like the X1, it’s part of a bigger change that makes for an overall worse keyboard experience – in the case of the leaked MBP images, the physical keys themselves are moving to the slim-but-unloved keys from the MacBook.


Game-in-a-post: Dim Corridor

I’ve been playing with some ideas lately for positionally-based toggle puzzles, similar in concept to the classic ‘Lights Out’ game, though I’ve primarily been thinking about one-dimensional puzzles. My first attempt was far too simplistic and easily beaten, though I have carried some of the ideas along into this little puzzler, which I call the ‘Dim Corridor’. This is a work in progress, but as of this version, the rules are as follows:


Pizza dreams

I’ve had pizza on my mind a lot lately. Cravings. Running through my mental Rolodex, imagining the sauce from this local joint, the crust from that one. Promising myself a slice or two or three as a treat to myself at the end of the week. I don’t even like pizza all that much. It’s fine, I certainly won’t complain about being offered one, but I’ve never understood the obsession over it. A well-executed pie can be a wonderful thing, but no more so than any other food. Pizza, certainly, is not the stuff dreams are made of.


Making multiple directories with mkdir -p

I often have to create a handful of directories under one root directory. mkdir can take multiple arguments, of course, so one can do mkdir -p foo/bar foo/baz or mkdir foo !#:1/bar !#:1/baz (the latter, of course, would make more sense given a root directory with a longer name than ‘foo’). But a little trick that I feel slips past a lot of people is to use .. directory traversal to knock out a bunch of directories all in one pass. Since -p just makes whatever it needs to, and doesn’t care about whether or not any part of the directory you’re passing exists, mkdir -p foo/bar/../baz works to create foo/bar and foo/baz. This works for more complex structures as well, such as…

% mkdir -p top/mid-1/../mid-2/bottom-2/../../mid-3/bottom-3
% tree
.
└── top
    ├── mid-1
    ├── mid-2
    │   └── bottom-2
    └── mid-3
        └── bottom-3
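
Worth mentioning as an alternative: plain old brace expansion gets to the same place without the .. gymnastics, since the shell expands the braces into separate arguments before mkdir ever sees them:

% mkdir -p top/{mid-1,mid-2/bottom-2,mid-3/bottom-3}

…which produces the same tree as above.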

Game-in-a-post: Yz (or, on post-specific JS/CSS requirements in Hugo)


Finding the greatest Yahtzee score

A little over a year after writing this post, I decided to make a code golf challenge of it. Not too many people submitted answers, but there was a wild one in MS-DOS Batch, as well as some interesting tricks I hadn’t thought of.

I’ve been meaning to implement a way to incorporate style or script requirements into my posts using Hugo frontmatter. I’m not there yet, and before I get there, I need a test post that requires one or the other. I thought a little toy that lets one play a turn (three rolls) of Yahtzee, and then returns the highest possible score of the roll would be a fun and simple demonstration. Aside from small straights wigging me out a little (and I still have a nagging feeling this can be optimized), it was indeed simple1 to come up with an optimal score search. Fortunately, for a single-turn score, we don’t need to worry about a few scoring rules: bonus (joker) Yahtzees, the upper row bonus, nor chance. We could implement chance easily, but it really doesn’t make sense for single-turn scoring.


wo: 9-byte modulo

While working on a code golf challenge in dc today, my mind turned to if and how I could solve the same challenge with the current instruction set of wo. We left off in the middle of wo5, with six slots to go and the promise of division. Internally, I was filling a couple of slots with additional stack manipulation instructions, with three slots still open. The goal isn’t necessarily to be able to do everything (or, perhaps, much of anything) efficiently at this point, but to leave that bit-shaving option on the table, letting the programmer do the most with the least. Since there’s no fractional input, division was a no-brainer: now you can enter fractions, you can waste a little time making a reciprocal for multiplication, and you can (of course) divide.


Telephoto

As is to be expected whenever Apple announces something new, a lot of shit is being flung around in the tech sphere over the iPhone 7 and 7 Plus. One particularly fun nugget is that the secondary camera lens on the 7 Plus’s dual-camera system is not, despite what Apple says, a telephoto lens. This is based on a few mixed-up notions from people who know just enough about photography to think they know a lot: namely that ‘telephoto’ is synonymous with ‘long’, and that 56mm (135 equivalence, when will this die) is ‘normal’ (and therefore not ‘long’ ‘telephoto’). 50mm was standardized on the 135 format because Oskar Barnack said so, essentially. Different versions of the story say that the 50 was based on a known cine lens design, or that glass to make the 50 was readily available, or that it was necessary to fill the new large image circle, but whatever the original motivating factor was – the original Leica I set a new standard with the 135 film format, and a new standard somewhat-longer-than-normal focal length with its Elmar 50/3.5. The idea behind normalcy is matching our eyesight. This, conveniently, tends to match up with the length of the diagonal of the imaging plane; √(24²+36²)≅43mm. 50 is already noticeably longer than this, and 56 even more so. There’s a reason 55-60mm lenses were popular as more portrait-capable ‘normals’.


wo: Registers

Thoughts on wo have slowed down slightly, largely because I think I've gotten a lot of easy answers out of the way. I haven't yet addressed no. 4 – REGISTER. Delegating as much responsibility to internal registers as possible, and allowing a wo programmer to modify these registers is paramount to opening up the extent of what can be done inside of a limited instruction set. As few system registers as possible will be reset at every turn – this is likely necessary for something like 'stack depth,' as it is a valuable value for a user to be able to fetch, but for a user to change it would be both unpredictable and of limited value.


Swiftpoint GT

I previously discussed my overall dissatisfaction with mice these days. I bit the $150 bullet, and decided to try the Swiftpoint GT. A lot of people love this mouse. It has 4.2 stars on Amazon. It nearly octupled its Kickstarter funding goal. It's natural, it's ergonomic, it's gestural. In theory. In practice, it feels to me like it's been built of outmoded tech and interaction paradigms in order to fabricate a simulacrum of a hypermodern interaction experience. In practice, it's still a clicky, line-at-a-time scroll wheel. Sure, you can sort-of-kind-of mimic touchpad scrolling by rolling it on your table, but doing so won't feel any less clicky nor awkward. In practice, the gestural 'stylus' is just a tiny upside-down joystick that only works when you find just the right place to tilt it to, that you have to maintain just the right pressure to hold, and that you ultimately still fail to get a smooth navigation experience out of. Neither of these interactions was comfortable, and they both proved better at marring my (admittedly delicately finished) table than anything else.


wo: Stacks

At some point the question of stacks in wo needs to come up. How many, how do they work, how do we manipulate them. As mentioned in the first post on the matter, I've been operating under the premise of a stack that does not shrink (unless cleared). It takes the theoretically-infinite stack of RPL, and combines it with the repeating bottom of RPN. Thus (pseudocode, obvs):

> 1 2 PRINT_STACK
1
2

> 3 PRINT_STACK
1
2
3

> DROP PRINT_STACK
1
1
2

wo: Implementing the interpreter

I’ve been thinking a lot about instructions for wo, how to eke the most out of low-byte-count programs. And, while I haven’t touched on it here yet (soon!), I’ve been thinking a lot about what system registers could prove useful as well. But a big part of me wonders how I’ll implement the thing, if I ever do. That so many esoteric/golfing languages are just layers on top of another scripting language makes me a bit grumpy, and that isn’t a path I’d like to take with wo. My plan would be to write the reference interpreter in go, but I wouldn’t discount ANSI C either. That decision is trivial, though.


wo: A truth machine in wo3

In wo3, we get two extra instructions for a total of four. OISCs can be Turing complete, so four instructions should be enough to give us some brunt. At this point, some amount of math is probably a good idea. Instruction three, then, is ADD, which does what you think it does. Since we’re capable of entering negatives, we can pretty much get to any number we want at this point, albeit terribly inefficiently.

Branching would be very convenient at this point as well, so a simple SKIPZERO instruction gets the no. 4 position; it pops a value from the stack and if that value is zero, it skips until past the next instruction. All odd (input) words are skipped until an even (instruction) word is encountered, at which point that is skipped and the next word is interpreted. Skip if zero is a standard simple conditional branch, but I’ll likely change this to skip if (skip register), so a user can theoretically set the branch condition via a register (with a command certain to be available in wo4).


style

Punctuation and diacriticals

Diaeresis
Should always be used when applicable. Coöperate, not cooperate or co-operate.

Math

Negative numbers
Should be represented with U+207B, Superscript Minus (⁻); i.e. Negative thirty minus three: ⁻30−3.
Historically represented with a superscripted minus sign (&minus;); i.e. Negative thirty minus three: −30−3. Deprecated due to inferior semantics and NVDA compatibility. More thought needs to go into this; NVDA doesn't read a single 'minus' character correctly.

wo: Numbers

A draft of this post began:

A few thoughts on numbers in wo. Numeric input is pushed to the stack via LSB==1. LSB==0 signifies an instruction, so in either case, effective word size is 1 bit less than total word size. The stack itself won't care what sort of numbers it's holding; input is the tricky part. wo3 would allow you to enter the integers 0-3, or ⁻2 through 1, depending on unsigned vs. signed input; wo4, 0-7 or ⁻4 through 3. These aren't big numbers to work with, so getting instructions in early to allow for more complicated input would be welcome. The easiest solution here is simply getting the WORDSIZE instruction out of the way as early as possible, ideally in two bits. With signed input, this would only get a user up to a 4-bit word size with 6 bits of input. Unsigned, a user could get up to 6-bit word size in the same 6-bit input, which raises the question of whether allowing a single word input of 1 is more important than more immediate access to slightly higher integers. Signed input would reduce the need for a sign-changing instruction at the low end of the set, and there are little hacks that could be put in place – for example, if 1s' complement was used instead of 2s', 0 could act as a stand-in for the next highest byte size or the like. Thus 6 bits of input could immediately kick word size up to a full byte.

As outlined in this post killing off WORDSIZE, I have changed my thoughts on that matter. I do think input will (by default) be 1s’ complement, with 0 receiving special treatment, what I am internally referring to as the Magic Number or MNum. MNum would allow some instructions to serve multiple purposes, which could theoretically put us on the path to larger integers at lower word sizes. Additionally, MNum can act as a stand-in for a predetermined value for other instructions, again opening up some integer growth options.


wo: Word size

A few things have occurred to me regarding the WORDSIZE instruction in wo. Namely, that changing word size in the middle of a program could be entirely unpredictable – my first thought would be to break every word out into an array before interpreting (aside from the WORDSIZE instruction itself), but that would be impossible – the stack would necessarily have to be computed before WORDSIZE could execute. Even before this, I was wondering if I should really prioritize it so low in the instruction set, or if it just turns into an abuse of a reduced instruction set claim.


wo: Introduction

I’ve been thinking a lot about languages designed for code golf and programming puzzles/challenges. They run no risk of extinction — seems a new utterly inexplicable dialect is always freshly hatching. And here I am, about to fan these flames with my own bonkers idea. I call it wo1. Here’s the thing – there will be several posts in this series, as I figure out how I would ideally like this thing to work. There will be a lot of theoretical code. But it is very possible that I will never actually bother writing a reference implementation for it. I’d like to think that I will, but I’m really not a great programmer and the fun for me is largely in the thought experiment, the puzzle.

There are a few key points that I think will make this language unique. First and foremost, it's based on binary instructions with variable word sizes. The interpreter should set the initial word size based on invocation: wo4 would start with 4 bit words, wo7 with 7 bit words, etc. Smaller word sizes mean more instructions per byte, but fewer instructions are available (and smaller numbers can be pushed to the stack). It is postfix, stack-based, influenced by RPN, RPL, Forth, and dc. The main stack is initialized at zero, and can never be empty. I am very seriously considering a stack that grows but does not shrink (aside from clears) – drop operations would duplicate the bottom of the stack like RPN. Word size would be resettable via a relatively low (preferably 3-bit) instruction. A separate stack would handle certain scratch activities like string manipulation. Least Significant Bit (LSB) would determine stack input vs. instruction execution. Nearly every instruction would be rewritable (don't need GOTO? plop something else in there before dropping your word size and taking care of business). That pretty much sums up the core philosophy/principles; the big remaining question is implementation — primarily, instruction set.


Your Brand New Linux Install (A letter to my future self)

Dear Future Self,

If you’re reading this, hopefully it’s because you’re about to embark, once again, on the journey known as ‘installing Linux anew’. You’re predictable, you’re not particularly adventurous, so you’re almost certainly opting for Ubuntu out of some delusion that its consumer-friendly nature will make the install quick and seamless. But you only want the machine for writing/coding on, so you’re going to ruin your chances of a simple install by opting for Minimal. You’ve done it several times before, so it couldn’t be that bad, right? No, it won’t be, but I can tell you… I wish I had a me to guide me through it last time or the time before.

The actual install is simple enough. You may want to research the bundles of packages offered up during install time, I still haven’t figured those out. Also, it’s probably worth reviewing the encryption options, and the current state of the various filesystem choices, though you’re just going to choose whatever unstable thing has the most toys to play with anyway. Just let it do its thing, prepare a martini, relax.


Of mice and meh

August 21, 2016, I tweet: “why are there no good mice.” It’s a reality I’ve been battling for months now: seemingly nobody else wants the same things out of a mouse that I do. And as I’ve tried to replace my beloved Magic Mouse with something just a little more, I’ve come up with a pretty clear list of what these things are:

The original Magic Mouse hits all of these points, and perhaps I should just stock up on a few of these before they disappear, without searching for something more.


shapeshifter_chess

(this is dummy content because damned blackfriday won't start a code block if it's the first thing)

skqbnr
pppppp
. . . . . .
PPPPPP
RNBQKS

Where s/S is the Shapeshifter

Shifting
On any given move of the Shapeshifter, the Shapeshifter shifts
A player can sac a turn to shift the Shapeshifter?
The Shapeshifter (in theory does, or in meatspace would) hold a D6

Sided
prbnbn/PRBNBN

Alternative possibilities include a more 'evil' shapeshifter that could potentially shift to the other side?

Alphasmart Neo2

I’m writing this from Tuckahoe State Park, the first leg of a multinight car camping trip which I (for practice, I suppose) opted to treat like a backpacking trip. My goal was to fit everything, aside from food (handled as a group), that I needed in or on a 30L pack for the one evening here followed by three in the sand at Assateague Island. Good way to try out a few things for when I ordinarily need to pack food but fewer clothes. So, why am I wasting precious pack space on a writing device?


ASCII table

Control Characters

HEX  DEC  CHAR  Description
0    0    NUL   Null
1    1    SOH   Start of Heading
2    2    STX   Start of Text
3    3    ETX   End of Text
4    4    EOT   End of Transmission
5    5    ENQ   Enquiry
6    6    ACK   Acknowledgment
7    7    BEL   Bell
8    8    BS    Backspace
9    9    HT    Horizontal Tab
A    10   LF    Line Feed
B    11   VT    Vertical Tab
C    12   FF    Form Feed
D    13   CR    Carriage Return
E    14   SO    Shift Out; XON
F    15   SI    Shift In; XOFF
10   16   DLE   Data Line Escape
11   17   DC1   Device Control 1; XON
12   18   DC2   Device Control 2
13   19   DC3   Device Control 3; XOFF
14   20   DC4   Device Control 4
15   21   NAK   Negative Acknowledgment
16   22   SYN   Synchronous Idle
17   23   ETB   End of Transmit Block
18   24   CAN   Cancel
19   25   EM    End of Medium
1A   26   SUB   Substitute
1B   27   ESC   Escape
1C   28   FS    File Separator
1D   29   GS    Group Separator
1E   30   RS    Record Separator
1F   31   US    Unit Separator

Printing Characters 32-63

HEX  DEC  CHAR  Description
20   32         Space
21   33   !

Collatz sequences in dc

Inspired by this recent post and code snippet by John Gruber, I decided to have a go at outputting Collatz conjecture sequences in dc. Running

    dc -e '?[r3*1+]so[d2~1=opd1<x]dsxx'

from the command line will take a single value as input, and run the sequence. ? takes input, puts it on the stack. [r3*1+]so is our macro for 3n+1, what we do when we encounter an odd value. [d2~1=opd1<x]dsxx is our main macro, as well as our macro for handling even numbers (n/2). First it duplicates the value on the stack (last iteration) and divides by 2 to return the quotient and the remainder. 1=o uses the remainder to test for oddness and run o if true. Here I do something rather lazy for the sake of concise code: that initial duplication was solely for the sake of the oddness test. o swaps the top two stack values to bring the original value back up, then multiplies by three and adds one, leaving this value on the stack. Evenness will never run this; conveniently, the test for evenness already leaves the outcome for evenness (the quotient) on the stack (what luck!), with the initial duplication sitting below it. Either way, the bottom of the stack is going to fill up with a lot of garbage, which should be inconsequential unless our sequences become absurdly long. At this point, the top of our stack is the result of the current step, so we print it, duplicate it, and run x if it's greater than 1. Finally, we duplicate our main macro, save it as x, and then execute the copy we left on the stack.
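
Since ? reads a line from standard input, it's easy enough to feed non-interactively; for example,

    echo 6 | dc -e '?[r3*1+]so[d2~1=opd1<x]dsxx'

…should print 3, 10, 5, 16, 8, 4, 2, 1, one value per line. Note that the starting value itself never gets printed; the first thing out is the first step of the sequence.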


dc as a code golf language

Code golf is a quirky little game – complete the challenge at hand in your preferred programming language in as few bytes as possible. Whole languages exist just for this purpose, with single-character commands and little regard for whitespace. dc always seemed like a plausible language for these exercises, and I recently attempted a few tasks which I would not ordinarily use dc for, notably 99 Bottles of Beer.
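
As a trivial taste of the genre (and nowhere near as involved as the bottles of beer), counting to 100 comes out to a 17-byte dc program:

    dc -e '1[p1+d100!<x]dsxx'

…the usual push-a-macro, duplicate, store, execute idiom; !< is the negated comparison, so the macro keeps calling itself until the counter passes 100.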


FENipulator

FENipulator is an online chess board program. More specifically, it is a very simple interface for viewing and changing FEN data, primarily for correspondence chess. Having been unimpressed with online correspondence chess systems, I decided to make my own. Here’s why it works for me: it’s simple, it’s lightweight, it takes in FEN via URL (HTTP-GET), it generates new encoded URLs to pass to the next player, and it only ever deals with one half-turn worth of information at once.
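
For anyone unfamiliar with it, FEN packs an entire board state into a single line of text; the standard starting position, for example, is

rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1

…which is exactly the sort of thing that's trivial to stuff into a URL and pass back and forth.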

A night of Pokémon Go

Tonight marked my first night spent actively hunting Pokémon; it was, in fact, the first time I’d ever bothered to catch one outside. Finding new critters in new places, seeking out pokéstops with lures attached, comparing notes with a friend… this was all fun but predictable. I guess I just also haven’t been on an evening walk in a while1, because the whole meatspace community aspect of the thing was new, and very unlike what I expected.

Walking through our main town park, which was technically closed since it was after dark, was fascinating. Where there were pokéstops, there were just masses of people huddled together… enough where it seemed rather unlikely to me that all these people actually knew each other… little social gatherings were forming in the middle of the night just out of the desire to catch virtual monsters. And while the basic idea here wasn’t surprising, the sheer scale of the groups, the sheer number of people glued to their phones and alerting others to the presence of a Goldeen really wasn’t something I had anticipated.


Licensing

I’ve long believed in open licensing, in sharing, in letting content be free. Warhol, Negativland, Paul’s Boutique… Culture jamming produces new culture, it moves culture forward. I’ve always tried to open up my own work to the world to use as it sees fit. Creative Commons has been a valuable resource for years when it comes to readymade1 licenses for open cultural texts.


dc

This is an old post from an old blog; assets may be missing, links may be broken, and my opinions may differ considerably by this point…

Even though I generally have an HP or two handy, the venerable command-line RPN calculator, dc, is probably the calculator that I use most often. The manpage is pretty clear on its operation, albeit in a very manpagish way. While the manpage makes for a nice reference, I've not seen a friendly, readable primer available on the program before. This is likely because there aren't really people lining up to use dc, but there are a couple of compelling reasons to get to know it. First, it's essentially everywhere — strictly speaking, bc is the calculator POSIX actually requires, but dc is its older sibling and has shipped alongside Unix since the beginning. This is important if you're going to be stuck doing calculations on an unknown system. It's also important if you're already comfortable in a postfix environment, as the selection of such calculators can be limiting.
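
A taste, for anyone who has never poked at it:

% dc -e '2 3 + p'
5
% dc -e '16 o 255 p'
FF

Postfix all the way down: operands go onto the stack, operators consume them, and p prints whatever is on top.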


dvtm and the mouse

This is an old post from an old blog; assets may be missing, links may be broken, and my opinions may differ considerably by this point… Notably, in February 2021, a reader sent in a comment informing me that a PR was submitted to support mouse wheel scrolling in DVTM, and that they've patched it into their local environment with success. I haven't (and won't, as I've relied on job control for multitasking for the past… ten years or so) tested this, so YMMV, but… it's an update!

I've gotten quite a few hits from people searching for things like 'dvtm pass mouse.' I don't have much to say on the matter, except that this is the one thing that really bugs me about dvtm. As I have mentioned previously, given the choice between screen, tmux, and dvtm, I like dvtm the best. It is certainly the simplest, and has the smallest footprint. It automatically configures spaces, and makes notions of simultasking as simple as double-clicking. I would say that it brings the best of the GUI experience to terminal multiplexing, while still keeping true to the command line.


SCorCh, Part Two

This is an old post from an old blog; assets may be missing, links may be broken, and my opinions may differ considerably by this point… Notably, this became FENipulator.

A couple of minor developments on the scorch front. First, I have a rough flowchart whipped up. There are likely flaws in this chart, but I wanted to quickly get my thoughts diagrammed out; it's available as a PDF, or as Graphviz/DOT source.


SCorCh - Simple Correspondence Chess

This is an old post from an old blog; assets may be missing, links may be broken, and my opinions may differ considerably by this point… Notably, this became FENipulator.

I've thought a lot in the past about correspondence chess, and the current state of such. There are a number of online solutions, most of them not so great. Twitter-based ChessTweets is my current favorite solution (anyone who wants a fight, @brhfl), although the constant barrage of DMs from the system does get somewhat irritating. I use the somewhat clumsy XBoard with a variety of engines for the sake of analysis, but using it for correspondence is far from ideal. This task seems like the perfect opportunity to demonstrate that less is more, and create a CLI interface which acts as a somewhat dumb client for displaying a board and interpreting moves. While I will probably never actually code this, I hope that perhaps I will some day, and I will call it scorch, for Simple Correspondence Chess.


Smartenter

This is an old post from an old blog; assets may be missing, links may be broken, and my opinions may differ considerably by this point…

I've been busy with a number of (unfortunately more important) things, and haven't really put much effort into z in the past few days. I did put a bit more thought into smartenter, however, and have come to the conclusion that I must put it on hold for now. As much as I would like to polish it up, one thing stands in my way, and it would make z a better investment of my time: command history.

Job Control

This is an old post from an old blog; assets may be missing, links may be broken, and my opinions may differ considerably by this point. Notably, I use zsh as my primary shell these days, which has out-of-the-box support for what I set out to accomplish here (setopt auto_continue).

If you haven't already, it's probably a good idea to read my previous post. The plan, of course, was to work on z, my shell script to assist me with multitasking and Unix job control. I am working on z, but while I was spending a lot of time thinking about z, I was spending just as much time implementing something additional. Two additional things, to be exact. It's worth mentioning again that my shell of choice is fish1, and therefore everything that follows is written for fish.
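
Since everything here leans on job control, a quick sketch of the underlying mechanics may help. This is sh-family syntax (fish's equivalents are close, but not identical), with the zsh option from the note above tacked on at the end:

    $ make                  # kick off something long-running in the foreground…
    ^Z                      # …suspend it; the shell reports it as a stopped job
    $ jobs                  # list jobs, e.g. "[1]+ Stopped  make"
    $ bg %1                 # let job 1 keep running in the background
    $ fg %1                 # or pull it back into the foreground

    # In zsh, per the note above: stopped jobs removed from the job table
    # with `disown` are automatically sent SIGCONT, so they resume on their own.
    setopt auto_continue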

Multitasking vs. Simultasking

This is an old post from an old blog; assets may be missing, links may be broken, and my opinions may differ considerably by this point…

I have long tossed around a handful of terminal multiplexers - screen, tmux, and dvtm mainly - as a means to the end of multitasking on the terminal. They all have their problems - primarily when it comes to scrolling (awkward modal methods) and mouse reporting1. Somewhere, during my recent week of no internet, I came to several intertwined realizations…

At the core of this web of realizations is the notion that (from a user's standpoint), there is 'multitasking,' and there is 'simultasking.' Simultasking is my own term, and it wouldn't surprise me if others have discussed this paradigm before, and named it better than I ever could. Regardless, that's the term we have to deal with for the rest of this article. Simultasking is a subset of multitasking - it's the branch of multitasking that requires simultaneous (or near-simultaneous) interaction with the multiple tasks at hand. Multitasking is playing a little bit of Zork after you finish writing a paragraph, simultasking is playing around in interactive Ruby as you read a Ruby tutorial. Multitasking is keeping the fish documentation on hand as you write a fish script (just in case), simultasking is holding a conversation in naim while fiddling with fermat.


ep

This is an old post from an old blog; assets may be missing, links may be broken, and my opinions may differ considerably by this point…

I spend a good deal of time inside a terminal. Text-based apps are powerful, when you know what you're doing, and fast (also when you know what you're doing, I suppose). If an equivalent Cocoa or X11 GUI tool offers me little advantage, I'm probably going to stick to either a CLI- or TUI-based piece of software. One of the more important, taken-for-granted pieces of the command line environment is the pager. Typically, one would use something like more or less for their pager. For a while, I used w3m as my pager, as well as my text-based web browser. Then Snow Leopard came out, everything from MacPorts got totally jacked up, and much of my system was left broken as well. Parts of it I've fixed, other parts I've been lazy about. For that reason, or perhaps not, I have transitioned to ELinks as my text-based web browser. Today, after recent discussions with a friend regarding w3m and ELinks, I had a thought: why not use ELinks as my pager as well?
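
Swapping a pager is mostly just an environment variable. A minimal sketch with w3m, since that's the setup I'd been running; dropping in elinks is exactly the experiment this post is pondering, and assumes the ELinks build at hand will read a document from standard input:

    export PAGER='w3m'   # anything that honors $PAGER (man, git, etc.) now pages through w3m
    man dc               # rendered by w3m instead of less
    ls -l / | "$PAGER"   # or pipe to it directly
    # export PAGER='elinks'   # the swap in question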


dc Syntax for Vim

This is an old post from an old blog; assets may be missing, links may be broken, and my opinions may differ considerably by this point…

I use dc as my primary calculator for day-to-day needs. I use other calculators as well, but I try to largely stick to dc for two reasons: I was raised on postfix (an HP 41CX, to be exact), and I'm pretty much guaranteed to find dc on any *nix machine I happen to come across. Recently, however, I've been expanding my horizons, experimenting with dc as a programming environment, something safe and comfortable to use as a mental exercise. All of that is another post for another day, though; right now I want to discuss writing a dc syntax definition for vim.


On multitasking

This is an old post from an old blog; assets may be missing, links may be broken, and my opinions may differ considerably by this point…

There's a lot of whining and complaining from people who have and who haven't used the iPad, much of it revolving around the lack of multitasking. It seems likely that, as with the iPhone's copy & paste, multitasking is a feature that is in the works, and when it comes it will fit the device better than we ever could have hoped. This will, of course, require much thought about the reasons we multitask, and the paradigm shift that accompanies the rest of the iPad platform.


Back to my MacVim

This is an old post from an old blog; assets may be missing, links may be broken, and my opinions may differ considerably by this point…

A while back, I tested and reviewed a slew of text editors for the Mac. Since then, I had primarily stuck with two editors - CotEditor for the stripped-down HTML of these blog posts, and TextWrangler for more serious coding. In my past, I'd made a point of learning Vi(m) for a handful of reasons:


About the pink place that I call brhfl dot com

This page needs revisiting, so I guess I’ll revisit it. When I (bri) revamped my personal site for the third time, I wanted it to be less scattered, more focused. I wanted it to be direct. Given that the top categories as of this revision are gaming, math, and stack language… I’m not sure how focused I really am. But, disparate as they may be, I think those things do sum up my interests well, and I think that if you like vintage video games, modern board and roleplaying games, clumsy stumbles through math problems, and dreamt-up stack language ideas, you might just like this pink place.

My Flickr photos (external)


PPCG (external)


yum yum you are a bread (external)