File Managers

In 2023-12, I got a nag email from Jam Software, passive-aggressively letting me know that I was using TreeSize on more machines than I was licensed for. Perhaps they meant my old laptop, from which I can’t delicense because said computer is an unbootable mess of corrupted data. But honestly, it’s hard to say what they meant; the email was as self-contradictory as it was condescending. TreeSize is great software, but a practice like this makes Jam a company I can’t recommend, and I’ve removed the links to their site accordingly.
Microsoft’s File (or Windows) Explorer1 has never been good2. Early Windows felt like a GUI for the sake of a GUI, competition for the Macintosh. The Mac’s Finder was itself quite simple, and never really grew into anything for power users. This makes sense for Apple, but Microsoft started off with a weak simulacrum of Finder and never really got around to embracing its power users. Before Windows was ever released, Peter Norton was selling an incredibly powerful file manager for DOS, Norton Commander3.

GemiNaut's clever solution to a peculiar problem

I’m a big proponent of the web being leaner and more text-based. In light of how strongly the web has veered in the opposite direction, it’s probably a radical position to say that I think less of the web should have any visual styling attached to it at all. More text channels where a reader can maintain a consistent, custom reading experience feels like a better solution than a bunch of disparate-looking sites all with their own color schemes, custom fonts, and massive headers1.

I often use text-based web browsers like Lynx and WebbIE. I also tend to follow a lot of people who maintain very webring-esque sites, even more so than mine. But there is more internet than just the HTTP-based World Wide Web. Gopher is, or was, depending on your outlook, an alternative protocol to HTTP. It was more focused on documents that kind of reference one another in a more bidirectional way, and because it never really got off the ground in the way HTTP did, it also never really got the CSS treatment; it’s really just about structured text. Despite most of the information about Gopher on the web being historical retrospectives, enthusiasts of a similar mind to me are keeping the protocol alive2.

Then there’s Gemini3. Gemini is a sort of modern take on Gopher. For nerds like me, it’s wonderful that such an effort exists. If you’re interested in the unstyled side of the internet, Gemini is worth looking into. I do think it needs a bit of love, however, as curl maintainer Daniel Stenberg points out how lacking the implementation details are. I disagree with a few of Daniel’s points; Gemini falls into a lot of ‘trappings’ that HTTP escaped because HTTP development steered toward mass appeal. Gemini is for a small web, one for weirdos like me. The specification and implementation issues seem very real, however, and while I don’t think Gemini can or should get WWW-level acceptance, an RSS-sized niche would be nice, at least, and software sort of needs to know how to work for that to happen.
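
Part of what makes Gemini appealing to weirdos like me is how little protocol there is: a request is just the URL plus CRLF over TLS on port 1965, and the response is a single status line followed by the body. Here’s a rough sketch in Python (it skips certificate validation entirely, since most Gemini servers use self-signed certificates under a trust-on-first-use model; the URL is only an example):

    import socket, ssl

    def gemini_fetch(url: str) -> str:
        """Minimal Gemini request: send the URL + CRLF over TLS to port 1965,
        read everything back, and split off the one-line response header."""
        host = url.split("//", 1)[1].split("/", 1)[0]
        ctx = ssl.create_default_context()
        ctx.check_hostname = False       # most Gemini servers are self-signed (TOFU);
        ctx.verify_mode = ssl.CERT_NONE  # a real client would pin certificates instead
        with socket.create_connection((host, 1965)) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                tls.sendall((url + "\r\n").encode())
                data = b""
                while chunk := tls.recv(4096):
                    data += chunk
        header, _, body = data.partition(b"\r\n")
        print(header.decode())           # e.g. "20 text/gemini"
        return body.decode("utf-8", "replace")

    print(gemini_fetch("gemini://geminiprotocol.net/")[:500])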

All of this only really matters for background context. I’ll likely post more of my thoughts on a textual internet in the future, and I’ll likely also be dipping my toes in publishing on a Gemini site. The point of this post, however, is to talk about a strange problem that happens with unstyled text-based content. While there are certainly far fewer distractions between the reader and the content, there’s also a sort of brain drain that comes from sites being visually indistinguishable from one another. I always just kind of assumed this was one of those annoyances that would never really be important enough to try to solve. Hell, the way most software development is going these days, I don’t really expect to see any new problem-solving happening in the UX sphere. But I recently stumbled across a browser that solves this in a very clever way.

GemiNaut4 is an open-source Gemini and Gopher browser for Windows that uses an identicon-esque visual system to help distinguish sites. Identicons are visual representations of hash functions, typically used for a similar problem – making visually distinct icons for default users on a site. If everyone’s default icon is, say, an egg, then every new user looks the same. Creating a simple visual off of a hash function helps keep users looking distinct by default. I’ve often seen them used on password inputs as well – if you recognize the identicon, you know you’ve typed your password in correctly without having the password itself revealed.
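
As a rough illustration of the concept (a toy, not the original identicon algorithm or GemiNaut’s), here are a few lines of Python that hash any string into a small, horizontally mirrored grid:

    import hashlib

    def identicon(name: str, size: int = 5) -> str:
        """Toy identicon: hash the input, then use the hash bits to fill the left
        half of a size x size grid and mirror it so the result reads as a face."""
        bits = bin(int(hashlib.sha256(name.encode()).hexdigest(), 16))[2:].zfill(256)
        half = (size + 1) // 2
        rows = []
        for r in range(size):
            left = [bits[r * half + c] == "1" for c in range(half)]
            row = left + left[:size - half][::-1]    # mirror for left/right symmetry
            rows.append("".join("##" if cell else "  " for cell in row))
        return "\n".join(rows)

    # The same input always produces the same pattern; different inputs rarely look alike.
    print(identicon("gemini://example.site"))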

Don Park, who created the original identicon, did so to ‘enhance commenter identity’ on his blog5. But he knew there was more to it than this:

I originally came up with this idea to be used as an easy means of visually distinguishing multiple units of information, anything that can be reduced to bits. It’s not just IPs but also people, places, and things.

IMHO, too much of the web what we read are textual or numeric information which are not easy to distinguish at a glance when they are jumbled up together. So I think adding visual identifiers will make the user experience much more enjoyable.

-“Identicon Explained” by Don Park via Wayback Machine

And indeed, browser extensions also exist for using identicons in lieu of favicons; other folks have pieced together the value in tying them to URLs. But GemiNaut uses visual representations of hashes like these to create patterned borders around the simple hypertext of Gopher and Gemini sites. The end result is clean pages that remain visually consistent, yet are distinctly framed based on domain. It only exists in one of GemiNaut’s several themes, and I wish these themes were customizable. Selfishly, I also wish more software would adopt this use of hash visualization.

Aside from browsing Gemini and Gopher, GemiNaut includes Duckling, a proxy for converting the ‘small web’ to Gemini. The parser has three modes: text-based, simplified, and verbose. The first is, as one might expect, just the straight text of a page. Of the other two, simplified is so stripped-down that apparently this blog isn’t ‘small’ enough to fully function in it6. But it does work pretty well in verbose mode, though it lacks the keyboard navigation of Lynx, WebbIE, or even heavy ol’ Firefox.

I had long been looking for a decent Windows Gopher client, and was happy to find one that also supports Gemini and HTTP with the Duckling proxy enabled in GemiNaut. But truly, I’d like to see more development in general for the text-based web. All the big browsers contain ‘reader modes,’ which reformat visually frustrating pages into clean text. ‘Read later’ services like Instapaper do the same. RSS still exists and presents stripped-down versions of web content. There is still a desire for an unstyled web, and it would be great to see more of the software that exists in support of it adopting hash visualizations for distinction.


TOTP: It's not Google Authenticator

I’ve been meaning to write about this since Twitter announced that only the eight-dollar-checkmark class would have access to SMS-based 2-factor authentication (2FA)1. Infosec circles got back into heated debates about the security implications of SMS-based authentication compared to the risk of losing access to the more-secure option of TOTP. This post isn’t really about that debate, but the major takeaways from either side come down to friction versus security.

User friction is a very real issue, and TOTP will always be more frictional than SMS; I can’t solve that in this post. Personally, I prefer to use TOTP when available due to the risk of a SIM-swapping attack2. This post, however, is more concerned with the matter of keeping your secret portable and within your control if you decide to use TOTP for 2FA.

If you’ve made it this far without knowing what TOTP is, well, that’s almost certainly by design. I would hazard that most people who are aware of it know it exclusively as Google Authenticator. Getting an increasingly-vital, open standard to be almost exclusively associated with one shitty app from one shitty company is certainly very good for that company, but very bad for everyone else. So the first order of business here is to clarify that whenever you see a site advertising 2FA via ‘Google Authenticator,’ what they actually mean is TOTP, or more accurately RFC 6238, an open standard3. Additionally, if you’re reading this and you currently implement TOTP on a site you manage or are planning to, I implore you to describe it accurately (including Google Authenticator as one of several options, if necessary) rather than feeding into the belief that the magical six-digit codes are a product of Alphabet.

So what, then, is TOTP? Even if you know it isn’t A Google Thing, the mechanism by which a QR code turns into a steady stream of six-digit codes is not entirely obvious. This is, typically, how we set up TOTP – we’re given a QR code which we photograph with our authenticator app, and suddenly we have TOTP codes. The QR code itself contains just a few pieces of URI-encoded data. This may include some specifics about the length of the code to be generated, the timing to be used, the hash method being used, and where the code is intended to be used. Crucially, it also contains an important secret – the cryptographic key that, along with a known time reference, is the foundation from which the codes are cryptographically generated. Essentially, a very strong password is kept secure, and from this an easily-digestible temporary code is generated based on time. Because it comes from a cryptographic hash function, exposing one (or more) of these codes does not have the same security implications as exposing the key itself.
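
To make that mechanism concrete, here’s a minimal sketch of RFC 6238 with its default parameters (HMAC-SHA-1, six digits, 30-second steps); the secret and the otpauth URI in the comment are illustrative values, not anything from a real account:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        """Generate an RFC 6238 code from a base32-encoded secret (RFC 4226 HOTP + a time counter)."""
        key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
        counter = int(time.time()) // period                     # number of 30-second steps since the epoch
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                               # 'dynamic truncation' from RFC 4226
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    # The QR code you scan is just a URI along these lines (parameters vary by site):
    # otpauth://totp/Example:you@example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example&digits=6&period=30
    print(totp("JBSWY3DPEHPK3PXP"))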

Keeping the key itself secret is, in fact, extremely important. Vendor lock-in aside, I assume this partially contributes to the opacity of what happens in between scanning the QR code and having a functional 2FA setup. A large part of the debate over whether ‘Google Authenticator’ is a good 2FA solution is the fact that once your secret is in the Google Authenticator app, it is not coming out. If your app data gets corrupted, or if something misbehaves during a phone transition, you’re out of luck. Hopefully you’ve kept the recovery codes for your accounts safe somewhere. If to you, as to most people, TOTP means Google Authenticator, then this is a very real concern. One goof could simultaneously lock you out of all of your accounts that are important enough to you that you enabled their 2FA.

When I was de-Googling myself years ago, I went through the somewhat-laborious process of generating all new codes to put into Authy. In addition to (or in lieu of, I’m not entirely sure) local storage, Authy keeps your TOTP info in the cloud, allowing you to keep several devices in sync, including a desktop app. While this is a better solution than Google Authenticator, I’m not linking to it as I still think it’s a pretty bad one. The desktop app is an awful web-browser-masquerading-as-desktop-software creation. The system of PINs and passwords to access your account is convoluted. And, while in theory you can put the desktop app into a debug mode and extract your data, there’s no officially-supported path toward data portability. The unofficial method could go away at any time; in fact, while I will credit Indrek Ardel with the original method4, it seemingly no longer works and one must find more recent forks that do. On top of this, the aforementioned bad desktop app and confusing set of passwords meant that it was still just easier to start fresh with new codes when I recently switched away from Authy. Finally, Authy is another corporate product. It’s owned by Twilio, and they seem to want a piece of that lock-in pie as well, offering their own 2FA service that is a quasi-proprietary implementation of TOTP5, as outlined by Ardel.

For years, I’ve been using various KeePass implementations in conjunction with one another as a portable password management solution. I can keep a copy of the database in my OneDrive (or whatever cloud storage I happen to have access to; right now it’s OneDrive but frankly that’s because it’s cheap — not because it’s good) and have access to it from my phone and various computers. I can sync copies to flash drives if necessary, or drop a copy on an M-Disc with other important files to stash in a safe. I was, for a long time, using an unmaintained fork, KeePassX, because it simply vibes better with how I want computers to look and feel than its replacement, KeePassXC does. On mobile, I’ve been using Strongbox6. At some point, I noticed they added support for TOTP codes! The app will happily scan a QR code and add the relevant data to an entry.

This was interesting and novel, and I was already thinking about moving all of my codes into it, simply because storing them that way meant the data was easily recoverable. If I wanted to switch again in the future, I now had access to the secret and any other relevant parameters, and could generate a new QR code from them if need be. But then I happened to notice that KeePassXC, the desktop software I had been avoiding, also supports TOTP codes. And Strongbox’s implementation is fully compatible with KeePassXC’s! This changed things – suddenly this was a portable solution for accessing my TOTP codes and not merely the data behind them. I generated new codes for everything I use (and upgraded my security on a few things that had implemented TOTP without my noticing) and ditched Authy.
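
Because the database holds the actual secret and parameters, re-provisioning another app really is as simple as rebuilding the otpauth URI and rendering it; here’s a sketch using the third-party qrcode library, with placeholder values:

    # pip install qrcode[pil]  -- the values below are illustrative, not a real account
    import qrcode

    uri = "otpauth://totp/Example:you@example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example"
    qrcode.make(uri).save("provision.png")   # scan this with whichever authenticator you're moving to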

While you can add TOTP codes directly in the KeePassXC desktop app, you can’t do it directly from a QR code. Windows is fond of capturing screenshots to the clipboard7; I would love to see an option in KeePassXC that scans an image in the clipboard for a QR code (and then clears the clipboard). Getting codes out is extremely straightforward. Since the data is just in normal entries in my database, a code I scan in via Strongbox will show up in KeePassXC once OneDrive catches up. It is worth noting that this rather shatters the ‘something you know / something you have’ model of 2FA, but the flexibility is there to manage codes and passwords however the user is comfortable. The most important aspect for me was liberating my TOTP data from a series of lockboxes for which I lacked the key.
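
The clipboard-scanning option I wished for above is very doable with existing libraries; here’s a rough sketch of the idea, assuming Pillow and pyzbar and a screenshot of a QR code already sitting on the clipboard:

    # pip install pillow pyzbar  (pyzbar also needs the zbar shared library installed)
    from PIL import ImageGrab
    from pyzbar.pyzbar import decode

    img = ImageGrab.grabclipboard()            # returns an image if one is on the clipboard (Windows/macOS)
    if img is not None:
        for symbol in decode(img):             # find any QR codes / barcodes in the screenshot
            data = symbol.data.decode()
            if data.startswith("otpauth://"):  # the provisioning URI described earlier
                print(data)                    # hand this to the password manager, then clear the clipboard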

Ultimately, I don’t think average users care much about data portability until they’re forced to. By the time their hands are forced, the path of least resistance tends to just be to stick with the vendor that’s locked them in8. With TOTP, the ramifications of this can be extremely annoying. More importantly, however, I think Google has done a very good job at preventing users from even knowing that TOTP portability is possible. Whether I convince anyone to store their codes in KeePass databases or not is immaterial; I really just want people to know they have options, and why they might want to use them. I want people to give just a small amount of thought to the implications of having a login credential that you not only have zero knowledge of, but also have zero access to. Frankly, I want people to stop doing free advertising for Google. And finally, I genuinely want a return to an internet where, occasionally, we make our users learn one little technical term instead of letting multi-billion dollar corporations coöpt everything good.


Rawwwwwr, let's talk about Wavosaur

Okay, so I promise I’m actually working on my 2022 media retrospective post, but I’ve also been itching to write about a particular piece of software that I’ve been getting a lot of use out of lately. I’ve been dabbling a bit with music production in tracker software, a style which is built entirely1 around the use of samples. As such, I’ve found myself needing to work directly on waveforms, editing samples out of pieces of media I’ve stolen or recordings I’ve made directly2. Having used Adobe Audition as both a multitracker and a wave editor for a long time, I rather like its approach as a dual-purpose tool. I do not, however, like Adobe, nor do I really want to wait for Audition to start up when I’m just chopping up waves. It’s too much tool for my current needs. I’ve also used Audacity in the past, which is a multitracker that certainly can function as a wave editor if you want it to. But, among other issues, it’s just not pleasant to use. So I’ve looked into a number of wave editors over the past few weeks, and have primarily settled on Wavosaur.

Wavosaur is not perfect software; I have a few quibbles that I’ll bring up in a bit. It is, however, really good software, with a no-nonsense interface that at least tries to be unintrusive, and is largely user-customizable. It’s quick to launch, and quick to load files. By default, it will attempt to3 load everything that was open when it was last exited; this can be disabled to make things even quicker. While this is true of pretty much any audio editing software, it supports the import of raw binary data as well as enough actual media formats that I can open up an MP4 video of an episode of Arthur that I downloaded from some sketchball site and start slicing up its audio without issue.

Navigating waves is pretty straightforward. Scrollwheel is assigned to zoom instead of scroll, which I do not like. An option for this would be great. It’s not a huge deal, however, since I’m moving around more by zooming than by scrolling in the first place. Zoom in and out are not bound to the keyboard by default; I set horizontal zoom to Ctrl+/- and vertical to Ctrl+Alt+/-. I might remove modifiers from vertical altogether, but my point is more that binding them to something logical makes navigating much easier, along with Ctrl+E and Ctrl+R, the default bindings for zooming to selection and zooming out all the way.

Wavosaur can deal with two different sorts of markers, and these are stored within the .wav file itself. Normal markers can be used to identify all manner of things in the file. No data (like a name, for example) can be stored along with the marker, so a somewhat sparing use is probably best, but to my knowledge there is no limit to the number of markers that can be added. Other software does allow for similar markers to be named and then navigated by name, but to my knowledge none of them stores those names in a standardized way in the .wav file itself. I also haven’t seen other wave editing software that supports the other sort of marker that Wavosaur supports – loop markers. There can only be one pair of these — an in and an out — per file. Set your loops to the note’s sustain duration, and you have a very basic implementation of envelope control. While I don’t know of other software that writes this information, both trackers that I’m currently playing with — MilkyTracker and Renoise — will read it4. Wavosaur doesn’t really have a way to preview loop points in context, unfortunately, but the fact that it reads and writes them still makes for a useful starting point within the tracker.
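
I haven’t dug into exactly what Wavosaur writes, but the conventional home for loop points in a .wav is the RIFF ‘smpl’ chunk (plain markers usually live in the ‘cue ’ chunk). Assuming a standard smpl layout, pulling loop start/end offsets back out looks roughly like this:

    import struct, sys

    def read_smpl_loops(path: str):
        """Walk the RIFF chunks of a .wav and return (start, end) sample offsets
        from the 'smpl' chunk, where loop points are conventionally stored."""
        loops = []
        with open(path, "rb") as f:
            riff, _, wave = struct.unpack("<4sI4s", f.read(12))
            assert riff == b"RIFF" and wave == b"WAVE"
            while header := f.read(8):
                if len(header) < 8:
                    break
                chunk_id, chunk_size = struct.unpack("<4sI", header)
                data = f.read(chunk_size + (chunk_size & 1))           # chunks are word-aligned
                if chunk_id == b"smpl":
                    num_loops = struct.unpack_from("<I", data, 28)[0]  # 8th field of the smpl header
                    for i in range(num_loops):
                        _, _, start, end, _, _ = struct.unpack_from("<6I", data, 36 + i * 24)
                        loops.append((start, end))
        return loops

    print(read_smpl_loops(sys.argv[1]))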

My second-most-used wave editor over the past few weeks has been NCH WavePad5. Aside from the aforementioned loops, WavePad lacks two features that really make Wavosaur shine for sample creation. The first is the ability to snap to zero-crossings. Doing this helps to ensure that samples won’t end up popping when they trigger (or, with loop points, retrigger). This can easily be enabled and disabled in the menus, though toggling it can’t be bound to a key for some reason. The second is the ability to universally display time in audio samples6 instead of hours, minutes, and seconds. When fully zoomed in, WavePad switches to time based on audio samples, but I couldn’t find a way to set it as a permanent display. Often, with trackers, it’s advantageous to have a fairly intimate knowledge of how many audio samples you’re dealing with in a given sample. Being able to permanently set the display this way in Wavosaur is very helpful.
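
Zero-crossing snapping itself is a simple idea: nudge the selection boundary to the nearest point where consecutive samples change sign, so a cut doesn’t start mid-swing and click. A minimal sketch with NumPy:

    import numpy as np

    def snap_to_zero_crossing(samples: np.ndarray, index: int) -> int:
        """Return the index of the zero crossing nearest to `index`
        (a crossing being where adjacent samples differ in sign)."""
        crossings = np.where(np.diff(np.sign(samples)) != 0)[0]
        if crossings.size == 0:
            return index                       # silence or DC offset: nothing to snap to
        return int(crossings[np.argmin(np.abs(crossings - index))])

    wave = np.sin(np.linspace(0, 20 * np.pi, 10_000)) * 0.8
    print(snap_to_zero_crossing(wave, 1234))   # lands on the sample just before a sign change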

Wavosaur allows for resampling to an arbitrary sample rate. It has inbuilt pitch- and time-shifting, and a few basic effects like filters. For everything else, it supports VST in a straightforward way. You can build up a rack and preview things live, editing VST parameters while playing a looped selection of audio, and applying once things sound right. There’s some MIDI functionality, though I’m not sure the extent of it. Basic volume automation is included and works well enough. A wealth of visualization tools – spectrum analyzers and oscilloscopes and such – are included, and even have little widget versions that can live in the toolbar. It includes calculation tools for note frequency, delay, and BPM; BPM detection can also automatically place markers on beats. If you set markers at beats in this way, or manually, it will scramble audio based on markers for you.

I said I had a few quibbles that I’d like to get to. I already sort of mentioned one – while keyboard control is decent, not everything can be keybound. Like toggling snap-to-zero-crossings, there are quite a few actions that I would really like to have keyboard control over. Currently you can easily select between marker points by double-clicking within them, but the same can’t be done from the keyboard; overall, selection could use more granular control via menus and the keyboard. One very annoying thing is that doing an undo action resets the horizontal zoom out to 100%. If I’ve zoomed in on a section of audio that I’m looking to slice out into a new sample, I don’t want to lose that view if I need to correct a goofball mistake I made. Finally, something that a lot of good software has spoiled me for is a one-step process for making a new file from a selection. Right now it’s a two-step process of copying and pasting-as-new, which is fine. But it does sort of add up when you’re chopping up a bunch of samples. These are all pretty minor issues, and overall I think Wavosaur is a great little waveform editor. If you’re working with samples for trackers, I think it may be the best choice (on Windows, at least).


Some things I have been meaning to write about but haven't

So… I have a few posts that I’ve sort of been working on, but they’re involved. I have others that I just haven’t been motivated to actually work on; motivation in general has been difficult lately. And there have been some things I’ve played with or thought about recently, but I just can’t figure out a way to sort of give those things the narrative structure that I hope for when I’m writing here.

On Heathcliff and hackish image manipulation

This should probably just be two posts, but it’s been months since I posted anything and I’m just going to go for it. But if you just want to see me talk about a terrible bodge-job of a shell script, scroll down a bit.
For a while I’ve had this idea to start a Twitter bot that posts a strip made up of a random Heathcliff panel paired with a random Heathcliff caption. There are a few reasons for this, the first of which is that under Peter Gallagher’s tenure, Heathcliff has gotten… weird. Recurring themes include friendly but inexplicable robots, helmets that communicate what their wearer is thinking (maybe?), the Garbage Ape, the magical levitating properties of bubblegum1, the meat tank… the strip has gotten to be a real experience for every possible state of the human mind.

The voice of a wizard hacking away

My pals at Sandy Pug Games have opened up preorders for WIZARDPUNK, a zine of various wizard stories and whatnot. It’s full of brilliant work, and I highly recommend checking it out! I have a little epistolary slice-of-life piece in it, which I’m honestly pretty proud of. In addition to this, I was asked if something rather curious was possible, if there was any way some audio-producing computer code could be squished down to a reasonable size such that someone could theoretically type it in.

All of the Windows Explorers, together at last (external)

I have quite a few posts lined up, and I’m excited about all of them, but… I’m very stressed, and writing is very hard right now. So in the meantime, this post title-links to a very cool recent writeup by Gravislizard, a streamer (&c.) whose dives into retro computing I really admire. The linked post compares basically every notable revision to Windows Explorer since… before it was even called Explorer. Twenty little writeups complete with screenshots, from Windows 1.04 to Windows 10. Lovely little trip through history.


Yet another baffling UX decision from Adobe

As of mid-June 2020, Adobe seems to have fixed this. Whether it was a bug or a poor decision is hard to say. I’m leaving this post up for two reasons: first, it is entirely believable that Adobe would do this intentionally; and second, regardless it’s still a good case study in the impacts of this sort of decision.

Adobe apparently updated Acrobat DC recently, which I’m only aware of because of a completely inexplicable change that’s wreaking havoc on my muscle memory (and therefore, my productivity). I haven’t seen any sort of update notification, no changelogs. But on multiple computers spanning multiple Creative Cloud accounts, this change popped up out of the blue. The change? Online help is now accessed via F2 instead of F1.

Actually, this isn’t true. Presumably, sensing that such a change would break years of muscle memory for folks who use F1 to access help1 and/or realizing that this change completely violates a de facto standard that has been nearly universal across software for decades, Adobe actually decided to assign both F1 and F2 to online help. F2 is, however, the key blessed with being revealed in the Help menu.

So, good! Adobe didn’t break anyone’s muscle memory! Except… for those of us who spend all day in Acrobat doing accessibility work. As I wrote in a 2017 post about efficiently using the keyboard in Acrobat, F2 is (or rather, was) the way to edit tags (and other elements in the left-hand panel) from the keyboard2.

Properly doing accessibility work in Acrobat often requires going through an entire document tag-by-tag. Unlike, say, plaintext editing of an HTML file, this is accomplished via a graphical tree view in Acrobat. It is comically inefficient for such a crucial task; attempting to make the most of it was largely the purpose of that earlier post. Fortunately, there is a new way to edit tags via the keyboard: Ctrl+F2.

This is an incredibly awkward chord, and I have Caps Lock remapped to Ctrl; it’s far, far more awkward using the actual Ctrl key. But let’s pretend for a minute that it’s no more miserable to press than F2. I cannot see any reason why this decision was made. It presumably won’t be used by folks who have muscle memory and/or decades worth of knowledge that F1 invokes online help. It isn’t (currently, at least; maybe they plan to eventually) remapping F1 to free up an additional key. It breaks the muscle memory of users who need to manipulate tags, objects, &c. It’s completely inexplicable, and therefore entirely predictable for the UX monsters at Adobe.

It’s worth noting, in closing, that this isn’t solely an accessibility issue. However, it’s extremely frustrating that there is one tool in this world that actually allows accessibility professionals to examine and edit the core structural elements of PDFs, and that the developers of this tool have so little respect for the folks who need to do this work. I could come up with countless features that would improve the efficiency of my process3, yet… Adobe instead insists on remapping keyboard shortcuts that make the process even slightly manageable. Keyboard shortcuts that I’ve been using for versions upon versions. It’s incredibly disheartening.


A test of three zippers

2023-12-09 update: I have a new laptop, and for related reasons I’m also rebuilding this blog. I redid the test in this post on the new machine (AMD Ryzen 9 7940HS @ 4.00 GHz w/ Radeon 780M Graphics; 48GB RAM). When I was doing this/revisiting this post, I realized I didn’t note what 7-Zip settings I was using. On this machine, at ‘fast’ and ‘fastest’ (which seem to run identically), it is faster than Windows (16 vs 26 seconds), producing a file that’s 9MB larger. At ‘normal’, it produced a smaller test file than Windows, but took 1:17. WinZip with OpenCL enabled won the speed test at 14 seconds for the third-smallest file. Strangely, it didn’t really use much of the GPU. Without OpenCL enabled, WinZip produced the smallest file and took 23 seconds.
I’m in the middle of quite a few posts, and honestly… this one should be pretty short because I had no idea I’d be writing it. I’m trying to make my Windows experience as pleasant as possible (that itself is an upcoming post), and part of that has involved looking for a good archive tool. Windows handles ZIP files well enough, but it’s kind of a barebones approach and it doesn’t handle any of the other major archive formats that I’m aware of.

Geometry Expressions

I’ve written before about the geometry construction language, Eukleides. In that post, I said that ‘I [was] drawn to Eukleides because it is a language […], and not a mouse-flinging WYSIWYG virtual compass.’ Those WYSIWYG mouse-flingers are known as interactive geometry software (IGS), and I’ve never been a huge fan of them. Most of them are built in Java, and it shows. Even beyond Java issues, they largely feel made by interns employed by mathematicians rather than folks who have read The Design of Everyday Things. At the same time, complicated constructions like Gauss’s 17-gon1 can quickly become unwieldy in written code. I have experimented with many, though never really settled on an IGS.

Geometry Expressions (GX) is currently on sale for $10 (instead of $99) due to the pandemic. Saltire, the maker, hasn’t stated when this sale will end, which… is fair. I had previously played with the trial of GX and found it to be… pretty usable, but I’d need to really be in a mood to drop a hundred bucks on hobbyist (I mean, for me) maths software. At $10, I decided to take the plunge. Here are my thoughts so far.

The good

The UI/UX isn’t that bad!
I don’t think this is written in Java? But it might be. It’s cross-platform (macOS/Windows) so it’s entirely possible that it’s written in… something weird. The UX fits in fine with Windows; I can only imagine it’s kind of awful on macOS, but… I haven’t tested that yet. Just a gut feeling. There are UI/UX quirks that I’ll get into later, but it’s… manageable!
Export options
One thing that I really don’t love about Eukleides is that you can basically just export to EPS. I then have to separately convert this to SVG, and from there, post-process the SVG. Eukleides also only lets you draw from like… eleven basic colors? GX is clearly built with exporting in mind, and integral to this is the fact that you can… well… use color good! But the actual export options are also great. Native SVG, our old friend EPS, your normal raster formats, and… interesting things. Lua, which I haven’t tried yet, and both animated GIFs and interactive HTML/JS, neither of which I’ve done anything interesting with yet.
Interactivity
I mean, it is called an interactive geometry system, but it is really rather magical how that all comes together. Assuming everything is glued together correctly (another topic for later), you can just drag points around and watch your construction work with different parameters. So, in the simple angle bisector shown below, dragging points A or C around will change the angle of ∠ABC, and the construction will adjust, changing ∠ABM and ∠MBC accordingly.
Robust toolset
I guess you wouldn’t really expect less, but Eukleides, for instance, really kind of gives you the bare minimum for objects and the like. GX has fifteen drawing tools which behave as expected. It has fourteen methods of constraint – for instance, in the illustration below, radius r is a constraint. I constructed the first circle and applied the constraint. I could have constructed the other two circles at any size – as soon as I applied the constraint, r, they were bound to it. These can be units as well; a square with side lengths constrained to 2 has double the side length of one constrained to 1. It has fourteen built-in construction tools, which don’t interest me much as my use-case is largely doing constructions from scratch. Finally, it has eleven calculations, such as the angle calculations in the construction below.

[Figure: basic angle bisector created in Geometry Expressions, with points A, B, C, H, K, M, circles constrained to radius r, and calculated angles z0 ≈ z1 ≈ 0.4294711.]

All in all, it’s a pretty nice tool for the things I want to use it for. But, unsurprisingly, there are some pretty frustrating snags.

The bad

Incident snapping does not work well
I said pretty frustrating, but I’m starting off with an incredibly frustrating UX gaffe. In the above construction, I followed the method of doing an angle bisector by hand with a compass and straightedge. Since I was doing this on an IGS that could precisely measure things for me, however, the construction itself had to be quite precise. I tried several times, and kept coming up with angles that were slightly off. The problem was that center points H and K were not quite aligned with the intersection of circle B along ∠ABC. Why did this happen? When creating the two intersecting circles (H and K), the cursor changed to a design that clearly indicated it was snapping incident to the relevant intersection. Additionally, the two intersecting objects were highlighted. But it didn’t actually snap. The only way to get this to work was to use the construction tools to make intersection points; the circle tool was willing to snap incident to an existing point just fine. This is absurd. Certainly the solution is to make snapping function across the board, but if that can’t be done, don’t make the UI change such that it appears as though that’s happening. I don’t know how such a decision can be shipped.
It’s easy to feel like you have to undo a lot
The tools are pretty good for constructing and the like, but… less so for touching things up or fixing goofs. There were plenty of times when I created things and just felt kind of… lost in either how I accidentally made a thing, or how to get a thing to do what I wanted vs. just… getting it right from the outset. For instance, I haven’t figured out how to rotate a polygon around its midpoint, only around its vertices (and these rotations don’t seem to have any shift-constraints, nor do translations). Mild example, but little things like that make the toolset feel less fleshed-out than I’d like.
Nonstandard UX behaviors
To an extent, this still feels like some mathematician’s hired hand whipped up some controls without studying, say, Illustrator. There aren’t keyboard shortcuts that I can find for the tools (the menu doesn’t even have alt-keys defined, which is infuriating). Scroll-wheel zooms (actually, I believe it scales the document, which is even sillier) instead of… scrolling (this is one of my biggest pet peeves in image editing software). Scroll/pan is achieved by holding right-click instead of Space. Et cetera. It’s not as bad as many that I’ve played with, but it can be cumbersome to use.
Unclean SVG export
I’m glad I can export right to SVG! But the export is… a lot. There’s a lot of extra stuff in there, and weird behaviors like every digit in 0.4294711 up there being a separate textbox. I actually imported it into Illustrator and cleaned a couple of things up (there were some points and extra bits that I couldn’t quite figure out how to get rid of, &c.) but… the text is really small! It’s not font-size, and my knowledge of SVG isn’t quite at the point where I’m going to solve it for this post. And while I apologize for the text being difficult to read, it does help demonstrate that the SVG output is just a bit much. I also had to touch up the rightward double arrow; GX’s export opted to find this in a Symbol font instead of using U+21D2. Little things, but room for improvement.

In conclusion

For $102? I’m happy with this purchase. It doesn’t do too much; it’s not a full-featured CAS with an IGS built in. Because of this, all of your tools are right there in front of you and are fairly self-explanatory. The UX could use some polish, but it isn’t terrible. There are a lot of export options, and hopefully I can figure out how to do something fun with the interactive ones. I don’t know that I would be bothering to write this if I was just checking out the trial at a $99 price point, however. It’s specialized software, and I get that; we’re also increasingly numb to the work that goes into software and the value of said work. But, boy, if I was going to pay full-price? I sure as hell would want keyboard shortcuts, functioning snapping, and just a little bit of general UX touch-up.

If you’re reading this, and you’re a recreational maths nerd, and you’re stuck at home, and Saltire is still offering GX for $10… I think it’s hard to pass up.


Backward compatibility in operating systems

Earlier this week, Tom Scott posted a video to YouTube about the forbidden filenames in Windows. It’s an interesting subject that comes up often in discussions of computing esoterica, and Scott does an excellent job of explaining it without being too heavy on tech knowledge. Then the video pivots; what was ostensibly a discussion on one little Windows quirk turns into a broader discussion on backward compatibility, and this inevitably turns into a matter of Apple vs. Microsoft. At this point, I think Scott does Apple a bit of a disservice.

If you’ve read much of my material here, you’ll know I don’t have much of a horse in this race; I’m not in love with either company or their products. I’m writing this post from WSL/Ubuntu under Windows 10, a truly unholy matrimony of software. And while I could easily list off my disappointments with MacOS, I genuinely find Windows an absolute shame to use as a day-to-day, personal operating system. One of my largest issues is how much of it is steeped in weird legacy garbage. A prime example is the fact that Windows 10 has both ‘Settings’ and ‘Control Panel’ applications, with two entirely different user experiences and a seemingly random Venn diagram of what is accessible from where.

This all comes down to Microsoft’s obsession with backward compatibility, which has its ups and downs. Apple prioritizes a streamlined, smooth experience over backward compatibility, yet they’ve still gone out of their way to support a reasonable amount of backward compatibility throughout their history. They’ve transitioned processor architecture twice1, each time adding a translation layer to the operating system to extend the service life of software. I think they do precisely the right amount of backward compatibility to reduce bloat and confusion2. It makes for a better everyday, personal operating system.

That doesn’t make it, however, a better operating system overall; it would be absurd to assume that one approach can be generally declared better. Microsoft’s level of obsession in this regard is crucial for, say, enterprise activities, small businesses that can’t afford to upgrade decades-old accounting software, and gaming. There is absolutely comfort in knowing that you can run (with varying levels of success) Microsoft Works from 2007 on your brand new machine. It’s incredibly valuable, and it requires a ton of due diligence from the Windows team.

So, this isn’t to knock Microsoft at all, but it is why I think dismissing Apple for a lack of backward compatibility is an imperfect assessment. I’ve been thinking about this sort of thing a lot lately as I decide what to do moving forward with this machine – do I dual-boot, or do I try to live full-time in Windows 10 with WSL? And I’ve been thinking about it a lot precisely because of how unpleasant I find Windows3 to be. Thinking about that has made me examine why, and what my ideal computing experience is. Which is another post for another day, as I continue to try to make my Windows experience as usable as possible. Also, I’m not in any way trying to put down Scott’s video, which I highly recommend everyone watch; it was enjoyable even with prior knowledge of the forbidden filenames. It just happened to time perfectly with my own thoughts on levels of backward compatibility.


On Animal Crossing and native UX

Nintendo (of Australia) has revealed that Animal Crossing: New Horizons will only support one island per console. Different cartridge? Same island. Different user account? Same island. This obviously reads as some money-grabbing garbage (that they’re releasing a special edition Switch alongside the game doesn’t help), but there’s another issue here that I feel will largely go untouched-upon. Using a computer these days is a horrible mess, and to me this is largely due to the use of non-native UI widgets.

On the Kensington Expert Wireless (and other pointing devices)

I’ve expressed once or twice before my disappointment with the current selection of pointing devices. This hasn’t improved much, if any. To make matters worse, Trackpoints are becoming less and less common on laptops. Such is the case with my HP Spectre, a deficiency I knew would be an issue going into things. When I was writing about pointing devices back in 2016, I ended up acquiring a Logitech MX Master. I still use that mouse, and also own an MX Master 2. They are incredibly good mice, the closest thing that I have found to the perfect mouse.

Thinking of pointing devices to use with the Spectre, I immediately figured I’d get an MX Anywhere to toss in the pouch of my laptop sleeve. What a horrible mistake. The truly standout feature of the MX Master is its wheel. It scrolls with individual clicks like wheel mice of yore until a specific speed is reached, at which point it freewheels like a runaway train. It’s the perfect physical manifestation of inertial scrolling. It also, notably, still clicks to perform the duty of middle-click. Both of these things are broken on the MX Anywhere – you have to manually select freewheel or click scrolling, and you do that by depressing the wheel. Middle click is a separate button below the wheel, with no regard for muscle memory. I returned the MX Anywhere and will likely just buy a cheap slim mouse to throw in the sleeve; it seems unlikely there are any travel-sized mice out there with modern inertial scrolling.

I’ve also considered that I might need a pointing device other than the touchscreen for certain higher-precision activities while lounging in bed. And, three paragraphs in, we get to the meat of this post: my experiences so far with a trackball, the Kensington Expert Wireless. Trackballs, even more than mice, feel resistant to progress. Only a handful of notable companies are producing trackballs, and of the available models, relatively few are Bluetooth. Kensington has been making versions of the Expert for over twenty years, and the latest change came four years ago with the introduction of the Bluetooth model. The basic layout that has remained unchanged over the years is a large ball surrounded by four large buttons at the corners. The current iterations, both wired and wireless, also have a ring around the ball for scrolling.

Most modern trackballs seem to have a traditional scroll wheel. This, to me, is absurd. You’re not getting modern inertial scrolling with these (even Logitech’s MX-branded trackball has traditional clicky scrolling), and you have a perfectly good device capable of inertia right in front of you: the ball. I would love to see a designer in hardware/firmware simply dedicate a button to switching the ball into scroll mode. As it stands, however, Kensington’s ring is the least obtrusive of the lot, and the four buttons are all very easily accessed. And, while it is a bit convoluted, ball-scrolling behavior is attainable in Windows1 via software.

The first bit of the puzzle is the official KensingtonWorks software. This allows configuration of what each of the four buttons does, as well as the upper two buttons pressed together, and the lower two buttons pressed together. These upper and lower chords do have a limitation – it seems they aren’t held, they’re only momentary presses. There’s also no way to achieve the desired ball-scrolling effect here, so this stage is just minor tweaks to buttons. By default, starting at the upper-left and moving clockwise, the buttons are middle-click, back, right-click, left-click. I use middle-click more than right-click, and thought that swapping these would make sense, but the pinky-stretching actually made that a bad choice. I ultimately settled on swapping middle-click and back, and assigning forward to the upper two buttons pressed together. I haven’t decided what to do with the lower two in concert yet.

The next step is a third-party bit of software, X-Mouse Button Control. From here, I’ve intercepted middle-click to be ‘Change Movement to Scroll’. Within this option, I have it set to lock the scrolling axis based on movement, and to simply send a middle-click if there’s no movement. Thus, clicking the upper-right button sends a middle-click whereas holding it and flicking the ball around turns into scrolling. It works so well that I am again shocked that this scrolling behavior isn’t designed into any trackballs.

I would love to see Kensington integrate this behavior into firmware or KensingtonWorks. I would love to see Kensington replace the scroll ring with the SlimBlade’s rotation-detecting ball sensor. I would love to see Kensington release a Bluetooth version of the SlimBlade. But for now, I have a pretty clean solution: an unobtrusive, solid-feeling trackball with decent customization options in a software layer.


Cats, dogs, and birbs (according to my phone)

2021-02 update: Because the turds at Viacom have removed all of the cross-posts of Garfield comics from Garfield.com, I have changed the link to the Garfield comic in the birds section to point to GoComics. This is bullshit.

I’ve never really used iOS’s automatic thing-detection for photo categories before, but I was looking for a specific picture of a dog from my ~8 years worth of photos, so I gave it a shot.

The 231 photos my phone thinks are of cats include:

The 214 photos my phone thinks are of dogs include:

The 76 photos my phone thinks are of birds include:

NIRB, Birb don’t want nirb scirbs a scirb is a birb that can’t get nirb lirb from birb!


VT100 Line Drawing

One of those totally useful1 things that crosses my mind occasionally is recompiling a version of dc that won’t choke on characters above code point 127. Among other reasons, occasionally code golf questions come up that really want box drawing characters used for some reason, and it just isn’t possible in dc. Except, I got to thinking… it absolutely is on a VT100, and xterm supports the same escape codes. I just haven’t really explored them.
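
For reference, the trick is the DEC Special Graphics set: ESC ( 0 remaps a handful of lowercase ASCII letters onto line-drawing glyphs and ESC ( B switches back, so pure 7-bit output can still draw boxes. A quick Python illustration (the same bytes could just as easily be emitted from dc or printf):

    # In the DEC Special Graphics set: l=top-left, k=top-right, m=bottom-left,
    # j=bottom-right, q=horizontal line, x=vertical line.
    GFX_ON, GFX_OFF = "\x1b(0", "\x1b(B"

    print(GFX_ON + "lqqqqk" + GFX_OFF)
    print(GFX_ON + "x" + GFX_OFF + " hi " + GFX_ON + "x" + GFX_OFF)
    print(GFX_ON + "mqqqqj" + GFX_OFF)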

Allocations

My ‘daily driver’ USB drive gave up the ghost recently, and after having secured a replacement1, it was time for the always-fun task of formatting. I could’ve left things as-is, but the stock partition was FAT32 with 32K block allocations. The previous drive was partitioned the same way, which wasn’t ideal given that I tend to keep a lot of small files around, so I was really hoping to set the new drive up with smaller block allocations.
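
The reason allocation size matters for a drive full of small files is just rounding: every file occupies a whole number of clusters, so the slack adds up fast at 32K. A quick sketch for estimating that waste over a directory tree at a couple of cluster sizes:

    import math, os

    def slack_bytes(root: str, cluster: int) -> int:
        """Rough estimate of wasted ('slack') space: each file is rounded up
        to a whole number of clusters of the given size."""
        waste = 0
        for dirpath, _, files in os.walk(root):
            for name in files:
                size = os.path.getsize(os.path.join(dirpath, name))
                waste += math.ceil(max(size, 1) / cluster) * cluster - size
        return waste

    for cluster in (4 * 1024, 32 * 1024):
        print(f"{cluster // 1024}K clusters waste ~{slack_bytes('.', cluster) / 1024:.0f} KiB here")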

Kakoune

I’m not writing this post in vim, which is really a rather odd concept for me. I’ve written quite a bit about vim in the past; it has been my most faithful writing companion for many years now. Part of the reason is its portability and POSIX inclusion – it (or its predecessor, vi) is likely already on a given system I’m using, and if it isn’t, I can get it there easily enough. But just as important is the fact that it’s a modal editor, where text manipulation is handled via its own grammar and not a collection of finger-twisting chords. There aren’t really many other modal editors out there, likely because of that first point – if you’re going to put the effort into learning such a thing, you may as well learn the one that’s on every system (and the one with thousands of user-created scripts, and the one where essentially any question imaginable is just a Google away…). So, I was a bit surprised when I learned about Kakoune, a modal editor that simply isn’t vim1.

Now, I’ve actually written a couple of recent posts in Kakoune so that I could get a decent feel for it, but I have no intention of leaving vim. I don’t know that I would recommend people learn it over vim, for the reasons mentioned in the previous paragraph. Though if those things were inconsequential to a potential user, Kakoune has some very interesting design ideas that I think would be more approachable to a new user. Heck, it even has a Clippy:

~                                                          ╭──╮   ╭───┤nop├────╮
~                                                          │  │   │ do nothing │
~                                                          @  @  ╭╰────────────╯
~                                                          ││ ││ │
~                                                          ││ ││ ╯
~                                                          │╰─╯│
~                                                          ╰───╯
nop          unset-option                                                      █
:nop            content/post/2018-06/kakoune.md 17:1 [+] prompt - client0@[2968]

Here are a few of my takeaways:

I guess there are far more negative points in that list than positives, but the truth is that the positives are really positive. Kakoune has done an incredible job of changing vim paradigms in ways that actually make a lot of sense. It’s a more modern, accessible, streamlined approach to modal editing. Streamlining even justifies several of my complaints – certainly the lack of a file browser, and probably the lack of splitting fall squarely under the Unix philosophy of Do One Thing and Do It Well. I’m going to continue to try to grok Kakoune a bit better, because even in my vim-centric world, I can envision situations where the more direct (yet still modal) interaction model of Kakoune would be incredibly beneficial to my efficiency.


Revisiting my Linux box

My Mac Pro gave up the ghost last week, so while I wait for that thing to be repaired, I’ve been spending more time on my Lenovo X220 running Ubuntu. While I do use it for writing fairly often, that doesn’t even require me to start X. Using it a bit more full-time essentially means firing up a web browser alongside whatever else I’m doing, which has led to some additional mucking around. For starters, I went ahead and updated the system to 16.04, which (touch wood) went very smoothly as has every Linux upgrade I’ve performed in the past couple of years. This used to be a terrifying prospect.

Updating things meant that the package list in apt also got refreshed, and I was a wee bit shocked to find that Hugo, the platform I use to generate this very blog, was horribly out of date. Onward to their website, and they recommend installing via Snapcraft, which feels like a completely inexplicable reinventing of the package management wheel1. Snapcraft is supposedly installed with Ubuntu 16.04, but not on a minimal system apparently, so I went and did that myself. Of course it has its own bin/ to track down and add to the ol’ $PATH, but whatever – Hugo was up to date. I think I sudoed a bit recklessly at one point, since some stuff ended up owned by root that shouldn’t have been, but that was an easy enough fix.

I run uzbl as a minimalist web browser, and have Chromium installed for something a bit more full-featured. I decided to install Firefox, since it is far less miserable of a browser than ever, and its keyboard navigation is far better than Chromium’s. Firefox runs well, and definitely fits better into my keyboard-focused setup, but there is one snag: PulseAudio. At some point, the Firefox team decided not to support ALSA directly, and it now relies on PulseAudio exclusively for audio. I can see small projects using PulseAudio as a crutch, but for a major product like Firefox it just feels lazy. PulseAudio is too heavy and battery-hungry, and I will not install it, so for the time being I’m just not watching videos and the like in Firefox. I did stumble upon the apulse project, but so far haven’t had luck with it.

I use i3 as my window manager, and I love it so much – when I’m not using this laptop as a regular machine, I forget how wonderful tiling window managers are. When I move to my cluttered Windows workspace at the office, I miss i3. Of course, I tend to have far more tasks to manage at work, but there’s just something to be said for the minimalist, keyboard-centric approach.

I had some issues with uxterm reporting $TERM as xterm and not xterm-256color, which I sorted out. A nice reminder that fiddling with .Xresources is a colossal pain. I’m used to mounting and unmounting things on darwin, and it took me a while to remember that udisksctl was the utility I was looking for. Either I hadn’t hopped on wireless since upgrading my router2, or the Ubuntu upgrade wiped out some settings, but I had to reconnect. wicd-curses is really kind of an ideal manager for wireless, no regrets in having opted for that path. I never got around to getting bluetooth set up, and a cursory glance suggests that there isn’t a curses-based solution out there. What else… oh, SDL is still a miserable exercise.

All in all, this setup still suits a certain subset of my needs very well. Linux seems to be getting less fiddly over time, though I still can’t imagine that the ‘year of desktop Linux’ is any closer to the horizon. I wouldn’t mind living in this environment, though I would still need software that’s only available on Mac/Win (like CC), and the idea of my main computer being a dual-boot that largely keeps me stuck in Windows is a bit of a downer. Perhaps my next experiment will be virtualization under this minimal install.


netrw and invalid certificates

Don’t trust invalid certificates. Only do this sort of workaround if you really know what you’re dealing with is okay.

Sometimes I just need to reference the source of an HTML or CSS file online without writing to it. If I need to do this while I’m editing something else in vim, my best course of action is to open a split in vim and do it there. Even if I’m not working on said thing in vim, that is the way that I’m most comfortable moving around in documents, so there’s still a good chance I want to open my source file there.

netrw, the default1 file explorer for vim, handles HTTP and HTTPS. By default, it does this using whichever of the following it finds first: elinks, links, curl, wget, or fetch. At work, we’re going through an HTTPS transition, and at least for the time being, the certificates are… not quite right. Not sure what the discrepancy is (it’s not my problem), but strict clients are wary. This includes curl and wget. When I went to view files via HTTPS in vim, I was presented with errors. This obviously wasn’t vim’s fault, but it took a bit of doing to figure out exactly how these elements interacted and how to modify the behavior of what is (at least originally) perceived as netrw.

When netrw opens up a remote connection, it essentially just opens up a temporary file, and runs a command that uses that temporary file as input or output depending on whether the command is a read or write operation. As previously mentioned, netrw looks for elinks, links, curl, wget, and fetch. My cygwin install has curl and wget, but none of the others. It also has lynx, which I’ll briefly discuss at the end. I don’t know if elinks or links can be set to ignore certificate issues, but I don’t believe so. curl and wget can, however.

We set this up in vim by modifying netrw_HTTP_cmd, keeping in mind that netrw is going to spit out a temporary file name to read in. So we can’t output to STDOUT; we need to end with a file destination. For curl, we can very simply use :let g:netrw_HTTP_cmd="curl -k". For wget, we need to specify output, tell it not to verify certs, and otherwise run quietly: :let g:netrw_HTTP_cmd="wget --no-check-certificate -q -O".

I don’t have an environment handy with links or elinks, but glancing over the manpages leads me to believe this isn’t an option with either. It isn’t with lynx either, but in playing with it, I still think this is useful: for a system with lynx but not any of the default HTTP(s) handlers, netrw can use lynx via :let g:netrw_HTTP_cmd="lynx -source >". Also interesting is that lynx (and presumably links and elinks via different flags) can be used to pull parsed content into vim: :let g:netrw_HTTP_cmd="lynx -dump >".


Trying Twitterrific

[N]ot to worry, for the full Twitter experience on your Mac, visit Twitter on web.

I could not stop laughing in disgust when I read the email in which Twitter, a company known primarily for taking user experience and ruining it, announced that they were shuttering their Mac client. The idea that Twitter in a browser is in any way a palatable experience is horrifying, and the only explanation I can offer is that the entire Twitter UX team is composed of unpaid interns.

As part of our ongoing effort to streamline our apps and provide a more consistent and up-to-date Twitter experience across platforms, we are no longer supporting the Twitter for Mac app.

To be fair, the official Mac app was horribly neglected, and just… a bad experience. It didn’t support the latest changes to the Twitter service (like 280 chars), it was a buggy mess when you tried to do simple things like scrolling, and it crashed at least once a week on me. It was a bad app, yet still infinitely more manageable than using a full-fledged web browser for something as miniature-by-design as Twitter. Enter Twitterrific.

The idea of paying a third party so that I can access a service so rampantly overrun by TERFs and nazis that I feel the need to maintain a private account never really made sense to me. But, unlike the other great UX nightmare, Facebook, I don’t hate the company and the service with every atom of my body. I guess I’m kind of a sucker for the shithole that is Twitter. So, I have paid for Twitterrific. And, it’s pretty good.

Twitter clients were once this sort of UI/UX playground, and while I don’t entirely think that’s a good thing, some genuinely positive user interaction experiences were born of it. Twitterrific (speaking only of the MacOS edition for this post) feels largely native, but still has enough of these playground interactions to frustrate me. The biggest one is that threads (etc.) don’t expand naturally; they pop out in little impermanent window doodads, and if you want to ensure you don’t lose your place, you have to manually tear them off and turn them into windows.

There are some other little issues, like a lack of granular control over notification sounds, but all in all the thing is better than the official client has been for years. Mostly just in that it reliably updates, it knows how to scroll, and like any good MacOS app it does not freeze every other day. I’ve been using it since Twitter made their shitty announcement (mid-February), and it’s a solid product. I guess this post has been more rant than review, but the facts are simple: if you use a Mac and you use Twitter, your experience either has gone or will go to absolute shit. Unless you use a third-party Twitter client. And Twitterrific is a pretty good one.


Dotfile highlights: .vimrc

I use zsh, and portability across Darwin, Ubuntu, Red Hat, cygwin, WSL, various gvims, etc. means I may have pasted something in that’s system-specific by accident.

New series time, I guess! I thought for the benefit of my future self, as well as anyone who might wander through these parts, there might be value in documenting some of the more interesting bits of my various dotfiles (or other config files). First up is .vimrc, and while I have plenty of important yet trivial things set in there (like set shell=zsh and mitigating a security risk with set modelines=0), I don’t intend to go into anything that’s that straightforward. But things like:

"uncomment this on a terminal that supports italic ctrl codes
"but doesn't have a termcap file that reports them
"set t_ZH=^[[3m
"set t_ZR=^[[23m

…are a bit more interesting. I do attempt to maintain fairly portable dotfiles, which means occasionally some of the more meaningful bits start their lives commented out.

Generally speaking, I leave word wrapping on, and I don’t hard wrap anything1. I genuinely do not understand the continuing practice of hard wrapping in 2018. Even notepad.exe soft wraps. I like my indicator to be an ellipsis, and I need to set some other things related to tab handling:

"wrap lines, wrap them at logical breaks, adjust the indicator
set wrap
if has("linebreak")
	set linebreak
	set showbreak=…\ \ 
	set breakindentopt=shift:1,sbr
endif

Note that there are two escaped spaces after the ellipsis in showbreak. I can easily see this trailing space because of set listchars=eol:↲,tab:→\ ,nbsp:·,trail:·,extends:…,precedes:…. I use a bent arrow in lieu of an actual LFCR symbol for the sake of portability. I use ellipses again for the ‘more stuff this way’ indicators on the rare occasions I turn wrapping off (set sidescroll=1 sidescrolloff=1 for basic unwrapped sanity). I use middots for both trailing and non-breaking spaces; either one shows me there’s something space-related happening. I also only set list if &t_Co==256, because that would get distracting quickly on a 16-color terminal.

Mouse handling isn’t necessarily a given:

if has("mouse") && (&ttymouse=="xterm" || &ttymouse=="xterm2")
	set mouse=a "all mouse reporting.
endif

I’m not entirely sure why I check for xterm/2. I would think it would be enough to check that it isn’t null. I may need to look into this. At any rate, the variable doesn’t exist if not compiled with +mouse, and compiling with +mouse obviously doesn’t guarantee the termcap is there, so two separate checks are necessary.

I like my cursors to be different in normal and insert modes, which doesn’t happen by default on cygwin/mintty. So,

"test for cygwin; not sure if we can test for mintty specifically
"set up block/i cursor
if has("win32unix")
	let &t_ti.="\e[1 q"
	let &t_SI.="\e[5 q"
	let &t_EI.="\e[1 q"
	let &t_te.="\e[0 q"
endif

Trivial, but very important to me:

"make ctrl-l & ctrl-z work in insert mode; these are crucial
imap <C-L> <C-O><C-L>
imap <C-Z> <C-O><C-Z>

I multitask w/ Unix job control constantly, and hugo server’s verbosity upon file write means I’m refreshing the display fairly often. Whacking Ctrl-O before Ctrl-L or Ctrl-Z is easy enough, but I do it enough that I’d prefer to simplify.

I have some stuff in for handling menus on the CLI, but I realize I basically never use it… So while it may be interesting, it’s probably not useful. Learning how to do things in vim the vim way is generally preferable. So, finally, here we have my status line:

if has("statusline")
	set laststatus=2
	set statusline=%{winnr()}%<:%f\ %h%m%r%=%y%{\"[\".(&fenc==\"\"?&enc:&fenc).((exists(\"+bomb\")\ &&\ &bomb)?\",B\":\"\").\"]\ \"}%k\ %-14.(%l,%c%V%)\ %P
endif

I don’t like my status line to be too fancy, or rely on anything nonstandard. But there are a few things here which are quite important to me. First, I start with the window number. This means when I have a bunch of splits, I can easily identify which I want to switch to with (say) 2Ctrl-W w. I forget what is shown by default, but toward my right side I show edited/not, detected filetype, file encoding, and presence or lack thereof of a BOM. Here’s a sample:

2<hfl.com/content/post/2018-02/vimrc.md [+][markdown][utf-8]  65,6           Bot

That’s about everything notable from my .vimrc. Obviously, I set my colorscheme, I set up some defaults for printing, I set a few system-dependent things, I set some things to pretty up folds. I set spell; display=lastline,uhex; syntax on; filetype on; undofile; backspace=indent,eol,start; confirm; timeoutlen=300. I would hesitantly recommend new users investigate Tim Pope’s sensible.vim, though I fundamentally disagree with some of his ideas on sensibility (incsearch?2 autoread? Madness).


Firefox mobile

Well, I finally upgraded (downgraded?) to iOS 11, which means trying out the mobile version of Firefox1 and revisiting the Firefox experience as a whole. While Quantum on the desktop did show effort from the UI team to modernize, my biggest takeaway is that both the mobile and desktop UIs still have a lot of catching up to do. I mentioned previously how the inferiority of Firefox’s URL bar might keep me on Chrome, and the reality is that this is not an outlier. Both the desktop and mobile UI teams seem to be grasping desperately at some outdated user paradigms, and the result is software that simply feels clumsy. While I have always been a proponent of adhering to OS widgets and behaviors as much as possible, this is only strengthened on mobile, where certain interaction models feel inextricable from the platform.

All of this to bring me to my first and most serious complaint about Firefox Mobile: no pull-to-refresh. I believe this was a UI mechanism introduced by Tweetie (which later became the official Twitter client), but it’s so ingrained into the mobile experience at this point that I get extremely frustrated when it doesn’t work. This may seem petty, but to me it feels as broken as the URL bar on desktop.

A UI decision that I thought I would hate, but am actually fairly ambivalent on, is the placement of navigation buttons. Mobile Chrome puts the back button with the URL bar, hiding it during text entry, and hides stop/refresh in a hamburger menu (also by the URL bar). Firefox Mobile has an additional bar at the bottom with navigation buttons and a menu (much like mobile Safari). I don’t like this UI, it feels antiquated and wasteful, but I don’t hate it as much as I expected to. One thing that I do find grating is the menu in this bar. I have a very difficult time remembering what is in this menu vs. the menu in the URL bar. The answer often feels counterintuitive.

In my previous post about desktop Firefox, I was ecstatic about the ability to push links across devices, something I’ve long desired from Chrome. It worked well from desktop to desktop, and it works just as well on mobile. This is absolutely a killer feature for folks who use multiple devices. Far superior to syncing all tabs, or searching another device’s history. On the subject of sync, mobile Firefox has a reader mode with a save-for-later feature, but this doesn’t seem to integrate with Pocket (desktop Firefox’s solution), which makes for a broken sync experience.

Both Chrome and Firefox have QR code detection on iOS, and both are quick and reliable (much quicker and more reliable than the detection built into the iOS 11 camera app). Chrome pastes the text from a read QR code into the URL bar; Firefox navigates to the text contained in the code immediately. That’s a terrifyingly bad idea.

A few additional little things:

Finally, a few additional thoughts on desktop Firefox (Quantum), now that I’ve gotten a bit of additional use in:


Animal Crossing: Pocket Camp

Animal Crossing: Pocket Camp has been available stateside for about a week now, and it is… strange. This post on ‘Every Game I’ve Finished’ (written by Mathew Kumar) mirrors a lot of my thoughts – I would recommend reading it before reading this. I haven’t really played a lot of Animal Crossing games before, and I tend to avoid free-to-play1 games. The aforementioned post is largely predicated on the fact that Pocket Camp doesn’t fully deliver on either experience. Which, I guess I wouldn’t really know, but something definitely feels odd about the game to me.

Early in his post, Kumar states that ‘[Pocket Camp] makes every single aspect of it an obvious transaction’, which is comically true. My socialist mind has a hard time seeing the game as anything but a vicious parody of capitalism. My rational mind, of course, knows this is not true because the sort of exploitative mundaneness that coats every aspect of the game is the norm in real life.

This becomes even more entertaining when you observe how players set prices in their Markets. For the uninitiated, when your character has a surplus of a thing, they can offer that thing for sale to other players. The default price is its base value, but you can adjust the sale price down a small amount or up a large amount. Eventually you’ll likely just max out your inventory and be forced to put things up for sale in this Market. More eventually, you’ll max out the Market and be forced to just throw stuff away without getting money for it. But in the meantime, people (strangers and friends) will see what you have to offer and be given the opportunity to buy it.

For the most part, if you need an item (I use the term ‘need’ loosely), it is common, and either hopping around or waiting a couple of hours will get you that item. So there should be no reason to charge a 1000% markup on a couple of apples. But (in my experience thus far) that is far more common than seeing items sold for the minimum (or even their nominal value). I don’t know if it’s just players latching on to the predatory nature of free-to-play games or what, and I’m really curious to know if it works. I’ve been listing things in small quantities (akin to what an animal requests) for the minimum price, and while I’ve sold quite a few items, most still go to waste – I can’t imagine anything selling at ridiculous markups.

So far this description of a capitalist hellscape has probably come off as though I feel negatively toward the game, which I really don’t. To return to Kumar, he leaves his post stating that he hasn’t given up on the game yet, but ‘like Miitomo, the first time I miss a day it’s all over.’ This comparison to Miitomo is apt, and a perfect segue into why I’m invested in this minor dystopia.

Miitomo (another Nintendo mobile thing) is really just a game where you… decorate a room and try on clothes. You answer questions and play some pachinko-esque minigames in order to win decorations and clothes, but it’s basically glorified dress-up. It seems like mostly young people playing it, but it’s also just a wonderful outlet for baby trans folks, people questioning gender, and any number of people seeking a little escape. I find Miitomo to be very valuable and underrated, and a lot of the joy Miitomo brings me is echoed by Pocket Camp.

While the underlying concept behind Pocket Camp is that you’re a black market butterfly dealer or whatever, there’s also a major ‘dollhouse’ component to it. You buy and receive cute clothes and change your outfits, which has no bearing on the game. You buy things to decorate your campsite which (effectively2) has no bearing on the game. You can drop 10,000 dollars (er, bells) on a purse that does nothing but sit in the dirt looking pretty. I guess it’s hypocritical to praise this meaningless materialism, but it’s a nice escape. A little world to mess around in and make your own.

I don’t know how long I’ll obsessively island-hop the world of Pocket Camp, but I think that (like Miitomo) once the novelty wears off, I’ll still pop in to play around with my little world when it occurs to me to do so. And the whole time, in my mind, it will remain a perfectly barbed satire on capitalism.


Firefox Quantum

There was once a time when the internet was just beginning to overcome its wild wild west nature, and sites were leaning toward HTML spec compliance in lieu of (or, more accurately, I suppose, in addition to) Internet Explorer’s way of doing things. Windows users in the know turned to Firefox; Mac users were okay sticking with Safari, but they were still few and far between. Firefox was like the saving grace of the browser world. It was known for leaking memory like a sieve, but it was still safer and more standards-compliant than IE. Time went on, and Chrome happened. Compared to Chrome, Firefox was slow, ugly, and lacking in convenience features; it had a lackluster search bar, and that damn memory leak never went away. Firefox largely became relegated to serious FOSS nerds and non-techies whose IT friends told them it was the only real browser a decade ago.

I occasionally installed/updated Firefox for the sake of testing, and these past few years it only got worse. The focus seemed to be goofy UI elements over performance. It got uglier, less pleasant to use, and more sluggish. I assumed it was destined to become relegated to Linux installs. It just… was not palatable. I honestly never expected to recommend Firefox again, and in fact when I did just that to a fellow IT type he assumed that I was drunk on cheap-ass rum.

Firefox 57 introduces a new, clean UI (Photon) and a new, incredibly quick rendering engine. I can’t tell if the rendering engine is just a new version of Gecko, or if the engine itself is called Quantum (the overall new iteration of the browser is known as Quantum), but I do know it’s very snappy. I’m not sure if it actually is, but it feels faster than Chrome on all but the lowest-end Windows and macOS machines that I’ve been testing it on. It still consumes more memory than other browsers I’ve pitted it against, and its sandboxing and multiprocess support are a work in progress. The UI looks more at home on Win 10 than macOS, but in either case it looks a hell of a lot better than the old UI, and it fades into the background well enough. On very low-end machines (like a Celeron N2840 2.16GHz 2GB Win 8 HP Stream), Firefox feels more sluggish than Chrome – and this sluggishness seems related to the UI rather than the rendering engine.

I’ve been using Quantum (in beta) for a while, alongside Chrome, and that’s really what I want to attempt to get at here. Both have capable UIs, excellent renderers, and excellent multi-device experiences. I don’t particularly like Safari’s UI, but even if I did the UX doesn’t live up to my needs simply because it’s vendor-dependent (while not platform-dependent, the only platforms are Apple’s), and I want to be able to sync things across my Windows, macOS, iOS, and Linux environments. Chrome historically had the most impressive multi-device experience, but I think Firefox has surpassed it – though both are functional. So it’s starting to come down to the small implementation details that really make a user experience pleasant.

As a keyboard user, Firefox wins. Firefox and Chrome1 both have keyboard cursor modes, where one can navigate a page entirely via cursor keys and a visible cursor. This is an accessibility win, but very inefficient compared to a pointing device. Firefox, however, has another good trick – ‘Search for text when you type’, previously known as Type Ahead Find (I think, I know it was grammatically mysterious like that). So long as the focus is on the body, and not a textbox, typing anything begins a search. Ctrl-G or Cmd-G goes to the next hit, and Enter ‘clicks’ it. Prefacing the search with an apostrophe (') restricts it to links. It makes for an incredibly efficient navigation method. Chrome has some extensions that work similarly, but I never got on with them, and I definitely prefer an inbuilt solution.

Chrome’s search/URL bar is way better2. It seems to automatically pick up new search agents, and they are automatically available when you start typing the respective URL. One hits tab to switch from URL entry to searching the respective site, and it works seamlessly and effortlessly. All custom search agents in Firefox, by contrast, must be set up in preferences. You don’t get a seamless switch from URL to search, but instead must set up search prefixes. So, on Chrome, I start typing ‘amazon.com’, and at any point in the process, I hit tab and start searching Amazon. With Firefox, I have to have set up a prefix like ‘am’, and remember to do a search like ‘am hello kitty mug’ to get the search results I want. It is not user-friendly, it is not seamless, and it just feels… ancient. Chrome’s method also allows for autocomplete/instant search for these providers, which Firefox only offers for your main search engine. It is actually far better to skip this feature in Firefox entirely and use DuckDuckGo bangs instead. The horribly weak search box alone could drive me back to Chrome.

Chrome used to go back or forward (history-wise) if you overscrolled far enough left or right – much like how Chrome mobile works. This no longer seems to work on Chrome desktop, and it doesn’t work on Firefox either. I guess I’m grumpier at Google for teasing and taking away. I know it was a nearly-undiscoverable UI feature, and probably frustrated users who didn’t know why they were jumping around, but it freed up mouse buttons.

I don’t know how to feel about Pocket vs. Google’s ‘save for later’ type solution. Google’s only seems to come up on mobile. Pocket is a separate service, and without doing additional research, it’s unclear how Mozilla ties into it (they bought the service at some point). At least with Google you know you’re the product.

I have had basically no luck streaming on Firefox. Audio streams simply don’t start playing; YouTube and Hulu play for a few seconds and then blank and stop. I assume this will be fixed fairly quickly, but it’s bad right now.

Live Bookmarks are a thing that I think Safari used to do, too? Basically you can have an RSS feed turn into a bookmark folder, and it’s pretty handy. Firefox does this; Chrome has no inbuilt RSS capability. Firefox doesn’t register JSON Feed, which makes it a half-solution to me, which makes it a non-solution to me. But, it’s a cool feature. I would love to see a more full-featured feed reader built in.

Firefox can push URLs to another device. This is something that I have long wished Chrome would do. Having shared history and being able to pull a URL from another device is nice, but if I’m at work and know I want to read something later, pushing it to my home computer is far superior.

I’ll need to revisit this once I test out Firefox on mobile (my iOS is too far out of date, and I’m not ready to make the leap to 11 yet). As far as the desktop experience is concerned, though, Quantum is a really, really good browser. I’m increasingly using it over Chrome. The UI leaves a bit to be desired, and the URL/search bar is terrible, but the snappiness and keyboard-friendliness are huge wins.


Speech synthesis

When I was in elementary school, I learned much of my foundation in computing on the Commodore 64. It was a great system to learn on, with lots of tools available and easy ways to get ‘down to the wire’, so to speak. Though it was hard to see just how limited the machines were compared with what the future held, some programs really stood out for how completely impossible they seemed1. One such program was S.A.M. – the Software Automated Mouth, my first experience with synthesized speech2.

Speech synthesis has come a long way since. It’s built into current operating systems, it can be had in IC form for under $9, and it’s becoming increasingly present in day-to-day life. I routinely use Windows’ built-in speech synthesizer along with NVDA as part of my accessibility checking regimen. But I’m also increasingly dismayed by the egregious use of speech synthesis where natural human speech would not only suffice but be better in every regard. Synthesis has the advantage of being able to (theoretically) say anything while not paying a person to do the job. I’m seeing more and more instances where this doesn’t pan out, and the robot is truly bad at its job to boot.

Three examples, all train-related (I suppose I spend a lot of time on trains): the new 7000 series DC Metro cars, the new MARC IV series coach cars, and the announcements at DC’s Union Station. None of these need to be synthesized. They’re all essentially announcing destinations – they have very limited vocabularies and don’t make use of the theoretical ability to say anything. Union Station’s robot occasionally announces delays and the like, but often announcements beyond the norm revert to a human. Metro and MARC trains only announce stops and have demonstrated no capacity for supplemental speech. Where old and new cars are paired, conductors/operators still need to make their own station stop announcements.

So these synthesizers don’t seem to have a compelling reason to exist. It could be argued that human labor is now potentially freed up, but given the robots’ limited vocabularies and grammars, the same thing could be accomplished with human voice recordings. I can’t imagine that the cost of hiring a voice actor, plus software to patch the speech together into meaningful grammar, would be appreciably higher than the robot. In fact, before the 7000 series Metro cars, WMATA used recordings to announce door openings and closings; they replaced these recordings in 2006, and the voice actor was rewarded with a $10 fare card3.

Aside from simply not being necessary, the robots aren’t good at their job. This is, of course, bad programming – human error. But it feels like the people in charge of the voices are so far detached from the final product that they don’t realize how much they’re failing. The MARC IV coaches are acceptable, but their grammar is bizarre. When the train is coming to a station stop, an acceptable thing to announce might be ‘arriving at Dickerson’, which is in fact what the conductors tend to say. The train, instead, says ‘this train stops at Dickerson’, which at face value says nothing beyond that the train will in fact stop there at some point. It’s bad information, communicated poorly. Union Station’s robot has acceptable grammar, but she pronounces the names of stations completely wrong.

Speech synthesizers generally have two components: the synthesizer that knows how to make phonemes (the sounds that make up our speech), and a layer that translates the words in a given language to these phonemes. My old buddy S.A.M. had the S.A.M. speech core, and Reciter, which looked up word parts in a table to convert to phonemes. This all had to fit into considerably less than 64K, so it wasn’t perfect, and (if memory serves) one could override Reciter with direct phonemes for mispronounced words. Apple’s say command (well, their Speech Synthesis API) allows on-the-fly switching between text and phoneme input using [[inpt TEXT]] and [[inpt PHON]] within a speech string4. So again, given just how limited the robot’s vocabulary is (none of these trains are adding station stops with any regularity), someone should have been able to review what the robot says and suggest overrides. Half the time, this robot gets so confused that she sounds like GLaDOS in her death throes.
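
As a rough illustration of that switching with say on MacOS (the phoneme string below is my guess at ‘Dickerson’, purely illustrative rather than a verified transcription):

# normal text-to-speech
say "Arriving at Dickerson"
# the same word forced through an embedded phoneme override
say "Arriving at [[inpt PHON]]d1IHkERsIXn"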

Which brings me to my final point – the robots simply aren’t human. Even when they are pronouncing things well, they can be hard to understand. On the flipside, the DC Metro robot sounds realistic enough that she creeps me the hell out, which I can only assume is the auditory equivalent of the uncanny valley. I suppose a synthesized voice could have neutrality as an advantage – a grumpy human is probably more off-putting than a lifeless machine. But again, this is solvable with human recordings. I cannot imagine any robot being more comforting than a reasonably calm human.

Generally speaking, we’re reducing the workforce more and more, replacing it with automation and machinery. It’s a necessary progression, though I’m not sure we’re prepared to deal with the unemployment consequences. It’s easy to imagine speech synthesis as a readily available extension of this concept – is talking a necessary job? But human speech is seemingly being replaced in instances where the speaking does not actually replace a human’s job and/or a human recording would easily suffice. In some instances, speaking being replaced is a mere component of another job being replaced – take self-checkout machines (which tend to use human recordings despite the fact that grocery store inventories are far more volatile than train routes, hence ‘place your… object… in the bag’). But I feel like I’m seeing more and more instances of speech synthesis that is demonstrably worse than a human voice and seemingly serves no purpose (beyond, presumably, lining someone’s pockets).


Tagging in Acrobat from the keyboard

December 2023 update (updating a prior May 2020 update): at some point within the Acrobat DC lifecycle, the behavior of F6 has changed.

Since much of my work revolves around §508 compliance, I spend a lot of time restructuring tags in Acrobat. Unfortunately, you can’t just handwrite these tags à la HTML; you have to physically manipulate a tree structure. The Tags panel is very conducive to mouse use and, because Adobe is Adobe, not very conducive to keyboard use. Many important tasks are missing readily available keyboard shortcuts, and it has taken me a while to be able to largely ditch the mouse1 and instead use the keyboard to very quickly restructure the tags on very long, very poorly tagged documents.

A couple of notes – this assumes a Windows machine, and one with a Menu key2. While I generally prefer working on MacOS, I’m stuck with Windows at work, so these are my efficiencies. Windows may actually have the leg up here, since the Acrobat keyboard support is so poor, and MacOS does not have a Menu key equivalent. Additionally, this applies to Acrobat XI; it may or may not apply to current DC versions. Finally, all of this information is discoverable, but I haven’t really seen a primer laid out on it. If nothing else, perhaps it will help a future version of myself who forgets all of this.


Binaries and hex editors

Talking about certain files as ‘binaries’ is a funny thing. All files are ultimately binary, after all; it’s just a matter of whether or not a file is encoded as text. Even in the world of text, an editor or viewer needs to know how the text is encoded, what bytes map to what characters. Is a file ASCII, UTF-8, PostScript? Once we know whether something is text or not, it’s still likely to be made to the standards of a specific format, lest it be nothing but plain text. Markdown, HTML, even PDF1 are human-readable text to an extent, with rules about how their content is interpreted. A human as well as a web browser knows that a <p> starts a paragraph, and that this paragraph continues until a matching </p> is found.

If we open a binary in a text editor, we’ll see a lot of familiar characters, where data happens to coincide with printable ASCII. We’ll also see a lot of gibberish, and in fact some of the characters may cause a terminal to behave erratically. Opening a binary in a hex editor makes a little more sense of it, but still leaves a lot to be answered. In one column, we’ll see a lot of hexadecimal values; in another we’ll see the same sort of gibberish we would have seen in our text editor. In some sort of status display, we’ll also generally see a few more bits of information – what byte we’re on, its hex value, its decimal value, etc. Why would we ever want to do this? Well, among other things, binary file formats have rules as well, and if we know these rules, we can inspect and navigate them much like an HTML file. Take this piece of a PNG file, as it would appear in bvi (my hex editor of choice).

00000000  89 50 4E 47 0D 0A 1A 0A 00 00 00 0D 49 48 44 52 .PNG........IHDR
00000010  00 00 02 44 00 00 01 04 08 06 00 00 00 C9 50 2B ...D..........P+
00000020  AB 00 00 00 04 73 42 49 54 08 08 08 08 7C 08 64 .....sBIT....|.d
00000030  88 00 00 00 09 70 48 59 73 00 00 0B 12 00 00 0B .....pHYs.......
00000040  12 01 D2 DD 7E FC 00 00 00 1C 74 45 58 74 53 6F ....~.....tEXtSo
"ban_ln_560_NLW.png" 14498451 bytes    00000000 10001001 \211 0x89 137 NUL

Semaphore and sips redux

In this article, I do sem -j +5, which allows up to five more jobs than the machine has CPU cores to run at a time. -j can be used with integers, percentages, and +/- offsets relative to the core count, so -j +0 runs one job per core, -j -1 runs one fewer job than the available cores, etc.

I was going to simply edit my last post, but this might warrant its own, as it’s really more about sem and parallel than it is sips. parallel’s manpage describes it as ‘a shell tool for executing jobs in parallel using one or more computers’. It’s kind of a better version of xargs, and it is super powerful. The manpage starts early with a recommendation to watch a series of tutorials on YouTube and continues on to example after example after example. It’s intense.

In my previous post, I suggested using sem for easy parallel execution of sips conversions. sem is really just an alias for parallel --semaphore, described by its manpage (yes, it gets its own manpage) as a ‘counting semaphore [that] simply waits for a semaphore to become available and then runs the command given’. It’s a convenient and fairly accessible way to parallelize tasks. That manpage focuses on some of the specifics about how it queues things up, how it waits to execute tasks, etc. It does this using toilet metaphors, which is a whole other conversation, but for the most part it’s fairly clear, and it’s what I tend to reference when I’m figuring something out using sem.
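
A minimal sketch of that behavior (the gzip jobs here are just stand-ins for anything CPU-heavy):

sem -j 2 gzip -9 big1.log    # each sem call returns as soon as the job is queued
sem -j 2 gzip -9 big2.log    # at most two of these run at any given moment
sem -j 2 gzip -9 big3.log
sem --wait                   # block until everything queued above has finished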

In my last post (and in years of converting things this way), I had to decide between automating the cleanup/rm process and parallelizing the sips calls. The problem is, if you do this:

for i in ./*.tif; sem -j +5 sips -s format png "$i" --out "${i/.tif/.png}" && rm "$i"

…the parallelism gets all thrown off. sem executes, queues up sips, presumably exits 0, and then rm destroys the file before sem even gets the chance to spawn sips. None of the files exist, and sips has nothing to convert. The sem manpage doesn’t really address chaining commands in this manner; presumably it would be too difficult to fit into a toilet metaphor. But it occurred to me that I might come up with the answer if I just looked through enough of the examples in the parallel manpage (worth noting that a lot of the parallel syntax is specific to not being run in semaphore mode). The solution is facepalmingly simple: wrap the && in double quotes:

for i in ./*.tif; sem -j +5 sips -s format png "$i" --out "${i/.tif/.png}" "&&" rm "$i"

…which works a charm. We could take this even further and feed the PNGs directly into optipng:

for i in ./*.tif; sem -j +5 sips -s format png "$i" --out "${i/.tif/.png}" "&&" rm "$i" "&&" optipng "${i/.tif/.png}"

…or potentially adding optipng to the sem queue instead:

for i in ./*.tif; sem -j +5 sips -s format png "$i" --out "${i/.tif/.png}" "&&" rm "$i" "&&" sem -j +5 optipng "${i/.tif/.png}"

…I’m really not sure which is better (and I don’t think time will help me since sem technically exits pretty quickly).
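
One possible workaround for the timing problem, since sem returns as soon as a job is queued: queue everything as before, then block on sem --wait before the clock stops. A sketch I haven’t actually benchmarked:

time (
	for i in ./*.tif; sem -j +5 sips -s format png "$i" --out "${i/.tif/.png}" "&&" rm "$i"
	sem --wait
)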


Darwin image conversion via sips

I use Lightroom for all of my photo ‘development’ and library management needs. Generally speaking, it is great software. Despite being horribly nonstandard (that is, using nonnative widgets), it is the only example of good UI/UX that I’ve seen out of Adobe in… at least a decade. I’ll be perfectly honest right now: I hate Adobe with a passion otherwise entirely unknown to me. About 85-90% of my professional life is spent in Acrobat Pro, which gets substantially worse every major release. I would guess that around 40% of my be-creative-just-to-keep-my-head-screwed-on time is spent in various pieces of CC (which, subscription model is just one more fuck-you, Adobe). But Lightroom has always been special. I beta tested the first release, and even then I knew… this was the rare excuse for violating so many native UI conventions. This made sense.

Okay, from that rant we come up with: thumbs-down to Adobe, but thumbs-up to Lightroom. But there’s one thing that Lightroom has never opted to solve, despite so many cries, and that is PNG export. Especially with so many photographers (myself included) using flickr, which reencodes TIFFs to JPEGs, but leaves the equally lossless PNG files alone, it is ridiculous that the Lightroom team refuses to incorporate a PNG export plugin. Just one more ’RE: stop making garbage’ memo that I need to forward to the clowns at Adobe.

All of this just to come to my one-liner solution for Mac users… sips is the CLI/Darwin side of the image conversion machinery that MacOS uses in Preview, etc. The manpage is available online, conveniently. But my use is very simple – make a bunch of stupid TIFFs into PNGs.

for i in ./*.tif ; sips -s format png "$i" --out "${i/tif/png}" && rm "$i"

…is the basic line that I use on a directory full of TIFFs output from Lightroom. Note that this is zsh, and I’m not 100% positive that the variable substitution is valid bash. Lightroom seemingly outputs some gross TIFFs, and sips throws up an error for every file, but still exits 0, and spits out a valid PNG. sips does not do parallelism, so a better way to handle this may be (using semaphore):

for i in ./*.tif; sem -j +5 sips -s format png "$i" --out "${i/tif/png}"

…and then cleaning up the TIFFs afterward (rm ./*.tif). Either way. There’s probably a way to do both using flocks or some such, but I haven’t put much time into that race condition.

At the end of the day, there are plenty of image conversion packages out there (ImageMagick comes to mind), but if you’re on MacOS/Darwin… why not use the builtins if they function? And sips does, in a clean and simple way. While it certainly isn’t a portable solution, it’s worth knowing about for anyone who does image work on a Mac and feels comfortable in the CLI.


dvtm and the mouse

This is an old post from an old blog; assets may be missing, links may be broken, and my opinions may differ considerably by this point… Notably, in February 2021, a reader sent in a comment informing me that a PR was submitted to support mouse wheel scrolling in DVTM, and that they’ve patched it into their local environment with success. I haven’t tested this (and won’t, as I’ve relied on job control for multitasking for the past… ten years or so), so YMMV, but… it’s an update!

I've gotten quite a few hits from people searching for things like 'dvtm pass mouse.' I don't have much to say on the matter, except that this is the one thing that really bugs me about dvtm. As I have mentioned previously, given the choice between screen, tmux, and dvtm, I like dvtm the best. It is certainly the simplest, and has the smallest footprint. It automatically configures spaces, and makes notions of simultasking as simple as double-clicking. I would say that it brings the best of the GUI experience to terminal multiplexing, while still keeping true to the command line.


dc Syntax for Vim

This is an old post from an old blog; assets may be missing, links may be broken, and my opinions may differ considerably by this point…

I use dc as my primary calculator for day-to-day needs. I use other calculators as well, but I try to largely stick to dc for two reasons - I was raised on postfix (HP 41CX, to be exact) and I'm pretty much guaranteed to find dc on any *nix machine I happen to come across. Recently, however, I've been expanding my horizons, experimenting with dc as a programming environment, something safe and comfortable to use as a mental exercise. All of that is another post for another day, however - right now I want to discuss writing a dc syntax definition for vim.
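
Since I’ve mentioned leaning on dc as a day-to-day calculator, a trivial taste of the postfix workflow for anyone unfamiliar (nothing here is specific to my setup):

# push operands, apply the operator, and p prints the top of the stack
echo '2 3 + p' | dc        # 5
# o pops the top of the stack and uses it as the output radix
echo '16 o 255 p' | dc     # FF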