Backward compatibility in operating systems

Earlier this week, Tom Scott posted a video to YouTube about the forbidden filenames in Windows. It’s an interesting subject that comes up often in discussions of computing esoterica, and Scott does an excellent job of explaining it without demanding much technical background. Then the video pivots: what was ostensibly a discussion of one little Windows quirk turns into a broader discussion of backward compatibility, which inevitably becomes a matter of Apple vs. Microsoft. At this point, I think Scott does Apple a bit of a disservice.

If you’ve read much of my material here, you’ll know I don’t have much of a horse in this race; I’m not in love with either company or their products. I’m writing this post from WSL/Ubuntu under Windows 10, a truly unholy matrimony of software. And while I could easily list off my disappointments with macOS, I genuinely find Windows an absolute chore to use as a day-to-day, personal operating system. One of my largest issues is how much of it is steeped in weird legacy garbage. A prime example is the fact that Windows 10 has both ‘Settings’ and ‘Control Panel’ applications, with two entirely different user experiences and a seemingly random Venn diagram of what is accessible from where.

This all comes down to Microsoft’s obsession with backward compatibility, which has its ups and downs. Apple prioritizes a streamlined, smooth experience over backward compatibility, yet they’ve still gone out of their way to support a reasonable amount of backward compatibility throughout their history. They’ve transitioned processor architecture twice, each time adding a translation layer to the operating system to extend the service life of software. I think they do precisely the right amount of backward compatibility to reduce bloat and confusion[1]. It makes for a better everyday, personal operating system.

That doesn’t make it, however, a better operating system overall; it would be absurd to assume that one approach can be generally declared better. Microsoft’s level of obsession in this regard is crucial for, say, enterprise activities, small businesses that can’t afford to upgrade decades-old accounting software, and gaming. There is absolutely comfort in knowing that you can run (with varying levels of success) Microsoft Works from 2007 on your brand new machine. It’s incredibly valuable, and it requires a ton of due diligence from the Windows team.

So, this isn’t to knock Microsoft at all, but it is why I think dismissing Apple for a lack of backward compatibility is an imperfect assessment. I’ve been thinking about this sort of thing a lot lately as I decide what to do moving forward with this machine – do I dual-boot, or do I try to live full-time in Windows 10 with WSL? And I’ve been thinking about it a lot precisely because of how unpleasant I find Windows[2] to be. Thinking about that has made me examine why, and what my ideal computing experience is. Which is another post for another day, as I continue to try to make my Windows experience as usable as possible. Also, I’m not in any way trying to put down Scott’s video, which I highly recommend everyone watch; it was enjoyable even with prior knowledge of the forbidden filenames. It just happened to line up perfectly with my own thoughts on levels of backward compatibility.


  1. Apple has absolutely dropped the ball on bloat and confusion in other ways, but I maintain that it would be far worse if they attempted the level of backward compatibility that Microsoft does. ↩︎
  2. Like many, I believe Windows 7 was the peak of the OS. Windows 10 is… eh. At least we have WSL. And, FWIW, I believe somewhere around OSX 10.6 was where Apple peaked. ↩︎

Open Mic Aid (external)

Hello, my tiny audience. I’ve been teleworking for… three weeks now? I think? I don’t know, I’m not trying to keep track. The reality is that we are all cooped up, likely for months, for good reason. Maybe some of us will find time to do things, maybe some of us will lose time. It’s… new. And hard. Really, really hard.

For the time being, I’m financially secure. But a lot of people aren’t. Like… a lot. a lotttttt. a lotttttttttttttttttttttttttttttt. This thing has really been exposing the cracks in our bullshit economic system, but… at the expense of a lot of vulnerable folks. My pals at Sandy Pug Games are doing a thing to… maybe make some small difference when it comes to that. Open Mic Aid, link right there, or in the title, is a 44pp. zine featuring 20 different artists, all in support of the Restaurant Workers’ Community Foundation. The Foundation’s COVID-19 fundraiser is shooting 50% of proceeds to direct relief, 25% to NPOs supporting restaurants, and 25% to loans for restaurants to reboot themselves. This shit matters. Personally, I’m… terrified to know how many of my local restaurants I’ll lose. Part of that is selfish, but part of it is… that’s a lot of people separated from their jobs, and from what is potentially a comfort zone for them. Companies claiming they’re family is bullshit, but among workers? Yeah, bonds form. And there’s going to be a lot of weird diaspora action going on. I don’t know. It sucks. But you know that.

Anyway this is a link post, so I’ll keep it short. That’s the important reason to go check it out. Here’s a less-important reason: I contributed a thing to it! It’s a slow-paced non-game where you roll 1d10 every week as a growing plant, and… maybe do a thing. Here’s an example:

you are a sprout. you see thin roots near you in the dark soil, and you feel confident in your future as you begin to develop a sense of self. you understand that growth takes time. roll. on a 1, 2, or 3, look at the boxes under ‘b’. if all are filled in, move to c, otherwise fill in that many boxes. on any other roll, move to b(2).

I might write a little Lua script to see how long an average play will last, but a nice thing about working with d10s is… some of that is already done! So I have a rough idea, and I like what’s happening. I also like what’s happening with the other submissions! A dear friend of mine is doing a paint-by-numbers spread where the numbers correspond to sort of… emotions or abstract concepts in your mind. It’s a very cool idea and the preliminary artwork I saw was incredible, as expected. I’ve also seen some nice photo work that’s going in, and I know the rest of the submissions run the gamut from poetry to diarism to… games where you aren’t a plant!

Anyway, everything sucks right now. It sucks in varying amounts for all of us. I have my own concerns about groceries and the like, but at least I have an income. It’s not a contest. But certain laborers are going to be or are already being hit hard. This project aims to alleviate at least some of that, and there’s a lot of creative energy going into it. Check it.


Solving puzzles using Sentient Lang

I’ve been playing a mobile room-escaping-themed puzzle game (I believe the title is simply Can You Escape 50 Rooms) with a friend, and there was a certain puzzle that we got stuck on. By stuck, I mean that we certainly would’ve figured it out eventually, but it was more frustrating than fun, and it consumed enough time that I thought up a fun way to cheat. I am not against cheating at puzzles that are failing to provide me with joy, or that I’m simply unable to complete, but I have a sort of personal principle that if I’m going to cheat, I’m going to attempt to learn or develop something in the process. If I cheat at a crossword, for example, I don’t just look at the answer key; I do research on the subject to figure it out. In this case, I decided to use a language that I’ve only barely dabbled in to solve the puzzle.

It’s worth noting that this is about the solve and not the solution, and I’m not including a solution here. There shouldn’t be any spoilers except to show/explain one puzzle from the game. With that said, the puzzle gives you something like this:

Fifteen discs numbered 1-15 can be moved freely among fifteen positions. There is essentially an outer ring of seven discs, an inner ring of seven discs, and a center disc. This forms seven diamonds: we can imagine a disc from the outer ring as the top of the diamond, then that connects to two discs ‘below’ it in the inner ring, and finally the center disc is the bottom of the diamond. So in the example above, 8-14-6-13 makes a diamond, 10-6-5-13 makes a diamond, and so on. The puzzle is to rearrange these so the sum of the four discs in every diamond is 30.

Every diamond shares one disc with every other diamond (the center), has one unique disc (the disc in the outer ring), and has two discs that are each shared with one other diamond (the discs in the inner ring). A lot of my intuitive thoughts proved impossible pretty quickly, and it devolved into a lot of random positioning between the two of us. When I was given the green light to write a program to solve it for us, I figured I could either brute-force it in any language, or I could do something fun and use Sentient.

Sentient is perfect for this sort of task. You don’t need to know how to solve a thing; you just give it a bunch of rules and it sorts that out itself. We’re going to map our fifteen positions to indices in an array; their respective values will be the corresponding disc. Let’s map out the indices: index 0 is the center disc, indices 1–7 are the inner ring in order around the circle, and indices 8–14 are the outer ring, with index 8 sitting above the 1–2 pair, index 9 above the 2–3 pair, and so on around to index 14 above the 7–1 pair.

Our seven diamonds are thus 0-1-2-8, 0-2-3-9, 0-3-4-10, 0-4-5-11, 0-5-6-12, 0-6-7-13, and 0-7-1-14. Our rules are that the sum of each of those sets of values be 30, each value be between 1 and 15, and each value be unique. I’m sure there’s a cleaner way to have coded this, but the following works:

array15<int5> discs;
invariant discs.all?(function (bt) {
  return bt.between?(1, 15);
});
invariant discs.uniq?;
invariant [discs[0],discs[1],discs[2],discs[8]].sum == 30;
invariant [discs[0],discs[2],discs[3],discs[9]].sum == 30;
invariant [discs[0],discs[3],discs[4],discs[10]].sum == 30;
invariant [discs[0],discs[4],discs[5],discs[11]].sum == 30;
invariant [discs[0],discs[5],discs[6],discs[12]].sum == 30;
invariant [discs[0],discs[6],discs[7],discs[13]].sum == 30;
invariant [discs[0],discs[7],discs[1],discs[14]].sum == 30;
expose discs;

We begin by declaring the array. Arrays in Sentient must be dimensioned at initialization; ours has fifteen values, so we initialize with array15. We also have to declare what it will contain, which in our case is integers. While array15<int> would have worked just fine, we can specify that we’re only using 5 bits[1] worth of integer values to speed up the search, hence array15<int5>. The array is called discs.
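As a quick sanity check on that width: the inclusive range of an n-bit two’s-complement integer is -(2^(n-1)) through 2^(n-1)-1, which you can confirm in any language. A throwaway Python helper (the function name is just for illustration):

```python
def signed_range(bits):
    """Inclusive range of an n-bit two's-complement signed integer."""
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

print(signed_range(5))  # (-16, 15): plenty of room for discs numbered 1-15
```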

Our rules are written with invariant statements. These should be pretty straightforward: a function that checks if each value is between? 1 and 15, a requirement that everything be uniq?ue, and then one rule each for our seven summation requirements. The final expose command lets Sentient know what we want to see as our results. That’s it! Even on a slow in-browser version of Sentient, a set of results comes back in a handful of seconds. You can also ask Sentient to try to come up with multiple solutions; curious whether the diamond puzzle had only one valid answer, I asked for more and got back three distinct solutions.
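That said, a SAT-style solver isn’t strictly necessary here; the puzzle’s structure also makes a plain backtracking search fast. A sketch in Python, using the same index mapping (the function name and structure are mine, not anything from the game or from Sentient): the key observation is that once the center and two adjacent inner discs are chosen, the outer disc of their diamond is forced, which prunes the search dramatically.

```python
def solve_diamonds(target=30, n=15):
    """Backtracking search: position 0 is the center, positions 1-7 the
    inner ring, positions 8-14 the outer ring. Each of the seven diamonds
    (center + two adjacent inner discs + one outer disc) must sum to target."""
    for center in range(1, n + 1):
        used = {center}
        inner = [0] * 8  # inner[1..7]; index 0 unused
        outer = [0] * 7  # outer[1..6]; the seventh outer disc closes the ring

        def place(t):
            if t == 8:
                # close the ring: the final diamond pairs inner[7] with inner[1]
                last = target - center - inner[7] - inner[1]
                if 1 <= last <= n and last not in used:
                    return [center] + inner[1:] + outer[1:] + [last]
                return None
            for disc in range(1, n + 1):
                if disc in used:
                    continue
                inner[t] = disc
                used.add(disc)
                if t == 1:
                    found = place(2)
                    if found:
                        return found
                else:
                    # two adjacent inner discs force this diamond's outer disc
                    out = target - center - inner[t - 1] - disc
                    if 1 <= out <= n and out not in used:
                        used.add(out)
                        outer[t - 1] = out
                        found = place(t + 1)
                        if found:
                            return found
                        used.discard(out)
                used.discard(disc)
            return None

        solution = place(1)
        if solution:
            return solution
    return None
```

The returned list is in index order 0–14, so each of the seven diamond index sets from earlier should sum to 30, and the fifteen values should be exactly the discs 1 through 15.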


  1. Note these are signed integers; int5 ranges from -16 to 15. ↩︎

Caltrops

I love four-sided dice (which I will refer to from here on as d4s, in keeping with standard notation). I also love clean, simple dice mechanics in TTRPGs. Many of these use d6s, Fate uses d3s in the shape of d6s, some use only a percentile set or a single d20. I’m certainly not about to say that there aren’t any d4-based systems out there. But I have not encountered one on my own time, and my love of these pointy little bits has had me thinking about potential workings for a while now. And while I don’t have anything resembling a system here, I had some interesting thoughts and had my computer roll a few tens of millions of digital dice for me, and I’d like to lay out a few initial thoughts that may, some day, turn into something.

The TL;DR is this: players can, for any resolution[1], roll two, three, or four d4s. If every die has the same value, regardless of what this value is, that counts as a special. Otherwise, the values are summed with 1s and 2s treated as negative (so, -1, -2, +3, +4). And that’s it, roll complete! What is a special, exactly? Well, I don’t really know. My initial thought was that the all-of-a-kind roll would be a critical success. But after seeing the maths, and thinking about what I would opt to do in any given situation, I came to believe that the all-of-a-kind roll should certainly be special in some way, but likely in a more interesting and dynamic way than just ‘you score very big’. This could be a trigger for something special on your character sheet related to whatever thing you are rolling for, or it could be a cue for the GM to pause the action and shift course. It should certainly always be something positive, but I don’t think the traditional crit mentality quite fits.

I’ll get into the numbers in more detail in a minute, but the key takeaways are:

Ignoring specials for a minute, we see a clear advantage to rolling more dice. Generally speaking, we will trend toward getting higher values, and the likeliest values for us to get on a given roll are better. When we factor in specials, rolling two dice becomes a lot more attractive; specials come up 25% of the time! Which is a very cool way to shift the balance, in my mind, but it’s also why it needs to be something other than just ‘BIG SMASH’. Make it too strong, and it basically becomes the universal choice. Making it more dynamic or narrative seems like a likely way to make the decision meaningful for players. Another possibility is a potential cooldown mechanic where rolling two specials in an encounter would force that character to cut out; that would likely leave the 3d4 option unused, however, as players would roll 2d4 until hitting a special, and then switch directly to 4d4.

I wrote a quick and dirty Lua[3] script to let me roll a few tens of millions of virtual dice and run the numbers. The resultant percentage table is below. My initial script only returned the number of specials, positives, negatives, and zeroes. Upon seeing the steep decline toward 0% specials on rolls of more than 4 dice, I decided I was only going to do further testing on 2, 3, and 4. I’ve included the percentages of specials for 5, 6, 7, and 8 dice just to show the trend.

Result percentages in the Caltrops concept
# d4s     2      3      4      5     6     7      8
Special   25     6.3    1.6    0.4   0.1   0.025  0.006
-7        0      0      1.6
-6        0      0      2.3
-5        0      4.7    1.6
-4        0      4.7    0
-3        12.5   0      1.6
-2        0      0      6.3
-1        0      4.7    9.4
0         0      14.1   6.3
1         12.5   14.1   1.6
2         25     4.7    2.3
3         12.5   0      9.4
4         0      4.7    14.1
5         0      14.1   9.4
6         0      14.1   2.3
7         12.5   4.7    1.6
8         0      0      6.3
9         0      0      9.4
10        0      4.7    6.3
11        0      4.7    1.6
12        0      0      0
13        0      0      1.6
14        0      0      2.3
15        0      0      1.6
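For the curious, these numbers also fall out of exact enumeration: with n dice there are only 4^n equally likely outcomes, so a short script can count them all rather than rolling millions of virtual dice. A Python equivalent of my tally (the original was Lua; the function names here are just for illustration):

```python
from itertools import product
from collections import Counter

VALUE = {1: -1, 2: -2, 3: 3, 4: 4}  # 1s and 2s count against you

def resolve(dice):
    """Resolve one roll: all-matching dice are a 'special'; otherwise
    sum the signed values."""
    if len(set(dice)) == 1:
        return "special"
    return sum(VALUE[d] for d in dice)

def exact_percentages(n):
    """Tally every one of the 4**n equally likely outcomes of n d4s."""
    counts = Counter(resolve(roll) for roll in product((1, 2, 3, 4), repeat=n))
    return {result: 100 * c / 4 ** n for result, c in counts.items()}
```

exact_percentages(2) reproduces the 2d4 column exactly: 25% specials, 12.5% at -3, 25% at 2, and so on; the 6.3% and 4.7% figures in the 3d4 column are 4/64 and 3/64 rounded.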

One final (for now) takeaway after having stared at these numbers in multiple forms. I mentioned the use of special instead of critical because a traditional critical would make a roll of 2d4 too powerful; you’d get that hit 25% of the time. There’s another truth to 2d4 rolls, however: the chance of a negative roll is the lowest. 12.5% of 2d4 rolls are negative, 14.1% of 3d4 rolls are negative, and 22.8% of 4d4 rolls are negative. Every negative 2d4 roll is -3, however, while the chance of rolling -3 or lower is 9.4% for 3d4 and 7.1% for 4d4. This raises the question of which is the better motivator. You’re more likely to get a negative with more dice, and it’s possible to get a worse negative, but the trend is toward a milder negative (note the percentages above exclude zero; the likeliest non-positive result for 3d4 is, in fact, zero). It’s worth running through how this plays out and deciding whether negative values matter, or simply the fact that a negative was, in fact, rolled. My instinct says stay with values, but that doesn’t take into account the feeling of how the dice are treating you.

Clearly there are a lot of ‘what ifs’ to work through, and there’s a lot more involved in practical testing than just rolling millions and millions of dice. But I do think I’m on to something interesting here, something simple, but with slightly-less-than-simple decision determinations.


  1. This is obviously a tentative thought, and I can imagine there may be instances when a GM would want to veto a certain number of dice. I can also imagine a scenario where players might have a set pool of dice to pull from, making the decision more impactful, though fewer seems to potentially be more beneficial than more, so this probably won’t bear out. A potentially cleaner way to mix this up would be to directly tie die counts to stats/abilities. ↩︎
  2. This holds true for other-sided dice as well, albeit in multiples that correspond to the number of sides (as compared to four). So 3d6 averages right out to 4.5, &c. The influence of the crit/special rule reduces as sidedness increases. ↩︎
  3. One of my half-hearted goals for 2020 is to become more proficient in Lua. Aside from just being a capable scripting language that I largely find myself comfortable with, it’s the basis of PICO-8, the upcoming Playdate console, and the LÖVE 2D game engine, all things that I would like to dabble in. ↩︎

Unicode bloats and emoji kitchens

Unicode 13 is coming, and bringing with it a handful of exciting things. Particular to my interests is the new Symbols for Legacy Computing block, with characters like seven-segment display numerals and graphics characters like those found on the Commodore 64 and other machines of the era. Of course, new emoji are coming as well, including among other things a magic wand, a beaver, and the trans pride flag (finally!). Unicode is doing a lot of necessary language work behind the scenes as well; the 12.1.0–13.0.0 diffs show additional characters for well over ten different scripts. In short, major versions of the Unicode Standard span a wide variety of character types, because the standard itself, by design, spans a wide variety of character types. From the Unicode Consortium’s FAQ:

Unicode covers all the characters for all the writing systems of the world, modern and ancient. It also includes technical symbols, punctuations, and many other characters used in writing text. The Unicode Standard is intended to support the needs of all types of users, whether in business or academia, using mainstream or minority scripts.

Something that always comes up when a new version is released, but seemed particularly strong this go-around, is the notion of bloat in Unicode. While things like legacy computing symbols may factor into this notion, it is generally directed at emoji. I specifically heard quite a few suggestions (all from Googlers, FWIW) this time that emoji never should have been in Unicode to begin with. This notion is patently absurd to me, as is the underlying notion that Unicode is bloated, or at any risk of becoming so.

Moving emoji into a standard was largely a response to existing interoperability issues. Emoji started[1] on Japanese mobile phones, with a handful of competing non-standards. SoftBank sent theirs via character sequences, using an old-school shift in/out escaping mechanism. NTT DoCoMo used Unicode, but without emoji being a part of the Unicode Standard, they did it via private use characters. Au simply opted to send images, which is interoperable but… means you’re sending images when character encodings should and would suffice. Unicode sprinkled in bits of emoji here and there, but without vendor support it wasn’t a priority. Back in its “Don’t be evil” days, Google provided the initial push and with Apple on board as well, building emoji into Unicode became a priority.

I fail to see what other solutions could have been considered. I maintain that even in the 5G era, sending images back and forth when character codes would suffice is a Bad Idea. It also removes standardization; the emoji you see won’t be from your vendor of choice, but from a disparate collection of your contacts’ vendors. A new character encoding could be proposed, specifically for poops and cat faces, but that complicates everything while solving nothing. Aside from the (trivial, but nonzero) overhead required to shift encodings midstream and the (again, likely trivial but still additional) burden on vendors to contribute to and develop software for an unnecessary additional standard, there would be complications surrounding redundancy: Unicode has several characters that are intended to be rendered in either emoji-style or traditional glyphs. An additional character can be joined with these to force either rendering. Presumably the same thing could be achieved with two side-by-side encodings, but it certainly wouldn’t be as graceful, and would require interstandards crosswalking to resolve instances where a system lacked one of the redundant glyphs.
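Concretely, those forcing characters are the variation selectors: U+FE0E requests text presentation and U+FE0F requests emoji presentation. A quick Python illustration with U+2764, HEAVY BLACK HEART, one of the dual-presentation characters:

```python
import unicodedata

HEART = "\u2764"                 # HEAVY BLACK HEART: text or emoji, renderer's choice
TEXT_HEART = HEART + "\ufe0e"    # VS15 requests the plain monochrome text glyph
EMOJI_HEART = HEART + "\ufe0f"   # VS16 requests the colorful emoji glyph

# The base character is unchanged; the selector is just a second code point.
print(unicodedata.name(HEART))   # HEAVY BLACK HEART
print(len(EMOJI_HEART))          # 2
```

Two code points, same underlying character, and any renderer that doesn’t understand the selector still shows the heart.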

The only problem either of these solutions would conceivably solve would be reducing the frivolous emoji burden on the Unicode Consortium, allowing them more resources to devote to potentially-more-important matters like scripts for underrepresented writing systems. This is basically how it already works, though – the Consortium is broken up into a handful of technical committees, and emoji gets its own full-blown subcommittee. Folks aren’t being taken away from their work on Khitan to discuss the merits of a plunger emoji. Additionally, lifted straight from the FAQ:

Their encoding, surprisingly, has been a boon for language support. The emoji draw on Unicode mechanisms that are used by various languages, but which had been incompletely implemented on many platforms. Because of the demand for emoji, many implementations have upgraded their Unicode support substantially. That means that implementations now have far better support for the languages that use the more complicated Unicode mechanisms. See L2/18-044.

I find it extremely difficult to find a good-faith explanation as to why Unicode should not house these conveyances. There’s really nothing in the standard itself to bloat[2]. Sure, there are more tables to print and edge cases to discuss, but most of the burden is on the group making decisions about inclusion and the like, and on vendors for support concerns. Neither of these things would change if a separate encoding standard were introduced. Both of these things, however, would change if we were to simply send images.

From a user perspective, plenty of people don’t seem to understand that emoji aren’t images, and the technical details… don’t really matter. Case in point: see how many people are constantly asking Apple to give them their desired emoji. From a vendor perspective, this means throwing standards to the wind and sending images could make for good branding: your unique set of symbols is now spread beyond your devices, and if you can just plop whatever in there… well, suddenly you’re the phone, the OS, the keyboard that can uniquely send the ‘pigeon pooping’ emoji.

I started writing this post in January and paused for a while when I didn’t really have the energy to write. At that point, my thoughts were largely just that this idea that emoji belong (and have always belonged) outside of Unicode was absurd. I’ve mentioned Google a couple of times, and I would have mentioned them regardless, since it feels fitting that present-day Googlers seem to have a very meh attitude toward open standards. But then something else happened: Google’s mobile keyboard, Gboard, added support for Emoji Kitchen, an emoji mashup toy made by Googlers[3]. Which means… Google is doing the exact thing I just described. They’re taking an open, rational standard and making something uniquely theirs and proprietary. It’s a fun little lark, but it’s also rather disconcerting.

I have hopefully made it clear that I think sending images when character codes will suffice is user-hostile, even if users aren’t aware of it. This becomes an even bigger concern when we factor in accessibility – Unicode assigns every character a description that a screen reader can interpret; sending images relies on vendors to attach alt-text and on the protocol/format to support it (I don’t believe there is any provisioning for this in MMS, for instance). Emoji Kitchen, the website, doesn’t seem to attach any alt text to its final creation. I don’t know if the Gboard mode does, or again how well that would even be supported across typical messaging platforms. In all, I think the idea that emoji should have been kept out of Unicode from the get-go is indefensible, and I really hope major vendors who love obscuring open things into proprietary things don’t push us all into sending images to one another just to say 👋.
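Those descriptions are standardized and programmatically available, which is a big part of why screen readers can handle emoji at all. For instance, in Python (the names shown are the formal Unicode names, which screen readers typically adapt rather than read verbatim):

```python
import unicodedata

# Every assigned character carries a formal name in the Unicode database;
# an image sent over MMS carries nothing unless the sender attaches alt text.
for char in "👋💩":
    print(f"U+{ord(char):04X} {unicodedata.name(char)}")
# U+1F44B WAVING HAND SIGN
# U+1F4A9 PILE OF POO
```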


  1. Emoji ‘started’ in a lot of disparate ways, of course. Emoticons predated emoji, AIM transparently replaced these with faces, ideograms and pictograms have existed throughout time. But the set of characters that directly grew into what we now know and love as emoji, and the basic mechanisms of how they exist within text come out of this Japanese mobile phone phenomenon. ↩︎
  2. Within reason, of course. And there are plenty of checks in place to ensure reason is maintained. Flip through the document register some time for an idea of how much discourse goes on regarding proposals. ↩︎
  3. Emoji Kitchen’s about page (which I can’t link to because the website doesn’t function like a website, naturally) credits two Googlers without ever mentioning Google or Alphabet. Despite its integration into Gboard, it’s hard to officially state that it’s a Google product. But it is certainly the product of Googlers. ↩︎