Revisiting the travel chess computer

Computers are interesting things. When we think of computers, we tend to think of general-purpose computers – our laptops, smartphones, servers and mainframes, things that run a vast array of programs composed of hundreds of thousands of instructions spanning a multitude of chips. When I was younger, general-purpose computers were more-or-less hobbyist items for home users. Single-purpose computers still exist everywhere, but there was certainly a time when having a relatively cheap, often relatively small computing device for a specific task was either preferable to doing that task on a general-purpose computer, or perhaps the only way to do it. Something like a simple four-function calculator was a far more commonplace device before our phones became more than just phones.

Chess poses an interesting problem here. By modern standards, it doesn’t take much to make a decently-performing chess computer. The computer I’ll be discussing later in this post, the Saitek Kasparov Travel Champion 2100, runs on a 10MHz processor with 1KB of RAM and 32KB of program ROM (including a large opening library). It plays at a respectable ~2000 ELO. This was released in 1994, a time when the general-purpose computer was becoming more of a household item. The Pentium had just been released; a Micron desktop PC with a 90MHz Pentium and 8MB of RAM was selling for $2,499 (the equivalent of $4,988 in 2022, adjusting for inflation). 486s were still available; a less-capable but still well-kitted-out 33MHz 486 with 4MB of RAM went for $1,399 ($2,797 in 2020 dollars). Chessmaster 4000 Turbo would run on one of these 486s, albeit without meeting the recommended specs. It cost $59.95 ($119.85 in 2020 dollars), and while it’s hard to get a sense of the ELO it performed at, players today still seem to find value in all of the old Chessmaster games; they may not play at an advanced club level, but they were decent engines considering they were marketed to the general public. A more enthusiast-level software package, Fritz 3, was selling for 149 DEM, which I can’t really translate to 2020 USD, but suffice it to say… it wasn’t cheap. Fritz 3 advertised a 2800 ELO; a tester at the time estimated it around 2440 ELO. Interestingly, when that tester turned Turbo off, reducing their machine from a 50MHz 486 to 4.77MHz, ELO only dropped by about 100 points.

All of this is to say that capable chess engines don’t need a ton of processing power. At a time when general-purpose computers weren’t ubiquitous in the home, a low-spec dedicated chess computer made a lot of sense. The earliest dedicated home chess computers resembled calculators, lacking boards entirely: they indicated moves on an LED display and accepted them via button presses. These were followed by sensory boards, which accepted moves via pressure sensors under the squares. Sensory boards were available full-sized as well as in travel versions, the latter of which used small pegged pieces on proportionally small boards with (typically clamshell) lids for travel.

In 2022, we all have incredibly powerful computers on our desks, in our laps, and in our purses. Stockfish 15, one of the most powerful engines available, is free open source software, and one popular online chess service is an incredible resource even at the free level, powered by the commercially-available Komodo engine. Full-size electronic boards still exist, which can interface with PCs or dedicated chess computers. Some of these products are pretty neat – DGT makes boards that recognize every piece, as well as Raspberry Pi-based computers built into chess clocks. There is an undying joy in being able to play an AI (or an online opponent) on a real, physical, full-sized board.

The market for portable chess computers has pretty much dried up, however. Pegboard travel sets eventually gave way to LCD handhelds with resistive touchscreens and rather janky segment-based piece indicators. These were more compact than the pegboards, and they required less fiddling and setup. The advent of the smartphone, however, really made these into relics; a good engine on even the lowest-end modern phone is just a better experience in every single way. On iOS, tChess, powered by the Stobor engine, is a great app at the free level, and its pro features are well worth the $8 asking price. The aforementioned service’s app is excellent as well.

When I was quite young, I improved my chess skills by playing on a 1985 Novag Piccolo that my parents got me at a local flea market. I loved this pegboard-based computer – the sensory board which indicated moves via rank-and-file LEDs, the minimalist set of button inputs, even the company’s logo. It was just a cool device. It is, of course, a pretty weak machine. Miniaturization and low-power chips just weren’t at the state they are now, and travel boards suffered significantly compared to their full-sized contemporaries. The Piccolo has been user-rated at around 900 ELO; it doesn’t know rules like threefold repetition, and it lacks an opening book.

I’ve been trying to get back into chess, and I decided that I wanted a pegboard chess computer. Even though the feeling pales in comparison to a full-sized board, I don’t have a ton of space, I tend to operate out of my bed, and I have that nostalgic itch for something resembling my childhood Novag. Unfortunately, things didn’t improve much beyond the capabilities of said Novag during the pegboard era. I would still love to find one of the few decent pegboard Novags – the Amber or Amigo would be nice finds. But I ended up getting a good deal on a computer I had done some research on, the aforementioned Saitek Kasparov Travel Champion 2100 (from here on simply referred to as the 2100).

I knew the 2100 was a decent little computer with a near-2000 ELO and a 6000 half-move opening library. I liked that it offered both a rank-and-file LED readout and a coordinate readout on its seven-segment LCD. Knowing that these pegboard computers struggled to achieve parity with their full-sized counterparts, I was pretty surprised to find some above-and-beyond features that I was familiar with from PC chess engines. The LCD can show a wealth of information, including a continuous readout of what the computer thinks the best move is. A coaching mode is present, where the computer will warn you when pieces are under attack and notify you if it believes you’ve made a blunder. There is also a random mode, which chooses the computer’s moves randomly from its top handful of candidates instead of always playing what it believes is the single best move. You can select from themed opening books or disable the opening library entirely. These are all neat features that I really wasn’t expecting from a pegboard computer.

I can see why the 2100 tends to command a high price on the secondary market – if you want a traditional pegboard chess computer, it seems like a hard one to beat. I’m certainly intrigued by some of the modern solutions – the roll-up Square Off PRO looks incredibly clever. But for a compact yet tactile solution that I can tune down to my current skill level or allow to absolutely blast me, the 2100 checks a lot of unexpected boxes. As I mentioned, these travel units died out for good reason; I can play a quick game online against Komodo and get an incredibly detailed, plain-language analysis afterward that highlights key moments and lets me play out various ‘what if?’ scenarios. I do this nearly every day as of late. Purchasing a nearly-three-decade-old chess computer may have been a silly move. But it’s a different experience compared to poking at an app on my phone. It’s tactile, it’s uncluttered. It’s scaled down, but there’s still something about just staring at a board and moving pieces around. I still use my phone more, but the 2100 offers something different, and it offers that alongside a decent engine with a flexible interface. Maybe one of these days someone will come out with a travel eboard, but I doubt it. Solutions like the Square Off PRO are likely the direction portable chess computers are headed. This is fine; it’s a niche market. I’m just glad a handful of decent models were produced during the pegboard era, and I’m happy to have acquired the Saitek Kasparov Travel Champion 2100.

Digirule 2U

I keep meaning to post about SISEA, but like… I don’t have anything to say that others haven’t said better. Much like SESTA/FOSTA, this bill is a direct attack on sex workers under a thin anti-trafficking guise. Listen to what sex workers are saying about this. Contact the folks who are supposed to listen to us. Let’s do what we can to stop this garbage.
I’ve written about single board computers before, and have bought and briefly played with a modern board from Wichit Sirichote. I’d meant to write about my experience with this board, but I haven’t actually gotten too far into the weeds with it yet. I need to either find a wall-wart that will power it, or else hook up my bench supply to mess with it, and… my attention span hasn’t always proven up to the task.

I bought another four-function calculator

Something I find rather amusing is that despite my owning… a lot of classic HP calculators, this here blog only has posts about one old Sinclair calculator (which is, at least, a postfix machine) and one modern four-function, single-step Casio calculator (that somehow costs $300). And, as of today… yet another modern Casio calculator. I actually do want to write something about the HPs at some point, but… they’re well-known and well-loved. I’m excited about this Casio because it’s a weird throwback (that, like the S100, I had to import), and because it intersects two of my collector focuses: calculators and retro video games.

The mid-1970s brought mass production of several LCD technologies, which meant that pocket LCD calculators (and even early handheld video game consoles) were readily obtainable things by the early 1980s. Handheld video games were in their infancy, and seeking inspiration from calculators seemed to be a running theme. Mattel’s Auto Race came to fruition out of a desire to reuse readily-available calculator-sized LED technology in the 1970s; Gunpei Yokoi was supposedly inspired to merge games with watches (in, of course, the Game & Watch series) after watching someone fiddle idly with a calculator. Casio took a pretty direct approach with this, releasing a series of calculators with games built in. Later games had screens with both normal calculator readouts and custom-shaped electrodes to present primitive graphics (like the Game & Watch units, or all those old terrible Tiger handhelds), some of which were rather large for renditions of games like Pachinko. The first, however, was essentially a bog-standard calculator as far as hardware was concerned: regular 8-digit 7-segment display, regular keypad. I suspect this was largely to test the reception of the format before committing to anything larger; aside from the keypad graphics, the addition of the speaker, and the ROM mask… it looks like everything could’ve been lifted off of the production line for any number of their calculators: the LC-310 and LC-827 have identical layouts.

This was the MG-880, and it was clearly enough of a hit to demonstrate the viability of pocket calculators with dedicated game modes. The game itself is simple. Numbers come in from the right side of the screen in a line. The player is also represented by a number, which they increment by pressing the decimal separator/aim key. When the player presses the plus/fire key, the closest matching digit is destroyed. These enemy numbers come in ever-faster waves, and once they collide with you, it’s game over. Liquid Crystal has more information on the MG-880 here.
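The firing rule as described can be sketched in a few lines of Python. This is an illustrative guess at the mechanics based on the description above, not a reconstruction of the actual ROM’s logic; the function name and the list representation of the incoming lane are mine:

```python
def fire(lane, player_digit):
    """One press of the plus/'fire' key: destroy the closest incoming
    digit that matches the player's current number."""
    # lane holds the incoming digits, closest-to-the-player first
    for i, d in enumerate(lane):
        if d == player_digit:
            return lane[:i] + lane[i + 1:]
    return lane  # no matching digit: nothing is destroyed

# the closest 3 is destroyed; the second 3 survives
assert fire([3, 7, 3], 3) == [7, 3]
# firing with no match changes nothing
assert fire([5, 2], 9) == [5, 2]
```

The interesting design tension is that the player must spend presses of the aim key to cycle their own digit toward whatever is closest, which is where the frantic two-key mashing comes from.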

So that’s all very interesting (if you’re the same type of nerd I am), but I mentioned I was going to be talking about a modern Casio calculator in this post. About three years ago, Casio decided to essentially rerelease (remaster?) the MG-880 in a modern case; this is the SL-880. I haven’t owned an MG-880 before, so I can’t say that the game is perfectly recreated down to timing and randomization and what-have-you, but based on what I’ve read/seen of the original, it’s as faithful a recreation as one needs. In fact, while the calculator has been upgraded to ten digits, the game remains confined to the MG-880’s classic eight. Other upgrades to the calculator side of things include dual power, backspace, negation, memory clear, tax rate functions (common on modern Japanese calculators), and square root. You can also turn off the in-game beeping, which was not possible on the MG-880. The SL-880 is missing one thing from its predecessor, however: the melody mode. In addition to the game mode, the MG-880’s speaker allowed for a melody mode where different keys simply mapped to different notes. The only disappointing thing about this omission is how charming it is to see the solfège printed above the keys.

So was the SL-880 worth importing? Honestly, yes. The calculator itself feels impossibly light and a bit cheap, but it is… a calculator that isn’t the S100 in the year 2020. The game holds up better than I expected. It is, of course, still a game where you furiously mash two keys as numbers appear on a screen, but given the limitations? Casio made a pretty decent calculator game in 1980. More important to me, however, is where it sits in video game history. One might say I should just seek out an original MG-880 for that purpose, and… perhaps I will, some day. But I think there’s something special about Casio deciding to release a throwback edition of such an interesting moment in video game history. And while the MG-880 was a success, it certainly isn’t as much of a pop culture icon as, say, the NES. This relative obscurity is likely why I find this much more charming than rereleases like the NES Classic Edition. It feels like Casio largely made it not to appeal to collectors, but to commemorate their own history.

(Retro) Single-board computers

Single-board computers from the early microcomputing era have always fascinated me. Oft-unhoused machines resembling motherboards with calculator-esque keypads and a handful of seven-segment LEDs for a display, their purpose was to train prospective engineers on the operations of new microprocessors like the Intel 8080 and MOS 6502. Some, like MOS’s KIM-1, were quite affordable, and gave hobbyists a platform to learn on, experiment with, and build up into something bigger.

The KIM-1 is, to me, the archetypal single-board. Initially released by MOS and kept in production by Commodore, it had a six-digit display, 23-key input pad, 6502 processor, and a pair of 6530 RRIOT chips. MOS pioneered manufacturing technology that allowed for a far higher yield of chips than competitors, making the KIM-1 a device that hobbyists could actually afford. I would love to acquire one, but unfortunately they are not nearly as affordable these days, often fetching around $1,000 at auction. Humorously, clones like the SYM-1 that were far more expensive when they were released are not nearly as collectable and sell at more reasonable rates. Even these are a bit pricey, however, and you never know if they’ll arrive operable. If they do, it’s a crapshoot how long that will remain true.

Other notable single-boards like the Science of Cambridge (Sinclair) MK14 and the Ferguson Big Board rarely even show up on eBay. The MK14 is another unit that I would absolutely love to own – I have a soft spot for Clive Sinclair’s wild cost-cut creations. This seems extremely unlikely, however, leaving me to resort to emulation. Likewise for the KIM-1; humorously, a good emulator exists for the Commodore 64.

History has a way of repeating itself, I suppose, and I think a lot of that retro hobbyist experience lives on in tiny modern single-board computers like the Raspberry Pi and Arduino. I’m glad these exist, and I’d be happy to use one if I had a specific need, but they don’t particularly interest me from a recreational computing perspective. Given that these modern descendants don’t scratch that itch, and given the rarity and uncertainty of vintage units, I was very excited to recently stumble across Thai engineer Wichit Sirichote’s various single-board kits for classic microprocessors. Built examples are available on eBay. The usual suspects are there: 8080, 8088, 8086, Z80, 68008, 6502; some odd ducks as well, like the CDP1802.

I have ordered, and plan to write about, the cheapest offering: the 8051, which sells in built form for $85, shipped from Thailand. The 8051 was an Intel creation for embedded/industrial systems, and is an unfamiliar architecture for me. If it all works out how I hope it will, I wouldn’t mind acquiring the 6502, Z80, CDP1802, and/or one of the 808xs. I’d love to see a version using the SC/MP (as used in the Cambridge MK14), but I’m not sure there are any modern clones available. For now, I will do some recreational experiments with the 8051, perhaps hitting a code golf challenge or two. While this can’t be quite the same as unboxing a KIM-1, I love that somebody is making these machines. And not just one or two, but like… a bunch. Recreational computing lives.

MINOL and the languages of the early micros

This post was updated in May 2020 with an explanatory footnote sent in by a reader.

When I started playing with VTL-2, another small and obscure language was included in the same download: MINOL. Inspired by BASIC syntax and written by a high-schooler in 1976, it “has a string-handling capability, but only single-byte, integer arithmetic and left-to-right expression evaluation.” What I assume is the official spec was submitted over several letters to, and subsequently published by, the magazine “Dr. Dobb’s Journal of Computer Calisthenics and Orthodontia.” This article described the purpose and syntax of the language, as well as the code for the Altair interpreter.

MINOL has 12 statements: LET, PR(int), IN(put), GOTO, IF, CALL, END, NEW, RUN, CLEAR, LIST, and OS (to exit the interpreter). As quoted above, there is integer arithmetic (+-*/), and there are three relational operators: =, <, and the inexplicably-designated # for not equal. Line numbers are single-byte, with a maximum of 254 lines. Statements can be separated with a colon. An exclamation point evaluates to a random number. A line entered without a line number runs immediately, and GOTO treats such a line as line 0. Rudimentary string-handling seems to be the big sell. This basically entails automatically separating a string into individual code points and popping them into memory locations, as well as some means of inverting this process. An included sample program inputs two strings and counts the number of instances of the second string in the first; with strings stored as contiguous code points in memory, this is certainly functional.
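The sample program’s approach can be mimicked in Python. This is a hedged sketch of the idea only; the addresses, the zero terminator, and both function names are assumptions of mine, not MINOL’s actual conventions:

```python
def store(mem, addr, s):
    """MINOL-style string input: each character's code point lands in a
    consecutive single-byte memory location. The zero terminator is an
    assumption of this sketch."""
    for i, ch in enumerate(s):
        mem[addr + i] = ord(ch) & 0xFF
    mem[addr + len(s)] = 0

def count_occurrences(mem, a, b):
    """Count instances of the string stored at b inside the one stored
    at a, scanning raw memory much as the published sample does."""
    def read(addr):
        out = []
        while mem[addr]:
            out.append(mem[addr])
            addr += 1
        return out
    hay, needle = read(a), read(b)
    if not needle:
        return 0
    return sum(hay[i:i + len(needle)] == needle
               for i in range(len(hay) - len(needle) + 1))

mem = [0] * 256
store(mem, 10, "banana")
store(mem, 100, "an")
assert count_occurrences(mem, 10, 100) == 2
```

Once the strings are just bytes in memory, the “count the second string in the first” program reduces to a sliding comparison, which is why it works so naturally in such a tiny language.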

Is MINOL interesting, as a hobbyist/golf language? I may very well try one or two string-based challenges with it. Its limitations are quirky and could make for a fun challenge. I think more than anything, however, I’m just fascinated by this scenario that the Altair and similar early micros presented. Later micros like the Commodore PET booted right into whatever version of BASIC the company had written or licensed for the machine, but these early micros were very barebones. Working within the system restrictions, making small interpreters, and designing them around specific uses was a very real thing. It’s hard to imagine languages like MINOL or VTL-2 with their terse, obscure, limited syntaxes emerging in a world where every machine boots instantly into Microsoft BASIC.

Once again, I don’t know how much value there is in preserving these homebrew languages of yore, but as I mentioned when discussing VTL-2, folks nowadays generate esoteric languages just to mimic Arnold Schwarzenegger’s speaking mannerisms. Given that climate, I think there’s a pretty strong case to keep these things alive, at least in a hobbyist capacity. And given the needs of early micro hobbyists, I find the design of these languages absolutely fascinating. I’m hopeful that I can dig up others.

VTL-2: golfing and preservation

I’ve been playing with Gary Shannon and Frank McCoy’s minimalist programming language from the ‘70s, VTL-2, as of late. Written for the Altair 8800 and 680 machines, it was designed around being very small – the interpreter takes 768 bytes in ROM. It has quite a few tricks for staying lean: it assumes the programmer knows what they’re doing and therefore offers no errors, it uses system variables in lieu of many standard commands, and it requires that the results of expressions be assigned to variables. It is in some ways terse, and its quirks evoke a lot of the fun of the constrained languages of yore. So, I’ve been playing with it to come up with some solutions for challenges on Programming Puzzles and Code Golf Stack Exchange (PPCG).

I mentioned that it is in some ways terse, and this is a sticking point for code golf. VTL-2 is a line-numbered language, and lines are rather expensive in terms of byte count. Without factoring in any commands, any given line requires 4 bytes: (always) two for the line number, a mandatory space after the line number, and a mandatory CR at the end of the line. So, at a minimum, a line takes 5 bytes. This became obvious when golfing a recent challenge:

3 B=('/A)*0+%+1

saved a byte over

3 B='/A
4 B=%+1

These almost certainly look nonsensical, and that is largely because of two of the things I mentioned above: the result of an expression always needs to be assigned to a variable, and a lot of things are handled via system variables instead of commands. For example, ' in the code above is a system variable containing a random number. There is no modulo nor remainder command; rather, % is a system variable containing the remainder of the last division operation. Thus, originally, I thought I had to do a division and then grab that variable on the next line. As long as the division is performed, however, I can just destroy the result (*0) and add the mod variable, making it a single shot. It’s a waste of our poor Altair’s CPU cycles, but I’m simulating that on modern x64 hardware anyway. And despite the extra characters, it still saves a byte.
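Under that accounting (2 bytes for the stored line number, 1 for the mandatory space, 1 for the CR, plus the statement body), the savings can be sanity-checked with a quick Python sketch; the scoring function is mine, not part of VTL-2:

```python
def vtl2_line_bytes(bodies):
    """Score a VTL-2 program as described above: per line, 2 bytes for
    the stored line number, 1 for the mandatory space, 1 for the CR,
    plus the statement body itself."""
    return sum(2 + 1 + len(body) + 1 for body in bodies)

one_liner = vtl2_line_bytes(["B=('/A)*0+%+1"])
two_liner = vtl2_line_bytes(["B='/A", "B=%+1"])
assert one_liner == 17 and two_liner == 18  # the one-liner saves a byte
```

The 4 bytes of fixed overhead per line are exactly why merging statements onto one line pays off, even when the merged expression is longer character-for-character.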

Other notable system variables include ? for input and output:

1 A=?
2 ?="Hello, world! You typed "
3 ?=A

Line 1 takes input – it assigns the value of the I/O variable, ?, to variable A. Line 2 prints “Hello, world! You typed ”, and then line 3 prints the contents of variable A. Lines 2 and 3 assign values to the I/O variable. The system variable # handles line numbers. When assigned to another variable (I=#), it simply returns the current line number. When given an assignment (#=20), it’s akin to a GOTO. The former behavior seems like it could come in handy for golf: if you need to assign an initial value to a variable anyway, you’re going to be spending 4 bytes on the line for it. Therefore, you might as well, say, initialize a counter using its line number: 32 I=#.

Evaluation happens left-to-right, with functional parentheses. Conditionals always evaluate to a 1 for true and a 0 for false. Assigning the line number variable to a 0 in this way is ignored. With that in mind, we can say IF A==25 GOTO 100 with the assignment #=A=25*100. A=25 is evaluated to a 1 or a 0 first, and this is multiplied by 100 and # is assigned accordingly. ! contains the last line that executed a #= plus 1, and therefore #=! is effectively a RETURN.
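In Python terms, the computed-GOTO trick looks something like this; it is a sketch of the semantics described above, and the function name is mine:

```python
def goto_target(a):
    """Mirror of '#=A=25*100': strict left-to-right evaluation means
    A=25 yields 1 or 0 first, and that result is then multiplied by 100.
    Assigning 0 to '#' is ignored, i.e. execution falls through."""
    cond = 1 if a == 25 else 0  # relational result: 1 (true) or 0 (false)
    return cond * 100           # 100 = jump target, 0 = no jump

assert goto_target(25) == 100
assert goto_target(24) == 0
```

Note that under conventional precedence, 25*100 would be evaluated first and the comparison would behave very differently; the trick only works because VTL-2 evaluates strictly left to right.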

There’s obviously more to the language, which I may get into in a future post. Outside of the syntactical quirks which make it interesting for hobbyist coding, the matter of running the thing makes it less than ideal for programming challenges. Generally speaking, challenges on PPCG only require that a valid interpreter exists, not that one exists in an online interpreter environment such as Try It Online (TIO). In order to futz around in VTL-2, I’m running a MITS Altair 8800 emulator and loading the VTL-2 ROM. TIO, notably, doesn’t include emulation of a machine from the ‘70s with a bundle of obscure programming language ROMs on the side.

This brings me to my final point: how much effort is being put into preserving the lesser-known programming languages of yore, and how much should be? I personally think there’s a lot of value in it. I’ve become so smitten with VTL-2 because it is a beautiful piece of art and a brilliant piece of engineering. Many languages of that era were, by the very necessity of balancing ROM/RAM limitations with functionality and ease of use. Yet, there’s no practical reason to run VTL-2 today. It’s hard to even justify the practicality of programming in dc, despite its continued existence and its inclusion being a requirement for POSIX compliance. New esoteric languages pop up all the time, often for golfing or for sheer novelty, yet little to no effort seems to be out there to preserve the experimental languages of yesteryear. We’re seeing discussions on how streaming gaming platforms will affect preservation, we have sites hosting a ton of web-based emulators and ROMs, we have hardware like Applesauce allowing for absolutely precise copies to be made of Apple II diskettes. Yet we’re simply letting retro languages… languish.

To be clear, I don’t think that this sort of preservation is akin to protecting dying human languages. But I do think these forgotten relics are worth saving. Is “Hello, world!” enough? An archive of documentation? An interpreter that runs on an emulator of a machine from 1975? I don’t know. I don’t feel like I have the authority to make that call. But I do think we’re losing a lot of history, and we don’t even know it.

Build your own dial-up ISP in 2019 (external)

Charming read on how one might construct their own dial-up internet connection in this age of egregious Xfinity bills. On the surface, it sounds like a goofy lark, but if you dive into retrocomputing enough, you find plenty of systems with readily available modems and few (if any) other means of networking. I always wondered how complex one would need to get to set up a system like this. If you could trick the modems on either end into not caring about hook/dialing/&c., could you just go over an audio connection and skip telephony? I don’t know the answer to that, but the linked article accomplishes it with a virtual PBX and analog VOIP adaptors – a purpose-built private telephony system in the middle. It’s an interesting read, and a good link to hold on to for future reference.

Portal, Commodore 64 style

I’ve been thinking a lot about empathy and emotion in video games lately, and this has really given me the itch to play through Portal again. This weekend, I did just that… sort of. Jamie Fuller has released a 2D adaptation of the classic for the Commodore 64 (C64), and it is pure joy. It’s quick – 20 levels with brief introductions from GLaDOS, completable in around a half hour. The C64 had a two-button mouse peripheral (the 1351), but it was uncommon enough that even graphical environments like GEOS supported moving the cursor around with a joystick. Very few games had compatibility with the mouse, and here we are in 2018 adding one more – using WAD to move and the mouse to aim/fire is a perfect translation of Portal’s modern PC controls. If you’re not playing on a real C64 with a real 1351, VICE emulates the mouse, and the browser-based implementation works great as well.

The VCSthetic

The Atari VCS, better known as the 2600, was an important part of my formative years with technology. It remains a system that I enjoy via emulation, and while recently playing through some games for a future set of posts, I started to think about what exactly made so many of the (particularly lesser-quality) games have such a unique aesthetic to them. The first third-party video game company, Activision, was famously started by ex-Atari employees who wanted credit and believed the system was better suited to original titles than hacked-together arcade ports. They were correct on this point, as pretty much any given Activision game looks better than any given Atari game for the VCS. Imagic, too, was made up of ex-Atari employees, and their games were pretty visually impressive as well. Atari had some better titles toward the end of their run, but for the most part their games and those of most third-parties are visually uninspiring. Yet the things that make them uninspiring are all rather unique to the system:

256 pixels

I’ve been restoring a Milton Bradley Microvision and am now happily at the point where I have a fully functional unit. Introduced in 1979, it’s known as the first portable game console with interchangeable cartridges. Anyone who has scoured eBay and yard sales for Game Boys knows that the monochrome LCDs of yore were fairly sensitive to heat and even just age. For a system ten years older than the Game Boy (and one that sold in far smaller numbers), functional units are fairly hard to come by. But for a while, I’ve been invested in patching one together, and I plan to enjoy it until it, too, gives up the ghost.

Another World

Is there a word for nostalgia, but bad? Kind of like how you can have a nightmare that is on one hand an objectively terrible experience, but on the other… fascinating, compelling even. When I was quite young, the household computer situation was a bit of a decentralized mess. I guess the Commodore 64 was the family computer, but it was essentially mine to learn 6510 ML and play Jumpman on. My sister had a Macintosh Quadra, which I guess was largely for schoolwork, but it had a number of games on it that were positively unbelievable to my 8-bit trained eyes. Among these was the bane of my wee existence, Another World.

I guess I’m about to give away a few spoilers, but they’re all from the first minute or so of punishment play. Another World begins with a cutscene where we learn that our protagonist is a physics professor named Lester who drives a Ferrari. At this point, we realize we are dealing with a science fiction title. Lester starts doing some very professorly things on his computer, and then some lightning strikes his ARPANET wires or whatever and suddenly our protagonist is deep underwater! Some kind of sea monster grabs him, and… game over?! The cutscenes are rendered with the same beautifully polygonal rotoscoping as the rest of the game, so it’s entirely possible that you die several times watching this scene before grasping that you’re actually supposed to press buttons now.

This stressful memory came back hard upon recently purchasing a Switch and inexplicably making this year’s port of Another World my first purchase. Well, I guess it is explicable: ‘nostalgia, but bad.’ The frustrations of a game that will let you die if you simply do nothing within the first five seconds had not changed much from my childhood. This is a fundamental part of the experience; Another World is a game that wants you to die. It demands that you die. A lot. It’s a lovely game, and one that I’m sure a lot of folks remember (fondly or otherwise) from their Amigas and Macs, but I couldn’t help but think that this sort of trial-and-error experience really wouldn’t fly today if not for nostalgia. Though I have to ask myself, how does this differ from, say, Limbo, another game that tricks you into death at every turn?

The next death in Another World is when little polygonal slug-looking things slip a claw into Lester’s leg, collapsing him. You have to kind of squish them just right, and it’s the first of many deadly puzzles that rely more on a very finicky sort of perfection rather than just a clever solution. Slightly further into the game, Lester faces a challenge that neatly sums up the whole problem: perfect positioning and perfect timing are required to dodge two screens worth of oddly-timed falling boulders. These moments are very reminiscent of the frustratingly exacting challenges in Dragon’s Lair, a point of inspiration for designer Éric Chahi. I think this is where a modern take like Limbo feels less annoying in its murderous tendencies – you rarely die because you didn’t time something out to the nanosecond or position yourself on just the right pixel; you die because something crafty in the evil, evil environment outsmarted you.

This sort of thing seems to be a point of maturity for gaming in general. The aforementioned Jumpman was one of my favorite games back in the day, but it was painstakingly picky down to the pixel. Collision detection has eased up in modern times, and additional system resources give designers a lot more room to make challenges diverse and clever instead of simply difficult-by-any-means-necessary. Another World’s spiritual successor, Flashback5, definitely still had these moments, but by the time its 3D sequel, Fade to Black, came out, things were much less picky.

I’m certain I beat both Flashback and Fade to Black, but I don’t think I ever had it in me to get through Another World. I guess this was part of why I jumped right on the Switch port. The game has won many battles, but I do intend to win the war. And the fact of the matter is that, for all my griping, it is still an incredibly enjoyable game. ‘Nostalgia, but bad’ certainly doesn’t mean that the game is bad; it means that the game made all of my memories of it bad ones. The graphics have a unique quality about them6, and the sparse atmosphere feels very modern. The challenges are often interesting, even when they’re more technical than cerebral. It’s a game that I think is best experienced in short spurts, so as not to be consumed by the seemingly infinite tedium of frustrating deaths. It’s a product of its time, and must be treated as such. And while its demands certainly reveal its age, little else about it feels out of place on a portable console in 2018.

Speech synthesis

When I was in elementary school, I learned much of my foundation in computing on the Commodore 64. It was a great system to learn on, with lots of tools available and easy ways to get ‘down to the wire’, so to speak. Though it was hard to see just how limited the machines were compared with what the future held, some programs really stood out for how completely impossible they seemed1. One such program was S.A.M. – the Software Automated Mouth, my first experience with synthesized speech2.

Speech synthesis has come a long way since. It’s built into current operating systems, it can be had in IC form for under $9, and it’s becoming increasingly present in day-to-day life. I routinely use Windows’ built-in speech synthesizer along with NVDA as part of my accessibility checking regimen. But I’m also increasingly becoming dismayed by the egregious use of speech synthesis when natural human speech would not only suffice but be better in every regard. Synthesis has the advantage of being able to (theoretically) say anything while not paying a person to do the job. I’m seeing more and more instances where this doesn’t pan out, and the robot is truly bad at its job to boot.

Three examples, all train-related (I suppose I spend a lot of time on trains): the new 7000 series DC Metro cars, the new MARC IV series coach cars, and the announcements at DC’s Union Station. None of these need to be synthesized. They’re all essentially announcing destinations – they have very limited vocabularies and don’t make use of the theoretical ability to say anything. Union Station’s robot occasionally announces delays and the like, but often announcements beyond the norm revert to a human. Metro and MARC trains only announce stops and have demonstrated no capacity for supplemental speech. Where old and new cars are paired, conductors/operators still need to make their own station stop announcements.

So these synthesizers don’t seem to have a compelling reason to exist. It could be argued that human labor is now potentially freed up, but given the robots’ limited vocabularies and grammars, the same thing could be accomplished with human voice recordings. I can’t imagine that hiring a voice actor and using software to patch the recordings together into meaningful grammar would be appreciably more expensive than the robot. In fact, before the 7000 series Metro cars, WMATA used recordings to announce door openings and closings; they replaced these recordings in 2006, and the voice actor was rewarded with a $10 fare card3.
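Patching recordings into a limited grammar is simple in principle: keep one clip per phrase and play them back in order. A minimal sketch, assuming a hypothetical clip library (all filenames and phrase keys here are invented for illustration; a real system would hand the resulting list to an audio player):

```python
# Sketch: assembling a station announcement from pre-recorded clips
# instead of synthesizing it. Clip filenames are hypothetical.

CLIPS = {
    "arriving at": "arriving_at.wav",
    "Dickerson": "station_dickerson.wav",
    "Union Station": "station_union.wav",
    "doors closing": "doors_closing.wav",
}

def announcement(*phrases):
    """Return the ordered clip files that make up one announcement."""
    return [CLIPS[p] for p in phrases]

playlist = announcement("arriving at", "Dickerson")
# playlist -> ['arriving_at.wav', 'station_dickerson.wav']
```

With a vocabulary this small, the whole ‘engine’ is a lookup table; the only real cost is recording a clip per station, which is exactly what the pre-2006 WMATA approach amounted to.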

Aside from simply not being necessary, the robots aren’t good at their job. This is, of course, bad programming – human error. But it feels like the people in charge of the voices are so far detached from the final product that they don’t realize how much they’re failing. The MARC IV coaches are acceptable, but their grammar is bizarre. When the train is coming to a station stop, an acceptable thing to announce might be ‘arriving at Dickerson’, which is in fact what the conductors tend to say. The train, instead, says ‘this train stops at Dickerson’, which at face value says nothing beyond the fact that the train will stop there at some point. It’s bad information, communicated poorly.

Union Station’s robot has acceptable grammar, but she pronounces the names of stations completely wrong. Speech synthesizers generally have two components: the synthesizer proper, which knows how to make phonemes (the sounds that make up our speech), and a layer that translates the words of a given language into those phonemes. My old buddy S.A.M. had the S.A.M. speech core and Reciter, which looked up word parts in a table to convert them to phonemes. This all had to fit into considerably less than 64K, so it wasn’t perfect, and (if memory serves) one could override Reciter with direct phonemes for mispronounced words. Apple’s say command (well, their Speech Synthesis API) allows on-the-fly switching between text and phoneme input using [[inpt TEXT]] and [[inpt PHON]] within a speech string4. So again, given just how limited the robot’s vocabulary is (none of these trains are adding station stops with any regularity), someone should have been able to review what the robot says and supply overrides. Half the time, this robot gets so confused that she sounds like GLaDOS in her death throes.
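The Reciter-style override idea can be sketched as an exception table consulted before the general letter-to-phoneme rules. This is a minimal illustration, not S.A.M.’s (or any real synthesizer’s) actual data – the phoneme spellings and the one-letter-per-phoneme ‘rules’ are invented placeholders:

```python
# Sketch of a text-to-phoneme pass with a hand-tuned exception table
# checked before the rule-based fallback. All phoneme strings here
# are invented placeholders.

# Overrides for words the rules would mangle, e.g. tricky station names.
EXCEPTIONS = {
    "dickerson": "D IH1 K ER0 S AH0 N",  # hypothetical phoneme string
}

# Crude fallback "rules": one letter maps to one phoneme. Real rule
# tables match letter contexts (digraphs like 'CH', silent 'E', etc.).
LETTER_RULES = dict(zip(
    "abcdefghijklmnopqrstuvwxyz",
    "AE B K D EH F G HH IH JH K L M N AA P K R S T AH V W KS Y Z".split(),
))

def to_phonemes(word):
    """Return a phoneme string, preferring the exception table."""
    word = word.lower()
    if word in EXCEPTIONS:  # override first, rules second
        return EXCEPTIONS[word]
    return " ".join(LETTER_RULES[ch] for ch in word if ch in LETTER_RULES)
```

The point is how little work the fix is: with a vocabulary of a few dozen station names, auditing the output by ear and filling in the exception table is an afternoon’s job.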

Which brings me to my final point – the robots simply aren’t human. Even when they are pronouncing things well, they can be hard to understand. On the flipside, the DC Metro robot sounds realistic enough that she creeps me the hell out, which I can only assume is the auditory equivalent of the uncanny valley. I suppose a synthesized voice could have neutrality as an advantage – a grumpy human is probably more off-putting than a lifeless machine. But again, this is solvable with human recordings. I cannot imagine any robot being more comforting than a reasonably calm human.

Generally speaking, we’re reducing the workforce more and more, replacing workers with automation and machinery. It’s a necessary progression, though I’m not sure we’re prepared to deal with the unemployment consequences. It’s easy to imagine speech synthesis as a readily available extension of this concept – is talking a necessary job? But human speech is seemingly being replaced in instances where the speaking does not actually replace a human’s job and/or a human recording would easily suffice. In some instances, the speaking being replaced is a mere component of another job being replaced – take self-checkout machines (which tend to use human recordings despite the fact that grocery store inventories are far more volatile than train routes, hence ‘place your… object… in the bag’). But I feel like I’m seeing more and more instances of speech synthesis that is demonstrably worse than a human voice and seemingly serves no purpose (presumably beyond lining someone’s pockets).