
Another World

Is there a word for nostalgia, but bad? Kind of like how you can have a nightmare that is on one hand an objectively terrible experience, but on the other… fascinating, compelling even. When I was quite young, the household computer situation was a bit of a decentralized mess. I guess the Commodore 64 was the family computer, but it was essentially mine to learn 6510 ML and play Jumpman on. My sister had a Macintosh Quadra, which I guess was largely for schoolwork, but it had a number of games on it that were positively unbelievable to my 8-bit-trained eyes. Among these was the bane of my wee existence, Another World1.

I guess I’m about to give away a few spoilers, but they’re all from the first minute or so of punishment play. Another World begins with a cutscene where we learn that our protagonist is a physics professor named Lester who drives a Ferrari2. At this point, we realize we are dealing with a science fiction title. Lester starts doing some very professorly things on his computer, and then some lightning strikes his ARPANET wires or whatever and suddenly our protagonist is deep underwater! Some kind of sea monster grabs him, and… game over?! The cutscenes are rendered with the same beautifully polygonal rotoscoping as the rest of the game, so it’s entirely possible that you die several times watching this scene before grasping that you’re actually supposed to press buttons now.

This stressful memory came back hard when I recently bought a Switch and, inexplicably, made this year’s port of Another World my first purchase. Well, I guess it is explicable: ‘nostalgia, but bad.’ The frustrations of a game that will let you die within the first five seconds if you simply do nothing had not changed much since my childhood. This is a fundamental part of the experience; Another World is a game that wants you to die. It demands that you die. A lot. It’s a lovely game, and one that I’m sure a lot of folks remember (fondly or otherwise) from their Amigas and Macs, but I couldn’t help but think that this sort of trial-and-error experience really wouldn’t fly today if not for nostalgia3. Though I have to ask myself, how does this differ from, say, Limbo, another game that tricks you into death at every turn?

The next death in Another World comes when little polygonal slug-looking things slip a claw into Lester’s leg, collapsing him. You have to kind of squish them just right, and it’s the first of many deadly puzzles that rely more on a very finicky sort of perfection than on a clever solution. Slightly further into the game, Lester faces a challenge that neatly sums up the whole problem: perfect positioning and perfect timing are required to dodge two screens’ worth of oddly-timed falling boulders. These moments are very reminiscent of the frustratingly exacting challenges in Dragon’s Lair, a point of inspiration for designer Éric Chahi4. I think this is where a modern take like Limbo feels less annoying in its murderous tendencies – you rarely die because you didn’t time something out to the nanosecond or position yourself on just the right pixel; you die because something crafty in the evil, evil environment outsmarted you.

This sort of thing seems to be a point of maturity for gaming in general. The aforementioned Jumpman was one of my favorite games back in the day, but it was painstakingly picky down to the pixel. Collision detection has eased up in modern times, and additional system resources give designers a lot more room to make challenges diverse and clever instead of simply difficult-by-any-means-necessary. Another World’s spiritual successor, Flashback5, definitely still had these moments, but by the time its 3D sequel, Fade to Black, came out, things were much less picky.

I’m certain I beat both Flashback and Fade to Black, but I don’t think I ever had it in me to get through Another World. I guess this was part of why I jumped right on the Switch port. The game has won many battles, but I do intend to win the war. And the fact of the matter is that, for all my griping, it is still an incredibly enjoyable game. ‘Nostalgia, but bad’ certainly doesn’t mean that the game is bad; it means that the game made all of my memories of it bad ones. The graphics have a unique quality about them6, and the sparse atmosphere feels very modern. The challenges are often interesting, even when they’re more technical than cerebral. It’s a game that I think is best experienced in short spurts, so as not to be consumed by the seemingly infinite tedium of frustrating deaths. It’s a product of its time, and must be treated as such. And while its demands certainly reveal its age, little else about it feels out of place on a portable console in 2018.


Speech synthesis

When I was in elementary school, I built much of my foundation in computing on the Commodore 64. It was a great system to learn on, with lots of tools available and easy ways to get ‘down to the wire’, so to speak. Though it was hard to see just how limited the machine was compared with what the future held, some programs really stood out for how completely impossible they seemed1. One such program was S.A.M. – the Software Automated Mouth, my first experience with synthesized speech2.

Speech synthesis has come a long way since. It’s built into current operating systems, it can be had in IC form for under $9, and it’s increasingly present in day-to-day life. I routinely use Windows’ built-in speech synthesizer along with NVDA as part of my accessibility-checking regimen. But I’m also increasingly dismayed by the egregious use of speech synthesis where natural human speech would not only suffice but be better in every regard. Synthesis has the advantage of being able to (theoretically) say anything without paying a person to do the job. I’m seeing more and more instances where this doesn’t pan out, and the robot is truly bad at its job to boot.

Three examples, all train-related (I suppose I spend a lot of time on trains): the new 7000 series DC Metro cars, the new MARC IV series coach cars, and the announcements at DC’s Union Station. None of these need to be synthesized. They’re all essentially announcing destinations – they have very limited vocabularies and don’t make use of the theoretical ability to say anything. Union Station’s robot occasionally announces delays and the like, but often announcements beyond the norm revert to a human. Metro and MARC trains only announce stops and have demonstrated no capacity for supplemental speech. Where old and new cars are paired, conductors/operators still need to make their own station stop announcements.

So these synthesizers don’t seem to have a compelling reason to exist. It could be argued that human labor is freed up, but given the robots’ limited vocabularies and grammars, the same thing could be accomplished with human voice recordings. I can’t imagine that hiring a voice actor, plus the software to patch the recordings together into meaningful grammar, would cost appreciably more than the robot. In fact, before the 7000 series Metro cars, WMATA used recordings to announce door openings and closings; they replaced these recordings in 2006, and the voice actor was rewarded with a $10 fare card3.
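Just to illustrate how little software this takes, here’s a minimal sketch in Python – the file names and phrases are hypothetical, but the shape is the whole idea: every utterance the system needs is a pre-recorded clip, and an announcement is just a sequence of clips.

```python
# Minimal sketch of announcement-by-concatenation: the entire 'vocabulary'
# is a handful of pre-recorded clips, and an announcement is an ordered
# list of them. File names and phrases are hypothetical.
CLIPS = {
    "arriving at":   "clips/arriving_at.wav",
    "next stop":     "clips/next_stop.wav",
    "Dickerson":     "clips/dickerson.wav",
    "Union Station": "clips/union_station.wav",
}

def announce(*phrases):
    """Return the clip files to play, in order."""
    return [CLIPS[p] for p in phrases]  # KeyError = phrase was never recorded

print(announce("arriving at", "Dickerson"))
```

The ‘grammar’ is simply whichever sequences you choose to queue up – nothing about a fixed train route demands a synthesizer.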

Aside from simply not being necessary, the robots aren’t good at their jobs. This is, of course, bad programming – human error. But it feels like the people in charge of the voices are so far detached from the final product that they don’t realize how much they’re failing. The MARC IV coaches are acceptable, but their grammar is bizarre. When the train is coming to a station stop, an acceptable announcement might be ‘arriving at Dickerson’, which is in fact what the conductors tend to say. The train, instead, says ‘this train stops at Dickerson’, which at face value says nothing beyond the fact that the train will stop there at some point. It’s bad information, communicated poorly.

Union Station’s robot has acceptable grammar, but she pronounces the names of stations completely wrong. Speech synthesizers generally have two components: a synthesizer that knows how to make phonemes (the sounds that make up our speech), and a layer that translates the words of a given language into those phonemes. My old buddy S.A.M. had the S.A.M. speech core and Reciter, which looked up word parts in a table and converted them to phonemes. This all had to fit into considerably less than 64K, so it wasn’t perfect, and (if memory serves) one could override Reciter with direct phonemes for mispronounced words. Apple’s say command (well, their Speech Synthesis API) allows on-the-fly switching between text and phoneme input using [[inpt TEXT]] and [[inpt PHON]] within a speech string4. So again, given just how limited the robot’s vocabulary is (none of these trains are adding station stops with any regularity), someone should have been able to review what the robot says and suggest overrides. Half the time, this robot gets so confused that she sounds like GLaDOS in her death throes.
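To put that override idea in concrete terms, here’s a toy sketch of a Reciter-style text-to-phoneme layer with an exceptions table. The rule table and phoneme symbols are invented for illustration (vaguely ARPAbet-flavored) – this is not S.A.M.’s actual data, just the shape of the thing:

```python
# Toy sketch of a Reciter-style text-to-phoneme layer: generic letter-cluster
# rules, with a hand-tuned exceptions table for names the rules mangle.
# Phoneme symbols and rules are invented for illustration.

EXCEPTIONS = {
    # The human-reviewed override path.
    "dickerson": "D IH K ER S AH N",
}

RULES = {
    # Greedy longest-match rules. Deliberately naive: there is no 'ck'
    # rule, so 'c' and 'k' each contribute their own K.
    "er": "ER", "on": "AH N",
    "d": "D", "i": "IH", "c": "K", "k": "K", "s": "S",
}

def to_phonemes(word: str) -> str:
    word = word.lower()
    if word in EXCEPTIONS:              # override wins outright
        return EXCEPTIONS[word]
    out, i = [], 0
    while i < len(word):
        for size in (2, 1):             # try the longest cluster first
            chunk = word[i:i + size]
            if chunk in RULES:
                out.append(RULES[chunk])
                i += size
                break
        else:
            i += 1                      # no rule: skip the letter
    return " ".join(out)

print(to_phonemes("Dickerson"))         # override: D IH K ER S AH N
EXCEPTIONS.clear()                      # what the rules alone would say:
print(to_phonemes("Dickerson"))         # D IH K K ER S AH N (mangled)
```

A transit system’s worth of station names is a small enough exceptions table that one person could review it in an afternoon – and on a Mac you can even audition overrides interactively by switching say into [[inpt PHON]] mode.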

Which brings me to my final point – the robots simply aren’t human. Even when they are pronouncing things well, they can be hard to understand. On the flipside, the DC Metro robot sounds realistic enough that she creeps me the hell out, which I can only assume is the auditory equivalent of the uncanny valley. I suppose a synthesized voice could have neutrality as an advantage – a grumpy human is probably more off-putting than a lifeless machine. But again, this is solvable with human recordings. I cannot imagine any robot being more comforting than a reasonably calm human.

Generally speaking, we’re reducing the workforce more and more, replacing workers with automation and machinery. It’s a necessary progression, though I’m not sure we’re prepared to deal with the unemployment consequences. It’s easy to imagine speech synthesis as a readily available extension of this concept – is talking a necessary job? But synthesized speech seems to be replacing human speech in instances where it doesn’t actually eliminate anyone’s job, and/or where a human recording would easily suffice. In some instances, the speech is just one component of a larger job being automated – take self-checkout machines (which tend to use human recordings despite the fact that grocery store inventories are far more volatile than train routes, hence ‘place your… object… in the bag’). But I keep seeing more and more speech synthesis that is demonstrably worse than a human voice and serves no apparent purpose (presumably beyond lining someone’s pockets).