brhfl.com

Animal Crossing: Pocket Camp

Animal Crossing: Pocket Camp has been available stateside for about a week now, and it is… strange. This post on ‘Every Game I’ve Finished’ (written by Mathew Kumar) mirrors a lot of my thoughts – I would recommend reading it before reading this. I haven’t really played a lot of Animal Crossing games before, and I tend to avoid free-to-play games. The aforementioned post is largely predicated on the fact that Pocket Camp doesn’t fully deliver on either experience. Which, I guess I wouldn’t really know, but something definitely feels odd about the game to me.

Early in his post, Kumar states that ‘[Pocket Camp] makes every single aspect of it an obvious transaction’, which is comically true. My socialist mind has a hard time seeing the game as anything but a vicious parody of capitalism. My rational mind, of course, knows this is not true because the sort of exploitative mundaneness that coats every aspect of the game is the norm in real life.

This becomes even more entertaining when you observe how players set prices in their Markets. For the uninitiated, when your character has a surplus of a thing, they can offer that thing for sale to other players. The default price is its base value, but you can adjust the sale price down a small amount or up a large amount. Eventually you’ll likely just max out your inventory and be forced to put things up for sale in this Market. More eventually, you’ll max out the Market and be forced to just throw stuff away without getting money for it. But in the meantime, people (strangers and friends) will see what you have to offer and be given the opportunity to buy it.

For the most part, if you need an item (I use the term ‘need’ loosely), it is common, and either hopping around or waiting a couple of hours will get you that item. So there should be no reason to charge a 1000% markup on a couple of apples. But (in my experience thus far) that is far more common than seeing items sold for the minimum (or even their nominal value). I don’t know if it’s just players latching on to the predatory nature of free-to-play games or what, and I’m really curious to know if it works. I’ve been listing things in small quantities (akin to what an animal requests) for the minimum price, and while I’ve sold quite a few items, most still go to waste – I can’t imagine anything selling at ridiculous markups.

So far this description of a capitalist hellscape has probably come off as though I feel negatively toward the game, which I really don’t. To return to Kumar, he leaves his post stating that he hasn’t given up on the game yet, but ‘like Miitomo, the first time I miss a day it’s all over.’ This comparison to Miitomo is apt, and a perfect segue into why I’m invested in this minor dystopia.

Miitomo (another Nintendo mobile thing) is really just a game where you… decorate a room and try on clothes. You answer questions and play some pachinko-esque minigames in order to win decorations and clothes, but it’s basically glorified dress-up. It seems like mostly young people playing it, but it’s also just a wonderful outlet for baby trans folks, people questioning gender, and any number of people seeking a little escape. I find Miitomo to be very valuable and underrated, and a lot of the joy Miitomo brings me is echoed by Pocket Camp.

While the underlying concept behind Pocket Camp is that you’re a black market butterfly dealer or whatever, there’s also a major ‘dollhouse’ component to it. You buy and receive cute clothes and change your outfits, which has no bearing on the game. You buy things to decorate your campsite which (effectively) has no bearing on the game. You can drop 10,000 bells on a purse that does nothing but sit in the dirt looking pretty. I guess it’s hypocritical to praise this meaningless materialism, but it’s a nice escape. A little world to mess around in and make your own.

I don’t know how long I’ll obsessively island-hop the world of Pocket Camp, but I think that (like Miitomo) once the novelty wears off, I’ll still pop in to play around with my little world when it occurs to me to do so. And the whole time, in my mind, it will remain a perfectly barbed satire on capitalism.


Firefox Quantum

There was once a time when the internet was just beginning to overcome its wild wild west nature, and sites were leaning toward HTML spec compliance in lieu of (or, more accurately, I suppose, in addition to) Internet Explorer’s way of doing things. Windows users in the know turned to Firefox; Mac users were okay sticking with Safari, but they were still few and far between. Firefox was like the saving grace of the browser world. It was known for leaking memory like a sieve, but it was still safer and more standards-compliant than IE. Time went on, and Chrome happened. Compared to Chrome, Firefox was slow, ugly, and lacking in convenience features; it had a lackluster search bar, and that damn memory leak never went away. Firefox largely became relegated to serious FOSS nerds and non-techies whose IT friends told them it was the only real browser a decade ago.

I occasionally installed/updated Firefox for the sake of testing, and these past few years it only got worse. The focus seemed to be goofy UI elements over performance. It got uglier, less pleasant to use, and more sluggish. I assumed it was destined to become relegated to Linux installs. It just… was not palatable. I honestly never expected to recommend Firefox again, and in fact when I did just that to a fellow IT type he assumed that I was drunk on cheap-ass rum.

Firefox 57 introduces a new, clean UI (Photon) and a new, incredibly quick rendering engine. I can’t tell if the rendering engine is just a new version of Gecko, or if the engine itself is called Quantum (the overall new iteration of the browser is known as Quantum), but I do know it’s very snappy. I’m not sure if it actually is, but it feels faster than Chrome on all but the lowest-end Windows and macOS machines that I’ve been testing it on. It still consumes more memory than other browsers I’ve pitted it against, and its sandboxing and multiprocess support are still works in progress. The UI looks more at home on Win 10 than macOS, but in either case it looks a hell of a lot better than the old UI, and it fades into the background well enough. On very low-end machines (like a Celeron N2840 2.16GHz 2GB Win 8 HP Stream), Firefox feels more sluggish than Chrome – and this sluggishness seems related to the UI rather than the rendering engine.

I’ve been using Quantum (in beta) for a while, alongside Chrome, and that’s really what I want to attempt to get at here. Both have capable UIs, excellent renderers, and excellent multi-device experiences. I don’t particularly like Safari’s UI, but even if I did the UX doesn’t live up to my needs simply because it’s vendor-dependent (while not platform-dependent, the only platforms are Apple’s), and I want to be able to sync things across my Windows, macOS, iOS, and Linux environments. Chrome historically had the most impressive multi-device experience, but I think Firefox has surpassed it – though both are functional. So it’s starting to come down to the small implementation details that really make a user experience pleasant.

As a keyboard user, Firefox wins. Firefox and Chrome both have keyboard cursor modes, where one can navigate a page entirely via cursor keys and a visible cursor. This is an accessibility win, but very inefficient compared to a pointing device. Firefox, however, has another good trick – ‘Search for text when you type’, previously known as Type Ahead Find (I think; I know it was grammatically mysterious like that). So long as the focus is on the body, and not a textbox, typing anything begins a search. Ctrl- or Cmd-G goes to the next hit, and Enter ‘clicks’ it. Prefacing the search with an apostrophe (') restricts it to links. It makes for an incredibly efficient navigation method. Chrome has some extensions that work similarly, but I never got on with them and I definitely prefer an inbuilt solution.

Chrome’s search/URL bar is way better. It seems to automatically pick up new search agents, and they are automatically available when you start typing the respective URL. One hits tab to switch from URL entry to searching the respective site, and it works seamlessly and effortlessly. All custom search agents in Firefox, by contrast, must be set up in preferences. You don’t get a seamless switch from URL to search, but instead must set up search prefixes. So, on Chrome, I start typing ‘amazon.com’, and at any point in the process, I hit tab and start searching Amazon. With Firefox, I have to have set up a prefix like ‘am’, and remember to do a search like ‘am hello kitty mug’ to get the search results I want. It is not user-friendly, it is not seamless, and it just feels… ancient. Chrome’s method also allows for autocomplete/instant search for these providers, a feature Firefox only offers for your main search engine. It’s actually far superior to skip this feature in Firefox entirely and use DuckDuckGo bangs instead. The horribly weak search box alone could drive me back to Chrome.
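For the unfamiliar, bangs are inline prefixes that DuckDuckGo interprets as site-specific searches. If memory serves, something like

!a hello kitty mug

typed into any DuckDuckGo-backed search box kicks the query straight to Amazon, no per-browser setup required.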

Chrome used to go back or forward (history-wise) if you overscrolled far enough left or right – much like how Chrome mobile works. This no longer seems to work on Chrome desktop, and it doesn’t work on Firefox either. I guess I’m grumpier at Google for teasing and taking away. I know it was a nearly-undiscoverable UI feature, and probably frustrated users who didn’t know why they were jumping around, but it freed up mouse buttons.

I don’t know how to feel about Pocket vs. Google’s ‘save for later’ type solution. Google’s only seems to come up on mobile. Pocket is a separate service, and without doing additional research, it’s unclear how Mozilla ties into it (they bought the service at some point). At least with Google you know you’re the product.

I have had basically no luck streaming on Firefox. Audio streams simply don’t start playing; YouTube and Hulu play for a few seconds and then blank and stop. I assume this will be fixed fairly quickly, but it’s bad right now.

Live Bookmarks are a thing that I think Safari used to do, too? Basically you can have an RSS feed turn into a bookmark folder, and it’s pretty handy. Firefox does this; Chrome has no inbuilt RSS capability. Firefox doesn’t register JSON Feed, which makes it a half-solution to me, which makes it a non-solution to me. But, it’s a cool feature. I would love to see a more full-featured feed reader built in.

Firefox can push URLs to another device. This is something that I have long wished Chrome would do. Having shared history and being able to pull a URL from another device is nice, but if I’m at work and know I want to read something later, pushing it to my home computer is far superior.

I’ll need to revisit this once I test out Firefox on mobile (my iOS is too far out of date, and I’m not ready to make the leap to 11 yet). As far as the desktop experience is concerned, though, Quantum is a really, really good browser. I’m increasingly using it over Chrome. The UI leaves a bit to be desired, and the URL/search bar is terrible, but the snappiness and keyboard-friendliness are huge wins.


Speech synthesis

When I was in elementary school, I learned much of my foundation in computing on the Commodore 64. It was a great system to learn on, with lots of tools available and easy ways to get ‘down to the wire’, so to speak. Though it was hard to see just how limited the machines were compared with what the future held, some programs really stood out for how completely impossible they seemed. One such program was S.A.M. – the Software Automated Mouth, my first experience with synthesized speech.

Speech synthesis has come a long way since. It’s built into current operating systems, it can be had in IC form for under $9, and it’s becoming increasingly present in day-to-day life. I routinely use Windows’ built-in speech synthesizer along with NVDA as part of my accessibility checking regimen. But I’m also increasingly dismayed by the egregious use of speech synthesis where natural human speech would not only suffice but be better in every regard. Synthesis has the advantage of being able to (theoretically) say anything while not paying a person to do the job. I’m seeing more and more instances where this doesn’t pan out, and the robot is truly bad at its job to boot.

Three examples, all train-related (I suppose I spend a lot of time on trains): the new 7000 series DC Metro cars, the new MARC IV series coach cars, and the announcements at DC’s Union Station. None of these need to be synthesized. They’re all essentially announcing destinations – they have very limited vocabularies and don’t make use of the theoretical ability to say anything. Union Station’s robot occasionally announces delays and the like, but often announcements beyond the norm revert to a human. Metro and MARC trains only announce stops and have demonstrated no capacity for supplemental speech. Where old and new cars are paired, conductors/operators still need to make their own station stop announcements.

So these synthesizers don’t seem to have a compelling reason to exist. It could be argued that human labor is now potentially freed up, but given the robots’ limited vocabularies and grammars, the same thing could be accomplished with human voice recordings. I can’t imagine that the cost of hiring a voice actor, plus software to patch the recordings together into meaningful grammar, would be appreciably higher than the robot’s. In fact, before the 7000 series Metro cars, WMATA used recordings to announce door openings and closings; they replaced these recordings in 2006, and the voice actor was rewarded with a $10 fare card.

Aside from simply not being necessary, the robots aren’t good at their job. This is, of course, bad programming – human error. But it feels like the people in charge of the voices are so far detached from the final product that they don’t realize how much they’re failing. The MARC IV coaches are acceptable, but their grammar is bizarre. When the train is coming to a station stop, an acceptable thing to announce might be ‘arriving at Dickerson’, which is in fact what the conductors tend to say. The train, instead, says ‘this train stops at Dickerson’, which at face value says nothing beyond that the train will in fact stop there at some point. It’s bad information, communicated poorly.

Union Station’s robot has acceptable grammar, but she pronounces the names of stations completely wrong. Speech synthesizers generally have two components: the synthesizer that knows how to make phonemes (the sounds that make up our speech), and a layer that translates the words in a given language to these phonemes. My old buddy S.A.M. had the S.A.M. speech core, and Reciter, which looked up word parts in a table to convert to phonemes. This all had to fit into considerably less than 64K, so it wasn’t perfect, and (if memory serves) one could override Reciter with direct phonemes for mispronounced words. Apple’s say command (well, their Speech Synthesis API) allows on-the-fly switching between text and phoneme input using [[inpt TEXT]] and [[inpt PHON]] within a speech string.

So again, given just how limited the robot’s vocabulary is (none of these trains are adding station stops with any regularity), someone should have been able to review what the robot says and suggest overrides. Half the time, this robot gets so confused that she sounds like GLaDOS in her death throes.
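As a minimal sketch of what such an override looks like in practice (using macOS’s say, since that’s what I have to poke at; the phoneme spelling here is my own guess, not a vetted transcription):

say "Next stop, [[inpt PHON]]dIHkAXrsAXn[[inpt TEXT]]."

Everything between the two embedded commands is read as raw phonemes rather than English text – exactly the escape hatch a misbehaving station announcer needs.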

Which brings me to my final point – the robots simply aren’t human. Even when they are pronouncing things well, they can be hard to understand. On the flipside, the DC Metro robot sounds realistic enough that she creeps me the hell out, which I can only assume is the auditory equivalent of the uncanny valley. I suppose a synthesized voice could have neutrality as an advantage – a grumpy human is probably more off-putting than a lifeless machine. But again, this is solvable with human recordings. I cannot imagine any robot being more comforting than a reasonably calm human.

Generally speaking, we’re reducing the workforce more and more, replacing workers with automation and machinery. It’s a necessary progression, though I’m not sure we’re prepared to deal with the unemployment consequences. It’s easy to imagine speech synthesis as a readily available extension of this concept – is talking a necessary job? But human speech is seemingly being replaced in instances where the speaking does not actually replace a human’s job and/or a human recording would easily suffice. In some instances, speaking being replaced is a mere component of another job being replaced – take self-checkout machines (which tend to use human recordings despite the fact that grocery store inventories are far more volatile than train routes, hence ‘place your… object… in the bag’). But I feel like I’m seeing more and more instances of speech synthesis that is demonstrably worse than a human voice and seemingly serves no purpose (beyond, presumably, lining someone’s pockets).


Tagging in Acrobat from the keyboard

Since much of my work revolves around §508 compliance, I spend a lot of time restructuring tags in Acrobat. Unfortunately you can’t just hand-write these tags à la HTML; you have to physically manipulate a tree structure. The Tags panel is very conducive to mouse use, and because Adobe is Adobe, not very conducive to keyboard use. Many important tasks are missing readily available keyboard shortcuts, and it has taken me a while to be able to largely ditch the mouse and instead use the keyboard to very quickly restructure the tags on very long, very poorly tagged documents.

A couple of notes – this assumes a Windows machine, and one with a Menu key. While I generally prefer working on macOS, I’m stuck with Windows at work, so these are my efficiencies. Windows may actually have the leg up here, since the Acrobat keyboard support is so poor, and macOS does not have a Menu key equivalent. Additionally, this applies to Acrobat XI; it may or may not apply to current DC versions. Finally, all of this information is discoverable, but I haven’t really seen a primer laid out on it. If nothing else, perhaps it will help a future version of myself who forgets all of this.


Binaries and hex editors

Talking about certain files as ‘binaries’ is a funny thing. All files are ultimately binary, after all; it’s just a matter of whether or not a file is encoded as text. Even in the world of text, an editor or viewer needs to know how the text is encoded, what bytes map to what characters. Is a file ASCII, UTF-8, PostScript? Once we know something is text or not text, it’s still likely to be made to the standards of a specific format, lest it be nothing but plain text. Markdown, HTML, even PDF are human-readable text to an extent, with rules about how their content is interpreted. A human as well as a web browser knows that a <p> starts a paragraph, and this paragraph continues until a matching </p> is found.

If we open a binary in a text editor, we’ll see a lot of familiar characters, where data happens to coincide with printable ASCII. We’ll also see a lot of gibberish, and in fact some of the characters may cause a terminal to behave erratically. Opening a binary in a hex editor makes a little more sense of it, but still leaves a lot to be answered. In one column, we’ll see a lot of hexadecimal values; in another we’ll see the same sort of gibberish we would have seen in our text editor. In some sort of status display, we’ll also generally see a few more bits of information – what byte we’re on, its hex value, its decimal value, etc. Why would we ever want to do this? Well, among other things, binary file formats have rules as well, and if we know these rules, we can inspect and navigate them much like an HTML file. Take this piece of a PNG file, as it would appear in bvi (my hex editor of choice):

00000000  89 50 4E 47 0D 0A 1A 0A 00 00 00 0D 49 48 44 52 .PNG........IHDR
00000010  00 00 02 44 00 00 01 04 08 06 00 00 00 C9 50 2B ...D..........P+
00000020  AB 00 00 00 04 73 42 49 54 08 08 08 08 7C 08 64 .....sBIT....|.d
00000030  88 00 00 00 09 70 48 59 73 00 00 0B 12 00 00 0B .....pHYs.......
00000040  12 01 D2 DD 7E FC 00 00 00 1C 74 45 58 74 53 6F ....~.....tEXtSo
"ban_ln_560_NLW.png" 14498451 bytes    00000000 10001001 \211 0x89 137 NUL

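Those rules get us somewhere immediately: a PNG opens with an 8-byte signature, and every chunk that follows leads with a 4-byte length and a 4-byte type. So the IHDR data above starts at byte 16, and its first two 32-bit values are the image’s width and height – 00 00 02 44 and 00 00 01 04, or 580×260 pixels. Knowing that, even a non-interactive tool like xxd can pluck those bytes out directly:

xxd -s 16 -l 8 ban_ln_560_NLW.png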

Semaphore and sips redux

In this article, I do sem -j +5. Note that -j can be used with integers, percents, and +/- values relative to the number of available cores: -j +0 runs as many jobs as there are cores, -j -1 runs one fewer, and so -j +5 actually allows five more simultaneous jobs than cores, not five jobs total.

I was going to simply edit my last post, but this might warrant its own, as it’s really more about sem and parallel than it is sips. parallel’s manpage describes it as ‘a shell tool for executing jobs in parallel using one or more computers’. It’s kind of a better version of xargs, and it is super powerful. The manpage starts early with a recommendation to watch a series of tutorials on YouTube and continues on to example after example after example. It’s intense.

In my previous post, I suggested using sem for easy parallel execution of sips conversions. sem is really just an alias for parallel --semaphore, described by its own manpage (yes, it gets its own manpage) as a ‘counting semaphore [that] simply waits for a semaphore to become available and then runs the command given’. It’s a convenient and fairly accessible way to parallelize tasks. That manpage focuses on some of the specifics of how it queues things up, how it waits to execute tasks, etc. It does this using toilet metaphors, which is a whole other conversation, but for the most part it’s fairly clear, and it’s what I tend to reference when I’m figuring something out with sem.

In my last post (and in years of converting things this way), I had to decide between automating the cleanup/rm process and parallelizing the sips calls. The problem is, if you do this:

for i in ./*.tif; sem -j +5 sips -s format png "$i" --out "${i/.tif/.png}" && rm "$i"

…the parallelism gets all thrown off. sem executes, queues up sips, presumably exits 0, and then rm destroys the file before sem even gets the chance to spawn sips. None of the files exist, and sips has nothing to convert. The sem manpage doesn’t really address chaining commands in this manner; presumably it would be too difficult to fit into a toilet metaphor. But it occurred to me that I might come up with the answer if I just looked through enough of the examples in the parallel manpage (worth noting that a lot of the parallel syntax is specific to not being run in semaphore mode). The solution is facepalmingly simple: wrap the && in double quotes:

for i in ./*.tif; sem -j +5 sips -s format png "$i" --out "${i/.tif/.png}" "&&" rm "$i"

…which works a charm. We could take this even further and feed the PNGs directly into optipng:

for i in ./*.tif; sem -j +5 sips -s format png "$i" --out "${i/.tif/.png}" "&&" rm "$i" "&&" optipng "${i/.tif/.png}"

…or potentially adding optipng to the sem queue instead:

for i in ./*.tif; sem -j +5 sips -s format png "$i" --out "${i/.tif/.png}" "&&" rm "$i" "&&" sem -j +5 optipng "${i/.tif/.png}"

…I’m really not sure which is better (and I don’t think time will help me since sem technically exits pretty quickly).
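One thing that should help, though: the sem manpage also documents a --wait flag, which blocks until everything queued under the semaphore has finished:

sem --wait

Assuming I’m reading the manpage right, timing the loop plus a trailing sem --wait would measure the actual conversions rather than how quickly sem can queue them.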


Darwin image conversion via sips

I use Lightroom for all of my photo ‘development’ and library management needs. Generally speaking, it is great software. Despite being horribly nonstandard (that is, using nonnative widgets), it is the only example of good UI/UX that I’ve seen out of Adobe in… at least a decade. I’ll be perfectly honest right now: I hate Adobe with a passion otherwise entirely unknown to me. About 85-90% of my professional life is spent in Acrobat Pro, which gets substantially worse every major release. I would guess that around 40% of my be-creative-just-to-keep-my-head-screwed-on time is spent in various pieces of CC (which, subscription model is just one more fuck-you, Adobe). But Lightroom has always been special. I beta tested the first release, and even then I knew… this was the rare excuse for violating so many native UI conventions. This made sense.

Okay, from that rant we come up with: thumbs-down to Adobe, but thumbs-up to Lightroom. But there’s one thing that Lightroom has never opted to solve, despite so many cries, and that is PNG export. Especially with so many photographers (myself included) using flickr, which reencodes TIFFs to JPEGs, but leaves the equally lossless PNG files alone, it is ridiculous that the Lightroom team refuses to incorporate a PNG export plugin. Just one more ’RE: stop making garbage’ memo that I need to forward to the clowns at Adobe.

All of this to just come to my one-liner solution for Mac users… sips is the CLI/Darwin equivalent of the image conversion software that macOS uses for conversion in Preview, etc. The manpage is available online, conveniently. But my use is very simple – make a bunch of stupid TIFFs into PNGs.

for i in ./*.tif ; sips -s format png "$i" --out "${i/tif/png}" && rm "$i"

…is the basic line that I use on a directory full of TIFFs output from Lightroom. Note that this is zsh, and I’m not 100% positive that the variable substitution is valid bash. Lightroom seemingly outputs some gross TIFFs, and sips throws up an error for every file, but still exits 0, and spits out a valid PNG. sips does not do parallelism, so a better way to handle this may be (using semaphore):

for i in ./*.tif; sem -j +5 sips -s format png "$i" --out "${i/tif/png}"

…and then cleaning up the TIFFs afterward (rm ./*.tif). Either way. There’s probably a way to do both using flocks or some such, but I haven’t put much time into that race condition.

At the end of the day, there are plenty of image conversion packages out there (ImageMagick comes to mind), but if you’re on macOS/Darwin… why not use the builtins if they function? And sips does, in a clean and simple way. While it certainly isn’t a portable solution, it’s worth knowing about for anyone who does image work on a Mac and feels comfortable in the CLI.
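For the curious, the ImageMagick route is a one-liner as well – a sketch assuming the mogrify tool that ships with ImageMagick:

mogrify -format png ./*.tif

…which writes a .png alongside each .tif and leaves the originals in place for separate cleanup.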


dvtm and the mouse

This is an old post from an old blog; assets may be missing, links may be broken, and my opinions may differ considerably by this point…

I've gotten quite a few hits from people searching for things like 'dvtm pass mouse.' I don't have much to say on the matter, except that this is the one thing that really bugs me about dvtm. As I have mentioned previously, given the choice between screen, tmux, and dvtm, I like dvtm the best. It is certainly the simplest, and has the smallest footprint. It automatically configures spaces, and makes notions of simultasking as simple as double-clicking. I would say that it brings the best of the GUI experience to terminal multiplexing, while still keeping true to the command line.


dc Syntax for Vim

This is an old post from an old blog; assets may be missing, links may be broken, and my opinions may differ considerably by this point…

I use dc as my primary calculator for day-to-day needs. I use other calculators as well, but I try to largely stick to dc for two reasons - I was raised on postfix (HP 41CX, to be exact) and I'm pretty much guaranteed to find dc on any *nix machine I happen to come across. Recently, however, I've been expanding my horizons, experimenting with dc as a programming environment, something safe and comfortable to use as a mental exercise. All of that is another post for another day, however - right now I want to discuss writing a dc syntax definition for vim.
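For anyone who hasn’t met postfix notation: operands go onto the stack first, and the operator follows. A quick taste of dc:

echo '2 3 + p' | dc

…pushes 2 and 3, adds them, and p prints the resulting 5.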