GemiNaut's clever solution to a peculiar problem

I’m a big proponent of the web being leaner and more text-based. In light of how strongly the web has veered in the opposite direction, it’s probably a radical position to say that I think less of the web should have any visual styling attached to it at all. More text channels where a reader can maintain a consistent, custom reading experience feels like a better solution than a bunch of disparate-looking sites all with their own color schemes, custom fonts, and massive headers1.

I often use text-based web browsers like Lynx and WebbIE. I also tend to follow a lot of people who maintain very webring-esque sites, even more so than mine. But there is more internet than just the HTTP-based World Wide Web. Gopher is, or was, depending on your outlook, an alternative protocol to HTTP. It was more focused on documents that reference one another in a kind of bidirectional way, and because it never really got off the ground in the way HTTP did, it also never really got the CSS treatment; it’s really just about structured text. Despite most of the information about Gopher on the web being historical retrospectives, enthusiasts of a similar mind to me are keeping the protocol alive2.

Then there’s Gemini3. Gemini is a sort of modern take on Gopher. For nerds like me, it’s wonderful that such an effort exists. If you’re interested in the unstyled side of the internet, Gemini is worth looking into. I do think it needs a bit of love, however; curl maintainer Daniel Stenberg has pointed out how lacking the implementation details are. I disagree with a few of Daniel’s points; Gemini falls into a lot of ‘trappings’ that HTTP only escaped because its development steered toward mass appeal. Gemini is for a small web, one for weirdos like me. The specification and implementation issues seem very real, however, and while I don’t think Gemini can or should get WWW-level acceptance, an RSS-sized niche would be nice, at least, and for that to happen, software needs to know how it’s supposed to work.
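To give a sense of how small Gemini is as a protocol, here is a rough sketch of a request in Python: open a TLS connection to port 1965, send the URL plus CRLF, and read back a one-line header followed by the body. This is a minimal illustration rather than any particular client’s code; the lax certificate handling mirrors the trust-on-first-use habits of most clients, and the URL is only an example.

```python
# Rough sketch of a Gemini request: TLS to port 1965, send "<url>\r\n", read a
# "<status> <meta>" header line and then the body. Illustrative only; the URL
# and the lax certificate handling are assumptions, not any client's defaults.
import socket
import ssl

def gemini_fetch(url: str = "gemini://gemini.circumlunar.space/") -> str:
    host = url.split("//", 1)[1].split("/", 1)[0]
    context = ssl.create_default_context()
    context.check_hostname = False       # Gemini servers commonly use
    context.verify_mode = ssl.CERT_NONE  # self-signed certificates
    with socket.create_connection((host, 1965)) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(url.encode("utf-8") + b"\r\n")
            chunks = []
            while True:
                data = tls.recv(4096)
                if not data:
                    break
                chunks.append(data)
    header, _, body = b"".join(chunks).partition(b"\r\n")
    return header.decode("utf-8") + "\n" + body.decode("utf-8", "replace")

if __name__ == "__main__":
    print(gemini_fetch())
```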

All of this only really matters for background context. I’ll likely post more of my thoughts on a textual internet in the future, and I’ll likely also be dipping my toes in publishing on a Gemini site. The point of this post, however, is to talk about a strange problem that happens with unstyled text-based content. While there are certainly far fewer distractions between the reader and the content, there’s also a sort of mental drain that comes from sites being visually indistinguishable from one another. I always just kind of assumed this was one of those annoyances that would never really be important enough to try to solve. Hell, the way most software development is going these days, I don’t really expect to see any new problem-solving happening in the UX sphere. But I recently stumbled across a browser that solves this in a very clever way.

GemiNaut4 is an open-source Gemini and Gopher browser for Windows that uses an identicon-esque visual system to help distinguish sites. Identicons are visual representations of hash values, typically used for a similar version of the same problem – making visually distinct icons for default users on a site. If everyone’s default icon is, say, an egg, then every new user looks the same. Creating a simple visual off of a hash helps keep users looking distinct by default. I’ve often seen them used on password inputs as well – if you recognize the identicon, you know you’ve typed your password in correctly without having the password itself revealed.
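To make the idea concrete, here is a minimal sketch of the general technique: hash an identifier, then let the hash bits fill a small, mirrored grid so the result reads as a deliberate, repeatable glyph. The 5×5 size and text rendering are my own illustrative choices, not GemiNaut’s or Don Park’s actual algorithm.

```python
# Minimal identicon-style sketch: hash an identifier and use the hash bits to
# fill a small, horizontally mirrored grid. Sizes and rendering are arbitrary
# illustrative choices, not GemiNaut's or Don Park's actual algorithm.
import hashlib

def identicon(identifier: str, size: int = 5) -> str:
    digest = hashlib.sha256(identifier.encode("utf-8")).digest()
    half = (size + 1) // 2
    rows = []
    for y in range(size):
        cells = []
        for x in range(half):
            bit = digest[(y * half + x) % len(digest)] & 1
            cells.append("##" if bit else "  ")
        mirror = cells[:-1] if size % 2 else cells[:]  # mirror for symmetry
        rows.append("".join(cells + mirror[::-1]))
    return "\n".join(rows)

print(identicon("example.com"))
print()
print(identicon("example.org"))  # a different domain yields a different pattern
```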

Don Park, who created the original identicon, did so to ‘enhance commenter identity’ on his blog5. But he knew there was more to it than this:

I originally came up with this idea to be used as an easy means of visually distinguishing multiple units of information, anything that can be reduced to bits. It’s not just IPs but also people, places, and things.

IMHO, too much of the web what we read are textual or numeric information which are not easy to distinguish at a glance when they are jumbled up together. So I think adding visual identifiers will make the user experience much more enjoyable.

-“Identicon Explained” by Don Park via Wayback Machine

And indeed, browser extensions also exist for using identicons in lieu of favicons; other folks have pieced together the value in tying them to URLs. But GemiNaut uses visual representations of hashes like these to create patterned borders around the simple hypertext of Gopher and Gemini sites. The end result is clean pages that remain visually consistent, yet are distinctly framed based on domain. It only exists in one of GemiNaut’s several themes, and I wish these themes were customizable. Selfishly, I also wish more software would adopt this use of hash visualization.

Aside from browsing Gemini and Gopher, GemiNaut includes Duckling, a proxy for converting the ‘small web’ to Gemini. The parser has three modes: text-based, simplified, and verbose. The first is, as one might expect, just the straight text of a page. Of the other two, simplified is so stripped-down that apparently this blog isn’t ‘small’ enough to fully function in it6. But it does work pretty well in verbose mode, though it lacks the keyboard navigation of Lynx, WebbIE, or even heavy ol’ Firefox.

I had long been looking for a decent Windows Gopher client, and was happy to find one that also supports Gemini and HTTP with the Duckling proxy enabled in GemiNaut. But truly, I’d like to see more development in general for the text-based web. All the big browsers contain ‘reader modes,’ which reformat visually frustrating pages into clean text. ‘Read later’ services like Instapaper do the same. RSS still exists and presents stripped-down versions of web content. There is still a desire for an unstyled web, and it would be great to see more of the software that exists in support of it adopting hash visualizations for distinction.


Hollow hearts

An interesting thing that I’ve noticed over the past few years of internetting is how we’ve established conventions around ‘like’, ‘favorite’, &c. buttons, and how frustrating it is when sites break those conventions. The meaning of such a button is largely determined by its context: saving for later (say bookmarking, or wishlisting) for an e-commerce site, acknowledgement or praise for social media, and somewhere in between those two for blogs and other content consumption platforms. This isn’t a hard rule, obviously. Twitter, for example, has a bookmarking function, but also lets you easily browse through liked tweets. Bookmarking is a more buried option, as its intent isn’t to display praise, and I would guess that because of this intentional design decision, a lot of people simply use likes in lieu of bookmarks.

Iconography is also generally pretty standard, often hearts or stars. This defines context in its own way; users famously had concerns when Twitter moved from stars to hearts. Which makes a lot of sense – slapping a star on the tweet ‘My cat Piddles died this AM, RIP’ has a pretty different vibe than a heart. Since this happened retroactively to everything anyone had ever starred… it was certainly jarring.

Other iconography certainly exists; bookmark-looking things clearly define their intent, pins do the same to a lesser extent, bells indicate notifications1, and sites with voting systems will often use thumbs-up/down or up/down arrows for this tri-state toggle. Medium, notably, went from a straightforward ‘recommend’ (heart) system to ‘claps’, a convoluted variable-praise system represented by a hand. While dunking on Medium is not the purpose of this post, I think it’s worth mentioning that this shift was enough to essentially prevent me from ever reading anything on the site again2. Having to rate any given article from 1-50, and then sit around clicking as I worry about that decision is anxiety-inducing agony, especially when I know it affects authors’ rankings and/or payouts. It also feels incredibly detached from the real-world phenomenon it’s supposed to mimic. Clapping for a performer in an isolated void is a very different experience than reacting in real-time with the rest of an audience. But to get back on track, clapping additionally violates our expectations by no longer being a toggle. It increases, maxes out, and if you want to reset it to zero, you have to hunt for that option.

Which brings me to my point, and my frustration. These things are usually a toggle between a hollow heart or star3 and a filled one: ♡/♥︎ or ☆/★. This is very easy to understand; it mimics the checkboxes and radio buttons we’ve been using on computers for decades. Empty thing off, filled thing on. So long as we know that this icon has meaning, and that meaning brings with it a binary on/off state4, a hollow or filled icon indicates what state the content is in. If a user can’t toggle this (a notification, say), it’s simply an indicator. If a user can, then… well, it’s a toggle, and there’s likely a counter nearby to indicate how many others have smashed that like button.

This is great, and intuitive, and it works very effectively. Which is why it’s extremely frustrating when sites violate this principle. Bandcamp, for example, uses a hollow heart at the top of the page to take you to your ‘collection,’ a library which is a superset of your wishlist. Wishlisting is represented by a separate on/off heart toggle. This toggle, on an individual track/album page, has a text description next to the heart; the collection button at the top of the page does not. This is utterly backward, as the toggle works intuitively, and the button… has no meaning until you click it5. Etsy, on the other hand, uses a hollow heart at the top to bring you to your favorites page. But it does two things right: it has a text label, and it brings you only to things that are directly connected with a matching heart toggle.

GoComics is an equally perplexing mess of filled hearts. A comic itself has both a heart (like) and a pin (save)6. Both are always filled, with the toggle being represented by a slight change in brightness: 88% for off, 68% for on. It’s very subtle and hard to scan. These are actual toggles, however, unlike in their comments section. Their comments also have filled hearts to indicate likes, but they only serve as indicators. To actually like a comment, you must click a text-only link that says ‘Like,’ which isn’t even next to the heart. When clicked, the text does the same absurdly slight brightness shift, from #00A8E1 to #0082AE. The brightness shift on the comic’s heart icon is already difficult to scan; the shift on the comment’s ‘Like’ text is nearly imperceptible. A comment’s heart icon doesn’t even appear until there’s at least one like, and clicking it just brings up a list of users who have liked it. Suffice it to say, I click this accidentally on a near-daily basis. Humorously, GoComics understands the hollow/filled standard: they use it on their notifications bell icon.
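For what it’s worth, those two hex colors differ almost entirely in HSB brightness, and by roughly the same margin as the heart icons. A quick check with Python’s standard library (the percentages are my own computation, not anything GoComics documents):

```python
# Compare the HSB 'value' (brightness) of the two 'Like' link colors above.
# They come out to roughly 88% and 68%, matching the heart icons' shift.
import colorsys

def hsb_brightness(hex_color: str) -> float:
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (0, 2, 4))
    _hue, _sat, value = colorsys.rgb_to_hsv(r, g, b)
    return value * 100

print(round(hsb_brightness("00A8E1")))  # ~88, the 'off' state
print(round(hsb_brightness("0082AE")))  # ~68, the 'on' state
```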

These are just two examples in a sea of designs that prioritize aesthetics over intuition and ease of use. Medium tacks a filled star on after the read-time estimate for no apparent reason. Lex has both a functional heart and star toggle on every post, but no immediate explanation as to what differentiates them. Amazon seemingly has a heart toggle on its mobile app, but not its website, and it’s unclear what differentiates this from the regular wishlist. Ultimately, I don’t think this is a space that needs innovation (like, arguably, Medium’s claps), or one that merits subtle aesthetics. Folks have largely realized the perils of excessively abstracting ordinary checkboxes and radio buttons, and this relatively new breed of binary toggle should intuitively work in exactly the same way.


On the dot org situation (external)

In case anyone reading this isn’t up to speed, the .org Top-Level Domain (TLD) is apparently being sold to a private equity firm. The post link goes out to a good summary on the Electronic Frontier Foundation’s site. This is bad, and it should be fairly obvious why it’s bad, but beyond that… the whole thing really just feels like one more large step in the inevitable decline of the internet as we know it. I’ve been expecting this to happen at any time, except with like… a novelty TLD. Like, Donuts gets tired of owning the .exposed TLD and sells it off and now everyone with a domain under that TLD has doubled rates. I didn’t see it coming (yet) with one of the Big Ones.

Is it a huge stretch to think that ICANN could, in however many decades, cease to function or shift to a profit-driven model? There’s a sense of stability with a .com domain, but is it more fragile than we think? What if your email is tied to a domain you can no longer afford? How many services tie your identity so strongly to your email that it’s difficult or impossible to change it? Thinking in terms of decades is a bit absurd; there will be sea changes in the internet through that time. I just worry that most of those changes will be for the worse, that independence and openness will continue to be threatened, and that archival gaps will continue to increase.


Site updates, supporting open source software, &c.

Haven’t done a meta post since August, so now seems like as good of a time as any to discuss a few things going on behind the scenes at brhfl dot com. For starters, back in November, I updated my About page. It was something I forced myself to write when I launched this pink blog, and it was… pretty strained writing. I think it reads a bit more naturally and in my voice now, and also better reflects what I’m actually writing about in this space. I also published my (p)review of Americana in November, which was an important thing to write. Unfortunately, it coincided with Font Library, the host of the fonts I use here, being down. This made me realize that I rely on quite a few free and/or open source products, and that I should probably round up ways to support them all. I’ll get to that at the end of this post, though it’s a thought process that started back in November.


DuckDuckGo

A while back, I started testing two things to switch up my browsing habits (and partially free them from Google): I began using Firefox Quantum1, and I switched my default search provider to DuckDuckGo. I have been spending pretty much equal time with both Google and DuckDuckGo since (though, admittedly, I have many prior years of comfort with Google). This has been more than just a purposeless experiment. Google started out as a company that I liked that made a product that I liked. This slowly but surely morphed into a company that I was somewhat iffy about, but with several products that I liked. Nowadays, the company only increases in iffiness, and Google’s products are increasingly feeling bloated and clumsy. Meanwhile, the once-laughable alternatives to said products have improved dramatically.

As far as results are concerned, Google (the search engine, from here on out) is still quite good. When it works, it’s pretty much unbeatable for result prioritization, that is, getting me the answer I’m seeking out with little-to-no poking around. But it’s not infrequent that I come across a query that simply doesn’t work – it’s too similar to a more common query, so Google thinks I must have wanted the common thing, or Google includes synonyms for query terms that completely throw off the results. The ads and sponsored results (aka different ads) are increasing to the point of being a distraction (particularly on mobile, where it can take multiple screens’ worth of scrolling to actually get to results). AMP content is prioritized, and AMP is a real thorn in the side of the open web (Kyle Schreiber sums up many of AMP’s problems succinctly). Finally, Google is obviously an advertising company, and we all know by now that everything we search for exists as a means to track us. This is not a huge complaint for me; it’s a known ‘price’ for the service. For as much as it leads to targeted advertising, it also helps tailor search results. Of course, this seems nice on the surface, but is a bit of a double-edged sword due to the filter bubble.

To be fair, some of these things are mitigated by using encrypted.google.com, but its behavior is seemingly undocumented and certainly nothing I would rely on2. This is where DuckDuckGo, which was designed from the ground up to avoid tracking, comes in. DuckDuckGo makes its money from ads, but these ads are based on the current search rather than anything persistent. They can also be turned off in settings. The settings panel also offers a lot of visual adjustments, many of which I’m sure are welcome for users with limited vision3. Anyway, my experiences thus far using DuckDuckGo as a serious contender to Google are probably best summed up as a list:

All in all, I have no qualms using DuckDuckGo as my primary search engine. I will not pretend that I do not occasionally need to revert to Google to get results on some of the weirder stuff that I’m trying to search for – although, as mentioned earlier, Google thinks it’s smarter than me and rewrites my obscure searches half the time anyway. DuckDuckGo isn’t entirely minimalist or anything, but its straightforward representation, its immediacy, and its clarity all remind me of how clean Google was when it first came to exist in a sea of Lycoses, AltaVistas, and Dogpiles. It returns decent results, and it’s honestly just far more pleasant to use than Google is these days.


netrw and invalid certificates

Don’t trust invalid certificates. Only do this sort of workaround if you really know that what you’re dealing with is okay.

Sometimes I just need to reference the source of an HTML or CSS file online without writing to it. If I need to do this while I’m editing something else in vim, my best course of action is to open a split in vim and do it there. Even if I’m not working on said thing in vim, that is the way that I’m most comfortable moving around in documents, so there’s still a good chance I want to open my source file there.

netrw, the default1 file explorer for vim, handles HTTP and HTTPS. By default, it does this using whichever of the following it finds first: elinks, links, curl, wget, or fetch. At work, we’re going through an HTTPS transition, and at least for the time being, the certificates are… not quite right. Not sure what the discrepancy is (it’s not my problem), but strict clients are wary. This includes curl and wget. When I went to view files via HTTPS in vim, I was presented with errors. This obviously wasn’t vim’s fault, but it took a bit of doing to figure out exactly how these elements interacted and how to modify the behavior of what, at first glance, looks like netrw itself.

When netrw opens up a remote connection, it essentially just opens up a temporary file, and runs a command that uses that temporary file as input or output depending on whether the command is a read or write operation. As previously mentioned, netrw looks for elinks, links, curl, wget, and fetch. My cygwin install has curl and wget, but none of the others. It also has lynx, which I’ll briefly discuss at the end. I don’t know if elinks or links can be set to ignore certificate issues, but I don’t believe so. curl and wget can, however.

We set this up in vim by modifying netrw_http_cmd, keeping in mind that netrw is going to append a temporary file name for the command to fill, which it then reads in. So we can’t output to STDOUT; the command needs to end with a file destination. For curl, we can very simply use :let g:netrw_http_cmd="curl -k". For wget, we need to tell it not to verify certs, run quietly, and specify output: :let g:netrw_http_cmd="wget --no-check-certificate -q -O".

I don’t have an environment handy with links or elinks, but glancing over the manpages leads me to believe this isn’t an option with either. It isn’t with lynx either, but in playing with it, I still think this is useful: for a system with lynx but not any of the default HTTP(S) handlers, netrw can use lynx via :let g:netrw_http_cmd="lynx -source >". Also interesting is that lynx (and presumably links and elinks via different flags) can be used to pull parsed content into vim: :let g:netrw_http_cmd="lynx -dump >".
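To make the moving parts a little more concrete, here is a rough sketch of the read path described above: a temporary file, an external command with the temp file and URL appended, then the temp file read back in. This is a mental model mirroring the description in this post, not netrw’s actual vimscript, and the exact argument layout can differ between netrw versions.

```python
# Mental model of netrw's remote read path, per the description above: pick a
# temp file, append it and the URL to the configured command, run it, and read
# the temp file back in. Not netrw's real implementation.
import subprocess
import tempfile

def fetch_like_netrw(http_cmd: str, url: str) -> str:
    tmpfile = tempfile.NamedTemporaryFile(suffix=".html", delete=False).name
    # "wget --no-check-certificate -q -O" is left waiting for a filename, so the
    # appended temp file becomes its output; "lynx -source >" instead turns the
    # appended temp file name into a shell redirect. Either way, the content
    # lands in tmpfile for vim to read.
    subprocess.run(f'{http_cmd} "{tmpfile}" "{url}"', shell=True, check=True)
    with open(tmpfile, encoding="utf-8", errors="replace") as fh:
        return fh.read()

# e.g. fetch_like_netrw('lynx -source >', 'https://example.com/style.css')
```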


FOSTA-SESTA

Content warning: mentions/links to data on sex trafficking, and murders of women & sex workers.

I know this site gets zero traffic, but regardless I regret that I didn’t take the energy to write about FOSTA-SESTA before FOSTA passed. FOSTA-SESTA is anti-sex-worker legislation posing as anti-trafficking legislation. It’s a bipartisan pile of shit, and the party split among the two dissenting votes in the FOSTA passage was also bipartisan. Since the passage of FOSTA, Craigslist has shut down all personals1, reddit has shut down a number of subreddits, and today Backpage was seized. I would implore anyone who gives a shit about sex workers and/or the open internet to follow Melissa Gira Grant on Twitter.

If you don’t support sex workers, frankly I don’t want you reading my blog. But if you’re here anyway, it’s worth pointing out that the absurdity laid out by FOSTA is a threat to the open web at large, which almost certainly explains why Facebook supported it. It’s not just sex workers who oppose this thing: NearlyFreeSpeech.net, the host I use for all of my sites, had a pointed and clear blog post outlining how frightening it is.

Obviously, it’s worth listening to sex workers on this matter, which nobody did. But it’s also worth listening to law enforcement, the folks who are actually trying to prevent trafficking. And, who would have guessed, law enforcement actually works with sites like Craigslist and Backpage to crack down on the truly villainous aspects of the sex trade. Idaho, just last month, for instance. Meanwhile, having outlets where sex workers can openly communicate and vet their clients saves their lives — when Craigslist opened its erotic services section, female homicide dropped by over 17 percent. That is to say, so many sex workers are routinely murdered that helping them vet clients significantly reduces the overall female homicide rate.

This whole thing is misguided and cruel2, and I don’t really know what to do about it at this point. But listening to people who are closely following the impacts is a start. It’s a death sentence for sex workers, and a death sentence for the open web, and anyone who cares about either needs to keep abreast of the impact as it unfolds.


An interesting memcached/UDP amplification attack (external)

A handful of reports out there about a recent DDOS attack that relied on memcached and DDOS’s best friend, UDP. Link is to Cloudflare’s blog post about the attack, which is a thorough yet accessible explanation. It seems like this is the most amplified amplification attack yet, and without even using a significant number of memcached vectors. A lot of potential vectors were from cloud hosts like AWS and Linode – many of these have apparently closed up the hole. Hopefully this minimizes the potential for a larger attack, but it’s worth quoting Cloudflare:

The [UDP] specification shows that it’s one of the best protocols to use for amplification ever! There are absolutely zero checks, and the data WILL be delivered to the client, with blazing speed! […] Developers: Please please please: Stop using UDP.

Cloudflare also touches on the fact that the larger problem is IP spoofing, and they wrote a followup post about that specifically. I just found the memcached amplification attack fascinating.
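If you run memcached yourself and want to make sure you are not part of the problem, one quick check is whether it answers over UDP at all. The sketch below is based on memcached’s documented 8-byte UDP frame header; the hostname is a placeholder, and this is only meant to be pointed at servers you operate. If you get a reply, consider starting memcached with -U 0 (UDP disabled) or firewalling port 11211.

```python
# Probe whether a memcached instance you operate still answers over UDP.
# The 8-byte frame header (request id, sequence, datagram count, reserved) is
# per memcached's protocol docs; only point this at your own servers.
import socket

def memcached_udp_enabled(host: str, port: int = 11211, timeout: float = 2.0) -> bool:
    frame = b"\x00\x01\x00\x00\x00\x01\x00\x00" + b"version\r\n"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(frame, (host, port))
        try:
            data, _ = sock.recvfrom(4096)
        except socket.timeout:
            return False  # no reply: UDP listener disabled or filtered
    return data.startswith(b"\x00\x01")  # reply echoes our request id

if __name__ == "__main__":
    print(memcached_udp_enabled("cache.internal.example"))  # placeholder host
```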


Firefox mobile

Well, I finally upgraded (downgraded?) to iOS 11, which means trying out the mobile version of Firefox1 and revisiting the Firefox experience as a whole. While Quantum on the desktop did show effort from the UI team to modernize, my biggest takeaway is that both the mobile and desktop UIs still have a lot of catching up to do. I mentioned previously how the inferiority of Firefox’s URL bar might keep me on Chrome, and the reality is that this is not an outlier. Both the desktop and mobile UI teams seem to be grasping desperately at some outdated user paradigms, and the result is software that simply feels clumsy. While I have always been a proponent of adhering to OS widgets and behaviors as much as possible, this is only strengthened on mobile where certain interaction models feel inextricable from the platform.

All of this to bring me to my first and most serious complaint about Firefox Mobile: no pull-to-refresh. I believe this was a UI mechanism introduced by Twitter, but it’s so ingrained into the mobile experience at this point that I get extremely frustrated when it doesn’t work. This may seem petty, but to me it feels as broken as the URL bar on desktop.

A UI decision that I thought I would hate, but am actually fairly ambivalent on, is the placement of navigation buttons. Mobile Chrome puts the back button with the URL bar, hiding it during text entry, and hides stop/refresh in a hamburger menu (also by the URL bar). Firefox Mobile has an additional bar at the bottom with navigation buttons and a menu (much like mobile Safari). I don’t like this UI, it feels antiquated and wasteful, but I don’t hate it as much as I expected to. One thing that I do find grating is the menu in this bar. I have a very difficult time remembering what is in this menu vs. the menu in the URL bar. The answer often feels counterintuitive.

In my previous post about desktop Firefox, I was ecstatic about the ability to push links across devices, something I’ve long desired from Chrome. It worked well from desktop to desktop, and it works just as well on mobile. This is absolutely a killer feature for folks who use multiple devices. Far superior to syncing all tabs, or searching another device’s history. On the subject of sync, mobile Firefox has a reader mode with a save-for-later feature, but this doesn’t seem to integrate with Pocket (desktop Firefox’s solution), which makes for a broken sync experience.

Both Chrome and Firefox have QR code detection on iOS, and both are quick and reliable (much quicker and more reliable than the detection built into the iOS 11 camera app). Chrome pastes the text from a read QR code into the URL bar; Firefox navigates to the text contained in the code immediately. That’s a terrifyingly bad idea.

A few additional little things:

Finally, a few additional thoughts on desktop Firefox (Quantum), now that I’ve gotten a bit of additional use in:


Boing Boing is being sued over a hyperlink (external)

2018-02: The initial suit has been dismissed. Let’s see if Playboy keeps hitting, or if this is the win.

Well, this is bad. Playboy is suing Happy Mutants, LLC (parent company of Boing Boing) because Boing Boing linked to an article containing unlicensed copies of Playboy’s copyrighted content. I know about this because I generally like the writing at Boing Boing, and I follow a handful of current and former staff. But this has nothing to do with liking Boing Boing or not – the linked article rightfully states that this ‘would end the web as we know it’. The web is built on guiding people from point A to point B; the hyperlink is a defining feature of the web. If content creators are afraid to use the power of the hyperlink to guide their viewers elsewhere… the web dies.

As a socialist content creator, I have rather complex feelings about intellectual property, but I know one thing to be true – if I violate intellectual property laws, that is my responsibility. Nobody who shows others my misdoings should be culpable. Happy Mutants, LLC has filed a motion to dismiss; let’s hope the courts have some sense.


"You're scaring us" (external)

Somehow I missed this until now, but of course after Mozilla went and released their first good web browser in forever, they then went and mucked everything up. Apparently the ‘Shield Studies’ feature, which is supposed to act as a distributed test system for new features, was instead used to install, without users’ knowledge, a disturbing-looking extension that was effectively an ad for a TV show. The problem ultimately seems to stem from a disconnect between Mozilla (the corporation) and Mozilla (the NPO and community) – and in fact, their developers were not thrilled about it. This is a huge breach of trust, and if Mozilla (the corporation) can’t wrap their head around their own manifesto, I can’t imagine a very good future. Mozilla did acknowledge that they fucked up, but the apology seems rather half-hearted at best. I, for one, have disabled Shield Studies, and until I see some evidence that a genuine attempt is being made to restore user trust, I will remain skeptical of Mozilla’s motives.


Firefox Quantum

There was once a time when the internet was just beginning to overcome its wild wild west nature, and sites were leaning toward HTML spec compliance in lieu of (or, more accurately, I suppose, in addition to) Internet Explorer’s way of doing things. Windows users in the know turned to Firefox; Mac users were okay sticking with Safari, but they were still few and far between. Firefox was like the saving grace of the browser world. It was known for leaking memory like a sieve, but it was still safer and more standards-compliant than IE. Time went on, and Chrome happened. Compared to Chrome, Firefox was slow, ugly, and lacking in convenience features; it had a lackluster search bar, and that damn memory leak never went away. Firefox largely became relegated to serious FOSS nerds and non-techies whose IT friends told them it was the only real browser a decade ago.

I occasionally installed/updated Firefox for the sake of testing, and these past few years it only got worse. The focus seemed to be on goofy UI elements over performance. It got uglier, less pleasant to use, and more sluggish. I assumed it was destined to become relegated to Linux installs. It just… was not palatable. I honestly never expected to recommend Firefox again, and in fact when I did just that to a fellow IT type he assumed that I was drunk on cheap-ass rum.

Firefox 57 introduces a new, clean UI (Photon) and a new, incredibly quick rendering engine. I can’t tell if the rendering engine is just a new version of Gecko, or if the engine itself is called Quantum (the overall new iteration of the browser is known as Quantum), but I do know it’s very snappy. I’m not sure if it actually is faster, but it feels faster than Chrome on all but the lowest-end Windows and macOS machines that I’ve been testing it on. It still consumes more memory than other browsers I’ve pitted it against, and its sandboxing and multiprocess support are a work in progress. The UI looks more at home on Win 10 than macOS, but in either case it looks a hell of a lot better than the old UI, and it fades into the background well enough. On very low-end machines (like a Celeron N2840 2.16GHz 2GB Win 8 HP Stream), Firefox feels more sluggish than Chrome – and this sluggishness seems related to the UI rather than the rendering engine.

I’ve been using Quantum (in beta) for a while, alongside Chrome, and that’s really what I want to attempt to get at here. Both have capable UIs, excellent renderers, and excellent multi-device experiences. I don’t particularly like Safari’s UI, but even if I did the UX doesn’t live up to my needs simply because it’s vendor-dependent (while not platform-dependent, the only platforms are Apple’s), and I want to be able to sync things across my Windows, macOS, iOS, and Linux environments. Chrome historically had the most impressive multi-device experience, but I think Firefox has surpassed it – though both are functional. So it’s starting to come down to the small implementation details that really make a user experience pleasant.

As a keyboard user, Firefox wins. Firefox and Chrome1 both have keyboard cursor modes, where one can navigate a page entirely via cursor keys and a visible cursor. This is an accessibility win, but very inefficient compared to a pointing device. Firefox, however, has another good trick – ‘Search for text when you type’, previously known as Type Ahead Find (I think, I know it was grammatically mysterious like that). So long as the focus is on the body, and not a textbox, typing anything begins a search. Ctrl-G or Cmd-G goes to the next hit, and Enter ‘clicks’ it. Prefacing the search with an apostrophe (') restricts it to links. It makes for an incredibly efficient navigation method. Chrome has some extensions that work similarly, but I never got on with them and I definitely prefer an inbuilt solution.

Chrome’s search/URL bar is way better2. It seems to automatically pick up new search agents, and they are automatically available when you start typing the respective URL. One hits tab to switch from URL entry to searching the respective site, and it works seamlessly and effortlessly. All custom search agents in Firefox, by contrast, must be set up in preferences. You don’t get a seamless switch from URL to search, but instead must set up search prefixes. So, on Chrome, I start typing ‘amazon.com’, and at any point in the process, I hit tab, and start searching Amazon. With Firefox, I have to have set up a prefix like ‘am’, and remember to do a search like ‘am hello kitty mug’ to get the search results I want. It is not user-friendly, it is not seamless, and it just feels… ancient. Chrome’s method also allows for autocomplete/instant search for these providers, which is only a feature you get with your main search engine in Firefox. Honestly, it works out better to skip the feature in Firefox entirely and just use DuckDuckGo bangs (e.g. ‘!a hello kitty mug’ to search Amazon) instead. The horribly weak search box alone could drive me back to Chrome.

Chrome used to go back or forward (history-wise) if you overscrolled far enough left or right – much like how Chrome mobile works. This no longer seems to work on Chrome desktop, and it doesn’t work on Firefox either. I guess I’m grumpier at Google for teasing and taking away. I know it was a nearly-undiscoverable UI feature, and probably frustrated users who didn’t know why they were jumping around, but it freed up mouse buttons.

I don’t know how to feel about Pocket vs. Google’s ‘save for later’ type solution. Google’s only seems to come up on mobile. Pocket is a separate service, and without doing additional research, it’s unclear how Mozilla ties into it (they bought the service at some point). At least with Google you know you’re the product.

I have had basically no luck streaming on Firefox. Audio streams simply don’t start playing; YouTube and Hulu play for a few seconds and then blank and stop. I assume this will be fixed fairly quickly, but it’s bad right now.

Live Bookmarks are a thing that I think Safari used to do, too? Basically you can have an RSS feed turn into a bookmark folder, and it’s pretty handy. Firefox does this; Chrome has no inbuilt RSS capability. Firefox doesn’t recognize JSON Feed, which makes it a half-solution to me, and a half-solution is a non-solution to me. But, it’s a cool feature. I would love to see a more full-featured feed reader built in.

Firefox can push URLs to another device. This is something that I have long wished Chrome would do. Having shared history and being able to pull a URL from another device is nice, but if I’m at work and know I want to read something later, pushing it to my home computer is far superior.

I’ll need to revisit this once I test out Firefox on mobile (my iOS is too far out of date, and I’m not ready to make the leap to 11 yet). As far as the desktop experience is concerned, though, Quantum is a really, really good browser. I’m increasingly using it over Chrome. The UI leaves a bit to be desired, and the URL/search bar is terrible, but the snappiness and keyboard-friendliness are huge wins.


Firefox fixes (et cetera)

I’ve been testing out Firefox Quantum recently, which is a post for another day, but it made me realize that this site right here barely functioned for anyone using Firefox, whether Quantum or the old engine (Gecko? Is Quantum a replacement for Gecko, or a version of it?). Frankly, it’s much stricter than I would have imagined, and assuming that something that functions fine in IE/Edge and Chrome/Safari would also function fine in Firefox was… not a safe assumption, apparently. Here are a few things that I’ve fixed over the past few days, some related to Firefox and others not.


The internet sucks (external)

Well, this sucks. My host, NFSN, is doing a major overhaul to their pricing scheme simply because the internet has become such a horrible hotbed of malice. To be clear, when I say ‘this sucks’, I don’t mean any negativity toward NFSN. The article link up there goes to their blog post explaining the matter, and it frankly seemed inevitable that fighting DDOS attacks would catch up to their pricing scheme. Previously, if you had a static site with low bandwidth and storage, you could probably get a year out of a quarter (domain registration not included, of course). The new plan allows for basically a $3.65 annual minimum which is still impressive (especially given what NFSN offers). But it’s a bummer that it’s come to this.

I would like to reiterate that this is not a complaint against NFSN. I will continue to use them for hosting, I will continue to recommend them, I will continue to praise them. I believe this is a necessary move. I’m just really, really pissed off that this is where we are with the internet. I don’t know what’s going on behind the scenes as far as law enforcement, but the internet is a global network (really?) and that’s not an easy problem to solve. I just hope something is happening to clean this wasteland up, because the advancements we’ve made in the information age are too important to bury under a sheet of malice.