Sometimes I just need to reference the source of an HTML or CSS file online without writing to it. If I need to do this while I’m editing something else in
vim, my best course of action is to open a split in
vim and do it there. Even if I’m not working on said thing in
vim, that is the way that I’m most comfortable moving around in documents, so there’s still a good chance I want to open my source file there.
netrw, the default file explorer for
vim, handles HTTP and HTTPS. By default, it does this using whichever of the following it finds first: elinks, links, curl, wget, or
fetch. At work, we’re going through an HTTPS transition, and at least for the time being, the certificates are… not quite right. Not sure what the discrepancy is (it’s not my problem), but strict clients are wary. This includes
wget. When I went to view files via HTTPS in
vim, I was presented with errors. This obviously wasn’t
vim’s fault, but it took a bit of doing to figure out exactly how these elements interacted and how to modify the behavior of what is (at least originally) perceived as a vim problem. When
netrw opens a remote connection, it essentially just opens a temporary file and runs a command that uses that temporary file as input or output, depending on whether the command is a read or write operation. As previously mentioned,
netrw looks for
elinks, links, curl, wget, and fetch (in that order). My cygwin install has
wget, but none of the others. It also has
lynx, which I’ll briefly discuss at the end. I don’t know if
elinks or links can be set to ignore certificate issues, but I don’t believe so.
wget can, however.
We set this up in
vim by modifying
g:netrw_HTTP_cmd, keeping in mind that
netrw is going to spit out a temporary file name to read in, so we can’t write to STDOUT; the command needs to end with a file destination. For
curl, we can very simply use
:let g:netrw_HTTP_cmd="curl -k". For
wget, we need to specify output, tell it not to verify certs, and otherwise run quietly:
:let g:netrw_HTTP_cmd="wget --no-check-certificate -q -O".
I don’t have an environment handy with
elinks or links, but glancing over the manpages leads me to believe ignoring certificate issues isn’t an option with either. It isn’t with
lynx either, but in playing with it, I still think this is useful: for a system with
lynx but not any of the default HTTP(s) handlers,
netrw can use
lynx via :let g:netrw_HTTP_cmd="lynx -source >". Also interesting is that
lynx (and presumably
elinks via different flags) can be used to pull parsed content into
vim with :let g:netrw_HTTP_cmd="lynx -dump >".
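Pulled together, the above might look something like this in a vimrc (the executable() fallback chain is my own convenience sketch, not anything netrw requires; it just picks whichever handler happens to be present):

```vim
" Pick an HTTPS handler for netrw that tolerates bad certificates.
" These are the same commands discussed above; the fallback order
" is just my preference.
if executable('curl')
  " -k: skip certificate verification
  let g:netrw_HTTP_cmd = 'curl -k'
elseif executable('wget')
  " -q: quiet, -O: write to the temp file name netrw appends
  let g:netrw_HTTP_cmd = 'wget --no-check-certificate -q -O'
elseif executable('lynx')
  " lynx has no output flag, so redirect -source into the temp file
  let g:netrw_HTTP_cmd = 'lynx -source >'
endif
```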
I know this site gets zero traffic, but regardless I regret that I didn’t take the energy to write about FOSTA-SESTA before FOSTA passed. FOSTA-SESTA is anti-sex-worker legislation posing as anti-trafficking legislation. It’s a bipartisan pile of shit, and the party split among the two dissenting votes in the FOSTA passage was also bipartisan. Since the passage of FOSTA, Craigslist has shut down all personals, reddit has shut down a number of subreddits, and today Backpage was seized. I would implore anyone who gives a shit about sex workers and/or the open internet to follow Melissa Gira Grant on Twitter.
If you don’t support sex workers, frankly I don’t want you reading my blog. But if you’re here anyway, it’s worth pointing out that the absurdity laid out by FOSTA is a threat to the open web at large, which almost certainly explains why Facebook supported it. It’s not just sex workers who oppose this thing: NearlyFreeSpeech.net, the host I use for all of my sites, had a pointed and clear blog post outlining how frightening it is.
Obviously, it’s worth listening to sex workers on this matter, which nobody did. But it’s also worth listening to law enforcement, the folks who are actually trying to prevent trafficking. And, who would have guessed, law enforcement actually works with sites like Craigslist and Backpage to crack down on the truly villainous aspects of the sex trade. Idaho, just last month, for instance. Meanwhile, having outlets where sex workers can openly communicate and vet their clients saves their lives — when Craigslist opened its erotic services section, female homicide dropped by over 17 percent. That is to say that so many sex workers are routinely murdered, that helping them vet clients significantly reduces the overall female homicide rate.
This whole thing is misguided and cruel, and I don’t really know what to do about it at this point. But listening to people who are closely following the impacts is a start. It’s a death sentence for sex workers, and a death sentence for the open web, and anyone who cares about either needs to keep abreast of the impact as it unfolds.
A handful of reports out there about a recent DDOS attack that relied on
memcached and DDOS’s best friend, UDP. Link is to Cloudflare’s blog post about the attack, which is a thorough yet accessible explanation. It seems like this is the most amplified amplification attack yet, and without even using a significant number of memcached vectors. A lot of potential vectors were from cloud hosts like AWS and Linode – many of these have apparently closed up the hole. Hopefully this minimizes the potential for a larger attack, but it’s worth quoting Cloudflare:
The [UDP] specification shows that it’s one of the best protocols to use for amplification ever! There are absolutely zero checks, and the data WILL be delivered to the client, with blazing speed! […] Developers: Please please please: Stop using UDP.
Cloudflare also touches on the fact that the larger problem is IP spoofing, and they wrote a followup post about that specifically. I just found the
memcached amplification attack fascinating.
Well, I finally
~~downgraded~~ upgraded to iOS 11, which means trying out the mobile version of Firefox and revisiting the Firefox experience as a whole. While Quantum on the desktop did show effort from the UI team to modernize, my biggest takeaway is that both the mobile and desktop UIs still have a lot of catching up to do. I mentioned previously how the inferiority of Firefox’s URL bar might keep me on Chrome, and the reality is that this is not an outlier. Both the desktop and mobile UI teams seem to be grasping desperately at some outdated user paradigms, and the result is software that simply feels clumsy. While I have always been a proponent of adhering to OS widgets and behaviors as much as possible, this is only strengthened on mobile where certain interaction models feel inextricable from the platform.
All of this to bring me to my first and most serious complaint about Firefox Mobile: no pull-to-refresh. I believe this was a UI mechanism introduced by Tweetie (later acquired by Twitter), but it’s so ingrained into the mobile experience at this point that I get extremely frustrated when it doesn’t work. This may seem petty, but to me it feels as broken as the URL bar on desktop.
A UI decision that I thought I would hate, but am actually fairly ambivalent on, is the placement of navigation buttons. Mobile Chrome puts the back button with the URL bar, hiding it during text entry, and hides stop/refresh in a hamburger menu (also by the URL bar). Firefox Mobile has an additional bar at the bottom with navigation buttons and a menu (much like mobile Safari). I don’t like this UI; it feels antiquated and wasteful. But I don’t hate it as much as I expected to. One thing that I do find grating is the menu in this bar. I have a very difficult time remembering what is in this menu vs. the menu in the URL bar. The answer often feels counterintuitive.
In my previous post about desktop Firefox, I was ecstatic about the ability to push links across devices, something I’ve long desired from Chrome. It worked well from desktop to desktop, and it works just as well on mobile. This is absolutely a killer feature for folks who use multiple devices. Far superior to syncing all tabs, or searching another device’s history. On the subject of sync, mobile Firefox has a reader mode with a save-for-later feature, but this doesn’t seem to integrate with Pocket (desktop Firefox’s solution), which makes for a broken sync experience.
Both Chrome and Firefox have QR code detection on iOS, and both are quick and reliable (much quicker and more reliable than the detection built into the iOS 11 camera app). Chrome pastes the text from a read QR code into the URL bar; Firefox navigates to the text contained in the code immediately. That’s a terrifyingly bad idea.
A few additional little things:
- A security note that’s less severe than the QR code thing, but still concerning – if you want your stored login info (read: saved passwords) to be protected (by PIN and/or Touch ID), you need to set that up. Chrome hides this behind Touch ID by default. Firefox’s whole marketing angle is security and privacy, and they haven’t been good at either lately.
- Mobile Firefox has a night reading mode which attempts to make things light-on-dark while generally preserving colors. It’s a neat idea, and fairly well-implemented, though I have run into some rendering bugs from it.
- I like Chrome’s auto-search results list better than Firefox’s (which seems different for the sake of being different), but both are usable.
- I like Firefox’s open tabs view better than Chrome’s. Chrome’s is kind of card-based, whereas Firefox has this grid of miniaturized websites, showing a lot more at a given time.
- Chrome has a far more practical approach to opening URLs on the clipboard. It just comes up as an option in the auto-suggestion list when you’re typing into the bar. Firefox basically gives you one chance when you switch to the app.
- Firefox allows DuckDuckGo as the default search engine while Chrome does not.
- Firefox has a very convenient toggle to allow you to refrain from loading images.
- Mobile Firefox does not seem to have the ‘live bookmark’ RSS feature of desktop Firefox.
- Firefox also has ‘Focus’, a dedicated
~~porn~~ private browsing app. It’s… handy, I guess? But I’m not sure it has strong advantages over using a private browsing mode in Firefox or Chrome.
Finally, a few additional thoughts on desktop Firefox (Quantum), now that I’ve gotten a bit of additional use in:
- Chrome’s status-bar download interface is far superior, in my opinion.
- I maintain that Firefox feels snappier than Chrome, but if a background tab has been suspended, it takes longer to spring to life than in Chrome. Firefox does seem to be better at remembering a cached state of a tab that it’s bringing back vs. simply reloading.
- Making Firefox’s UI decent takes some preference-hunting, and even in its best state, Chrome still feels more modern. This is a theme across desktop and mobile Firefox – the UI team seems to be trying, but still largely stuck in a sort of late-90s hacker mentality.
- Firefox clearly has the superior sync technology, and type-to-search is a godsend.
Well, this is bad. Playboy is suing Happy Mutants, LLC (parent company of Boing Boing) because Boing Boing linked to an article containing (Playboy’s unlicensed) copyrighted content. I know about this because I generally like the writing at Boing Boing, and I follow a handful of current and former staff. But this has nothing to do with liking Boing Boing or not – the linked article rightfully states that this ‘would end the web as we know it’. The web is built on guiding people from point A to point B; the hyperlink is a defining feature of the web. If content creators are afraid to use the power of the hyperlink to guide their viewers elsewhere… the web dies.
As a socialist content creator, my feelings on intellectual property are rather complex, but I know one thing to be true – if I violate intellectual property laws, that is my responsibility. Nobody who shows others my misdoings should be culpable. Happy Mutants, LLC has filed a motion to dismiss; let’s hope the courts have some sense.
Somehow I missed this until now, but of course after Mozilla went and released their first good web browser in forever, they then went and mucked everything up. Apparently the ‘Shield Studies’ feature, which is supposed to act as a distributed test system for new features, was instead used to unwittingly install a disturbing-looking extension that was effectively an ad for a TV show. The problem ultimately seems to stem from a disconnect between Mozilla (the corporation) and Mozilla (the NPO and community) – and in fact, their developers were not thrilled about it. This is a huge breach of trust, and if Mozilla (the corporation) can’t wrap their head around their own manifesto, I can’t imagine a very good future. Mozilla did acknowledge that they fucked up, but the apology seems rather half-hearted at best. I know I have disabled Shield Studies, and until I see some evidence that a genuine attempt is being made to restore user trust, I will remain skeptical of Mozilla’s motives.
There was once a time when the internet was just beginning to overcome its wild wild west nature, and sites were leaning toward HTML spec compliance in lieu of (or, more accurately, I suppose, in addition to) Internet Explorer’s way of doing things. Windows users in the know turned to Firefox; Mac users were okay sticking with Safari, but they were still few and far between. Firefox was like the saving grace of the browser world. It was known for leaking memory like a sieve, but it was still safer and more standards-compliant than IE. Time went on, and Chrome happened. Compared to Chrome, Firefox was slow, ugly, lacking in convenience features, it had a lackluster search bar, and that damn memory leak never went away. Firefox largely became relegated to serious FOSS nerds and non-techies whose IT friends told them it was the only real browser a decade ago.
I occasionally installed/updated Firefox for the sake of testing, and these past few years it only got worse. The focus seemed to be goofy UI elements over performance. It got uglier, less pleasant to use, and more sluggish. I assumed it was destined to become relegated to Linux installs. It just… was not palatable. I honestly never expected to recommend Firefox again, and in fact when I did just that to a fellow IT type he assumed that I was drunk on cheap-ass rum.
Firefox 57 introduces a new, clean UI (Photon); and a new, incredibly quick rendering engine. I can’t tell if the rendering engine is just a new version of Gecko, or if the engine itself is called Quantum (the overall new iteration of the browser is known as Quantum), but I do know it’s very snappy. I’m not sure if it is, but it feels faster than Chrome on all but the lowest-end Windows and macOS machines that I’ve been testing it on. It still consumes more memory than other browsers I’ve pitted it against, and its sandboxing and multiprocess support are a work in progress. The UI looks more at home on Win 10 than macOS, but in either case it looks a hell of a lot better than the old UI, and it fades into the background well enough. On very low-end machines (like a Celeron N2840 2.16GHz 2GB Win 8 HP Stream), Firefox feels more sluggish than Chrome – and this sluggishness seems related to the UI rather than the rendering engine.
I’ve been using Quantum (in beta) for a while, alongside Chrome, and that’s really what I want to attempt to get at here. Both have capable UIs, excellent renderers, and excellent multi-device experiences. I don’t particularly like Safari’s UI, but even if I did the UX doesn’t live up to my needs simply because it’s vendor-dependent (while not platform-dependent, the only platforms are Apple’s), and I want to be able to sync things across my Windows, macOS, iOS, and Linux environments. Chrome historically had the most impressive multi-device experience, but I think Firefox has surpassed it – though both are functional. So it’s starting to come down to the small implementation details that really make a user experience pleasant.
As a keyboard user, Firefox wins. Firefox and Chrome both have keyboard cursor modes, where one can navigate a page entirely via cursor keys and a visible cursor. This is an accessibility win, but very inefficient compared to a pointing device. Firefox, however, has another good trick – ‘Search for text when you type’, previously known as Type Ahead Find (I think, I know it was grammatically mysterious like that). So long as the focus is on the body, and not a textbox, typing anything begins a search. Ctrl-G or Cmd-G goes to the next hit, and Enter ‘clicks’ it. Prefacing the search with a ‘ restricts it to links. It makes for an incredibly efficient navigation method. Chrome has some extensions that work similarly, but I never got on with them and I definitely prefer an inbuilt solution.
Chrome’s search/URL bar is way better. It seems to automatically pick up new search agents, and they are automatically available when you start typing the respective URL. One hits tab to switch from URL entry to searching the respective site, and it works seamlessly and effortlessly. All custom search agents in Firefox, by contrast, must be set up in preferences. You don’t get a seamless switch from URL to search, but instead must set up search prefixes. So, on Chrome, I start typing ‘amazon.com’, and at any point in the process, I hit tab, and start searching Amazon. With Firefox, I have to have set up a prefix like ‘am’, and remember to do a search like ‘am hello kitty mug’ to get the search results I want. It is not user-friendly, it is not seamless, and it just feels… ancient. Chrome’s method also allows for autocomplete/instant search for these providers, which is only a feature you get with your main search engine in Firefox. It is actually far superior to simply not use this feature in Firefox and use DuckDuckGo bangs instead. The horribly weak search box alone could drive me back to Chrome.
Chrome used to go back or forward (history-wise) if you overscrolled far enough left or right – much like how Chrome mobile works. This no longer seems to work on Chrome desktop, and it doesn’t work on Firefox either. I guess I’m grumpier at Google for teasing and taking away. I know it was a nearly-undiscoverable UI feature, and probably frustrated users who didn’t know why they were jumping around, but it freed up mouse buttons.
I don’t know how to feel about Pocket vs. Google’s ‘save for later’ type solution. Google’s only seems to come up on mobile. Pocket is a separate service, and without doing additional research, it’s unclear how Mozilla ties into it (they bought the service at some point). At least with Google you know you’re the product.
I have had basically no luck streaming on Firefox. Audio streams simply don’t start playing; YouTube and Hulu play for a few seconds and then blank and stop. I assume this will be fixed fairly quickly, but it’s bad right now.
Live Bookmarks are a thing that I think Safari used to do, too? Basically you can have an RSS feed turn into a bookmark folder, and it’s pretty handy. Firefox does this; Chrome has no inbuilt RSS capability. Firefox doesn’t recognize JSON Feed, which makes it a half-solution, which makes it a non-solution to me. But, it’s a cool feature. I would love to see a more full-featured feed reader built in.
Firefox can push URLs to another device. This is something that I have long wished Chrome would do. Having shared history and being able to pull a URL from another device is nice, but if I’m at work and know I want to read something later, pushing it to my home computer is far superior.
I’ll need to revisit this once I test out Firefox on mobile (my iOS is too far out of date, and I’m not ready to make the leap to 11 yet). As far as the desktop experience is concerned, though, Quantum is a really, really good browser. I’m increasingly using it over Chrome. The UI leaves a bit to be desired, and the URL/search bar is terrible, but the snappiness and keyboard-friendliness are huge wins.
I’ve been testing out Firefox Quantum recently, which is a post for another day, but it made me realize one thing: this site right here barely functioned for anyone using Firefox. Either Quantum or the old engine (Gecko? Is Quantum a replacement for Gecko or a version of it?). Frankly, it’s much stricter than I would have imagined, and assuming that something that functions fine in IE/Edge and Chrome/Safari would also function fine in Firefox was… not a safe assumption, apparently. Here are a few things that I’ve fixed over the past few days, some related to Firefox and others not.
- The width/height of SVGs can’t be specified in
rem units, which makes sense upon reflection. An SVG is really a standalone bit of XML that just happens to be acceptable to dump into an HTML file. I was thinking in HTML tidiness terms and declaring
width="auto" height="13rem", but obviously something that hypothetically can stand alone has no master em to be relative to. It’s definitely not valid SVG, but I never thought twice about it since it seemed like a best practice from an HTML standpoint, and Chrome, Safari, IE, and Edge all handled it as expected. Firefox rendered at 100% width and the height to match, and those suckers got big. I temporarily slipped in a JS patch, but I think I have now edited every post to use ems instead of rems.
- My SVG-in-a-data-URI-in-CSS didn’t work, because I wasn’t properly percent-encoding the data (notably, the hashes and percent signs). This, again, makes perfect sense but I hadn’t really thought about it because WebKit-based browsers didn’t care. This also failed in IE/Edge, I just never noticed, largely because I had only made use of SVGs in this way in one post – A chessboard for pebbling. Luckily, none of the browsers tested seemed to care about spaces. Also, protip: do
- My drawers got stuck, or, you know, just didn’t work. Those drawers up top, when you click ‘categories’, or the like. They didn’t work because I was using
addRule, which I guess Firefox never supported.
insertRule, while far more obnoxious, is also the more ubiquitous solution. I temporarily had a wedge in there to conditionally choose between the two, but I think that only matters for very old IEs and… I have to draw the line somewhere. Unfortunately, the drawers are very hackish no matter what I do; injecting rules into stylesheets is not something I would recommend, but here we are.
- Fixing the drawers broke the drawers on iOS. Which was super frustrating, because
insertRule should have been pretty much universal. I had never attempted to use the iOS web inspector before, and there is a reason for that: it is a fucking terrible experience. You have to use Safari on the phone, you have to tether the phone to your Mac, and then you have to use Safari on your Mac. I don’t like doing any of these things, and juggling them all while trying to get work done is, well, fucking terrible. Anyway, it turns out iOS’s version of WebKit doesn’t like injecting CSS at index 0 for some reason. So, now I do index 1, who cares, does not matter. Index -1 is supposed to place the rule at the end, or so I thought, but it overflows on everything I’ve tried it on, with ‘the end’ being like 4 billion elements deep. Again, don’t care.
- So now the drawers work, but still have a weird graphical glitch which isn’t new, I just haven’t solved it yet – on iOS, the text is very large when you open up a drawer. The drawer is sized such that the text is the right size, and on Chrome (but not Safari) it jumps back to the right size after scrolling. Very odd.
- Unrelated, but I patched some other things in my Hugo template while I was meddling, like having a really broken process for
<meta name="description"> before, which I probably half-assed because when I think about meta tags I start thinking about SEO, and then I vomit. But, I tweaked this, and hopefully I’ll have some reasonable descriptions in the Googleverse at least.
- I also started fixing some minor issues with some posts, primarily capitalization of categories, since the drawer doodad lowercases them anyway. But Hugo inexplicably has no inbuilt case-insensitive sort, and therefore sorts all uppercase letters before all lowercase letters. You can see this by opening the categories drawer up top and seeing ‘svg’ before a bunch of ‘a’s. I need to make a decision on how to handle this soon – lowercase all of my categories, sacrificing semantics for aesthetics; or use an awful fucking hack. I mostly included this list item for the sake of not losing that link.
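For what it’s worth, the percent-encoding fix for SVG data URIs can be sketched as a tiny helper (the function name is mine, and encodeURIComponent is stricter than strictly necessary, but it reliably catches the troublemakers like # and %):

```javascript
// Wrap an inline SVG string as a CSS url() data URI, percent-encoding
// the characters (notably '#' and '%') that Firefox and IE/Edge
// reject when left bare. encodeURIComponent also encodes spaces,
// which is harmless.
function svgToDataUri(svg) {
  return 'url("data:image/svg+xml,' + encodeURIComponent(svg) + '")';
}

// e.g., for a stylesheet background-image:
// svgToDataUri('<svg xmlns="http://www.w3.org/2000/svg" fill="#fff"></svg>')
```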
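And the rule-injection dance for the drawers can be sketched roughly like this (injectRule and the clamped index are my own illustration, not the actual code running on this site):

```javascript
// Inject a CSS rule into the first stylesheet, dodging the quirks
// described above: addRule is the nonstandard legacy API, and some
// iOS WebKit builds balk at inserting at index 0.
function injectRule(ruleText) {
  var sheet = document.styleSheets[0];
  // Aim for index 1 to sidestep the iOS index-0 issue, but clamp to
  // the current rule count so the index is always valid.
  var index = Math.min(1, sheet.cssRules.length);
  if (typeof sheet.insertRule === 'function') {
    return sheet.insertRule(ruleText, index);
  }
  // Legacy fallback: addRule takes selector and body separately.
  var brace = ruleText.indexOf('{');
  return sheet.addRule(
    ruleText.slice(0, brace).trim(),
    ruleText.slice(brace + 1, ruleText.lastIndexOf('}')).trim(),
    index
  );
}

// e.g. injectRule('.drawer.open { max-height: 20em; }');
```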
Well, this sucks. My host, NFSN, is doing a major overhaul to their pricing scheme simply because the internet has become such a horrible hotbed of malice. To be clear, when I say ‘this sucks’, I don’t mean any negativity toward NFSN. The article link up there goes to their blog post explaining the matter, and it frankly seemed inevitable that fighting DDOS attacks would catch up to their pricing scheme. Previously, if you had a static site with low bandwidth and storage, you could probably get a year out of a quarter (domain registration not included, of course). The new plan allows for basically a $3.65 annual minimum which is still impressive (especially given what NFSN offers). But it’s a bummer that it’s come to this.
I would like to reiterate that this is not a complaint against NFSN. I will continue to use them for hosting, I will continue to recommend them, I will continue to praise them. I believe this is a necessary move. I’m just really, really pissed off that this is where we are with the internet. I don’t know what’s going on behind the scenes as far as law enforcement, but the internet is a global network (really?) and that’s not an easy problem to solve. I just hope something is happening to clean this wasteland up, because the advancements we’ve made in the information age are too important to bury under a sheet of malice.