Dotfile highlights: .vimrc

I use zsh, and portability across Darwin, Ubuntu, Red Hat, cygwin, WSL, various gvims, etc. means I may have pasted something in that’s system-specific by accident.

New series time, I guess! I thought for the benefit of my future self, as well as anyone who might wander through these parts, there might be value in documenting some of the more interesting bits of my various dotfiles (or other config files). First up is .vimrc, and while I have plenty of important yet trivial things set in there (like set shell=zsh and mitigating a security risk with set modelines=0), I don’t intend to go into anything that’s that straightforward. But things like:

"uncomment this on a terminal that supports italic ctrl codes
"but doesn't have a termcap file that reports them
"set t_ZH=^[[3m
"set t_ZR=^[[23m

…are a bit more interesting. I do attempt to maintain fairly portable dotfiles, which means occasionally some of the more meaningful bits start their lives commented out.

Generally speaking, I leave word wrapping on, and I don’t hard wrap anything1. I genuinely do not understand the continuing practice of hard wrapping in 2018. Even notepad.exe soft wraps. I like my indicator to be an ellipsis, and I need to set some other things related to how wrapped lines are displayed:

"wrap lines, wrap them at logical breaks, adjust the indicator
set wrap
if has("linebreak")
	set linebreak
	set showbreak=…\ \ 
	set breakindentopt=shift:1,sbr
endif

Note that there are two escaped spaces after the ellipsis in showbreak. I can easily see this trailing space because of set listchars=eol:↲,tab:→\ ,nbsp:·,trail:·,extends:…,precedes:…. I use a bent arrow in lieu of an actual LFCR symbol for the sake of portability. I use ellipses again for the ‘more stuff this way’ indicators on the rare occasions I turn wrapping off (set sidescroll=1 sidescrolloff=1 for basic unwrapped sanity). I use middots for both trailing and non-breaking spaces; either one shows me there’s something space-related happening. I also only set list if &t_Co==256, because that would get distracting quickly on a 16-color terminal.
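
Pieced together from the settings above, the list-related bits come out roughly like this (a sketch, not a verbatim excerpt):

"only use listchars on a capable terminal; it would get distracting at 16 colors
if &t_Co == 256
	set list
	set listchars=eol:↲,tab:→\ ,nbsp:·,trail:·,extends:…,precedes:…
endif
"basic sanity for the rare occasions wrap is off
set sidescroll=1 sidescrolloff=1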

Mouse handling isn’t necessarily a given:

if has("mouse") && (&ttymouse=="xterm" || &ttymouse=="xterm2")
	set mouse=a "all mouse reporting.
endif

I’m not entirely sure why I check for xterm/2. I would think it would be enough to check that it isn’t null. I may need to look into this. At any rate, the variable doesn’t exist if not compiled with +mouse, and compiling with +mouse obviously doesn’t guarantee the termcap is there, so two separate checks are necessary.

I like my cursors to be different in normal and insert modes, which doesn’t happen by default on cygwin/mintty. So,

"test for cygwin; not sure if we can test for mintty specifically
"set up block/i cursor
if has("win32unix")
	let &t_ti.="\e[1 q"
	let &t_SI.="\e[5 q"
	let &t_EI.="\e[1 q"
	let &t_te.="\e[0 q"
endif

Trivial, but very important to me:

"make ctrl-l & ctrl-z work in insert mode; these are crucial
imap <C-L> <C-O><C-L>
imap <C-Z> <C-O><C-Z>

I multitask w/ Unix job control constantly, and hugo server’s verbosity upon file write means I’m refreshing the display fairly often. Whacking Ctrl-O before Ctrl-L or Ctrl-Z is easy enough, but I do it enough that I’d prefer to simplify.

I have some stuff in for handling menus on the CLI, but I realize I basically never use it… So while it may be interesting, it’s probably not useful. Learning how to do things in vim the vim way is generally preferable. So, finally, here we have my status line:

if has("statusline")
	set laststatus=2
	set statusline=%{winnr()}%<:%f\ %h%m%r%=%y%{\"[\".(&fenc==\"\"?&enc:&fenc).((exists(\"+bomb\")\ &&\ &bomb)?\",B\":\"\").\"]\ \"}%k\ %-14.(%l,%c%V%)\ %P
endif

I don’t like my status line to be too fancy, or rely on anything nonstandard. But there are a few things here which are quite important to me. First, I start with the window number. This means when I have a bunch of splits, I can easily identify which I want to switch to with (say) 2<C-W>w. I forget what is shown by default, but toward my right side I show edited/not, detected filetype, file encoding, and presence or lack thereof of BOM. Here’s a sample:

2<hfl.com/content/post/2018-02/vimrc.md [+][markdown][utf-8]  65,6           Bot

That’s about everything notable from my .vimrc. Obviously, I set my colorscheme, I set up some defaults for printing, I set a few system-dependent things, I set some things to pretty up folds. I set spell; display=lastline,uhex; syntax on; filetype on; undofile; backspace=indent,eol,start; confirm; timeoutlen=300. I would hesitantly recommend new users investigate Tim Pope’s sensible.vim, though I fundamentally disagree with some of his ideas on sensibility (incsearch?2 autoread? Madness).
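
Spelled out, that last grab bag of settings amounts to something like:

set spell
set display=lastline,uhex
syntax on
filetype on
set undofile
set backspace=indent,eol,start
set confirm
set timeoutlen=300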


Personal Log

For reasons that are not really relevant to this post, I am in search of a good solution for a personal log or journal type thing. Essentially, my goal is to be able to keep a record of certain events occurring, with a timestamp and brief explanation. Things like tags would be great, or fields for additional circumstances. Ideally, it’ll be portable and easily synced between several machines. Setting up an SQLite database would be a great solution, except that merge/sync issue sounds like a bit of a nightmare. jrnl looks really neat, but I haven’t been able to try it yet due to a borked homebrew on my home Mac1 and a lack of pip on my main Win machine. While syncing this could also be a chore, and there would be no mobile version, the interface itself is so straightforward that writing an entry could be done in any text editor, and then fed line-by-line into the command.

But I haven’t settled on anything yet, so I was just kind of maintaining a few separate text files using vim (Buffer Editor on iOS), and figuring I’d cat them together at some point. But I got to thinking about ways I can make my life easier (not having to manually insert entries at the right spot chronologically, etc.), and came to a few conclusions about ways to maintain a quick and dirty manual log file.

First thing was to standardize and always use a full date/timestamp. Originally I was doing something like:

2018-01-29:
    8.05a, sat on a chair.
    10.30p, ate a banana.

2018-01-30:
    [...]

…which is very hard to sort. So I decided to simply preface every single entry with an ISO-formatted stamp down to the minute, and then delimit the entry with a tab (2018-01-29T22.30 ate a banana.). As a matter of principle, I don’t like customizing my environment too much, or in ways that will lead to my forgetting how to do things without customization. I don’t, therefore, have many custom mappings in vim, but why not simplify adding the current date and/or time:

"insert time and date in insert mode
imap <Leader>d <C-R>=strftime('%F')<CR>
imap <Leader>t <C-R>=strftime('%R')<CR>
imap <Leader>dt <C-R>=strftime('%FT%R')<CR>

<Leader>d for ISO-formatted date, <Leader>t for ISO-formatted time (down to minutes), <Leader>dt for both, separated by a ’T’. If everything is formatted the same, with lines beginning in ISO time formats, then every entry can readily be sorted. Sorting is simple in vim: :sort or, to drop duplicate lines, :sort u. I think it’s more likely that the merge operation would happen outside of vim, in which case we’d use the sort command in the same way: cat log1.tsv log2.tsv >> log.tsv && sort -u -o log.tsv log.tsv. Sorting in place like this was a new discovery for me; I had always used temporary files before, but apparently if the output file to sort (specified by -o) is the same as the input file, it handles all the temporary file stuff on its own.
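
As a contrived but concrete example, suppose two machines each kept their own log, holding the entries from earlier rewritten in the timestamp-plus-tab format:

# log1.tsv contains: 2018-01-29T08.05<Tab>sat on a chair.
# log2.tsv contains: 2018-01-29T22.30<Tab>ate a banana.
cat log1.tsv log2.tsv >> log.tsv
sort -u -o log.tsv log.tsv

Afterward, log.tsv holds every unique entry in chronological order, since the ISO timestamps sort lexically.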

I think it’d be neat to establish a file format for this in vim. Sort upon opening (with optional uniqueness filtering). Automatically timestamp each newline. Perhaps have some settings to visually reformat the timestamp to something friendlier while retaining ISO in the background (using conceal?). The whole thing is admittedly very simple and straightforward, but it’s a process that seems worthwhile to think about and optimize. While most journaling solutions are much more complicated, I think there is a lot of value in a simple timestamped list of events for tracking certain behaviors, etc. I’m a bit surprised there isn’t more out there for it.
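
I haven’t built any of this, but a first stab at the autocommand side might look something like the following sketch (the *.log.tsv pattern is purely a placeholder, and the conceal idea is left out entirely):

augroup tslog
	autocmd!
	"sort on open, dropping duplicate lines
	autocmd BufReadPost *.log.tsv silent! %sort u
	"have Enter stamp each new line with the ISO date/time and the tab delimiter
	autocmd BufNewFile,BufReadPost *.log.tsv inoremap <buffer> <CR> <CR><C-R>=strftime('%FT%R')<CR><Tab>
augroup END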


Golfing in Eukleides

Eukleides is decidedly not a golfing language, but when a geometry-related question came up on PPCG, I had to give it a shot. Eukleides can be quite rigid; for starters it is very strongly typed. Functions are declared by what they intend to return, so while set would be the shortest way to declare a function, it can’t really be exploited (unless returning a set is, in fact, desired). Speaking of sets, one thing that is potentially golfable is ‘casting’ a point to a set. Given a point, p, attempting a setwise operation on p will fail because a point does not automatically cast to a set (strict typing). p=set(p) will overwrite point p with a single-item set containing the point that was p. If, however, it is okay to have two copies of the point in the set, p=p.p is three bytes shorter.

If user input is required, the command number("prompt") reads a number from STDIN. The string for the prompt is required, though it can be empty (""). Thus, if more than four such inputs (or empty strings for other purposes) are required, it saves bytes to assign a variable with a one-letter name to an empty string.

Whitespace is generally unavoidable, but I did come to realize that boolean operators do not need to be preceded by whitespace if preceded by a number. So, if a==7or a==6 is perfectly valid. aor a, 7ora, 7 or6 are all invalid, however. This may be an interpreter bug, but for the time being it is an exploitable byte.

Finally, loci. Loci are akin to for-loops that automagically make sets. Unfortunately, they don’t seem to care much for referencing themselves mid-loop, which meant that I couldn’t exploit how short of a construction they are compared to manually creating a set in a for-loop.

This was a fun exercise, and just goes to show that if you poke around at any language enough, you’ll find various quirks that may prove useful given some ridiculous situation or another.


Separating cd and pushd

While much of this post applies to bash, I am a zsh user and this was written from that standpoint.

One piece of advice that I’ve seen a lot in discussions on really tricking out one’s UNIX (&c.) shell is either setting an alias from cd to pushd or turning on a shell option that accomplishes this1. Sometimes the plan includes other aliases or functions to move around on the directory stack, and the general sell is that now you have something akin to back/forward buttons in a web browser. This all seems to be based on the false premise that pushd is better than cd, when the reality is that they simply serve different purposes. I think that taking cd out of the picture and throwing everything onto the directory stack greatly reduces the stack’s usefulness. So this strategy simultaneously restricts the user to one paradigm and then makes that paradigm worse.

It’s worth starting from the beginning here. cd changes directories and that’s about it. You start here, tell it to go there, now you’re there. pushd does the same thing, but instead of just launching the previous directory into the ether, it pushes it onto a last in, first out directory stack. pushd is helped by two other commands – popd to pop a directory from the stack, and dirs to view the stack.

% mkdir foo bar baz
% for i in ./*; pushd $i && pushd
% dirs -v
0       ~/test
1       ~/test/foo
2       ~/test/baz
3       ~/test/bar

dirs is a useful enough command, but its -v option makes its output considerably better. The leading number is where a given entry is on the stack; this number is globbed with a tilde. ~0 is always the present working directory ($PWD). You’ll see in my little snippet above that in addition to pushding my way into the three directories, I also call pushd on its own, with no arguments. This basically just instructs pushd to flip between ~0 and ~1:

% pushd; dirs -v
0       ~/test/foo
1       ~/test
2       ~/test/baz
3       ~/test/bar

This is very handy when working between two directories, and one reason why I think having a deliberate and curated directory stack is far more useful than every directory you’ve ever cded into. The other big reason is the tilde glob:

% touch ~3/xyzzy
% find .. -name xyzzy
../bar/xyzzy

So the directory stack allows you to do two important things: easily jump between predetermined directories, and easily access predetermined directories. This feels much more like a bookmark situation than a history situation. And while zsh (and presumably bash) has other tricks up its sleeves that let users make easy use of predetermined directories, the directory stack does this very well in a temporary, ad hoc fashion. cd actually gives us one level of history as well, via the variable $OLDPWD, which is set whenever $PWD changes. One can do cd - to jump back to $OLDPWD.
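
A contrived round trip, which leaves both the stack and our location exactly as they were:

% cd /tmp
% cd -

The second command is doing nothing more than cd $OLDPWD, and it works the same whether or not anything is on the directory stack.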

zsh has one more trick up its sleeve when it comes to the directory stack. Using the tilde notation, we can easily change into directories from our stack. But since this is basically just a glob, the shell evaluates it and says ‘okay, we’re going here now’:

% pushd ~1; dirs -v
0       ~/test
1       ~/test/foo
2       ~/test
3       ~/test/baz
4       ~/test/bar

Doing this can create a lot of redundant entries on the stack, and then we start to get back to the cluttered stack problem that started this whole thing. But the cd and pushd builtins in zsh know another special sort of notation, plus and minus. Plus counts up from zero (and therefore lines up with the numbers used in tilde notation and displayed using dirs -v), whereas minus counts backward from the bottom of the stack. Using this notation with either cd or pushd (it is a feature of these builtins and not a true glob) essentially pops the selected item off of the stack before evaluating it.

% cd +3; dirs -v
0       ~/test/baz
1       ~/test/foo
2       ~/test
3       ~/test/bar
% pushd -0; dirs -v
0       ~/test/bar
1       ~/test/baz
2       ~/test/foo
3       ~/test

…and this pretty much brings the stack concept full circle, and hopefully hits home why it makes far more sense to curate this stack versus automatically populating it whenever you change directories.


Extracting JPEGs from PDFs

I’m not really making a series of ‘things your hex editor is good for’, I swear, but one more use case that comes up frequently enough in my day-to-day life is extracting JPEGs from PDF files. This can be scripted simply enough, but I find doing these things manually from time to time to be a valuable learning experience.

PDF is a heck of a file format, but we really only need to know a few things right now. PDFs are made up of objects, and some of these objects (JPEGs included) are stream objects. Stream objects always have some relevant data stored in a thing called a dictionary, and this includes two bits of data we need to get our JPEG: the Filter tells the viewer how to interpret the stream, and the Length tells us how long, in bytes, the data is. The filter for JPEGs is ‘DCTDecode’, so we can open up a PDF in a hex editor (I’ll be using bvi again) and search for this string to find a JPEG. Before we do, one final thing we should know is that streams begin immediately after an End Of Line (EOL) marker following the word ‘stream’. Per the spec, this EOL is either the two bytes 0D 0A (CR LF) or a lone LF; in the file below it’s the two-byte version.

/DCTDecode<Enter>

00002E80  6C 74 65 72 2F 44 43 54 44 65 63 6F 64 65 2F 48 lter/DCTDecode/H
00002E90  65 69 67 68 74 20 31 31 39 2F 4C 65 6E 67 74 68 eight 119/Length
00002EA0  20 35 35 33 33 2F 4E 61 6D 65 2F 58 2F 53 75 62  5533/Name/X/Sub
00002EB0  74 79 70 65 2F 49 6D 61 67 65 2F 54 79 70 65 2F type/Image/Type/
00002EC0  58 4F 62 6A 65 63 74 2F 57 69 64 74 68 20 31 32 XObject/Width 12
00002ED0  31 3E 3E 73 74 72 65 61 6D 0D 0A FF D8 FF EE 00 1>>stream.......
/DCTDecode                                     00002E85  \104 0x44  68 'D'

This finds the next ‘DCTDecode’ stream object and puts us on that leading ’D’, byte offset 2E85 (decimal 11909) in this instance. Glancing ahead a bit, we can see that the Length is 5533 bytes. If we then search for ‘stream’ (/stream<Enter>), we’ll be placed at byte offset 2ED3 (decimal 11987). The word ‘stream’ is 6 bytes, and we need to add an additional 2 bytes for the EOL. This means our JPEG data starts at byte offset 11995 and is 5533 bytes long.

How, then, to extract this data? It may not be everyone’s favorite tool, but dd fits the bill perfectly. It allows us to take an input file, start at a byte offset, read a set number of bytes, and output the resulting chunk of the file – just what we want. Assuming our file is ‘test.pdf,’ we can output ‘test.jpg’ like…

dd bs=1 skip=11995 count=5533 if=test.pdf of=test.jpg

bs=1 sets our block size to 1 byte (which is important; dd is largely used for volume-level operations where blocks are much larger). skip skips ahead that many bytes, which serves as our initial offset. count tells it how many bytes to read. if and of are the input and output files respectively. dd doesn’t follow normal Unix flag conventions (no prefixed dashes, and those equals signs are quite atypical), and it is quite powerful, so it’s always worth reading the manpage.
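
If you’d rather not do that offset arithmetic in your head, the shell will happily do it inline (same numbers as above: the ‘stream’ match at 11987, plus six bytes for the keyword and two for the CR LF):

dd bs=1 skip=$((11987 + 6 + 2)) count=5533 if=test.pdf of=test.jpg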


Binaries and hex editors

Talking about certain files as ‘binaries’ is a funny thing. All files are ultimately binary, after all; it’s just a matter of whether or not a file is encoded as text. Even in the world of text, an editor or viewer needs to know how the text is encoded, what bytes map to what characters. Is a file ASCII, UTF-8, PostScript? Once we know something is text or not text, it’s still likely to be made to the standards of a specific format, unless it’s nothing but plain text. Markdown, HTML, even PDF1 are human-readable text to an extent, with rules about how their content is interpreted. A human as well as a web browser knows that a <p> starts a paragraph, and this paragraph continues until a matching </p> is found.

If we open a binary in a text editor, we’ll see a lot of familiar characters, where data happens to coincide with printable ASCII. We’ll also see a lot of gibberish, and in fact some of the characters may cause a terminal to behave erratically. Opening a binary in a hex editor makes a little more sense of it, but still leaves a lot to be answered. In one column, we’ll see a lot of hexadecimal values; in another we’ll see the same sort of gibberish we would have seen in our text editor. In some sort of status display, we’ll also generally see a few more bits of information – what byte we’re on, its hex value, its decimal value, etc. Why would we ever want to do this? Well, among other things, binary file formats have rules as well, and if we know these rules, we can inspect and navigate them much like an HTML file. Take this piece of a PNG file, as it would appear in bvi (my hex editor of choice).

00000000  89 50 4E 47 0D 0A 1A 0A 00 00 00 0D 49 48 44 52 .PNG........IHDR
00000010  00 00 02 44 00 00 01 04 08 06 00 00 00 C9 50 2B ...D..........P+
00000020  AB 00 00 00 04 73 42 49 54 08 08 08 08 7C 08 64 .....sBIT....|.d
00000030  88 00 00 00 09 70 48 59 73 00 00 0B 12 00 00 0B .....pHYs.......
00000040  12 01 D2 DD 7E FC 00 00 00 1C 74 45 58 74 53 6F ....~.....tEXtSo
"ban_ln_560_NLW.png" 14498451 bytes    00000000 10001001 \211 0x89 137 NUL

Of lynx and curl

I use zsh, and some aspects of this article may be zsh specific, particularly the substitution trick. bash has similar ways to achieve these goals, but I won’t be going into anything bash-specific here.

At work, I was recently tasked with archiving several thousand records from a soon-to-be-mercifully-destroyed Lotus Notes database. Why they didn’t simply ask the DBA to do this is beyond me (just kidding, it almost certainly has to do with my time being less valuable, results be damned). No mind, however, as the puzzle was a welcome one, as was the opportunity to exercise my Unix (well, cygwin in this case) chops a bit. The exercise became a simple one once I realized the database had a web server available to me, and that copies of the individual record web views would suffice. A simple pairing of lynx and curl easily got me what I needed, and I realized that I use these two in tandem quite often. Here’s the breakdown:

There are two basic steps to this process: use lynx to generate a list of links, and use curl to download them. There are other means of doing this, particularly when multiple depths need to be spidered. I like the control and safety afforded to me by this two-step process, however, so for situations where it works, it tends to be my go-to. To start, lynx --dump 'http://brhfl.com' will print out a clean, human-readable version of my homepage, with a list of all the links at the bottom, formatted like

1. http://brhfl.com/#content
2. http://brhfl.com/
3. http://brhfl.com/./about/
4. http://brhfl.com/./categories/
5. http://brhfl.com/./post/

…and so on (note to self: those ./ URLs function fine, and web browsers seem to transparently ignore them, but… maybe fix that?). For our purposes, we don’t want the formatted page, nor do we want the reference numbers. awk helps us here: lynx --dump 'http://brhfl.com' | awk '/http/{print $2}' looks for lines containing ‘http’, and prints only the second field of each such line (awk’s default field separator being whitespace).

http://brhfl.com/#content
http://brhfl.com/
http://brhfl.com/./about/
http://brhfl.com/./categories/
http://brhfl.com/./post/

…et cetera. For my purposes, I was able to single out only the links to records in my database by matching a second pattern. If we only wanted to return links to my ‘categories’ pages, we could do lynx --dump 'http://brhfl.com' | awk '/http/&&/categories/{print $2}', using a boolean AND to match both patterns.

http://brhfl.com/./categories/
http://brhfl.com/./categories/apple/
http://brhfl.com/./categories/board-games/
http://brhfl.com/./categories/calculator/
http://brhfl.com/./categories/card-games/

…and so on. Belaboring this any further would be more a primer on awk than anything, but it is necessary1 for turning lynx --dump into a viable list of URLs. While this seems like a clumsy first step, it’s part of the reason I like this two-step approach: my list of URLs is a very real thing that can be reviewed, modified, filtered, &c. before curl ever downloads a byte. All of the above examples print to stdout, so something more like lynx --dump 'http://brhfl.com' | awk '/http/&&/categories/{print $2}' >> categories-urls would (appending to and not clobbering) store my URLs in a file. Then it’s on to curl. for i in $(< categories-urls); curl -O "$i" worked just fine2 for my database capture, but our example here would be less than ideal because of the pretty URLs. curl will, in fact, return

curl: Remote file name has no length!

…and stop right there. This is because the -O option simplifies things by saving the local copy of the file with the remote file’s name. If we want to (or need to) name the files ourselves, we use the lowercase -o filename instead. While this would be a great place to learn more about awk3, we can actually cheat a bit here and let the shell help us. zsh has a tail modifier (:t) built in, used much like basename to get the tail end of a path. Since URLs are just paths, we can do the same thing here. To test this, we can run for i in $(< categories-urls); echo ${i:t}.html and get

categories.html
apple.html
board-games.html
calculator.html
card-games.html

…blah, blah, blah. This seems to work, so all we need to do is plug it into our curl command, for i in $(< categories-urls); (curl -o "${i:t}".html "$i"; sleep 2). I added the two seconds of sleep when I did my db crawl so that I wasn’t hammering the aging server. I doubt it would have made a difference so long as I wasn’t making all of these requests in parallel, but I had other things to work on while it did its thing anyway.
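
For anyone who finds zsh’s short loop form cryptic, the same loop written out long-hand (still zsh, since ${i:t} is a zsh modifier) would be:

for i in $(< categories-urls); do
	curl -o "${i:t}".html "$i"
	sleep 2
done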

One more reason I like this approach to grabbing URLs – as we’re pulling things, we can very easily sort out the failed requests using curl -f, which returns a nonzero exit status upon failure. We can use this in tandem with the shell’s boolean OR to build a new list of URLs that have failed: (i="http://brhfl.com/fail"; curl -fo "${i:t}".html "$i" || echo "$i" >> failed-category-urls) gives us…

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (22) The requested URL returned error: 404 Not Found
~% < fail.html
zsh: no such file or directory: fail.html
zsh: exit 1      < fail.html
~% < failed-category-urls
http://brhfl.com/fail

…which we can then run through curl again, if we’d like, to get the resulting status codes of these URLs: for i in $(< failed-category-urls); (printf "$i", >> failed-category-status-codes.csv; curl -o /dev/null --location --silent --head --write-out '%{http_code}\n' "$i" >> failed-category-status-codes.csv)4. < failed-category-status-codes.csv in this case gives us

http://brhfl.com/fail,404

…which we’re free to do what we want with. Which, in this case, is probably nothing. But it’s a good one-liner anyway.


Making multiple directories with mkdir -p

I often have to create a handful of directories under one root directory. mkdir can take multiple arguments, of course, so one can do mkdir -p foo/bar foo/baz or mkdir foo !#:1/bar !#:1/baz (the latter, of course, would make more sense given a root directory with a longer name than ‘foo’). But a little trick that I feel slips past a lot of people is to use .. directory traversal to knock out a bunch of directories all in one pass. Since -p just makes whatever it needs to, and doesn’t care about whether or not any part of the directory you’re passing exists, mkdir -p foo/bar/../baz works to create foo/bar and foo/baz. This works for more complex structures as well, such as…

% mkdir -p top/mid-1/../mid-2/bottom-2/../../mid-3/bottom-3
% tree
.
└── top
    ├── mid-1
    ├── mid-2
    │   └── bottom-2
    └── mid-3
        └── bottom-3