Twenty years after

http://esr.ibiblio.org/?p=7244

I just shipped what was probably the silliest and most pointless software release of my career. But hey, it’s the reference implementation of a language and I’m funny that way.

Because I write compilers for fun, I have a standing offer out to reimplement any weird old language for which I am sent a sufficiently detailed softcopy spec. (I had to specify softcopy because scanning and typo-correcting hardcopy is too much work.)

In the quarter-century this offer has been active, I have (re)implemented at least the following: INTERCAL, the Michigan Algorithm Decoder, a pair of obscure 1960s teaching languages called CORC and CUPL, and an obscure computer-aided-instruction language called PILOT.

PILOT…that one was special. Not in a good way, alas. I don’t know where I bumped into a friend of the language’s implementor, but it was in 1991, when he had just succeeded in getting IEEE to issue a standard for it – IEEE Std 1154-1991. He gave me a copy of the standard.

I should have been clued in by the fact that he also gave me an errata sheet not much shorter than the standard. But the full horror did not come home to me until I sat down and had a good look at both documents – and, friends, PILOT’s design was exceeded in awfulness only by the sloppiness and vagueness of its standard. Even after the corrections.

But I had promised to do a reference implementation, and I did. Delivered it to the inventor’s friend. He couldn’t get it to work – some problem with the version of YACC he was using, if I recall correctly. It wasn’t something I could fix remotely, and I left it to him to figure out, being pretty disgusted with the project. I don’t know if he ever did.

I did fix a couple of minor bugs in my masters; I even shipped occasional releases until late 1996. Then…I let the code molder in a corner for twenty years.

But these things have a way of coming back on you. I got a set of fixes recently from one Frank J. Lhota, forward-porting it to use modern Bison and Flex versions. Dear sweet fornicating Goddess, that meant I’d have to…issue another release. Because it’s bad form to let fix patches drop on the floor pour discourager les autres.

So here it is. It does have one point of mild interest: the implementation is both an interpreter and a compiler (it’s a floor wax! It’s a dessert topping!) for the language – that is, it can either interpret the parsed syntax tree or generate and compile corresponding C code.

I devoutly hope I never again implement a language design as botched as PILOT. INTERCAL was supposed to be a joke…

Dilemmatizing the NRA

http://esr.ibiblio.org/?p=7239

So, the Washington Post publishes yet another bullshit article on gun policy.

In this one, the NRA is charged with racism because it doesn’t leap to defend the right of black men to bear arms without incurring a lethal level of police suspicion.

In a previous blog post, I considered some relevant numbers. At 12% of the population, blacks commit 50% of violent index crimes. If you restrict to males in the age range that concentrates criminal behavior, the numbers work out to a black suspect being a more likely homicidal threat to cops and public safety by at least 26:1.

I haven’t worked out how the conditional probabilities crunch out if you have the prior that your suspect is armed, but it probably makes that 26:1 ratio worse rather than better.

Police who react to a random black male behaving suspiciously, one who might be in the critical age range, as though he is a near-imminent lethal threat are being rational, not racist. They’re doing what crime statistics and street-level experience train them to do, and they’re right to do it. This was true even before the post-Ferguson wave of deliberate assassinations of police by blacks.

The NRA would, I’m sure, love to defend the RKBA of a black man who isn’t a thug or gangbanger. So would I. The trouble is that when you’re considering police stops nice cases like that are damned thin on the ground.

Seriously, the victims in these stop-and-shoot cases pretty much always turn out to have a history of violent behavior and rap sheets as long as your arm. Often (as in the recent Terence Crutcher case) there is PCP or some other dissociative anaesthetic in their system or within arm’s reach.

It’s hardly any wonder the NRA doesn’t want to spend reputational capital defending the RKBA of these mooks. I wouldn’t either in their shoes; this is not racism, it’s a very rational reluctance to get one’s cause entangled with the scum of the earth.

I cannot help but think that articles like the Post’s are intended to put the NRA on the horns of a dilemma: remain silent in these cases and be falsely accused of racism, or speak up and hear “Aha! So all that other posturing was bogus, you think it’s just fine for hardened criminals to have guns!”

Sigh…

Thinking like a master programmer, redux

http://esr.ibiblio.org/?p=7234

Yes, there was a bug in my vint64 encapsulation commit. I will neither confirm nor deny any conjecture that I left it in there deliberately to see who would be sharp enough to spot it. I will, however, note that it is a perfect tutorial example of how you should spot bugs, and of why revisions with a simple and provable relationship to their ancestors are best.

The following call in libntp/ntp_calendar.c is incorrect:

setvint64u(res, vint64s(res)-0x80000000);

Now consider the line this replaced:

res.Q_s -= 0x80000000;

And notice what that expands to, semantically, in C:

res.Q_s = res.Q_s - 0x80000000;

Spotted it yet?

My encapsulation patch is extremely regular in form. One of my blog commenters (the only one to spot the bug, so far) pointed out correctly that an ideal transformation of this kind looks like it was done with a text editor’s search-and-replace feature – and, in fact, I did most of it with regexp-replace commands in Emacs.

It’s good when your patches are this regular, because it means that you can spot bugs by looking for irregularities – places where a local change breaks a rule followed in the rest of the patch. Importantly, this way to spot defects works even when you don’t fully understand the code.

This is a major reason the code state after every change should have a single provable relationship to its antecedent – because if it has more than one change in it, telltale irregularities will be harder to see.

OK, here is the corrected form of the call:

setvint64u(res, vint64u(res)-0x80000000);

The one-character difference is that the correct inner call is to vint64u(), not vint64s(). You should have been able to spot this in one of a couple of ways.

One is by noticing that the original expression was doing unsigned arithmetic, so what is that call to get a signed value doing in there?

The even simpler way to spot the irregularity is to have noticed that in the rest of the diff there are no other calls like

setvint64X(res, vint64Y(res) … );

in which X and Y are unequal. There is a purely textual symmetry in the patch that this one statement breaks. Because the author was being careful about simplicity and provable relationships, that in itself should be enough to focus a reviewer’s suspicions even if the reviewer doesn’t know (or has forgotten) the meaning of the s and u suffixes.

I’m writing this as though the author and reviewer are different people, but these techniques for bug spotting – and, more importantly, these techniques for writing patches so bugs are easy to spot – apply even when you are your own reviewer and you are looking at a diff mere moments after you changed the code.

You get fast at coding, and you get good at doing it with a low defect rate, by developing the habits of mind that make self-checking like this easy. The faster you can self-check, the faster you can write while holding expected defect rates constant. The better you can self-check, the lower you can push your defect rate.

“To do large code changes correctly, factor them into a series of smaller steps such that each revision has a well-defined and provable relationship to the last” is good advice exactly because the “well-defined and provable” relationship creates regularities – invariants – that make buggy changes relatively easy to spot before you commit them.

I often go entire months per project without committing a bug to the repository. There have been good stretches on NTPsec in which my error rate was down around one introduced bug per quarter while I was coding at apparently breakneck speed. This is how I do that.

Having good tests – and the habit of adding a unit or regression test with every new feature or bug fix – helps a lot with that, of course. But prior to testing is good habits of mind. The combination of good habits of mind with good testing is not just additively effective, it’s multiplicatively so.
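To make that concrete, here is a minimal sketch of the sort of regression test you might add alongside such a change, pinning the new encapsulated call to the behavior of the line it replaced. The vint64 stand-in and accessor definitions below are hypothetical, included only so the example is self-contained; the real NTPsec type and test scaffolding differ in detail.

/* Regression-test sketch (hypothetical scaffolding, not the real NTPsec
 * test suite): check that the encapsulated form of the epoch-shift
 * arithmetic matches the original direct manipulation of the unsigned field. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in definitions; the real vint64 and its accessors live in the NTP tree. */
typedef struct { uint64_t Q_s; } vint64;
#define vint64u(v)        ((v).Q_s)
#define setvint64u(v, x)  ((v).Q_s = (uint64_t)(x))

int main(void)
{
    uint64_t samples[] = { 0, 1, 0x7fffffffu, 0x80000000u, 0xffffffffu,
                           0x123456789abcdef0u };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        vint64 res, old;
        setvint64u(res, samples[i]);
        old = res;

        setvint64u(res, vint64u(res) - 0x80000000);  /* new, encapsulated form */
        old.Q_s -= 0x80000000;                       /* original form it replaced */

        assert(vint64u(res) == old.Q_s);
    }
    return 0;
}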

Thinking like a master programmer

http://esr.ibiblio.org/?p=7230

To do large code changes correctly, factor them into a series of smaller steps such that each revision has a well-defined and provable relationship to the last.

(This is the closest I’ve ever come to a 1-sentence answer to the question “How the fsck do you manage to code with such ridiculously high speed and low defect frequency?” I was asked this yet again recently, and trying to translate the general principle into actionable advice has been on my mind. I have two particular NTPsec contributors in mind…)

So here’s a case study, and maybe your chance to catch me in a mistake.

NTP needs a 64-bit scalar type for calendar calculations; what it actually wants is 32 bits of seconds since a far-past epoch and 32 bits of fractional-second precision, which you can think of as a counter for units of 2^-32 seconds. (The details are a little messier than this, but never mind that for now.)
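As a concrete illustration (mine, not NTP code), here is how such a 32.32 split reads back as an ordinary count of seconds: the upper word holds whole seconds, and the lower word counts units of 2^-32 second.

#include <stdint.h>
#include <stdio.h>

/* Illustration only: interpret a 32.32 fixed-point timestamp.
 * 'secs' is whole seconds since the epoch; 'frac' counts 2^-32 second units. */
static double fixed32_32_to_seconds(uint32_t secs, uint32_t frac)
{
    return (double)secs + (double)frac / 4294967296.0;   /* divide by 2^32 */
}

int main(void)
{
    /* half a second past second 100: frac = 2^31 */
    printf("%.9f\n", fixed32_32_to_seconds(100, 0x80000000u));  /* prints 100.500000000 */
    return 0;
}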

Consequently, one of the archaisms in the NTP code is an internal type called vint64. It dates from the era of 32-bit machines (roughly 1977 to 2008). In those days you couldn’t assume your C compiler had int64_t or uint64_t (64-bit integer and unsigned-integer types). Even after the 64-bit hardware transition, it was some years before you could safely assume that compilers for the remaining 32-bit machines (like today’s Raspberry Pis) would support int64_t/uint64_t.

Thus, a vint64 is an NTP structure wrapping two 32-bit integers. It comes with a bunch of small functions that do 64-bit scalar arithmetic using it. Also, sadly, there was a lot of code using it that didn’t go through the functional interface, instead exposing the guts of the vint64 structure in unclean ways.
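To show the general shape of such a type, here is a hedged sketch with hypothetical names; the real vint64 and its helpers are more elaborate, but the flavor is the same.

#include <stdint.h>

/* Hypothetical illustration only; not the actual NTP definition. */
typedef struct {
    uint32_t hi;    /* upper 32 bits */
    uint32_t lo;    /* lower 32 bits */
} fake_vint64;

/* 64-bit addition the hard way: add the low words, then propagate the
 * carry into the high words. */
static fake_vint64 fake_vint64_add(fake_vint64 a, fake_vint64 b)
{
    fake_vint64 r;
    r.lo = a.lo + b.lo;                     /* wraps modulo 2^32 */
    r.hi = a.hi + b.hi + (r.lo < a.lo);     /* carry out of the low word */
    return r;
}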

This is, for several reasons, an obvious cleanup target. Today in 2016 we can assume that all compilers of interest to us have 64-bit scalars. In fact the NTP code itself has long assumed this, though the assumption is so well-hidden in the ISC library off to the side that many people who have worked in the main codebase probably do not know it’s there.

If all the vint64s in NTP became typedefs to a scalar 64-bit type, we could use native machine operations in most cases and replace a lot of function calls and ugly exposed guts with C’s arithmetic operators. The result would be more readable, less bulky, and more efficient. In this case we’d only pare away about 300 LOC, but relentless pursuit of such small improvements adds up to large ones.

The stupid way to do it would have been to try to go from vint64 to int64_t/uint64_t in one fell swoop. NSF and LF didn’t engage me to be that stupid.

Quoting myself: “A series of smaller steps such that each revision has a well-defined and provable relationship to the last.”

Generally, in cases like this, the thing to do is separate changing the interface from changing the implementation. So:

1. First, encapsulate vint64 into an abstract data type (ADT) with an entirely functional interface – un-expose the guts.

2. Then, change the implementation (struct to scalar), changing the ADT methods without disturbing any of the outside calls to them – if you have to do the latter, you failed step 1 and have to clean up your abstract data type.

3. Finally, hand-expand the function calls to native C scalar operations. Now you no longer have an ADT, but that’s OK; it was scaffolding. You knew you were going to discard it. (A code sketch of all three steps follows this list.)
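The sketch below reuses the accessor names that appear in the post above (setvint64u() and friends); the definitions are my own illustration of the pattern, not the actual NTPsec macros, which are more numerous and differ in detail.

#include <stdint.h>

/* Step 1 (illustrative): keep the struct, but route every read and write
 * through accessor macros so no caller touches the guts directly. */
typedef struct { uint64_t Q_s; } vint64_old;   /* crude stand-in; the real struct wraps two 32-bit words */
#define vint64u(v)        ((v).Q_s)
#define setvint64u(v, x)  ((v).Q_s = (uint64_t)(x))
/* ...plus a signed pair, vint64s()/setvint64s(), and whatever else callers need. */

/* Step 2 (illustrative): change the representation under the same interface. */
#undef vint64u
#undef setvint64u
typedef uint64_t vint64_new;
#define vint64u(v)        (v)
#define setvint64u(v, x)  ((v) = (uint64_t)(x))
#define vint64s(v)        ((int64_t)(v))
#define setvint64s(v, x)  ((v) = (uint64_t)(int64_t)(x))

/* Step 3 (illustrative): with a scalar underneath, hand-expand call sites
 * into plain C arithmetic, e.g.
 *     setvint64u(res, vint64u(res) - 0x80000000);
 * collapses to
 *     res -= 0x80000000;
 */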

The goal is that at each step it should be possible, and relatively easy, to eyeball-check that the transformation you did is correct. It helps a lot to have unit tests for the code you’re modifying – then one of your checks is that the unit tests don’t go sproing at any step. If you don’t have unit tests, write them. They’ll save your fallible ass. The better your unit tests are, the more time and pain you’ll save yourself in the long run.

OK, so here’s your chance to catch me in a mistake.

https://gitlab.com/NTPsec/ntpsec/commit/13fa1219f94d2b9ec00ae409ac4b54ee12b1e93f

That is the diff where I pull all the vint64 guts exposure into an ADT (done with macro calls, not true functions, but that’s a C implementation detail).

Can you find an error in this diff? If you decide not, how did it convince you? What properties of the diff are important?

(Don’t pass over that last question lightly. It’s central.)

If you’re feeling brave, try step 2. Start with ‘typedef uint64_t vint64;’, replacing the structure definition, and rewrite the ten macros near the beginning of the diff. (Hint: you’ll need two sets of them.)

Word to the newbies: this is how it’s done. Train your brain so that you analyze programming this way – mostly smooth sequences of refactoring steps with only occasional crisis points where you add a feature or change an assumption.

When you can do this at a microlevel, with code, you are inhabiting the mindset of a master programmer. When you can do it with larger design elements – data structures and entire code subsystems – you are getting inside system-architect skills.