Forwards and Backwards Pipes in Go

In the Go language, an often-asked question is “how do I tell if all the goroutines are done?”  I asked myself that recently, while porting a measurement library to use channels (pipes).  As the channels went to goroutines, how would I know when they were done and that I could exit?

The common answer I found was to use a semaphore, Go’s sync.WaitGroup: increment it before starting each goroutine and decrement it at the end of the goroutine. The main program can then wait for the count to drop to zero and only then exit. Exiting any earlier would kill the goroutines, discarding any work they hadn’t finished.
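A minimal sketch of that pattern (the function and variable names here are mine, not from the original library):

```go
package main

import (
	"fmt"
	"sync"
)

// squareSum squares each number in a separate goroutine and uses a
// sync.WaitGroup to wait until all of them are done before summing.
func squareSum(nums []int) int {
	var wg sync.WaitGroup
	results := make(chan int, len(nums))

	for _, n := range nums {
		wg.Add(1) // increment before starting the goroutine
		go func(n int) {
			defer wg.Done() // decrement when the goroutine finishes
			results <- n * n
		}(n)
	}

	wg.Wait()      // block until the count drops to zero
	close(results) // safe now: no goroutine will send again

	sum := 0
	for r := range results {
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(squareSum([]int{1, 2, 3})) // 1 + 4 + 9 = 14
}
```

Without the Wait call, main could return while the workers were still running, and their unfinished work would simply vanish.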

However, some of the elegant examples in books and talks didn’t include waitgroups. For example, Lexical Scanning in Go by Rob Pike didn’t use anything special, and neither did a tiny lexer and parser I had copied from his talk.

Instead my code started a lexer/parser goroutine and then looped, consuming the output, something like:

go run(l) // Runs the lexer/parser, sends to l.Pipe
for tok := range l.Pipe {
        slice = append(slice, tok)
        if tok.Typ == token.EOF {
                return slice
        }
}

The idiom here is to make the last step in the pipeline part of the mainline, and consume the material created in the pipe. In effect I’m building the pipeline protruding “backwards” from main(), instead of “forwards” and needing to have the main program wait for a semaphore.

I can’t always structure library code this way: in a number of cases I have needed to have the pipe proceed “forwards” from the mainline and I indeed use sync.WaitGroup there.

However, when I can, I like the idea of writing go this(); go that(); go theOtherThing() and then reading the results in the mainline. It’s elegant.
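A self-contained sketch of the “backwards” shape (again, names are mine for illustration):

```go
package main

import "fmt"

// produce starts the goroutine end of the pipeline: it sends into the
// channel and closes it when done, so the consumer's range loop ends
// on its own -- no WaitGroup required.
func produce(words []string) <-chan string {
	out := make(chan string)
	go func() {
		for _, w := range words {
			out <- w
		}
		close(out)
	}()
	return out
}

func main() {
	// main itself is the last stage: it consumes until the
	// producer closes the channel, then exits cleanly.
	for w := range produce([]string{"this", "that", "theOtherThing"}) {
		fmt.Println(w)
	}
}
```

The key design point is that the producer, not the consumer, closes the channel: closing it is the producer’s signal that all work is finished, so no separate synchronization is needed.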


The “Modem Testing” Problem

I’m a happy user of Vagrant, and over Christmas I was setting up a new set of instances for work, using Ubuntu under KVM. It’s always desirable to have a target VM that exactly matches your production environment, so your testing matches where you’re going to use your work.

Sure enough, it failed.

I know this works under Ubuntu, but my dev machine is running Fedora 25, and that combination probably hasn’t been tested. I can run VMs via VirtualBox, that’s fine, but it’s not exactly the same as production.

What has that to do with modems?

Well, in a previous life, I worked with a company that made V.32bis components for modems, and they called any “A doesn’t work with B” problem a modem testing problem.

When modems were fairly new and there were dozens of manufacturers, each manufacturer tested its newest release against its previous release, and then perhaps against an old copy of a well-known competitor’s product, and released it into the wild.

At which point their customer-support line lit up with new customers complaining that their newly bought modem didn’t work with their ISP any more.

The usual answer was “tell your ISP to use our product instead of brand X”.

This was not particularly helpful: their ISP usually said “don’t use anything we haven’t tested, and we’ve only tested brand X”.

That’s exactly what happened here: the combination of Vagrant, KVM, Ubuntu and Fedora hasn’t been tested, and, depending on the Ubuntu version, may have a failing NFS or may not have networking at all.

This is by no means unusual: tons of products only work if you get the exact configuration that the developers selected.  That’s part of why I always have an exact copy of production in the first place!

What do you do about it?

If you’re an individual, file bugs. That’s the best thing you can do to help the implementors.  Giving them a test sequence that they can add to their automated regression tests is a friendly act, so they won’t ever make the same mistake again. Be open to testing their fixes, and don’t just label them bozos (;-))

If you’re a contributor, have a matrix of stuff you need to at least smoke-test against. Making friends with beta-testers and bug reporters can give you a huge list of platforms that have been debugged, and you might talk them into running a cron job that makes them part of your nightly build-and-test farm.

You’re not going to have to solve the “every possible brand of modem” problem if you can find a community of people who care enough to report bugs and test fixes. You don’t actually ever have to test against everything, just the things people actually use. And if people depend on your software, they’re motivated to want it to run well. Recruit them!