Logs as Journals, a Migration Story

One of my customers needed to build a new large object-storage array, using different software and hardware than the now-obsolescent product they had. That meant they had to migrate everything.

Fortunately, the array is accessed via REST calls over https, so we have web-server logs. This is a story about how we used them to migrate everything.

Introduction

The web server used for the REST interface logs every PUT and GET on the old system. The low-level storage mechanism includes a database of all the objects, and can list every object in the system.

It’s simple, therefore, to write a query to the low-level database that will list all the objects in the system, so we can migrate them.  It doesn’t, however, account for changes to the system. Our migration would be a copy of the system, but only up to the date at which we did the query.

As we expect the migration to take some weeks, That Would Be Bad.

Fortunately, we have all the PUTs in the webserver log, and it is up to date.  If we read the log with tail -f and send a notice of every PUT to the new system, those objects can be migrated too.

In effect, we are treating the logs as a variant of a database commit journal, and replaying it at another site to keep the new system up to date with the old.


Client

At the old site, we set up a program that is little more than tail -f | awk, which picks out every PUT to the old system and ships it off to the new one.
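A minimal sketch of that client in Python rather than awk; the log path, the common-log-style layout and the receiving endpoint are all assumptions for illustration, not the customer’s real configuration:

#!/usr/bin/env python3
# Follow the access log and forward every PUT to the migration service.
import subprocess
import requests  # third-party HTTP library, assumed available

MIGRATOR = "http://new-system.example/migrate"   # hypothetical endpoint
ACCESS_LOG = "/var/log/webserver/access.log"     # hypothetical path

def follow(path):
    # The Python equivalent of tail -f: yield lines as they are appended.
    proc = subprocess.Popen(["tail", "-F", path],
                            stdout=subprocess.PIPE, text=True)
    yield from proc.stdout

for line in follow(ACCESS_LOG):
    fields = line.split()
    # Assume the request field looks like "PUT /bucket/object HTTP/1.1".
    if '"PUT' in line and len(fields) > 6:
        requests.post(MIGRATOR, json={"object": fields[6]})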

We also set up another, independent program that reads the log and reports each second how many requests there were. It also uses a web server option to report the average time a request takes, so we can tell if the migration program is slowing down the old system.
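A sketch of that monitor, assuming (purely for illustration) that the web server has been configured to log each request’s service time as the last field on the line:

#!/usr/bin/env python3
# Report once a second how many requests the old system served,
# and how long they took on average.
import subprocess
import time

proc = subprocess.Popen(["tail", "-F", "/var/log/webserver/access.log"],
                        stdout=subprocess.PIPE, text=True)
count, total, tick = 0, 0.0, time.time()
for line in proc.stdout:
    count += 1
    try:
        total += float(line.split()[-1])   # assumed per-request duration field
    except (ValueError, IndexError):
        pass
    if time.time() - tick >= 1.0:
        print("%d requests/s, %.3f s average" % (count, total / count))
        count, total, tick = 0, 0.0, time.time()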

Server

On the new system, we have a program that reads the list of objects to migrate, requests them from the old system, and copies them to the new one. Several copies run at the same time, one of which strictly migrates new objects. As those are new, they will probably be used soon, so it is to our advantage to make sure we have them.

The others migrate objects from the list we get from the database, and we can slow down, speed up, start or stop them, so as to be able to control how much load we put on the old system. And ensure we don’t overload the new system, of course: a read from the old system is fairly “light”, but a write is heavier than a read, and we’re strictly doing writes to the new system.
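A sketch of one such worker, reading object names on standard input, with the endpoints and the throttle as placeholder values rather than the real configuration:

#!/usr/bin/env python3
# One migration worker: GET each object from the old array, PUT it to the
# new one, and sleep between objects so the load can be dialled up or down.
import sys
import time
import requests  # third-party HTTP library, assumed available

OLD = "https://old-array.example"    # hypothetical old-system endpoint
NEW = "https://new-array.example"    # hypothetical new-system endpoint
DELAY = 0.1                          # seconds between objects: the throttle

def migrate(name):
    # The read from the old system is the lighter operation...
    src = requests.get(OLD + name, stream=True)
    src.raise_for_status()
    # ...the write to the new system is the heavier one.
    requests.put(NEW + name, data=src.raw).raise_for_status()

for name in sys.stdin:               # one object path per line
    migrate(name.strip())
    time.sleep(DELAY)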

If we have to stop or restart, we can see how far we got by looking at the logs of the new system, which serve as its journal. They’re even in the same format. We can restart the migration with the next file to be transferred.

Conclusion

Logs aren’t (just) collections of information that the programmers cared about. In many cases, they can be used as database-like journals, and played or replayed to reproduce behaviour at another site or at another time.

If you ever need to migrate work, replay the steps that led to a bug, or generate a representative load test of an existing system, look to your logs.



A Few Days into BDD

No, that’s not something about bondage and discipline, it’s behavior-driven development.

In a previous life, we had a gazillion lines of unreachable code, but never knew what we could get rid of. In my current life, we have quite a small program that needs testing, but we left testing to the end. Could BDD help?

Introduction

Behavior-driven development uses a nice notation to specify how stories should be implemented: the person who wrote the stories should be able to read a “features” file and say whether it is what they wanted.

For example, you could write

Scenario: create a file in trial-use
  Given there is no such file as trial-use/junk.txt
  When we can put file trial-use/junk.txt from /tmp/data-file.txt
  Then file trial-use/junk.txt exists
  And put therefore works

and argue convincingly that you had proven the “green” case for put.

Behind the scenes, this turns into normal code, macro-expanded into a programming language, in this case Python:

from behave import when  # behave provides the given/when/then decorators

@when(u'we can put file {key} from {data_file}')
def step_impl(context, key, data_file):
    # proto, server_ip and port are module-level settings; implement_put()
    # and expect_success() are helpers defined elsewhere in the steps file.
    full_path = proto + server_ip + port + key
    implement_put(context, full_path, data_file)
    expect_success(context)

The @when maps an arbitrary string with parameters into a call to step_impl().
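The Given and Then clauses are implemented the same way. A sketch of the other steps in the scenario above, where ensure_absent() and object_exists() are hypothetical helpers, not part of the real suite:

from behave import given, then

@given(u'there is no such file as {key}')
def step_impl(context, key):
    # Make the prerequisite true: remove the object if a previous run left it behind.
    ensure_absent(context, key)          # hypothetical helper

@then(u'file {key} exists')
def step_impl(context, key):
    # Check the outcome, typically with a HEAD or GET against the store.
    assert object_exists(context, key), "expected %s to exist" % key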

Notation

The classic form of a feature file is a list of scenarios, all looking like:

Scenario: X causes Y
 Given some prerequisite
 When I do X
 Then I observe Y
  • The Scenario clause says what I’m trying to achieve.
  • A “Given” clause sets a prerequisite, and should always come out true. For example, it might clean out trial-use/junk.txt if it exists.
  • A “When” clause is the thing I want to test, and it should succeed. If creating the junk.txt file crashes, the step should throw an exception so the test runner knows it failed.
  • Finally, a “Then” clause tests whether or not the when clause did what it hoped to.

If you have multiple Given, When or Then clauses, you can call the later ones “And” clauses, and they will be interpreted as the same kind of clause as the last one read.

Strengths and Weaknesses

What turns my crank is that it’s succinct enough that you can use it after the fact, when you or your colleagues don’t follow TDD or BDD. For a complete file service, I had one feature file and two Python files. I did break it up later, but it was that small. So I wasn’t late.

What would have been an amazing win in my former life is that I can cover a complete API with a small set of tests, all for the success cases. With those and a coverage tool, I can carefully find all the reachable code in the top-level API, and comment out the rest. A reachability tool will then tell me what in the rest of the program isn’t needed any more. That’s something I would have killed for.

There are some problems, though: in the Python implementation, anything passed between clauses has to be either a global or an attribute of “context”, as step implementations can’t share anything directly.

Conversely, every phrase in a Given, When or Then clause becomes part of one global namespace, so you end up thinking up counterintuitive ways of saying the same thing differently for different scenarios: “Then I succeeded” can only exist once, with exactly one implementation.
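For example, the only sanctioned way to hand a value from a When step to a Then step is to hang it on context, and the Then phrase must be unique across the whole suite. A sketch, with delete_object() as a hypothetical helper:

from behave import when, then

@when(u'we delete file {key}')
def step_impl(context, key):
    # Steps can't pass values to each other directly, so park the result
    # on the shared context object.
    context.last_status = delete_object(key)   # hypothetical helper

@then(u'the delete succeeded')
def step_impl(context):
    # This exact phrase can exist only once in the suite,
    # with exactly this one implementation.
    assert context.last_status == 204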

The biggest weakness is one of its strengths: this is a very high-level language. That means that it is difficult to write low-level unit tests as I did in TDD. Instead, you need to think of the BDD tests as black-box tests of the required behavior. That’s wonderful, by the way: all too many unit tests are so low-level that they have to be completely rewritten if you find a better algorithm.

Conclusions

The next thing I plan to do is start writing BDD tests with Go as the implementation language. I can write Go faster than I can Python, and I can write the tests first, in normal TDD/BDD practice. When I’m doing that, I can prune out all the dead code as I go, instead of having a “kill stuff” day at the end.

Slaw Links

I occasionally do guest posts in Slaw.ca, a legal blog. They’re a bit different from my nerdly stuff, but are the origin of “leafless”. In reverse chronological order, these are

An AI to Make Me Smarter, 21 June 2017

The Case for Redesigning Caselaw, 15 May 2017

Secure Communications by Mandated Design? 9 April 2017

Unlimited Copying Versus Legal Publishing, 1 June 2016

Using Hypothes.is With Legislation, 19 Jan 2016

The Only Thing Wrong With Looseleafs Is They’re Printed on Paper, 28 July 2015 [For my birthday!]

Copyright Infringement Trolls: An “Appreciation of the Situation”, 16 Jan 2013

Thank Goodness for the NSA! — a Fable, 1 Jan 2013


Canadian Politics, as Seen by a Space-Alien

I’m often puzzled by American politics, but it’s relatively simple. There are liberal and conservative parties, with ambiguous names, and they occasionally swap places. Usually they overlap, but not in this century.

Canadian politics, however, can be confusing.

There are five national parties, four of which are national, and they are named after what they stand for, except when they aren’t. And they change.

My space-alien friend, Zaphod Beeblebrox, asked me to explain them. In pictures, as he only reads hieroglyphics, so I drew this:

[Venn diagram: the parties before and after the merger, in two lines]

The most liberal party was (and is) the NDP, which overlapped with the Liberals, who overlapped with the Progressive Conservatives, who overlapped with the Reform. That is illustrated in the first line of the Venn diagram on the left.


Historically, the Liberals and the Progressive Conservatives were the parties who got elected.

The New Democratic Party (“NDP”) is our furthest-left party, and when not actually governing or trying to govern, will actually say “socialist” in public. The other parties regularly call them socialists.

The Liberal Party is centrist, and is distinguished by being socially liberal and economically conservative. They usually form the government (ie, get elected). They believe the government has no place in the bedrooms of the nation, taxes should not rise and that budgets should usually balance. They are regarded by the conservative parties as “tax and spend liberals” and bad at budgeting. On bad days, liberals fear that’s correct.

The Progressive Conservative Party (“PC”) is centrist, and is distinguished from the Liberals by being economically conservative and socially liberal. They occasionally form the government, and wish it was more often. They believe the government has no place in the bedrooms of the nation, and that taxes should fall if they can ever get the budget to balance. They are regarded by the liberal parties as legitimate conservatives, but ones who spend money like drunken sailors. On bad days, conservatives fear that’s correct.

The Reform Party is our most conservative party, and they once described themselves as “one third libertarian, one third objectivist and one third religious conservative”. The other parties just call them names like “troll”. Good day or bad, they think they’re the people who should be in charge.

Everything, however, is subject to change.

In a brilliant move, the Reform Party did a reverse takeover of the Progressive Conservative Party, and got rid of the term “Progressive”, which used to confuse everyone. As a result they formed the government for several terms (ie, they ran the place).

Unfortunately, they only got the majority of the PC party, not including a whole wodge of people in the centre, as illustrated by the suspicious empty hole in the second line of the Venn diagram. They won elections, as they had “united the right”, but they left a bunch of unsatisfied voters in the centre.

The liberals promptly moved right to fill the hole, and lost a bigger wodge of people on their left to the NDP, who stayed in the same place.

After a number of tries…

The Liberals finally learned to campaign in the presence of trolls and surprisingly un-Canadian levels of grumpiness, and are governing once more. They seem to have got a lot of voters from the hole and some from the NDP.

The hole still exists, the Conservatives don’t seem to have a place for centrists, and the NDP still overlaps with the Liberals, just not as much.

It’s anyone’s guess whether we’ll try for just two parties, go back to four or try to make three work.

Zaphod strongly suggested we should go back to four. However, he said it was because four is a prime number on his planet. I too am inclined to think we should go back to four, but probably not for his reasons.


How NASDAQ solved YouTube’s problem

Once upon a time, I did an 8-month gig at NASDAQ, where my team spent their time moving a large suite of what we called “crook detection” programs from one brand of computers to ours. At the end, we rolled them out to two largish buildings of people who spent their workdays finding “bad” or improper trades and fining the people who made them.

NASDAQ, you see,  had the same problem as YouTube: people break the rules. There were and are huge numbers of transactions per day, and no “bright line” test to identify rule-breaches.

However, because they were charging a fee as well as setting rules, NASDAQ was able to make policing the system for dishonest and ill-advised trades pay for itself. Audit is a profit centre.

This is a story about how.

Nature of the problem

NASDAQ is the “National Association of Securities Dealers Automated Quotations” system, a complex of services related to the Nasdaq, Inc. stock exchange.

They do a huge number of trades per day, all of which must be legal, and also obey a number of rules, such as limits on insider trading (trading by members of the company that issues the stock).

The law and rules state well-understood principles, but there are no “bright line” tests that would catch all bad trades. For any system of mechanized tests, there will be and are false negatives, improper trades that aren’t caught, and also false positives, trades which in fact are fine, but appear improper.

The problem is made harder because some broken rules are unintentional breaches, caused by ambiguity in the rules or in how the person making the trade understood them. Others are carefully designed scams, built to get around the rules.

Add to this the tension between not wanting to go to court over every little thing and needing disputes to be appealable to a court in case of an error in interpretation.

On the face of it, this is an insurmountable problem: it’s too big, and it costs too much.

Comparison to Google

Google’s YouTube has a similar problem: there are huge numbers of videos on YouTube, and thousands of advertisements served to viewers every second, whose fees go to the authors of the videos and to the operation of YouTube.

Some of these videos break Google’s rules, some are explicitly illegal in most countries, and some are merely so horrible that advertisers don’t want their ads appearing with them.

The latter has recently posed a large problem for Google: advertisers discovered their ads appearing with videos from Breitbart News and with videos supporting terrorist groups. Companies as large as PepsiCo and Wal-Mart have withdrawn their ads.

Google has all the problems that NASDAQ has, in spades. They should be insolvable, but NASDAQ found a way.

NASDAQ’s solution

In short, look for bad trades and train the traders to do better.

Bad trades break down into categories by their degree of seriousness and also into categories by ease of detection. NASDAQ uses both these breakdowns to build a process that starts with algorithms and ends with courts, and at the same time pays for itself.

Breaking down breaches by seriousness

The first breakdown is to separate out inadvertent, minor or first offences and deal with them by sending warnings. Much of this is purely automatic; questions from the traders about how to interpret a warning are used to improve the messages and to populate FAQs. After an initial burst of questions, this turns into something where most questions can be answered by the FAQs.

The next breakdown is into common breaches, and the levying of fines and suspensions for more serious or repeated offences. This is common enough that the fines are the source of income of the entire auditing process. A lot of people like to shave the rules as close as they can, and sometimes closer. They get fined.

The final breakdown is into very serious breaches, which can get the trader kicked out of the association, or referred to the courts for criminal behaviour. These are rare.

To avoid arbitrary behaviour or mistakes in law by NASDAQ’s auditors, there is an appeal to courts to correct errors.

Breaking down breaches by ease of detection

Some kinds of breaches have better tests than others. A court may have drawn a bright line between proper and improper in a particular case, and an automated test can distinguish between them easily.

Others are very hard: individual auditors develop expertise in them, guide the development of diagnostic but not definitive tests, and look at the results of the diagnostic tests each day to see if they have found evidence of one of these more difficult cases.

Some are criminal in nature and have to exit the in-house system immediately.

The process

In the practice our team observed, it starts with small fines or suspensions. If a dealer keeps breaking the rules, the fines go up and the suspensions get longer. Most cases stop there. The dealer learns what they need to do to stay safe, and does.

A few keep banging their heads against the system, looking for a magic trick or a dark corner.

Another small number say the rules are unfair or wrong, which requires review, and therefore requires a standard of review, including that it be public and fair.

An even smaller number appeal the review to a court, at a not inconsiderable expense.

By construction, the system is a pyramid, with the common cases dealt with automatically, the less common getting human review, and the smallest number exiting the system for the courts.

Conclusion

The problem isn’t impossible. It’s a wicked problem, but it has a fix that scales, is fair, and pays for itself.

As more crooks join, the fines go up and they hire more auditors. As the honest crooks mend their ways, the auditors spend more time looking at the hard questions, guiding the programmers and developing tests that find more of the remaining crooks.

Capacity Planning as Pain Relief

As you may know from my old blogs, I’ve often done capacity planning, and generally recommend it as a pain-avoidance tool.

However, I was just reading a blog (not by one of my customers!) about how much pain they went through when they didn’t have enough storage performance, and it struck me that it should take them about an hour to turn a pain point into a pain-avoidance plan. This is how.

Introduction

A company I follow recently decided to stay with cloud storage, which was interesting, but the most interesting thing was what made them consider having their own storage in the first place: every time the load got high, their journal write times went from two seconds to forty or more.

[Screenshot from their post: journal write times under load]

Now, if you happen to be doing anything where human beings wait for you, forty seconds is bad. Really bad. Twenty to thirty seconds is the timeout point for human short-term memory. After that long, many of us will have completely forgotten what we were doing. Me, I’d probably assume it had taken even longer, conclude “my supplier has crashed”, and start wondering if this was another Amazon S3 outage.

You can imagine what kind of pain they were in!

Pain Avoidance

However, they also have graphs of the load at the same time, which means that they can calculate one value that will be immensely useful to them: how much their storage slows down under load.

In a similar but read-heavy scenario, I plotted read times against load, and got a scattergram with three distinct regions:

[Scattergram: read response time versus load, with the fitted curve]

The first was below about 100 IOPS, where the response time was quite low: relatively few requests came in at the same instant as another and had to wait. Above 100 I/O operations per second, we start having a lot of requests coming in at the same time and slowing each other down. By 120, we’re seeing huge backups, with requests sitting in the queue for 30 seconds or more before they get a chance to go to the disk.

Response time versus load always forms a “hockey-stick” curve, technically a hyperbola, and the data can be plugged into a queue modeller like pdq to get a good estimate (the solid line). If I had a lot more data points at 110-140 IOPS, the scattergram would have shown a definite “_/” shape.
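To see why the curve bends like that, a single-queue M/M/1 approximation is enough. This is not pdq, and the eight-millisecond service time below is an assumed figure chosen only to put the knee in roughly the same 100-125 IOPS region:

# Textbook M/M/1 response time: R = S / (1 - U), with utilisation U = IOPS * S.
SERVICE_TIME = 0.008    # seconds per I/O; an assumption for illustration

def response_time(iops):
    utilisation = iops * SERVICE_TIME
    if utilisation >= 1.0:
        return float("inf")   # past saturation the queue grows without bound
    return SERVICE_TIME / (1.0 - utilisation)

for iops in (50, 100, 110, 120, 124):
    print("%4d IOPS -> %7.3f s" % (iops, response_time(iops)))

The exact numbers don’t matter; the shape does: nearly flat, then a knee, then effectively vertical as you approach saturation.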

This is the thing you need to avoid the pain: the slowdown curve. Once you know it, you can plan to avoid being at the wrong point on it.

Conclusion

If you have ever had a major slowdown, as the bloggers did with their journal writes, ask yourself: do you have the load from the same time period?

If you do, an ordinary spreadsheet will give you a scattergram of slowness versus load, and you can draw the hockey-stick curve by eye. Spreadsheets will fit exponentials, but that’s nowhere near accurate enough: your eyes will do better.

Now you know what to avoid, and the pain you suffered has been turned into data that can help you never have the problem again.

Know that curve and resolve to avoid the bad parts, forevermore!


“DLL Hell”, and avoiding an NP-complete problem

Needing two incompatible versions of the same library can be an evil problem, and a recent paper pointed out that solving it is NP-complete.

“DLL hell” has been a problem in Windows and many of the free and commercial Unixes. Interestingly, it was recognized and fixed in Multics, but then reproduced in both Unix and Windows, and finally re-fixed in SunOS 5. More recently, it was fixed in part in Linux glibc.

This is the story of how.

Introduction

Last week, I had the pleasure of reading about Kristoffer Grönlund’s 2017 linux.conf.au talk, Package managers all the way down, about the recurring problems of package managers for both operating systems and language families.


DLL hell is back, and hitting multi-language projects like Hawk as well as language-specific tools like npm.

Russ Cox investigated the problem, and wrote a well-known paper that opens with “Dependency hell is NP-complete. But maybe we can climb out.”

The problem is that a program that uses libraries can end up needing two different versions of a subordinate library. For example, I might write a main program that uses glibc n+1, but call from it a library that requires glibc n.

If your system can’t load both, you can’t link.

History

In principle, you can design a linker that can load both versions: the Rust language and the Nix package manager have taken that approach, at the expense of managing a large and ambiguous symbol space.

An elegant idea, however, is to version-number the individual interfaces, and keep both old and new interfaces in the libraries. When you update a library, you get new interfaces and bug fixes, but you also get the backwards compatibility of having the “old” interfaces in the same file. Of course, these aren’t always complete copies of the old versions: often an old interface is a “downdater” that calls the new one with different parameters, or an “updater” that wraps the old code in a function with additional capabilities.
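glibc does this with versioned symbols rather than source-level wrappers, but the shape of a “downdater” is easy to sketch in ordinary code; the names below are made up for illustration:

# Sketch of a "downdater": the old entry point survives as a thin wrapper
# that preserves its original guarantee by delegating to the new code.
def transform_v2(items, keep_order=False):
    # New interface: no longer promises to preserve input order
    # unless explicitly asked to.
    return list(items) if keep_order else sorted(items)

def transform_v1(items):
    # Old interface, kept for existing callers: it always promised to
    # preserve order, so it simply asks the new code for that behaviour.
    return transform_v2(items, keep_order=True)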

The idea of having only one copy of a library on the system and keeping old versions of calls came originally from Multics: Paul Stachour wrote about their hardware and software versioning in You Don’t Know Jack About Software Maintenance. In that era, they did continuous delivery, something which required backwards compatibility: they just called it continuous maintenance.

On Multics, all maintained versions of an interface exist in the library, and it is the interfaces that have versions. If someone talks about version 7 of a library, they mean a library that includes the interfaces that were added for version 7.

On Unix, that degree of complexity was undesirable: the early Unixes did not link dynamically and the number of libraries was small, so it was hard to have the problem.

Systems with lots of shared libraries and products depending on them, however, made us aware of it once more. Windows gave us the name “DLL hell”, and we promptly recreated it in  SunOS 4, as did the other Unix vendors.  For SunOS 5, my team under David J Brown then recreated the fix, and later discovered that the glibc team had done so too. David described both in a Usenix talk, Library Interface Versioning in Solaris and Linux.

We needed to do so because we had a “compatibility guarantee”. If your program ran and passed the “appcert” script, then we warranted it would run on all newer Solaris versions. If it didn’t, that was our fault, and ours to fix. Failing to fix DLL hell would have cost us significant money!

In Linux, Linus makes a similar guarantee: not to break running programs. It too is hard, but manageable inside a mostly-static kernel of moderate size. It’s harder in the application world.

How it Works

Assume my main program and a library both call memcpy. My program makes sure that the source and target don’t overlap, so it can use the default memcpy. The library, however, does do overlapping copies, and blithely assumes that memcpy does pure left-to-right copying. That’s no longer true, so the library is linked to a specific memcpy, memcpy@GLIBC_2.2.5, which does make that guarantee. That interface is “downdated” to do strict left-to-right copying by being turned into a call to memmove instead of to the newest memcpy.

To ensure this, the compiler and linker need to be sensitive to the versions needed, and insert the equivalent of “.symver memcpy, memcpy@GLIBC_2.2.5” into the linkage process, using the rules described in David J. Brown’s paper to choose a version old enough to guarantee the needed behavior.

Understanding the implications

Paul Stachour’s paper describes how versioning is used in an across-the-net API, similar to modern web development, to achieve continuous delivery.

David J. Brown’s paper addresses managing many variations over time, and allowing some degree of new-runs-on-old, to avoid forcing developers to keep old compilers around when doing bug-fixes to older versions of a program.

To me, the value during development is huge. We typically had a number of services in development or use, each with an API and several clients. We would only be working on one of the clients at a time, and wanted to avoid “flag days”, where the team had to stop and change the API of the server and all its clients before we could continue. Those were horrible.

Instead, our process became

  • put the new version of the API on the server, and add a downdater,
  • when we next update each client, change it to use the new version,
  • when all clients use the new version, remove the downdater.

The only oddity you’d notice was that some versions didn’t ever ship, but would be consumed internally.  You’d see versions like 4.1.3 followed by 4.1.9 if you used one of our team’s libraries.

That was not a solution to every linking problem: we still had the “normal” problem that each time we got a new release of a library we called directly, we found any breaking changes the first time we compiled. But that’s the same read-the-release-notes problem we had before, so we never needed to worry about “vendoring” or “dependency management” in our development.

Conclusions


Today, we’re once again faced with the incompatible-versions problem, in both language- and OS-based package managers.

This time, however, we have the opportunity to see two working implementations of a solution. From those, we can take the best of their ideas and properly solve this recurring problem.