Once upon a time, I did an 8-month gig at NASDAQ, where my team spent their time moving a large suite of what we called “crook detection” programs from one brand of computer to ours. At the end, we rolled them out to two largish buildings of people who spent their workdays finding “bad” or improper trades and fining the people who made them.
NASDAQ, you see, had the same problem as YouTube: people break the rules. There were and are huge numbers of transactions per day, and no “bright line” test to identify rule-breaches.
However, because they were charging a fee as well as setting rules, NASDAQ was able to make policing the system for dishonest and ill-advised trades pay for itself. Audit is a profit centre.
This is a story about how.
Nature of the problem
NASDAQ is the “National Association of Securities Dealers Automated Quotations” system, a complex of services related to the Nasdaq Inc. stock exchange.
They do a huge number of trades per day, all of which must be legal and must also obey a number of rules, such as limits on insider trading (trading by insiders of the issuing company on material non-public information).
The law and the rules state well-understood principles, but there are no “bright line” tests that would catch all bad trades. Any system of mechanized tests will produce false negatives, improper trades that aren’t caught, and also false positives, trades which are in fact fine but appear improper.
The problem is made harder because some breaches are unintentional, caused by ambiguity in the rules or in the trader’s understanding of them. Others are carefully designed scams, built to get around the rules.
Add to this the tension between the desire to not have to go to court over every little thing, versus the need for a dispute to be appealable to a court, in case of an error in interpretation.
On the face of it, this is an insurmountable problem: it’s too big, and it costs too much.
Comparison to Google
Google’s YouTube has a similar problem: there are huge numbers of videos on YouTube, and thousands of advertisements are served to viewers every second; the advertising fees go to the authors of the videos and to the operation of YouTube.
Some of these videos break Google’s rules, some are explicitly illegal in most countries, and some are merely so horrible that advertisers don’t want their ads appearing with them.
The latter has recently posed a large problem for Google: advertisers discovered that their ads were appearing alongside videos from Breitbart News and videos supporting terrorist groups. Companies as large as PepsiCo and Wal-Mart have withdrawn their ads.
Google has all the problems that NASDAQ has, in spades. They should be insoluble, but NASDAQ found a way.
In short, look for bad trades and train the traders to do better.
Bad trades break down into categories by their degree of seriousness and also into categories by ease of detection. NASDAQ uses both these breakdowns to build a process that starts with algorithms and ends with courts, and at the same time pays for itself.
Breaking down breaches by seriousness
The first breakdown separates out inadvertent, minor, or first offences and deals with them by sending warnings. Much of this is purely automatic: questions from traders about how to interpret the warnings improve the messages and populate FAQs. After an initial burst of questions, most new questions can be answered by the FAQs.
The next breakdown is into common breaches, with fines and suspensions levied for the more serious or repeated offences. These are common enough that the fines fund the entire auditing process. A lot of people like to shave the rules as close as they can, and sometimes closer. They get fined.
The final breakdown is into very serious breaches, which can get the trader kicked out of the association, or referred to the courts for criminal behaviour. These are rare.
To guard against arbitrary behaviour or mistakes in law by NASDAQ’s auditors, there is an appeal to the courts to correct errors.
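The three tiers above can be sketched as a simple decision rule. To be clear, everything here is illustrative: the names, the thresholds, and the idea of a single function are my invention, not NASDAQ’s actual system.

```python
# A hypothetical sketch of the three-tier response described above.
# All names and thresholds are invented for illustration; the real
# process is far more nuanced and partly human-driven.

def respond_to_breach(severity: str, prior_offences: int) -> str:
    """Map a detected breach to a tier of response."""
    if severity == "criminal":
        return "refer to courts"        # exits the in-house system
    if severity == "serious" or prior_offences >= 3:
        return "fine and suspension"    # the tier that funds the audit
    return "automated warning"          # minor, inadvertent, or first offence

print(respond_to_breach("minor", 0))      # → automated warning
print(respond_to_breach("minor", 5))      # → fine and suspension
print(respond_to_breach("criminal", 0))   # → refer to courts
```

The point of the sketch is that the cheap, automatic tier handles the bulk of cases, and only the rarer tiers consume human and legal effort.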
Breaking down breaches by ease of detection
Some kinds of breaches have better tests than others. A court may have drawn a bright line between proper and improper in a particular case, and an automated test can distinguish between them easily.
Others are very hard: individual auditors develop expertise in them, guide the development of diagnostic but not definitive tests, and look at the results of the diagnostic tests each day to see if they have found evidence of one of these more difficult cases.
Some are criminal in nature and have to exit the in-house system immediately.
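The split between bright-line and diagnostic tests might look like this. Again, the field names and thresholds are hypothetical; the real tests were proprietary, and the diagnostic ones only ever flagged trades for an expert auditor, never decided on their own.

```python
# Hypothetical sketch: a bright-line test decides automatically,
# while a diagnostic test only queues a trade for human review.
# Field names and thresholds are invented for illustration.

def screen_trade(trade: dict) -> str:
    # Bright-line test: a court-drawn threshold, decidable by machine.
    if abs(trade["price_deviation"]) > 0.10:
        return "automatic breach"
    # Diagnostic test: suggestive but not definitive; an auditor looks
    # at everything it flags, each day.
    if trade["volume"] > 10 * trade["typical_volume"]:
        return "flag for human review"
    return "pass"

print(screen_trade({"price_deviation": 0.2, "volume": 1, "typical_volume": 1}))
print(screen_trade({"price_deviation": 0.0, "volume": 50, "typical_volume": 1}))
```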
In the practice our team observed, escalation starts with small fines or suspensions. If a dealer keeps breaking the rules, the fines go up and the suspensions get longer. Most cases stop there. The dealer learns what they need to do to stay safe, and does.
A few keep banging their heads against the system, looking for a magic trick or a dark corner.
Another small number say the rules are unfair or wrong, which requires review, and therefore requires a standard of review, including that it be public and fair.
An even smaller number appeal the review to a court, at a not inconsiderable expense.
By construction, the system is a pyramid, with the common cases dealt with automatically, the less common getting human review, and the smallest number exiting the system for the courts.
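The escalating fines and suspensions form a simple ladder. A toy model, with base amounts I have invented purely to show the shape of the escalation:

```python
# Toy model of escalating penalties for repeat offenders.
# Base amounts are invented; the point is that both the fine and the
# suspension grow with each repeated offence, so persistence gets costly.

BASE_FINE = 1_000        # dollars (illustrative)
BASE_SUSPENSION = 1      # days (illustrative)

def penalty(offence_number: int) -> tuple[int, int]:
    """Return (fine, suspension_days) for the nth offence."""
    fine = BASE_FINE * 2 ** (offence_number - 1)   # fines double each time
    suspension = BASE_SUSPENSION * offence_number  # suspensions lengthen
    return fine, suspension

for n in range(1, 4):
    print(n, penalty(n))   # 1 (1000, 1) / 2 (2000, 2) / 3 (4000, 3)
```

Under any schedule like this, most dealers stop after the first rung or two, which is exactly the pyramid shape described above.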
The problem isn’t impossible. It’s a wicked problem, but it has a fix that scales, is fair, and pays for itself.
As more crooks join, the fines go up and they hire more auditors. As the honest crooks mend their ways, the auditors spend more time looking at the hard questions, guiding the programmers and developing tests that find more of the remaining crooks.