Composing an “Arbiter of Truth”

William Gibson once said, "The future is already here—it's just not very evenly distributed."

That also applies to solutions to problems, like that of finding out who's telling the truth in a widespread discussion. By Gibson's dictum, we should expect to find different parts of the solution, but not together, and likely in all sorts of unexpected places. It's up to us to find them all and compose them together.

Introduction

Finding the truth looks like a wicked problem at first glance, but it's really more like defense in depth, with constraints like preserving free speech turning into ways to filter out approaches that won't work.

We’ve been doing it since the ancient Greeks, and the other side has been fighting back that whole time: the Sophists got Socrates executed by the government of Athens, for instance.

It has at least three parts, maybe more:

1. NASDAQ

I wrote earlier in this blog about one part: how NASDAQ dealt with illegal and merely improper stock trades on its exchange.

To reprise, the underpinning of their work was the law: stealing stock is illegal. Lying to incite a riot is also illegal. However, neither is particularly common. Much more common is honest error or enthusiastic misstatement.

The next layer “up” is the agreement all the traders signed to become members of the exchange and to trade stock. It provides for rules which, like city by-laws, have to comply with the basic law.

If someone makes an improper trade, they can be cautioned the first time, given a small fine the second, and a whopping fine the third. Historically, the first two make up the majority of the cases: only a very few people are actually evil geniuses running stock scams or trying to fix an election.
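As a sketch of that escalation ladder (the tier names and the strike-counting here are mine, not NASDAQ's actual rulebook), the logic is little more than a lookup on how many prior offences a member has:

    # A sketch of the escalation ladder described above; tier names are illustrative.
    SANCTIONS = ["caution", "small fine", "whopping fine"]

    def sanction_for(prior_offences: int) -> str:
        """Pick the sanction for a member's next improper trade."""
        # First offence: caution; second: small fine; third and later: whopping fine.
        return SANCTIONS[min(prior_offences, len(SANCTIONS) - 1)]

    for strikes in range(4):
        print(strikes, "prior offences ->", sanction_for(strikes))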

In addition, the work of policing trades was broken into chunks: the exchange started small, with accidents. They split those off from the gross evils, and broke that group down in turn by whether something was hard or easy to detect.

Their big advantage was that there was money on the table, and that people had agreed to the rules. Catching and fining crooks and crooklets made the effort pay for itself, which makes their approach, all by itself, something that could be used by a for-money operation like “monetized” YouTube channels.

All of the above applies to finding lies, except for the money. That makes it doubly important to start small, and not try to boil the whole ocean at once.

2. Automation

The second part is one you’ve probably already thought of: automation.

Once something is found to be undesirable, such as a dog-whistle article inciting murder, all the copies can be found, plus all the likes and all the shares. Probably even including everyone who has seen it.

That lets a small staff deal with a large effort by an advertiser, or a “viral” meme that expands quickly.

With luck, machine learning (ML) can be trained to recognize minor variants of a banned article and refer them to the staff, to be sure that’s what is being recognized. Those can then be treated the same way as the original posting.
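As a rough illustration of that variant-matching step (this is my own sketch, using plain TF-IDF text similarity rather than whatever any real site runs; the texts and the 0.8 threshold are invented), it might look like this in Python:

    # Flag near-copies of a banned article as candidates for human review.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    banned = "Example of a dog-whistle article that moderators have already banned."
    candidates = [
        "Example of a dog whistle article that the moderators already banned!",  # minor variant
        "A recipe for sourdough bread, with photos.",                            # unrelated
    ]

    vectorizer = TfidfVectorizer().fit([banned] + candidates)
    banned_vec = vectorizer.transform([banned])
    candidate_vecs = vectorizer.transform(candidates)

    scores = cosine_similarity(candidate_vecs, banned_vec).ravel()
    THRESHOLD = 0.8  # arbitrary; tune it so the staff isn't flooded with false positives

    for post, score in zip(candidates, scores):
        if score >= THRESHOLD:
            print(f"refer to staff (similarity {score:.2f}): {post!r}")

The important part is the last step: anything the matcher finds is referred to the staff, not acted on automatically.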

But how can we credibly detect the lies in time? The kind of team a site can afford is always going to be behind.

That has been solved for a distantly related problem, one that is as unexpectedly helpful as looking at the policing of stock trades.

3. Slashdot

One of the older big discussion groups, Slashdot, has needed to deal with overenthusiastic commentators, flamers and trolls since its inception in 1997. In 2020, it’s still easy to “read at 4 or 5” and see a measured, reasonable and informative discussion of a difficult subject. Or you can “read at -1” and listen to the madmen and flamers who elsewhere would drown out the insightful comments.

The site is driven by a commentator-as-moderator system. If your comments are moderated as informative, insightful or funny, you gain karma and are given ten or fifteen moderator points. With them, you can mark good posts up, by 1 point per comment, and bad posts down. The kinds of moderation range from “insightful” and “informative”, through “underrated” and “overrated”, down to “troll” or “flamebait”.

As a check on the moderators, anyone can “meta-moderate” and vote on whether the moderators were being fair.

Of course, you can’t moderate comments on an article you’ve commented on yourself, to avoid self-serving behavior.

Readers can set the level of quality they want to see, from -1 to 5. “Reading at four” will show you just the well-respected comments, rated at four to five.
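To make those mechanics concrete, here is a toy sketch of the scoring scheme in Python; the class and function names are my own invention, and the real site also weighs karma bonuses and other details this leaves out:

    # Toy Slashdot-style scoring: spend moderator points, clamp scores to -1..5,
    # and let readers filter by a quality threshold.
    from dataclasses import dataclass, field

    UP_MODS = {"insightful", "informative", "funny", "underrated"}
    DOWN_MODS = {"overrated", "troll", "flamebait"}

    @dataclass
    class Comment:
        author: str
        text: str
        score: int = 1                       # ordinary comments start around 1
        mods: list = field(default_factory=list)

    def moderate(comment: Comment, label: str) -> None:
        """Spend one moderator point: +1 for a good label, -1 for a bad one."""
        delta = 1 if label in UP_MODS else -1 if label in DOWN_MODS else 0
        comment.score = max(-1, min(5, comment.score + delta))
        comment.mods.append(label)

    def read_at(comments, threshold: int):
        """'Reading at four' shows only comments at the threshold or above."""
        return [c for c in comments if c.score >= threshold]

    c = Comment(author="alice", text="A measured take on a hard subject.")
    for label in ("insightful", "informative", "underrated"):
        moderate(c, label)
    print(read_at([c], 4))                   # now at 4, so "reading at four" shows it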

Putting it all together

Start with human commentators who have been moderated as truthful by their peers, and give them each a handful of points to rate comments, from “elegantly argues truths” through “weasel-worded” or “public relations” down to “illogical”, “twisted”, “sophistry” or “the lie direct”.

Allow anyone to meta-moderate, to police the moderators, and prohibit moderating and commenting on the same article.

Now, feed the opinions of the moderators to the human staff, who act as auditors, not moderators.

If the auditors agree that a statement is a dog-whistle, use automation to flag it, and to look for other examples of the same “veiled speech” in other posts.

Feed those as candidates back to the auditors, to keep the ML from turning itself into a troll!
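In code, that loop might look something like the sketch below; every function name here is a placeholder for whatever a real site would plug in, and the point is only that automation’s findings come back to human auditors instead of going straight into enforcement:

    # Moderator flags -> human auditors -> automated variant search -> back to auditors.
    def arbitration_loop(moderator_flags, auditors_confirm, find_variants, act_on):
        """One pass of the proposed moderator/auditor/automation pipeline."""
        review_queue = list(moderator_flags)   # posts the volunteer moderators marked down
        seen = set()                           # don't send the same post to the auditors twice
        while review_queue:
            post = review_queue.pop(0)
            if post in seen:
                continue
            seen.add(post)
            if not auditors_confirm(post):     # humans remain the arbiters of what counts
                continue
            act_on(post)                       # flag, demote, or remove the confirmed post
            # Automation widens the net, but its findings are only candidates,
            # which go back onto the auditors' queue, not straight into enforcement.
            review_queue.extend(find_variants(post))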

With humans, the auditors can warn and teach the honestly mistaken, although just being moderated down can moderate the over-enthusiastic. If that fails, their membership can be suspended, and in extreme cases they can be turned over to the police.

That’s my three-part composition: I’m sure there are more parts that I’ve never heard of.

Keep the humans honest, and throw the Sophists over the side of the bridge, to the trolls.
