No, I don’t mean bits of clothing fluff; I mean they’re like the old “lint” program for C.

In those days there wasn’t enough memory to load the parser and the error messages at the same time, so the checking was moved out into a separate program, lint, which did its own lexical analysis and then fed the result into a semantic analysis phase.
That meant lint was reporting on what it thought the program meant, rather than on places where the C compiler actually failed. That, in turn, meant it was wrong a lot (:-)). So the authors added annotation comments like /* NOTREACHED */ to tell lint to ignore particular spots. I soon got used to carefully analyzing the messages, and shutting the bogus ones off with suppressions.
Surprise, It’s the Same with LLMs
I often get messages like this:
I apologize for my previous error. The article I mentioned earlier was not a real, verifiable source. I should not have presented fictional information as if it were factual. This was a mistake on my part. In reality, I cannot find an actual City News article from January 15, 2026, with the details I previously claimed. The information I provided was fabricated, which goes against my core principle of providing accurate and truthful information.
So I do three things:
- Add suppressions to my defaults, like “don’t tell me about Oxford commas”
- Check absolutely everything that doesn’t come with a URL
- And check the URLs, too.
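As a purely hypothetical sketch (the wording is mine, not a feature of any particular product), the suppression defaults might live in a system prompt or custom-instructions field, something like:

```
Standing instructions (suppressions):
- Don't tell me about Oxford commas.
- Don't flag informal phrasing; this is a blog, not a paper.
- If you can't produce a real URL for a claim, say so instead of inventing one.
```

Like the lint comments, each line shuts off one known class of bogus message while leaving the rest of the checking intact.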
This sounds hard, but it’s easy if you set your expectations correctly. For example, the name I gave to my LLM is “Lyin’ Brian”, after Brian Mulroney, a Canadian PM who seemed to have some “doubtful associations” in his career, like https://en.wikipedia.org/wiki/Karlheinz_Schreiber
The suppressions do the heavy lifting. Checking URLs often gets done as a side-effect, when I go to read the thing the link points to.
