Truth Goggles: the Enlightenment Dream of Automated Fact Checking

Comment submitted to the TechCrunch article “True Or False? Automatic Fact-Checking Coming To The Web – Complications Follow” by Devin Coldewey, 11/28/2011.

> “the layering of reference and context onto the information you read”.

This already exists, in general form, in the well-tested paradigm of citation and reputation, as it functions, for example, in peer-reviewed literature. Daniel Schultz’s “truth goggles” could be seen as a particular version of this, in which the annotation of the base layer is automated rather than authored, and the citation framework is specifically the fact-check databases PolitiFact and NewsTrust (for now).

If the citation framework were generalized to allow many annotators and reference sources, then I believe we’d be close to the http://Hypothes.is project’s model.

Pure algorithmic assessment of “fact,” reasoning, and valid judgment is at minimum an extremely complex, long-term problem, and is quite possibly unsolvable in some respects. In a human, distributed trust system such as present-day peer review, we trust that communicators are incented by reputation to uphold agreed-upon standards of evidence and judgment. Writers, journal editors, research funders, research institutions, etc. collectively build a system which, ideally, systematically rewards adherence to shared objective standards and ethics. In this model, we don’t necessarily try, or need, to understand how each link in the system performs its complex evaluations; we rely on the fact that the participants are well incented to do it correctly, and are sufficiently cross-monitored to be trusted.

Regardless of peer-review mechanism, we face thorny questions of what constitutes “true” or “factual,” and of how people are actually affected by information. Coldewey says “facts are facts and fiction is fiction,” and I keep hearing versions of this in discussions of fact-checking systems and civic media; but to me it is a rather vast and optimistic supposition. What theories of language, of propaganda, of politics, of media effects, of cognitive science support the view that people become truthful, and rationally deliberate together, if we just put more “factually” true “information” out there? It seems based more on traditional faith in the Enlightenment than on hard evidence of how communication works.

I would like to see more of, and am doing some work on, media analytics and media environments based on empirical evidence and cognitive-science models: what actually causes what effect on readers.

Anyway, I think Schultz’s work is interesting and valuable, especially the distributed / API aspect, and I’m glad to see it covered and to see the rapidly developing conversation around these issues.

Tim McCormick

follow me on Twitter: @mccormicktim

Image credit:  TechCrunch
