I spent a bit of time with this NYT “fact-check,” and what I found is worth sharing.

The underlying conceit is that the author brings no preconceptions whatsoever, so her function is purely as a conduit for claims made by others. But that pose requires throwing the baby out with the bathwater, doesn’t it? Instead of contextual and experiential knowledge, which would lead to an answer like “SRSLY GTFO,” we get a series of discrete claims that need to be reconciled. The author tries to do that by implicitly weighting or scoring their different levels of authority (see the sketch after the list):

  • “Marxism refers to…” looks like a Wikipedia / ChatGPT–style statement, but note well: it doesn’t make a factual statement about what Marxism is; it makes a linguistic statement about what the word refers to. SCORE: high.

  • “Ms. Harris’s campaign has described…”: Again, a primarily linguistic claim, but this time with an attribution to a source that’s cast as necessarily partisan. SCORE: low.

  • “She has not proposed…” Yet another way of relating factual and linguistic claims, but again primarily linguistic: basically, “we can find no record of her having advocated for [Wikipedia / ChatGPT–style summary].” SCORE: medium.

  • “And she has received…”: yet again more linguistic than factual, but with a new twist: an appeal to the authority of CEOs. As if CEOs are experts on Marxism?! Cool story, sis. 😹 Nevertheless, SCORE: high.
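
Taken together, the list above describes a mechanical procedure: collect discrete claims, attach an implicit authority weight to each, and reconcile. Here is a minimal sketch of that loop in Python. Everything in it is hypothetical: the Claim structure, the weights, and the reconcile() function are stand-ins I’m supplying for illustration, not anything drawn from the NYT piece or a real chatbot pipeline.

    # Hypothetical sketch of the "discrete claims, implicitly scored" pattern.
    # The claims, authority weights, and reconciliation rule are invented for
    # illustration; none of this comes from the NYT piece or any real system.
    from dataclasses import dataclass

    @dataclass
    class Claim:
        text: str         # the quoted assertion
        kind: str         # "linguistic" vs. "factual"
        source: str       # whose authority is being appealed to
        authority: float  # the implicit weight assigned, 0.0 to 1.0

    claims = [
        Claim("Marxism refers to...", "linguistic", "reference-style summary", 0.9),
        Claim("Ms. Harris's campaign has described...", "linguistic", "partisan source", 0.2),
        Claim("She has not proposed...", "linguistic", "absence of record", 0.5),
        Claim("And she has received...", "linguistic", "CEOs", 0.9),
    ]

    def reconcile(claims: list[Claim]) -> str:
        """Average the authority weights and emit a verdict.

        Note what never enters the loop: contextual or experiential
        knowledge. The output is purely a function of who said what
        and how much weight each source is implicitly given.
        """
        avg = sum(c.authority for c in claims) / len(claims)
        return "rates the claim false" if avg > 0.5 else "cannot rate the claim"

    print(reconcile(claims))  # -> "rates the claim false"

The point of the sketch is the shape of the computation, not the particular numbers: swap the weights and you swap the verdict, which is exactly the chatbot-like property at issue.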

There is no functional difference between how Qiu constructed this answer and how an AI chatbot would do it. None.

That might seem like a dig at the author, but in the grand scheme of things she’s just a bystander. The larger issue, the one that matters, is how we can see AI-ish assumptions and approaches permeating journalistic practice, regardless of whether journalists are actually relying on the technology.