from the I’m-sorry-I-can’t-do-that,-Dave dept
Late last year we wrote about how LA Times billionaire owner Patrick Soon-Shiong confidently announced that he was going to use AI to display “artificial intelligence-generated ratings” of news content, while also providing “AI-generated lists of alternative political views on that issue” under each article. After he got done firing a lot of longstanding LA Times human staffers, of course.
As we noted at the time, Soon-Shiong’s gambit was a silly mess for many reasons.
One, a BBC study recently found that LLMs can’t even generate basic news story synopses with any degree of reliability. Two, Soon-Shiong is pushing the feature without review from humans (whom he fired). Three, the tool will inevitably reflect the biases of ownership, which in this case is a Trump-supporting billionaire keen to assign “both sides!” false equivalency on issues like clean air and basic human rights.
The Times’ new “Insights” tool went live this week with a public letter from Soon-Shiong about its purported purpose:
“We are also releasing Insights, an AI-driven feature that will appear on some Voices content. The purpose of Insights is to offer readers an instantly accessible way to see a wide range of different AI-enabled perspectives alongside the positions presented in the article. I believe providing more varied viewpoints supports our journalistic mission and will help readers navigate the issues facing this nation.”
Unsurprisingly, it didn’t take long for the whole experiment to backfire.
After the LA Times published a column by Gustavo Arellano suggesting that Anaheim, California should not forget its historic ties to the KKK and white supremacy, the LA Times’ shiny new AI system tried to “well, akshually” the story.
Yeah, whoops a daisy. That AI-generated response has since been deleted by human editors.
If you’re new to American journalism, the U.S. press already broadly suffers from what NYU journalism professor Jay Rosen calls the “view from nowhere,” or the false belief that every issue has multiple, conflicting sides that must all be treated equally. It’s driven by a lust to maximize ad engagement and a fear of offending readers (or sources, or event sponsors) by stating plainly that some things are just inherently false.
If you’re too pointed about the truth, you might lose a big chunk of ad-clicking readership. If you’re too pointed about the truth, you might alienate potential sources. If you’re too pointed about the truth, you might upset deep-pocketed companies, event sponsors, advertisers, or those in power. So what you often get is a sort of feckless mush that looks like journalism, but is increasingly hollow.
As a result, radical right wing authoritarianism has been normalized. Pollution-caused climate destabilization has been downplayed. Corporations and CEOs are allowed to lie without being challenged by experts. Overt racism is soft-pedaled. You can see examples of this particular disease everywhere you look in modern U.S. journalism (including Soon-Shiong’s recent decision to stop endorsing presidential candidates while America stared down the barrel of destructive authoritarianism).
This sort of feckless truth aversion is what’s destroying consumer trust in journalism, but the kind of engagement-chasing affluent men in positions of power at places like the LA Times, Semafor, or Politico can’t (or won’t) see this reality because it runs in stark contrast to their financial interests.
Letting journalism consolidate in the hands of big companies and a handful of rich (usually white) men results in a widespread, center-right, corporatist bias that media owners desperately want to pretend is the gold standard for objectivity. Countless human editors at major U.S. media companies are routinely oblivious to this reality (or hired specifically for their willingness to ignore it).
Since AI is mostly a half-baked simulacrum of knowledge, it can’t “understand” much of anything, including modern media bias. There’s no possible way large language models could analyze the endless potential ideological or financial conflicts of interest running through any given article and magically fix them with the wave of a wand. The entire premise is delusional.
Most major mainstream media moguls primarily see AI as a way to cut corners, cut costs, and undermine organized labor. Whether the technology actually works all that well is usually an afterthought to the kind of fail-upward brunchlords that dominate management at major media outlets.
The LA Times’ “Insights” automation is also a glorified sales pitch for Soon-Shiong’s software, since he’s a heavy investor in medical-sector automation. So of course he’s personally, deeply invested in the idea that these technologies are far more competent and efficient than they actually are. That’s the sales pitch.
Which is amusing given that one of the software’s first efforts was to generate a lengthy defense of AI on the heels of an LA Times column warning about the potential dangers of unregulated AI:
“Responding to the human writers, the AI tool argued not only that AI ‘democratizes historical storytelling,’ but also that ‘technological advancements can coexist with safeguards’ and that ‘regulation risks stifling innovation.’”
The pretense that these LLMs won’t reflect the biases of ownership is delusional. Even if they worked properly and weren’t a giant energy suck, they’re not being implemented to mandate genuine objectivity; they’re being implemented to validate affluent male ownership’s perception of genuine objectivity. That’s inevitably going to result in even more center-right, pro-corporate, truth-averse pseudo-journalism.
Entire companies are dedicated to this idea of analyzing news websites and rating their reliability and trustworthiness, and most of them (like NewsGuard) fail constantly, routinely labeling propaganda outlets like Fox News as credible. They fail, in part, because being truly honest about any of this (especially the increasingly radical nature of the U.S. right wing) isn’t good for business.
We’re seeing in real time how rich, right wing men are buying up newsrooms and hollowing them out like pumpkins, replacing real journalism with a feckless mush of ad-engagement chasing infotainment and gossip simulacrum peppered with right wing propaganda. It’s not at all subtle, and was more apparent than ever during the last election cycle.
The idea that half-cooked, fabulism-prone large language models will somehow make this better is laughable, but LA Times ownership, with its financial conflicts of interest and abundant personal biases, is obviously very excited to pretend otherwise.
Filed Under: ai, automation, bias, insights, journalism, llms, media, patrick soon-shiong, regulation
Companies: la times