
AI Won’t Save Journalism ... But It Might Save Journalists Who Learn to Use It

Journalists who learn to use AI as a tool—not a crutch—will have the edge, says Otherweb founder Alex Fink


Journalism in the age of AI is one human, one laptop and a newsroom quietly evolving. Image created with ChatGPT.


By Alastair Goldfisher
Veteran journalist and creator of The Venture Lens newsletter and The Venture Variety Show podcast. Alastair covers the intersection of AI, startups, and storytelling with over 30 years of experience reporting on venture capital and emerging technologies.

AI isn’t going to save journalism. But used right, it could help preserve what matters most about it: time, accuracy and human focus.

The tension is clear. While tech companies promote AI as the next great leap, many journalists fear we’re watching the slow extinction of the profession. That fear isn’t unfounded.

Major media outlets have already used AI to automate story generation, sometimes with embarrassing results. Just ask Gannett, which paused its AI-written high school sports stories after publishing copy that read like a robot had a quota to meet.

That’s the shortcut mindset, and it misses the point.

Journalism needs leverage, not shortcuts

Of course, journalists aren’t the only ones facing this shift.

In marketing, AI tools are already writing ad copy, personalizing email campaigns and generating strategy briefs. In startups, founders are using AI to shape pitch decks and simulate customer interviews.

The question isn’t whether AI will be used. It’s how smartly and responsibly it gets integrated into journalism and beyond.

Used lazily, AI can churn out unverified, flat content, as we’ve all seen. But used intentionally, it can give journalists a leg up: summarizing hours of transcripts, analyzing datasets, generating questions and surfacing background on hard-to-find historical topics during deadline crunches.

Alex Fink, founder of Otherweb, put it bluntly in a recent AiNews podcast on YouTube: “If a journalist wants to remain a journalist, they have to be ahead of the curve when it comes to adopting the latest and greatest AI tools.” He added that early adopters who tinker and explore how AI can fit into their workflow will be the ones left standing. Everyone else is at risk.

Fink’s point is clear. AI won’t replace journalists. But journalists who use AI strategically will replace those who don’t.

What others are doing

AI is already part of the workflow at some news organizations. The New York Times uses AI to draft headline options, identify news angles and assist with summarizing large data sets. In February, I wrote about this here on my Substack, where I described how entrepreneurs and VCs can better prepare for interviews by understanding how AI shapes the questions that certain journalists ask.

Tools like Echo and semantic search engines are now doing the kind of backend prep that used to eat up hours in the newsroom. This isn’t about turning journalism into prompt engineering. It’s about freeing up time so journalists can better probe, follow up and tell stories with depth and clarity.

Where it goes wrong 

Problems happen when AI is treated as a full-time writer. The LA Times learned that the hard way recently when its new AI tool, called “Insights,” downplayed the history of the Ku Klux Klan. Responding to one of the paper’s own columns, Insights ignored the group’s history of violence and hate and described the KKK as “responding to societal change” and a “product of white Protestant culture.” Public outrage and subscription cancellations followed before Insights was removed. That’s what happens when AI-generated content is published without editing or oversight.

In another example, the BBC recently tested four generative AI tools—OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity—on summarizing articles from its own website. Just over half (51%) of the AI-generated responses contained issues, and nearly one in five introduced factual inaccuracies.

Zach Seward of The New York Times said that putting unreviewed AI copy in front of readers is an abdication of responsibility. “Putting the burden of verification back on the reader would undermine the whole reason you would come to the Times,” Seward told the Substack newsletter Depth Perception in its interview with him about the newspaper’s AI policy last week.

The journalist’s role is to verify. AI can assist, but it can’t be held accountable.

My POV: AI is a new kind of assistant editor

I’ve spent years working with reporters, and now I train founders and investors on how to work with the media. I also advise clients on working with AI. The goal isn’t to eliminate the human element. It’s to make humans sharper.

AI is the new assistant editor. It doesn’t report for me and doesn’t chase down sources, but it can help me polish my drafts, sharpen an angle or review transcripts from my interviews. Journalists who embrace AI tools will find themselves not just surviving but thriving.

It’s not about letting go of the pen, or keyboard. It’s about spending more time writing what matters.

🎙️ Stay informed by subscribing to The Venture Lens for the latest insights and updates from Alastair.