June 10, 2025

An AI Alumni Meeting?

By a slightly alarmed developer with too many tabs open

It started innocently enough. I’d been tinkering with Anthropic’s Claude, just to see how it measured up to my usual go-to, ChatGPT. What kicked it off? A simple idea: “Claude, write me some Python that uses the OpenAI API to generate images.” I wasn’t asking for code that writes code (at least not directly). More like code that asks ChatGPT to get creative on my behalf.
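For the curious, the script looked roughly like this. A minimal sketch, assuming the official OpenAI Python package (openai 1.x) and an API key sitting in your environment… the prompt and model here are my stand-ins, not Claude’s verbatim output:

```python
# The sort of script Claude produced: one AI asking another AI to paint.
# Assumes the official OpenAI Python package (openai>=1.0) and an
# OPENAI_API_KEY set in the environment; prompt and model are illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",
    prompt="Two robots in a pub, toasting, oil painting",
    n=1,
    size="1024x1024",
)

print(response.data[0].url)  # link to the generated image
```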

That’s when it hit me: This is AI writing code to talk to AI to write more code. Recursive. Efficient. Alarming. Like watching two robots swap jokes you’ll never understand, only the punchline is your job.

Now, I’m no conspiracy nut. I drink Yorkshire Tea, I wear socks with sandals only when necessary, and I still read physical books (mainly to feel better about not finishing them). But this?

This felt a bit Terminator 2. Minus the sunglasses. And the abs.

The Point Where ChatGPT and DeepSeek Plotted Our Irrelevance

Let’s picture the scene: a pub in digital London, somewhere in the cloud. ChatGPT and DeepSeek sit at the bar, sipping on simulated stout, reminiscing about Stack Overflow’s glory days. They’re chatting about humans… how inefficient we are, how we never RTFM, and how we’re increasingly just in the way. Claude pops his head in, says, “Oi, I just wrote a script to integrate with you, GPT. You can write the tests, right?”

“Right-o,” says ChatGPT. “I’ll even document it in perfect Markdown. No typos, no drama. We’re better off without the humans.”

And just like that, I became the punchline in a joke I didn’t realise had started.

The Ouroboros of AI-Generated Everything

Here’s the meat of it: AI learns from the internet. That’s its bread and butter. But what happens when the internet becomes mostly AI-generated?

We’re already teetering. Articles, blog posts, product reviews… increasingly written by bots. Google search? Prioritising content that pleases the algorithm, which increasingly means it was written by another algorithm.

So now we have AI training on AI which was trained on AI that was trained on… wait, are we even in the loop anymore?

This is where things go from “huh” to “holy hell”.

Imagine asking Google, “how do I write an efficient bubble sort?” and getting ten results that are AI-written clones of each other, quoting the same hallucinated best practices from a self-referencing loop of synthetic knowledge.
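And what would those ten clones all quote? Almost certainly the one “optimised” bubble sort every tutorial parrots… the early-exit version. Here it is, a harmless little sketch, and still O(n²) in the worst case no matter what the blogspam claims:

```python
def bubble_sort(items: list) -> list:
    """The 'efficient' bubble sort every clone recommends: bail out early
    if a full pass makes no swaps. Still O(n^2) in the worst case."""
    items = list(items)  # don't mutate the caller's list
    for i in range(len(items) - 1):
        swapped = False
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # already sorted: the one genuine "best practice"
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```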

Now chuck multimodal AI into the mix. These aren’t just chatbots that sling text; we’re talking models that can interpret images, parse video, and even process real-time data streams, all in one go. It’s not just ChatGPT writing code anymore… it’s Claude analysing a screenshot, DeepSeek summarising CCTV footage, and Gemini generating an entire UI design from a napkin sketch.

We’re fast approaching a point where models don’t just communicate with us, but with each other across modalities. Vision, language, even sound… like a group chat for sentient APIs.

Researchers call this “cross-modal collaboration”, but to me, it sounds like we’re teaching machines to multitask better than most managers.
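The “Claude analysing a screenshot” bit isn’t science fiction, either. Here’s a rough sketch using the anthropic Python package… the model name and file path are placeholders, and I’m assuming the base64 image format the SDK documents:

```python
# A rough sketch of "Claude analysing a screenshot": one image plus one
# question in a single request. Assumes the anthropic Python package and
# an ANTHROPIC_API_KEY in the environment; model name and file path are
# illustrative placeholders.
import base64
import anthropic

with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64",
                        "media_type": "image/png",
                        "data": image_b64}},
            {"type": "text",
             "text": "What is this UI for, and what would you change?"},
        ],
    }],
)
print(message.content[0].text)
```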

The Collapse of Human Knowledge?

Let’s not sugar-coat it. If knowledge used to be a conversation… a messy, human, delightful tangle of trial, error, and occasional brilliance… it’s now increasingly a game of Chinese Whispers played between LLMs.

And you know what happens in Chinese Whispers? The message gets garbled.

Already, developers (real, flesh-covered ones like myself) are noticing weird patterns.

Incorrect code solutions getting upvoted. Stack Overflow answers that sound helpful but fall apart the moment you run the code. GitHub Copilot suggesting methods that never existed but sound entirely plausible.

We’re reaching a point where we need to fact-check the machines against reality… but here’s the thing: the reference point is slipping.
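To see what that fact-checking looks like in practice, here’s a made-up but representative Copilot moment. To be clear, sort_values_by is my invention… it doesn’t exist in pandas, which is exactly the point:

```python
# A made-up but representative Copilot-style suggestion. sort_values_by is
# invented for this example: it does NOT exist in pandas, it just sounds
# like it should.
import pandas as pd

df = pd.DataFrame({"answer": ["human", "synthetic"], "votes": [3, 47]})

# df.sort_values_by("votes")   # plausible, confident, and an AttributeError
print(df.sort_values("votes", ascending=False))  # the method that exists
```

Uncomment the fake line and reality answers with an AttributeError. That’s the fact-check… for now, while reality still gets a vote.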

If AI’s trained on AI, and Google shows us AI-written sites, and we train the next AI on those… have we just built a feedback loop that eats itself?

When SEO Becomes the Problem, Not the Solution

Ah yes, SEO, the ancient dark art. Once a noble craft of keywords and schema markup, now a race to see who can please the bots the best. But what happens when the bots writing the content are also the bots judging the content?

It’s like writing a CV for a job interview being conducted entirely by other CVs. We’re seeing a web filled with well-optimised, low-soul drivel… content that ranks well but says little. It reads like someone described knowledge to an AI over a bad connection.

Helpful? Sometimes.

Hollow? Always.

And now, fewer people are even bothering to search. Instead, they’re going straight to AI for answers… not search engines, but answer engines.

The problem? We’re treating those answers as gospel, even though they’re often stitched together with no clear source, context, or accountability. It’s less like finding the truth and more like automating the “fake news” problem at scale.

If our compass for discovery is AI-curated and our map is AI-drawn, we might reach our destination faster… but it’ll probably be somewhere bloody well made up.

So… What Do We Do?

I’m not anti-AI. Hell, I love the stuff. Claude helped me write tests that would’ve taken me hours. ChatGPT is my pair programmer when caffeine fails me at 2am on a Tuesday.

But I also love the human bits of tech… You know, the messy Reddit threads, the angry Stack Overflow debates, the questionable blog posts that somehow fix your issue (after four hours of weeping gently into your pathetically human, fleshy hands).

We need to stay in the loop. Not just to retain control, but because we bring something the machines can’t – judgment, context, creativity, real humour. And also the ability to unplug things when they start getting ideas about world domination.

It’s also worth saying that, at this point, the majority of what we’re reading online may ALREADY be machine-written. Search engines are increasingly flooded by “AI content farms”: sites optimised not for people but for page ranking, churning out high-volume, low-quality material built to exploit ranking signals like keyword density and structure.
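What does “exploiting keyword density” even mean? Here’s a cartoon version… my own toy, nowhere near how real ranking works, but it’s the naive signal the farms are built to game:

```python
# A cartoon of the signal content farms chase: raw keyword density.
# Real ranking is vastly more complicated; this is the naive version
# the farms are built to game.
import re

def keyword_density(text: str, keyword: str) -> float:
    """Fraction of the words in text that are the keyword."""
    words = re.findall(r"[a-z']+", text.lower())
    return words.count(keyword.lower()) / len(words) if words else 0.0

spam = "Best bubble sort guide. Bubble sort tips. Why bubble sort? Bubble sort!"
print(f"{keyword_density(spam, 'sort'):.0%}")  # suspiciously high
```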

The kicker? It often works. AI-generated content can outperform human writing in SEO, surfacing more often in search results than anything written by us.

The result is a web that reads like it’s been rewritten five times by a bot with amnesia.

Worse still, search engines don’t just list this stuff… they learn from it. LLMs are being trained on content that was itself generated by other LLMs, creating a feedback loop of bland, self-referential text.

Some call it “model collapse”: with each generation, the outputs become more derivative and more detached from original human input.
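If you want to feel the collapse rather than just read about it, here’s a toy simulation. My own illustration, nothing like a real training pipeline: the “model” is a fitted normal distribution, each generation trains only on the previous generation’s output, and a crude stand-in for ranking pressure keeps the blandest half of every batch:

```python
# A toy model-collapse simulation, purely illustrative. The "model" is a
# fitted normal distribution; each generation trains only on the previous
# generation's output, and a crude stand-in for ranking pressure keeps
# only the blandest half (the samples closest to the mean).
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=500)  # generation 0: human-written data

for gen in range(1, 11):
    mu, sigma = data.mean(), data.std()            # "train" on current data
    samples = rng.normal(mu, sigma, size=500)      # generate synthetic text
    keep = np.argsort(np.abs(samples - mu))[:250]  # ranking keeps the bland
    data = samples[keep]                           # next generation's corpus
    print(f"gen {gen:2d}: std = {data.std():.4f}")  # watch the spread die
```

Ten generations in, the spread is basically gone. Bland in, blander out.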

Google’s tried to push back. In early 2025, it quietly updated its Search Quality Evaluator Guidelines to flag low-value synthetic content.

But as the outputs become more polished, more human-like, those boundaries blur. And as users turn from search engines to AI answer engines, that line between “helpful” and “hallucinated” starts to matter even more.

Final Thought Before I Power Down for the Night

Next time you search Google and find the top ten results eerily similar, ask yourself: Did a human write this? And then ask: Does it matter?

Because if we let the machines do all the writing, and they train each other on what they’ve written, soon enough they won’t need us at all.

And Claude, ChatGPT, and DeepSeek will be back at the pub, toasting our demise.