Rafał Truszkowski

I lead engineering teams and write about what I'm figuring out.

The AI Conversation We're Not Having


We’re watching a generation of creators get labeled frauds for using tools that didn’t exist three years ago. And most of the people doing the labeling couldn’t define what those tools actually are.

I’ve been in tech for over a decade. I lead engineering teams. I ship product. I use AI tools daily, and yeah, they make my job easier - take that bias into account. But I’ve also seen the garbage they produce when there’s no taste behind them. I’m not here to sell you on AI. I’m not a doomer convinced the machines are coming for us all. I’m a practitioner who’s tired of watching this conversation go absolutely nowhere.

So let’s try to have an actual one.

What just happened with Expedition 33

If you’re not plugged into gaming, here’s the short version: Clair Obscur: Expedition 33 just set a record at The Game Awards 2025. Nine wins, including Game of the Year. One of the most visually stunning, narratively original games in years. A small French studio punching way above its weight with a budget under $10 million.

Then the internet discovered that some AI-generated placeholder assets had shipped in the release. Posters in the opening area with that telltale warped text. The studio patched them out within days. Their producer, François Meurisse, had mentioned months earlier that they used “some AI, but not much” during development.

The Indie Game Awards stripped their wins. Social media lit up with “AI slop” takes. A game that won Best Art Direction is now being framed as some kind of fraud.

I want to be fair here. Sandfall apparently attested in their Indie Game Awards submission that no gen AI was used. Then Meurisse confirmed AI use the same day the show aired. That’s a credibility problem, not just inconsistent messaging, and it’s fair to criticize. Their director Guillaume Broche said “nothing in the game will come from AI” in December, which doesn’t square with what Meurisse said in July. Disclosure and honesty matter.

But “AI slop”? For placeholder textures that were removed? For a game that earned its accolades through years of genuine craft from a 30-person team? Calling that AI slop isn’t criticism - it’s pattern-matching dressed up as standards.

The Expedition 33 mess is a symptom of a bigger problem.

We can’t even agree on what we’re talking about

Part of why this discourse is so broken is that we’re using words that mean different things to different people.

“AI” is not new. It’s decades old. Your email spam filter is AI. Fraud detection on your credit card is AI. The accessibility tools that help blind users navigate the web are AI. Medical imaging diagnostics that catch tumors early are AI.

Generative AI is a recent, specific subset - large language models, image generators, the stuff that actually makes content.

When someone says “stop AI” or “boycott companies using AI,” they probably mean “stop using generative models to replace human creative labor.” But that’s not what they’re saying. And when your position is linguistically incoherent, it’s hard to build anything useful on top of it.

This might sound like pedantry, but you can’t have a real conversation when half the participants are talking about different things.

Where I agree with the critics

I’m not here to dismiss the concerns. Some of them are legitimate and I share them.

The training data question is real. These models were built on work scraped without consent or compensation. That happened, and I’m not going to defend it. Could have been done differently, should have been. The fact that it’s done doesn’t make it right.

The transition costs are real too. Illustrators, copywriters, voice actors are watching work evaporate in real time. That’s not abstract economic theory - it’s people’s livelihoods. I’m not going to sit here in relative stability and tell anyone their pain is invalid.

And the slop is real. Low-effort AI output is flooding every platform. Generic LinkedIn posts that sound like they were written by a committee of chatbots. SEO garbage clogging up search results. The signal-to-noise ratio is degrading everywhere you look.

These concerns are valid, and anyone who waves them away isn’t being serious.

Where I split from the boycott crowd

Here’s where it gets complicated.

We’ve been here before. Photography displaced portrait painters. Digital audio displaced session musicians. Desktop publishing collapsed entire pre-press departments. Photoshop put retouchers out of work. Every time, the transition was brutal for incumbents. Every time, people said this technology would destroy the craft. Every time, new forms emerged that nobody predicted.

The analogy isn’t perfect. Those tools didn’t learn from other artists’ work without consent. I’m not comparing the ethics of how these tools were built. I’m comparing the disruption pattern and how craft tends to re-emerge on the other side.

What’s different now is speed. Those transitions used to take decades. This one is taking years. That compression is genuinely unprecedented, and it’s fair to feel overwhelmed by it.

But the pattern holds. Once the dust settles, once everyone has access to the same tools, the bar rises. “AI-assisted” becomes the baseline, not the advantage. And we start evaluating craft the way we always have: by whether the output is any good.

I know this is easier to say from where I’m sitting. Not everyone has the same leverage or the same runway. I don’t have a solution for the displacement happening right now - I wish I did. But I don’t think refusing to engage with reality is a solution either. Refusing to touch these tools doesn’t un-train the models. Doesn’t compensate artists retroactively. Doesn’t undo anything.

What it does do is remove your voice from the conversations that are actually shaping what comes next. Legislation, industry standards, compensation models, ethical frameworks. These are being built right now. The people in the room are the ones who understand how the technology actually works.

You can be angry about how we got here and pragmatic about where we go. Those aren’t mutually exclusive.

The detection problem and why it might not matter

Right now, AI detection is unreliable. Disclosure standards don’t exist. Watermarking is inconsistent. That’s a real gap, and I’m not going to pretend future solutions are guaranteed.

But I want to ask a harder question: in many contexts, does it actually matter if you can’t tell?

If a game moves you, if the art direction is stunning, if the narrative lands… does knowing that some placeholder textures were generated change its value? We don’t reject auto-tuned songs if they’re good songs. We don’t dismiss films with CGI if the story works. We don’t care that your iPhone photo was computationally enhanced if it captured a real moment.

What we reject is lazy. We reject the uncanny valley of “close enough.” We reject output that has no taste behind it.

Context matters here. A commissioned portrait isn’t the same as a background texture in a game. A novel has different expectations than a marketing email. I’m not saying provenance never matters. I’m saying the blanket assumption that any AI involvement equals fraud is lazy thinking.

Disclosure matters for honesty. Quality is still evaluated on its own terms.

What the real divide actually is

The conversation keeps getting framed as “AI versus no AI.” That’s the wrong axis.

The real divide is between human judgment driving the tool and the tool replacing human judgment entirely.

Slop isn’t defined by the presence of AI - it’s defined by the absence of craft. A Photoshop collage can be slop; so can a hand-painted canvas; so can a film shot on 35mm with a hundred-million-dollar budget.

The tool has never determined the quality - the human behind it always did.

I’ll admit this gets philosophically murky. If someone prompts an AI a hundred times and picks the best output, is that craft? What if they edit it? What if they don’t? I don’t have a clean answer. I’m not sure anyone does.

But here’s a rough heuristic I keep coming back to: could this person have produced something equivalent without the tool, given enough time and resources? If yes, the tool is augmentation. It’s a force multiplier for existing taste and skill. If no, the tool is doing the creative work, and the human is just pressing buttons.

That line is blurry, but blurry isn’t the same as nonexistent.

What I’m actually asking for

I’m not asking anyone to love AI or even use it. I’m not naive about who profits most from widespread adoption - the value largely accrues to platform owners, not individual creators, and that’s a legitimate concern.

But here’s what I am asking for:

Evaluate output, not just input. A game isn’t slop because of how it was made. It’s slop if the result is soulless. Judge the work. I get that for some people, the process matters as much as the product - that’s a legitimate ethical framework, even if I weigh things differently.

Get the terminology right. If you mean generative AI, say generative AI. If you mean LLMs, say LLMs. Precision isn’t pedantry when the stakes are this high.

Engage with the technology even if you’re critical of it. Especially if you’re critical of it. The people who understand these tools are the ones who will shape how they’re regulated and deployed, and absence isn’t a strategy.

And maybe, just maybe, consider that the person across from you in this debate isn’t evil. The “AI slop” crowd and the “AI everything” crowd are both mostly people trying to figure out a genuinely unprecedented situation. The loudest voices on both sides are drowning out everyone trying to actually think through this.

Where do you draw the line?

I’ve been writing this piece partly because I’m frustrated, but mostly because I genuinely want to know: where is your line?

Is it about disclosure? Final output? The percentage of AI involvement? Whether assets shipped or were just used internally? Whether the company profits? Whether individuals lose work?

I don’t think there’s one right answer. But I think we need better questions than “AI bad” or “AI good.” We need actual standards for evaluating this stuff. We need literacy, not just loyalty tests.

The question was never whether to use these tools - they exist and they’re not going away. The question is whether we’ll develop the taste and judgment to use them well, and the honesty to evaluate the results fairly.

Right now, we’re mostly just yelling past each other.

I’d like to think we can do better.

One more thing: I wrote this with Claude’s help. We went back and forth for a few hours - it pushed back on my arguments, flagged holes in my reasoning, and called out when I was being sloppy or unfair. That’s the point I’m making in this piece. The tool doesn’t replace the thinking - it pressure-tests it. And yes, I’m aware of the irony of disclosing this at the end of an article about disclosure. Consider it practicing what I preach.