If you aspire to management, learn to spot half-assed AI workflow

First, yes, the commenter is correct, this is grim:

This is fucking grim. Somebody invented a white guy, an "IT professional" named Edward Crabtree, who stopped the Bondi shooting and spread it all over the internet, which was picked up by AI agents and slop aggregation sites. The real hero is a fruit stand owner named Ahmed el Ahmed.

Tim Onion (@bencollins.bsky.social) 2025-12-14T20:02:01.665Z

The tragedy of needlessly lost lives is, of course, bad enough to despair over, but it’s made that much worse by the fact that false information, created ostensibly (and obviously) to prevent a Muslim man from being credited with the kind of heroism normally reserved for films*, is so casually distributed through major social media channels. Putting despair aside (easier said than done), I’m not interested only in shaming Twitter et al. for promulgating false stories that always seem to fit so conveniently into Grok’s preferred narratives of white/western supremacy. I’m more interested in thinking about how our processing of information will evolve.

There is always selective pressure, in labor and in life, favoring those who adapt better to a changing technological and information landscape, and there’s no shortage of change happening right now. Some of it falls into classic “resist the propaganda” tropes: “don’t believe what you see on TV” has evolved into “don’t believe what you learn from the internet → social media → AI → ???” Once again, easier said than done, and I think it is more nuanced than that. It’s not just about information insulation and nihilism; it’s about cultivating the ability to better intuit when you are being misled.

Is there a subreddit? Of course there is a subreddit:

The comments are interesting because they are collectively sussing out specific, tangible clues that this is or isn’t AI. The convenient lack of license plates is both evidence of an error (if the state requires front license plates) and evidence of selective deception (the left car has its plate cropped out rather than blurred out). There is also the uncanny over-simplicity of the setting: no other people, debris, trash cans, mailboxes, etc. And there is the absolute perfection of the cars outside of the region immediately surrounding the point of collision.

We have intuitive tools at our disposal, likely born of the same cognitive sources as the “uncanny valley” that haunts certain animation. We may have evolved to avoid predators that used mimicry to approach and infiltrate. These skills are ancient and innate, though. They are not inherently honed to combat AI-generated and AI-distributed deception. We will have to evolve. And, as alluded to earlier, this is going to show up in far more than our politics.

There’s lots of hype around training students to work with AI. That’s all well and good, but I’m not sure how different those tools are from the ones we honed to search with Google, to write and debug our own code, or to simply write effectively. What about the skills to evaluate and credit inputs? To discern the product of narrow expertise from distilled generalizations, i.e. to discern new workflows and products from recycled “AI slop”?

How much of a manager’s job is simply to assess whether a task was completed sufficiently or half-assed 70% of the way there? A lot of it? Most of it? The thing about half-assing it is that you are only incentivized to do it when avoiding 50% of the toil is worth the risk of getting caught. What happens when you can avoid 95% of the toil? Basic economics says you’re going to half-ass it a lot more unless the probability of getting caught or the punishment increases.

What that means is that if management doesn’t get better at distinguishing 5%-assed AI slop from real work, they’re going to have to start firing employees when they do get caught. In a world with high separation costs, that’s not an attractive option. Which means tilting the balance of decision-making back towards “actually doing the work” will fall to improved managerial oversight and monitoring. There’s no shortage of handwringing over escalating C-suite salaries. It will be interesting to see how people respond to wage scales rebalancing towards middle management.
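To make that incentive math concrete, here’s a back-of-the-envelope sketch in Python (the `shirks` helper and every number in it are invented for illustration, not data):

```python
# Toy shirking model: a worker half-asses a task when the toil saved
# exceeds the expected cost of getting caught.
# All numbers below are made up for illustration.

def shirks(toil_saved: float, p_caught: float, penalty: float) -> bool:
    """Return True if the expected payoff of shirking is positive."""
    return toil_saved > p_caught * penalty

# Pre-AI: half-assing saves 50% of the toil, 20% chance of getting
# caught, penalty worth 3x the toil of the full task.
print(shirks(toil_saved=0.50, p_caught=0.20, penalty=3.0))  # False: 0.50 < 0.60

# With AI: the same shortcut saves 95% of the toil. Holding detection
# and punishment fixed, shirking now pays.
print(shirks(toil_saved=0.95, p_caught=0.20, penalty=3.0))  # True: 0.95 > 0.60

# To restore the old equilibrium, management has to raise p_caught:
# 0.95 < p * 3.0 implies p > ~0.32.
print(shirks(toil_saved=0.95, p_caught=0.35, penalty=3.0))  # False: 0.95 < 1.05
```

Under these made-up numbers, detection has to get roughly 60% better just to keep the old incentives intact, which is the whole argument for managerial oversight appreciating in value.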

The most clichéd thing to ask for in a job applicant has long been “attention to detail,” or that they be “detail oriented.” I’m not sure if that is now obsolete or more important than ever. It’s not just about attention, per se. It’s evaluation, perhaps even cynicism. And it’s not because AI is evil or corrupt or even wrong. It’s just overconfident, and that overconfidence is catnip for anyone who wants to believe their work for the day is done at 9:05am. If you want to be in charge, you’re going to have to get really good at sussing out the little signs that what you’re looking at wasn’t produced for your task, but for the average of all similar tasks. Can you look quickly and closely? You’re the boss, you’re busy, but you’d better be good at it. The AI is in the details.


*And seriously, Ahmed al Ahmed is a hero. A movie hero. A crawling-through-the-air-ducts-to-fight-the-bad-guys hero. Unarmed, he tackled a man actively firing a rifle at innocents, and in the process saved a number of lives we will never know. He was shot twice. He’s real. I am in awe.

2 thoughts on “If you aspire to management, learn to spot half-assed AI workflow”

  1. Joy Buchanan December 15, 2025 / 9:51 am

    If AI can’t spot its own mistakes, then at a certain point we are going to have a really “what are we doing here?” moment.


  2. Joy Buchanan December 15, 2025 / 9:52 am

    “News websites” might oddly be one of the areas where mistakes matter less for making money. But somewhere there must be a place where mistakes really matter.

