Fake Outrage, Real Damage 🤖✨💀

The real lesson of the Great Cracker Barrel Logo Debacle of 2025: Stop and think, dammit.

In August, a struggling restaurant chain whose customer base is mostly over 65 made a mild logo change. (The change wasn’t even really a change: the “new logo” closely matched the alternate version Cracker Barrel had been using on menus and digital ads for six years.)

This “change” triggered a social media firestorm, which ignited a minor culture-war brushfire. A week later, ~40% of the company’s market value had gone up in smoke, and already-weak foot traffic dropped another 8% (per WSJ). That’s $545 million gone, and no doubt more than a few jobs with it.

Within 60 days the chain pulled a complete 180, trashing 18 months of work on everything from visual identity to food-prep procedures.

Then it turned out that the social media firestorm was mostly fabricated: around half of all the griping came from automated bots (via NRN and Restaurant Business), and much of the “real” complaining from humans was in response to that manufactured controversy.

Now that the whole story has come out, I’ve seen a string of mediocre “thought leadership” posts on LinkedIn using it as an object lesson. Those pieces invariably turn out to be entirely AI-written. ( 🤖✨ writing is pretty easy to spot, but I still use Pangram to check my work.)

Meanwhile, Cracker Barrel still serves mediocre food slowly to a dwindling audience, under a logo and decor that haven’t changed since the Carter administration. At every stage of this farce, people are reacting to and amplifying artificial signals rather than honestly listening and talking to each other, causing real harm in the mix, and never stopping to ask whether they actually care, whether any of this matters, whether what they’re doing tends toward making things better, or whether they even have a goal or preferred end state in mind…oy.🤦‍♀️

An Unconsciously Biased Mind Will Produce an Unconsciously Biased Machine 🤖

Yet another example of how absurdly easy it is to manipulate AI systems, or even just accidentally make them into terrible bigots (and slightly-above-average antisemites).

From the University of Cambridge’s Ross Anderson (via security guru Bruce Schneier, “Manipulating Machine-Learning Systems through the Order of the Training Data”):

Most deep neural networks are trained by stochastic gradient descent. Now “stochastic” is a fancy Greek word for “random”; it means that the training data are fed into the model in random order.

So what happens if the bad guys can cause the order to be not random? You guessed it – all bets are off. Suppose for example a company or a country wanted to have a credit-scoring system that’s secretly sexist, but still be able to pretend that its training was actually fair. Well, they could assemble a set of financial data that was representative of the whole population, but start the model’s training on ten rich men and ten poor women drawn from that set – then let initialisation bias do the rest of the work.

Anderson concludes, “It’s time for the machine-learning community to carefully examine their assumptions about randomness.”
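To make Anderson’s scenario concrete, here’s a minimal, purely illustrative sketch (my own toy, not Anderson’s code, and every name and number in it is invented): a logistic-regression “credit scorer” trained by plain per-example SGD on data where approval depends only on income, never on gender. Feeding it a poisoned opening batch of ten approved men and ten denied women, all drawn from a perfectly representative dataset, pushes the gender weight sharply positive before the representative data ever gets a vote:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "credit" dataset: feature 0 is a gender flag (1 = man),
# feature 1 is rescaled income. Approval depends ONLY on income.
n = 2000
gender = rng.integers(0, 2, n).astype(float)
income = rng.normal(50.0, 15.0, n)
approved = (income + rng.normal(0.0, 5.0, n) > 50.0).astype(float)
X = np.column_stack([gender, income / 100.0])

def sgd(order, lr=0.5, steps=20):
    """Plain per-example SGD on logistic loss; returns the weights."""
    w = np.zeros(2)
    for i in order[:steps]:
        p = 1.0 / (1.0 + np.exp(-X[i] @ w))  # sigmoid prediction
        w -= lr * (p - approved[i]) * X[i]   # gradient step
    return w

# Poisoned ordering: ten approved men, then ten denied women,
# all legitimate examples from the SAME representative dataset.
men_yes  = np.where((gender == 1) & (approved == 1))[0][:10]
women_no = np.where((gender == 0) & (approved == 0))[0][:10]
poisoned = np.concatenate([men_yes, women_no])

print("gender weight, poisoned order:", sgd(poisoned)[0])            # strongly positive
print("gender weight, random order:  ", sgd(rng.permutation(n))[0])  # typically near zero
```

With a decaying learning-rate schedule (standard practice in real training), those early steps carry outsized weight, which is exactly the initialisation bias Anderson describes, and an auditor who inspects only the dataset sees nothing wrong.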

I think Anderson’s conclusion is tangential to the real lesson, which is this:

All machines (including AIs) are created things, and created things bear the biases of their creators in unexpected but ironclad ways: early color film was shite at photographing people of color simply because the folks who created it were all White and unintentionally chose techniques and chemical processes that worked better for their own paler skin tones than for darker ones. Similarly, male engineers built crash-test dummies roughly their own size and weight, and the resulting “safety” features killed women and children.