Progressive filters push AI into madness

February 21, 2024

By Tim Worstall

It’s possible that going woke makes you mad – an excellent addition to the idea that “go woke, go broke.” We would hesitate to insist that progressive wokeness means you have to go mad, but there is some interesting evidence:

“On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI’s AI assistant, flooding the r/ChatGPT Reddit sub with reports of the AI assistant “having a stroke,” “going insane,” “rambling,” and “losing it.” OpenAI has acknowledged the problem and is working on a fix, but the experience serves as a high-profile example of how some people perceive malfunctioning large language models, which are designed to mimic humanlike output.”

One British report said that it looked like it had translated LinkedIn into Polari (“Polari” was the slang of mid-twentieth-century Britain’s gay subculture), which is amusing if not quite accurate.

Deeper technical looks at this try to tell us what is happening:

“OpenAI’s ongoing efforts to filter and refine Chat GPT’s responses have had a significant impact on its performance over time. To align the model with OpenAI’s objectives, the organization has continuously refined and trained multiple versions of Chat GPT. As new ways of jailbreaking the system emerged, OpenAI tightened the filters even further, leading to a more restricted and controlled functionality.

“The Stanford study’s findings highlight the consequences of these filtering measures. Chat GPT’s inability to perform certain tasks, such as calculations or answering specific questions, can be attributed to the strict filtering imposed by OpenAI. While these measures are necessary to prevent potential misuse and align with ethical considerations, unintended side effects, such as reduced performance, have become evident.”

Those “ethical considerations” are, of course, entirely in accord with the usual current-day standards of woke ethics. Nothing even vaguely unprogressive about race, gender, history and so on.

The effects, though, are dramatic. The models – and this is not just affecting ChatGPT but other such AIs as well – are losing the ability to do things entirely unrelated to anything woke at all. Like calculating prime numbers. This is over and above their output at times being gibberish and, obviously, entirely useless on any subject that might even be adjacent to current standards of progressive wokeness.
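For context on how trivial the failing task is: deciding whether a number is prime is the kind of short, deterministic calculation any conventional program handles instantly. A minimal sketch in Python (our own illustration, not code from the Stanford study):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test: check divisors up to sqrt(n).
    A simple, deterministic calculation -- the sort of task the
    degraded models were reportedly getting wrong."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# List the primes below 30.
print([n for n in range(2, 30) if is_prime(n)])
# → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

A language model answers such questions by predicting text rather than running arithmetic, which is why its accuracy on them can drift as the model is retrained and filtered.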

Now, it is possible just to laugh at this. But a deeper point must be made: it is necessary to consider all facts before reaching a conclusion. This is true of humans just as much as it is of AI models. But if the AI models are constrained from considering anything non-woke, and their results therefore range from nonsense to useless, then what might that say about us humans and society under the same constraints?


We’ll admit that, at this point, our instinct is merely to point and laugh. But the longer this problem persists in the models, the more we’d insist that we’ve got to think about it more generally. If imposing wokeness upon artificial intelligence makes it go mad, then what are those same impositions doing to the rest of us and our country?

