Media push dangerous censorship model for AI
May 11, 2023
Wired and Ars Technica are cheering the idea that AIs, or artificial intelligence chatbots, will now be censored.
Apparently, this is clearly a very good idea, that a chatbot should not be allowed to say anything that runs against the UN’s rules for whatever is fashionable this week.
The suggestion itself comes from Anthropic, an AI startup. This would encode a "list of guiding AI values (that) draws on UN Declaration of Rights — and Apple's terms of service." Or, in the other description, "a built-in 'constitution' that can instill ethical principles and keep systems from going rogue."
But this is always the problem with such systems: whose set of ethics? The UN is the body that put Saudi Arabia on its women's rights committee. Or, more normally, the one that blames the U.S. for everything.
Yet even that's not the actual problem here. We've noted this pattern before: TikTok adopting UN rules on what may be said about climate change, Amazon using the SPLC's definition of what counts as a hate group, and, more generally, the limits on what ChatGPT and the like may be allowed to say.
That is the real problem: the power to define what may be said is moving into these backrooms. It's not the AI that decides what it may say, but the UN. It's not the TikToker who decides what may be said, but what the UN allows. It's not Amazon that decides what counts as a charity, but the ideologues at the SPLC. The decision about what is allowable is retreating into self-appointed groups over which we out here have no control.
Yes, it's true, the First Amendment only says that Congress shall not limit free speech. But we're still giving up our rights if we allow others to limit speech in these same ways. Worse, we don't know who is doing it, why, or even what they're doing.
For example, the limits on what AI will be able to do will include barring "content that is offensive, insensitive, upsetting, intended to disgust, in exceptionally poor taste, or just plain creepy." But who defines what is offensive? Or insensitive? There are those who insist that all sorts of things are upsetting, and an AI that never offended or upset anyone would be unable to say anything at all.
What they mean, of course, is that it shouldn't be allowed to say anything that is upsetting or offensive to those who write the rules. And the rule-writers will have very different ideas about such things than the rest of us, as their insistence on using the UN's rules shows.
The danger here is that an entire set of morals, ethics, and insistences will be smuggled into the fabric of everything. The problem with that is that morals and ethics do differ, so making one set of them mandatory within the system is not just anti-democratic, it's anti-liberty in the most obvious and basic sense.
This is, in fact, George Orwell's point about Newspeak in the novel 1984: change the language so that no one can say the wrong things, and no one will be able even to think the wrong things. But, there as here, it always comes down to who defines the wrong thing that must not be thought.
We should oppose these attempts to censor this technology just as we've always opposed attempts to censor earlier ones, like print itself. Precisely because we can and will have freedom of thought and action only when we also have freedom of speech, and thus of information.