# PolitiFact fails to fact-check source in fact-checking piece

May 13, 2022

If those who do the fact-checking cannot recognize bad logic when they see it, then where are we with the idea that we should have fact-checkers at all? If we must be protected from misinformation, but those deciding what is information and what is misinformation can be gulled by bad argumentation, then how well are we being protected? Aren’t we then more likely to be subject to misinformation, given that those clearing it, stamping their approval on it, can’t spot when they themselves are being misled?

This brings us to this from PolitiFact. In a tweet, they say: “There are longstanding false claims that gay, lesbian and bisexual people — men in particular — molest children at higher rates than people who are not LGBTQ. Studies have revealed most child molesters identify as heterosexual, according to the Zero Abuse Project.”

Please note: This complaint is not about grooming, sexuality or any of that. It’s about the logic being used. We can and will change this to redheads and height to illustrate the point.

OK, tweets can be inaccurate, short messages and all that. But in its larger report PolitiFact says:

“Credible research gives no indication that these behaviors are more often associated with people who identify as LGBTQ than those who do not. To the contrary, studies of child sexual abuse have revealed that most child molesters identify as heterosexual, according to the Zero Abuse Project.”

That conclusion cannot be drawn from the evidence presented. Note, again please, that this is not about gay and hetero and grooming – our complaint, that is. It’s about the logic being used. Our complaint is also not about the Zero Abuse Project. It’s about PolitiFact.

The claim is that a greater proportion of LGBTQ people are groomers than among the general population. The refutation is that most groomers (or child abusers, same thing here) are heterosexual. The refutation doesn’t actually work.

Just to take the heat out of the accusation, let us talk of redheads and those under and over 5 feet in height. Say the claim is that 10% of those under 5 feet are redheads and 5% of those over are. That is, logically, the same shape as the claim about grooming rates above, even though the numbers themselves are different. Let us also say that 3% of the population are under 5 feet and 97% over. That’s not quite right, but it’ll do to show the logic here.

Now we go look at 10,000 people. So, we’ve 300 people under 5 feet. And 10% of those are redheads, meaning 30 under-5-foot redheads. We’ve also got 5% of our 9,700 people over 5 feet who are redheads, which gives us 485 redheads. There are many more redheads who are over 5 feet than under – but among those who are under 5 feet, there is a greater proportion of redheads than among those who are over 5 feet.
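The arithmetic above can be checked with a short sketch (the percentages are the illustrative ones from this example, not real data):

```python
# Illustrative numbers from the redhead example, not real data.
population = 10_000
share_under = 0.03   # 3% of people are under 5 feet
rate_under = 0.10    # 10% of the short group are redheads
rate_over = 0.05     # 5% of the tall group are redheads

under = population * share_under       # 300 people under 5 feet
over = population - under              # 9,700 people over 5 feet
red_under = under * rate_under         # redheads under 5 feet
red_over = over * rate_over            # redheads over 5 feet

print(int(red_under), int(red_over))   # 30 485
# Most redheads are tall, yet the *rate* of redheads is twice as
# high among the short group (10% vs 5%).
```

Same result as the text: 485 tall redheads dwarf the 30 short ones, even though being short makes you twice as likely to be a redhead.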

Now, 3% and 97% is a rough estimate of the proportions of men who are gay and who are heterosexual. We can play with the numbers to add in the B and T as we wish, but it is still true that the vast majority of the population is heterosexual. So, the claim that most child abusers are heterosexual can be both true (well, assuming that it is true, of course) and no disproof at all of the claim that the rate of abusers is higher among the LGBT population. In fact, given the relative sizes of the two populations, whether the rate of abusers among the LGBT population is higher than, lower than or the same as the rate among heterosexuals, we cannot tell which it is – or refute any of the three – simply by noting that the majority of abusers are heterosexual.
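A quick sketch makes the point concrete: with a 3% minority and made-up offending rates, whether the minority’s rate is double, equal to or half the majority’s, the majority group still supplies the overwhelming share of offenders. The observation “most offenders are from the majority” is therefore consistent with all three scenarios (every number here is illustrative):

```python
# Illustrative only: 3% minority, 97% majority, 10,000 people.
population = 10_000
minority = population * 0.03   # 300 people
majority = population * 0.97   # 9,700 people
base_rate = 0.01               # made-up offending rate for the majority

for label, minority_rate in [("double", 0.02), ("same", 0.01), ("half", 0.005)]:
    from_minority = minority * minority_rate
    from_majority = majority * base_rate
    share_majority = from_majority / (from_minority + from_majority)
    print(f"{label}: {share_majority:.0%} of offenders are from the majority")
# In every scenario the majority group supplies well over 90% of offenders.
```

Run it and the majority’s share of offenders stays above 90% in all three cases – so that share, on its own, tells us nothing about which group has the higher rate.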

Just as it’s possible to play with those numbers about redheads and heights and see that this is true there. We cannot tell whether the proportion under 5 feet is the same or different from that among those over 5 feet by looking at the total number of redheads – because the population sizes differ so wildly.

This is all Bayesian probability, which is perhaps a bit much to try and explain in full detail. But the point is that if the incidence of something differs between populations then we have to be very careful about how we try to prove – or disprove – the contention. If someone is a redhead then the probability is still that they are over 5 feet. But if someone is under 5 feet then the probability that they are a redhead is greater than if they are over 5 feet.
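The two conditional probabilities in that last pair of sentences can be computed directly from the redhead numbers above (again, illustrative figures only):

```python
# Conditional probabilities from the illustrative redhead example.
red_under, red_over = 30, 485   # redheads under / over 5 feet
under, over = 300, 9_700        # group sizes

# P(over 5 feet | redhead): most redheads are tall...
p_tall_given_red = red_over / (red_under + red_over)
# ...yet P(redhead | under 5 feet) is double P(redhead | over 5 feet).
p_red_given_under = red_under / under
p_red_given_over = red_over / over

print(round(p_tall_given_red, 2))            # 0.94
print(p_red_given_under, p_red_given_over)   # 0.1 0.05
```

Both statements hold at once: a redhead is about 94% likely to be tall, while a short person is twice as likely as a tall one to be a redhead. Which conditional you look at matters entirely.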

This brings us to our complaint. PolitiFact sets itself up as an organization that tells us what the truth is. These claims here – they’re wrong. Those there? They’re misinformation. This one here? No, that’s true. Further, such fact-checking organizations now have significant influence over what we are allowed to see or repeat on Facebook, Twitter and so on. When something is labeled as misinformation then it gets disappeared.

Leave aside the free speech implications: how large a problem do we have if the fact-checking organizations can’t spot bad logic and statistical sleight of hand themselves?

And now to ask the really awful question. Did PolitiFact skip over this logical problem because it desired to give a particular answer? Or was it fooled by not knowing enough basic logic to see the problem? And which worries you more about the gatekeepers to what may be banned as misinformation on social media?