The Facts About Rape Are Bad Enough

False Reports of Rape Infographic, Revised Design by Emily Millay Haddad

… without embellishing or misinterpreting the data.

Yesterday, one of my favorite activist organizations on Facebook, Solidarity, posted a link to this image:

Falsely Accused Infographic by The Enliven Project

It comes from The Enliven Project, a campaign founded by Sarah Pierson Beaulieu. It is, obviously, an incredibly arresting image. I’m in community with a number of survivors of sexual violence, and I often feel overwhelmed by the vastness of the problem of rape in this historical moment. However, my feelings and my political analysis have largely been driven by my personal connection to the problem (attempting to be a meaningful ally to my friends, lovers and family members who are survivors) and by the policy work of a number of fantastic organizations (especially INCITE! Women of Color Against Violence). But I had never actually looked up the statistics about rape and sexual assault in the United States before, and I had never seen rape statistics represented in such a startling manner. I had to know more.

And it turns out, it’s wrong. Well-intentioned, yes. Inspired simplicity in its presentation, absolutely. But inaccurate. Amanda Marcotte on Slate.com does a wonderful, nuanced job of breaking down all the ways that this infographic confuses, exaggerates and even, in some ways, underreports the problem. I highly recommend taking the time to read her article, and following every link to the original data, as I did. Marcotte concludes:

As I said above, the Enliven Project has the best intentions and they’re on the right path. It is true that most rapes go unreported, that the public believes false accusations are exponentially more common than they actually are, and that a man’s chances of being falsely accused of rape are incredibly small. All these things are important to convey, and an infographic is a great way to do it. Just fix the graphic, and the public will learn a lot.

So, because I had a bit of time on my hands, and the capacity to do it in Photoshop, and a broken heart for an engine to drive this research and production, I did exactly that. Here is my revised and updated graphic:

False Reports of Rape Infographic

To clarify: this infographic is a visual representation of 1,000 instances of rape in the United States, based on data collected roughly between 2005 and 2011. Out of those 1,000 rapes, it is estimated that 46% are reported to the police.3 Only 37% of reported rapes (or 17.02% of total estimated rapes) are prosecuted. Of the rape reports prosecuted, only 18% result in conviction (or 3.06% of total estimated rapes).1 While accounts vary wildly due to bias, poor reporting and flawed experimental design, it is generally agreed that between 2% and 8% of reported rapes are false reports (or 0.92-3.68% of total estimated rapes).2 In this graphic, I chose to represent the false reports conservatively at the 8% level to avoid a derailing argument that I’m exaggerating the statistics.
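
If you want to check the arithmetic behind those figures, here is a minimal sketch in Python. The script and its variable names are mine, written purely for illustration; the percentages are the ones cited in the footnotes.

```python
# Sketch of the arithmetic behind the revised graphic, using the
# percentages cited above. Variable names are illustrative only.
TOTAL = 1000                      # estimated rapes represented in the graphic

reported   = TOTAL * 0.46         # 46% reported to police
prosecuted = reported * 0.37      # 37% of reports prosecuted (17.02% of total)
convicted  = prosecuted * 0.18    # 18% of prosecutions end in conviction (3.06% of total)

# False reports are 2-8% of *reports*, not of all estimated rapes:
false_low  = reported * 0.02      # 0.92% of total
false_high = reported * 0.08      # 3.68% of total; the level shown in the graphic

print(round(reported), round(prosecuted, 1), round(convicted, 1),
      round(false_low, 1), round(false_high, 1))
# -> 460 170.2 30.6 9.2 36.8
```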

I also agree with Marcotte that it’s important to clarify the difference between a false report and a false accusation. The great majority of false reports — that is, reports of rape that have been investigated and found to be false and baseless by authorities — name no specific person as the rapist. They follow a “stranger rape by force” narrative, where the description of the rapist is vague and no one is named. If a person is named and the report is found to be false, that is a false accusation. I couldn’t find any numbers on exactly what proportion of false reports are false accusations, but given the truly minor percentage that even anonymized false reports make up of the estimated total rapes in this country, it certainly seems like a big ol’ misogynistic red herring. This mythology that false rape accusations are common, and that vindictive or confused women are going around pointing fingers at innocent men left and right, is simply untrue. The facts paint a different picture.

Lastly, as I read the studies and data collected on rape in the United States yesterday — and tried to comprehend the vastness of the problem while thinking about how to more accurately reflect that data — I learned a lot of terrible things. The Centers for Disease Control and Prevention’s National Intimate Partner and Sexual Violence Survey (NISVS) is one of the most illuminating and horrifying documents I’ve ever read, and among the best if you’re looking for comprehensive definitions and methodology (unlike the FBI’s unbelievably arcane definition of rape as “the carnal knowledge of a female, forcibly and against her will” — this has since been changed, but the studies and data collection have not yet caught up). Let’s make this clear: more than a million women were raped in the twelve months before the survey was taken. So when you’re looking at these infographics, try to scale them up by 1,000. Please try.

But perhaps my favorite of the articles I read while I was chasing down the newest data was Thomas’ Meet The Predators on the Yes Means Yes blog. His article provides a close overview of the existing studies of two populations of men (college students and U.S. Navy new recruits) who will admit to committing acts that are legally defined as rape, as long as the researchers don’t use the word “rape.” The statistics that are revealed in these studies are also pretty illuminating — like how un-caught rapists average 6 victims each (with a median of 3 — which means that some un-caught rapists are raping many, many more people, skewing the average up that high). Or how the vast majority of rapists target acquaintances or intimate partners, not strangers; and use intoxicants as their weapon of choice, not force. The phrase that comes to mind for this is “date rape” — which continues to have a certain patina of misogynist derision. But let’s be real — raping your drunk friends is what rape looks like in the U.S., in the majority of cases. It’s time we stopped making jokes about that and started holding our rapist friends accountable for their actions. And yes, you have friends who are rapists. And no, I’m not talking about prison when I’m talking about holding people accountable. I’m talking about the real, hard, messy work it is going to take to change our communities and our culture to emphasize and prioritize consent and non-violent, non-raping forms of masculinity and power. And I’m going to keep talking about this — because if I believe, as I do, that we can achieve enlightenment in this lifetime, you bet I believe we can end rape in our lifetimes. And we should. We must.
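
To make that mean-versus-median point concrete, here is a toy example; the victim counts below are invented for illustration and are not data from the studies Thomas summarizes.

```python
# Invented victim counts for nine hypothetical un-caught rapists.
# A few serial offenders in the tail pull the mean far above the median.
from statistics import mean, median

victims = [1, 1, 2, 3, 3, 3, 5, 16, 20]

print(median(victims))  # 3 -- half of these offenders have 3 or fewer victims
print(mean(victims))    # 6 -- the long tail drags the average to twice the median
```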

[UPDATED TO ADD: Today (1/11/13), The Enliven Project posted a background piece on their original graphic, with links to their source material and the arguments behind their choices. I, obviously, disagree with their interpretation of the data — especially the 10% reporting statistic, which is an exaggeration of even their own sources — which is why I made this revised graphic. I also wish that their graphic had been contextualized with these sources in the first place. I firmly believe that the Enliven Project’s stated goal of creating “dialogue” around these issues must be placed within a rigorous, transparent framework which strives towards accuracy — no matter how complex that process may be. It’s not enough to say that the data is flawed, and then choose to skew that data towards your own ends. It undercuts that very dialogue we’re seeking to have, and creates acrimony between well-intentioned though disagreeing individuals. In any case, I am glad that the Enliven Project has posted these figures and background — even if I decry their missteps that got us here. Also cross-updated in the comments.]


1. See “7. What Percentage of Rape Cases Gets Prosecuted? What are the Rates of Conviction?” in Top Ten Things Advocates Need To Know Series. Published by the University of Kentucky Center for Research on Violence Against Women. December 2011. Last Accessed January 9, 2013 at http://www.uky.edu/CRVAW/files/TopTen/07_Rape_Prosecution.pdf.

2. See Lonsway, Archambault and Lisak. False Reports: Moving Beyond the Issue to Successfully Investigate and Prosecute Non-Stranger Sexual Assault. Published by The National Center for the Prosecution of Violence Against Women. The Voice, vol 3, no 1, 2009. Last Accessed January 9, 2013 at http://www.ndaa.org/pdf/the_voice_vol_3_no_1_2009.pdf.

[UPDATED TO ADD 1/15/2013] 3. See Table 7 in Truman and Planty. Criminal Victimization, 2011. Published by U.S. Department of Justice, Office of Justice Programs, Bureau of Justice Statistics. October 2012, NCJ 239437. Last accessed on January 15, 2013 at http://bjs.ojp.usdoj.gov/content/pub/pdf/cv11.pdf.

  • DavidS

    I posted this on the slate site, but thought I’d cross post it here as well.

    Your infographic is an improvement on the original, but your “False Reports” should really be “Provably False Reports”, which is a pretty big difference.

    If you look at the data from NCPVAW that you are using to calculate the number of false reports, you will see that they are all derived from studies in which a selection of reports are analysed to see how many yield sufficiently good evidence to classify the report as false. In the case of the Kelly et al study, which your source regards as the most authoritative, this means that there has to be “clear and credible admission by the complainants” or “strong evidential grounds” that the allegation is false.

    If you apply this kind of evidential standard to cases then you do, as your source suggests, consistently find that around 3-8% of reports can be classified as false.

    The snag is that if you applied a similar evidential standard to determine whether an allegation was true, you would only classify a minority of them as true (probably somewhere between 6% and 30% depending on how strong you wanted the evidence to be). If someone were to put in stick men representing “True Reports” on this basis, there would be an outcry, and rightly so, because the conflation of “True” with “Provably True” would be idiotic.

    However, to conflate “False” with “Provably False” is equally idiotic. The bottom line is that no one has the foggiest clue what proportion of rape reports are false. And by this I really mean not the foggiest clue. It isn’t simply that estimates of the rate of false reporting are subject to some uncertainty, as all statistics are, it is really that no one has the slightest idea what the correct figure is.

    • http://www.circlesoffireproductions.com Emily Millay Haddad

      Hey David — thanks for cross-posting here. I’m also going to cross-post my reply, but also encourage people to go check out the thread on Slate.com to see the full discussion over there (which I can’t really replicate here). All the best to you.

      Here’s my reply from Slate:

      “Thanks for replying, and for thinking this through so carefully with me.

      How do you arrive at the 6%-30% “true” reports numbers? I ask because, while this is a messy comparison, I was struck by how even your 6%-30% estimate is significantly larger than the 3% rate of rape convictions. Obviously, “guilty” convictions do not necessarily mean “true” reports, but I feel like the point of all these numbers is to illustrate (and argue about) the larger problem — that rape is endemic. It’s underreported. It’s under-prosecuted and poorly investigated — largely because of personal biases bolstered by myths like this one about false reports and accusations.

      I suppose I also think that there is a difference between uncertainty and mutability around the statistics and the exact phenomena we’re measuring, and not having the “foggiest clue” about the proportion of false rape reports. All statistical data is about trying to nail down statistically significant, predictable patterns within certain constraints. It’s not a shot in the dark — it’s clusters of observable trends. There are gaps in the implications and the methodologies that we can (and should) argue about, but I suppose I don’t want to get so lost in the details that we avoid the larger, glaring problem.”

      • DavidS

        Is rape endemic? Yes it is. Best UK estimates (from the British Crime Survey) are that 1 woman in 20 has been raped at some point in her life, which is shocking.

        Is it underprosecuted? Well, if by that you mean that there are fewer prosecutions than one would want in an ideal world, you are right. If you mean that the police are worse at prosecuting rape than other crimes, then there is not much evidence to support you.

        In the UK, approximately 25% of reported rapes result in a prosecution (slightly more than half of those prosecutions will result in a conviction for a sexual offence). Now 25% might not sound very good if you get your ideas about criminal justice from TV cop shows, but it is in fact significantly higher than the equivalent figure for crime in general. You could, of course, object that “crime in general” is not a good comparator, but if you compare clear-up rates for rape with those for other offences you don’t find anything to indicate that the police handle it any worse than any other offence.

        On the subject of false reports: I am afraid that anyone who advances a figure for the percentage of reports that are false really is making a shot in the dark, and we really do have not the foggiest clue.

        There are other statistics, for example the incidence of rape itself, where you could legitimately say that the problem is that it is difficult to precisely nail down the figure and estimates are somewhat unreliable.

        In the case of false reporting you can’t say that because there really aren’t any estimates at all. All the figures that are advanced as estimates of the number of false reports are in fact estimates of the number of provably false reports. If you advanced an estimate of the number of provably true reports as being an estimate of the number of actually true reports then no one would say that your estimate was “inaccurate” or go on about clusters of observable trends. They would simply say that you were an idiot, and possibly a misogynist one to boot.

        I am afraid that exactly the same attitude needs to be taken to anyone who advances an estimate of the number of provably false reports as being an estimate of the number of actually false reports. It is idiotic to do that, and there is no point in searching through research evidence to try to find if someone has genuinely estimated the number of false reports, even inaccurately, because until someone invents a crystal ball, or a lie detector that works, it is impossible to estimate that figure (even inaccurately).

        I’ll also cross-post my justification for the statement that the percentage of reports that could be said to be “provably true” is somewhere between 6% and 30%. It seems to have mysteriously disappeared from the Slate site, btw.

        It is based on UK data. The lower bound of 6% is calculated as follows:

        Numerator = number of reports that result in a conviction for the specific offence of rape (as defined in UK law).

        Denominator = All reports.

        The upper bound is calculated as follows:

        Numerator = all reports where the Crown Prosecution Service decide that there is sufficient evidence to prosecute, even if this results in the defendant being acquitted, and even if the initial report was of rape, but the prosecution was for a different offence, such as attempted rape.

        Denominator = all reports made but not subsequently withdrawn by the complainant. Remember that we are excluding withdrawn reports from the denominator so this makes the figure bigger. The logic behind this is that if the report is withdrawn then the CPS never get the chance to determine whether it is prosecutable or not.

        I think that if I did the calculation more exactly the upper bound would be closer to 38% than 30%.

        Sources for all these figures available on request. Some of them come from

        http://rds.homeoffice.gov.uk/rds/pdfs07/rdsolr1807.pdf
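
        A small sketch of the bound arithmetic described above may help. The input counts below are hypothetical placeholders, chosen only so the output lands near the quoted 6%-30% range; the real figures would come from the UK sources mentioned.

        ```python
        # The "provably true" bounds described above. All input counts are
        # hypothetical placeholders, not actual UK data.
        def provably_true_bounds(reports, withdrawn, cps_prosecutions, rape_convictions):
            # Lower bound: convictions for the specific offence of rape, over all reports.
            lower = rape_convictions / reports
            # Upper bound: cases the CPS judged prosecutable (for any resulting offence),
            # over reports not subsequently withdrawn by the complainant.
            upper = cps_prosecutions / (reports - withdrawn)
            return lower, upper

        lo, hi = provably_true_bounds(reports=1000, withdrawn=200,
                                      cps_prosecutions=240, rape_convictions=60)
        print(f"{lo:.0%} to {hi:.0%}")  # 6% to 30%
        ```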

        • http://www.circlesoffireproductions.com Emily Millay Haddad

          Hey David — thanks for replying again here, and restoring the commentary that was somehow lost on the Slate site!

          I feel very, very cautious about drawing comparisons between UK data and US data (and had misgivings about using research that did so here as well), although some of that can’t be avoided currently because of the impoverished nature of the field. Still, I hear your point about how making any claim of a precise number of false reports raises significant problems, as well as the spectre of what makes a “true” report.

          I lose you a bit on how debunking the belief in widespread false reporting through attempts to statistically quantify verifiable instances of false reporting is “idiotic,” but I am with you about how this area of inquiry has enormous pitfalls. For my purposes, it feels critically important to undercut this stereotype of false reporting and try to turn our focus, as even the researchers I cite want their intended audience to do, towards more skillful and compassionate investigation and prosecution.

          As to your other various assertions about what I might mean — I’m actually not invested in attempting to solve the problem of rape through more “effective” policing and more widespread imprisonment of rapists. I’m not making any claims about rape being a special case among investigators or prosecutors in terms of them treating this crime differently (with greater misogyny, with greater racism, with less vigilance, etc.) than other crimes that cross their desks. I (mostly) followed the framework that the original graphic used, and that much of the research is invested in, for expediency — and to augment the arguments that people are having. I have a different analysis, which I’ve mentioned (though not in great detail) here and elsewhere. That analysis is a much larger discussion, of course, but I did want to clarify at least that much.

          • DavidS

            Hi Emily

            I take your point about the mixing of UK and US statistics. We do that a bit too freely on my side of the Atlantic as well. There seem to be significant differences between the two countries. I get the impression that the incidence of rape is genuinely higher in the US, which wouldn’t be surprising as the US does seem to have a higher incidence of violent crime in general (or is this just me being prejudiced in favour of my own country?).

            My use of the word “idiotic” is probably a bit loaded, and I guess I ought to tone down my language a bit. But I am struck by the way that academics working in fields connected to gender seem to get away with methodological errors that would not be tolerated in an undergraduate essay if they were working in other fields.

          • DavidS

            Hi Emily,

            Just thought I’d add a few points to my earlier reply. When I made statements beginning with “If you mean …” I didn’t mean you personally. I intended it to be interpreted more as “If one means by …” if you get my drift.

            I’d just like to also try to clarify the point I was trying to make about estimates of the percentage of false reports, because I’m not sure I was explaining myself clearly.

            There are some cases in statistics where someone produces an estimate that is inaccurate for various reasons, low sample size for example. In such cases you could potentially improve the accuracy of the estimate by using larger samples, better randomisation, or something similar.

            The point I am making about the estimates of false reporting is that they are not like that. The problem here is that an estimate of one quantity, the percentage of provably false reports, is being presented as an estimate of another quantity, the percentage of actually false reports.

            Improving the accuracy of these estimates would not in any way solve the problem, because even if they were entirely precise, they would be estimates of the wrong thing.

            I stick by my point that you simply cannot estimate the percentage of reports that actually are false. The only thing that you can really say is that it must be greater than the percentage that are provably false, and less than the percentage that are not provably true. However, saying that isn’t really saying much, because the percentage that are provably false is very small, and the percentage that are not provably true is, well, most of them. Basically, as far as the evidence goes, the figure could be pretty much anything.

    • Brooke

      David,

      I’m having some trouble interpreting your comments around this issue of false reports. Your comments reflect a tactic I’ve seen used over and over again here in the U.S. in conversations about rape: nitpick its definition long enough that people are allowed to forget or overlook its prevalence. In this case, you’re nitpicking the definition of “false report” rather than of rape itself, but I find the timbre of the conversation both familiar and disturbing, especially given the fact that Emily directly states that her choices around how to represent false reports in this infographic were conservative ones designed “to avoid a derailing argument.” I find this discussion of the ontology of rape statistics — how do we know what’s true or what’s false? How do we prove that we know it? — more than a bit derailing.

      However, you do take pains to make clear in one of your replies that you believe rape to be an endemic problem, and I appreciate that acknowledgment. Your motivation in this comment thread seems to be a desire for rigorous statistical analysis rather than a derailment of the conversation about the seriousness of the rape problem we’ve got going on over here. And setting aside my concerns about the nature of your arguments, I do sympathize with a desire for rigorous research. Even so, I disagree with your conclusion that drawing any conclusion at all about the real numbers of false reports is just so much sleight of hand.

      At hand is this question of truth vs. provable truth, and falsehood vs. provable falsehood. Your basic argument, if I understand you correctly, is that it is impossible to determine how many false reports there actually are, because only so many false reports are “provably” false. You then go on to assert that if we were to apply the same analysis to the question of which rape reports are true as various of Emily’s sources do to which rape reports are false, there would “be an outcry” because attempting that kind of analysis would be “idiotic” — and so, therefore, is attempting any analysis of which ones are false.

      I find that argument disturbing in a number of ways. For one thing, it’s fundamentally hopeless. It’s the rhetorical version of throwing up one’s hands in despair. “No one can know what’s true! We have no way of determining full truth or falsehood! Woe!”

      But more to the point, it’s a misplaced argument. The difference between true and provably true, or false and provably false, is not, as you put it, idiotic. It’s legal. There is a place for a rigorous distinction between probable truth and provable truth, and that place is in the courtroom. But we are not in a courtroom here. The purpose of Emily’s infographic, of the studies from which her information derives, and of this entire conversation is *not* to determine standards of proof — the justice system already does so, and rape is notoriously difficult to prosecute because of it. Here, in this conversation, and in the broader national conversation about the prevalence of rape, our purpose is to try to get at an understanding of the scope of the *actual* problem, rather than of the scope of the legally provable problem. If it were simple to prove or disprove rape, we would not need to have this conversation, because the number of rape prosecutions and convictions would much more closely mirror the number of actual rapes. They do not. You state in a later comment that only a quarter of reported rapes in the U.K. are prosecuted, and only slightly more than half of those successfully. Presumably you do not mean that only slightly more than 12% of reported rapes actually happened — you mean that only slightly more than 12% of them are successfully proven in a court of law in the U.K. By your own admission, then, there is a wide gulf between the number of rapes in your country and the number that are provably true in court. That is also the case in the U.S.

      But how wide is that gulf, exactly? That’s the point of this conversation. I have found for my entire adult life that most people don’t like to really look at or consider how wide that gulf actually is. There is tremendous resistance to the notion that rape is an endemic problem, a crime that’s actually *common.* The only way to combat that resistance, to shift our national inertia around this issue, is to create ways to rigorously analyze the data we have available about rape, and then to talk about the results. Loudly.

      And let’s be clear, here — this is vital work. Rape is a massive, evil, destructive problem. We *must* find a way to combat it. The first and clearest way to do so is to get an accurate look at its nature — to understand the real numbers. This is imperative. It is not work about which we can afford to throw up our hands in despair. I do not accept “There’s just no way to know,” as a valid answer, here. We must find a way.

      That’s what Emily’s sources are trying to do. They are not trying to determine which rape reports can be proven true or false in a court of law. They’re trying to determine a most likely estimate for how many reports are false, as one piece of an information campaign designed to combat one of rape culture’s most insidious myths — that many rape victims are actually liars. You do not sound like you’re here to call rape survivors liars. So, given your problems with the analysis Emily’s sources present, I’m assuming that your disagreement is with their methods — that you do not believe that the criteria they’re using to evaluate which rape reports are false are sufficient for the job.

      So let’s actually take a look at those sources and at the criteria in question.

      In your first response, you state that “the data from NCPVAW that [Emily is] using to calculate the number of false reports [...] are all derived from studies in which a selection of reports are analysed to see how many yield sufficiently good evidence to classify the report as false.”

      First, Emily did no calculation to determine a number of false reports. As she stated, she simply lifted the upper bound, 8%, from the 2%-8% range presented by Lonsway, Archambault and Lisak in their NCPVAW paper “False Reports” (http://www.ndaa.org/pdf/the_voice_vol_3_no_1_2009.pdf), the source you mention in your comment.

      Your response continues by pointing out that “[i]n the case of the Kelly et al study, which [Emily’s] source regards as the most authoritative, this means that there has to be ‘clear and credible admission by the complainants’ or ‘strong evidential grounds’ that the allegation is false.”

      Lonsway, Archambault and Lisak do not present the Kelly study as their most authoritative reference. They do state that the Kelly study is the “largest and most rigorous study that is currently available in this area,” so they clearly find its conclusions both credible and noteworthy. However, like the rest of the series of studies that corroborate their primary findings, the Kelly study was conducted in the U.K. rather than the U.S., and is therefore of only limited use in discussing U.S. statistics.

      Their primary authority and the source from which they derive their 2%-8% statistic is The Making a Difference Project (http://www.evawintl.org/mad.aspx). They assert that “[t]o date, the MAD study is the only research conducted in the U.S. to evaluate the percentage of false reports made to law enforcement.” In other words, they state that MAD is their only source for rigorous statistics around false reports because it is *the* only source in the U.S. for those statistics. (It’s worth noting for the purposes of transparency that Lonsway and Archambault were involved in designing and implementing the MAD study.)

      The MAD Project is much more than a study of false reports. It collected comprehensive data on all reported sexual assaults from multiple police departments, judicial systems and advocacy organizations across a three-year period in order to find ways to more effectively prosecute offenders. I point this out because it’s somewhat misleading, in this context, to state as you do above that this study analyzed reports to “see how many yield sufficiently good evidence to classify the report as false.” The study wasn’t trying to prove or disprove the falsehood of rape reports. It was trying to collect every possible piece of data across every possible vector about all reports. That’s a lot of data. You may take issue with the way the study analyzed its data, but I find it hard to believe you take issue with its quantity or breadth or manner of collection — I’ve reviewed the study methodology, and data collection was both rigorous and exhaustive. Feel free to take a look: http://www.evawintl.org/mad.aspx?subpage=5

      But leaving aside the MAD Project for a second, let’s look at the concerns you have about the Kelly study, for which your statement about examining police reports to determine sufficient evidence of falsehood is accurate. The standards by which the Kelly study determined what percentage of the more than 2,000 reports it analyzed were false reports were “the official criteria for establishing a false allegation” for the police department in question. In other words, Kelly collected as much data as possible about each case and then attempted to apply *the police department’s own standards* to those cases. So when you, in your argument, make a distinction between false reports as determined by Kelly and “provably false reports,” you’re arguing that the criteria by which the police in the U.K. determine false reports are faulty. Further, if I understand you correctly, you’re not arguing that the number of false reports is probably lower than we think it is — you’re arguing that it’s probably higher. In other words, those false reports that Kelly identifies are only the false reports that can be *proven,* that some unidentifiable number of the rest of the 2,000-odd reports in the study are false, also, and that no one knows how to determine that number.

      I just want to highlight this: you are making the argument that *no one*, no professional organization, not researchers and not police departments, is capable of analyzing rape reports for falsehood — or, by extension, for truth — with efficacy.

      The interesting thing to me about your argument here is its baby-with-the-bathwater nature. If you read the full text of “False Reports,” you’ll see that Lonsway et al. take, essentially, your argument into account: “Of course, in reality, no one knows—and in fact no one can possibly know—exactly how many sexual assault reports are false.” But they don’t assert that because of that, we should all stop trying. Rather, they go on to assert that we *can* make a reasonable prediction:

      “However, estimates narrow to the range of 2-8% when they are based on more rigorous research of case classifications using specific criteria and incorporating various protections of the reliability and validity of the research—so the “study” does not simply codify the opinion of one detective who may believe a variety of myths regarding false reporting.” (3).

      So what’s really in question isn’t whether an analysis of false reports can be perfect. It can’t. What’s really in question is whether, and how much, we trust the research methodology of the professionals who have decided to study this question. Which leaves me wondering what, precisely, it is about their methodology that you so distrust.

      _______________________

      There’s an elephant in the room any time anyone talks about “provable” rape allegations. It’s this: we default to disbelief. The numbers you’ve created here for statistically “provable” rape cases — your 6%-30% figure — leave, at minimum, 62% of cases unresolvable: neither provably true nor provably false. These are cases in which someone has come forward, to the police, in the face of what most reasonable people will admit are great psychological odds, to state that another person has violated their consent and their physical body. They are real cases involving real people. What would the arithmetic of proof that you present here do with those cases? It would say, “I neither believe nor disbelieve you, because there is no proof either way.”

      But I don’t actually believe that humans withhold judgment in that way. Not inside their own heads. We tend to either believe, or disbelieve. And in the case of rape, we default to disbelief. Which, in my opinion, is why we’re having a conversation right now about how many more false rape allegations there may be out there than the ones demonstrated in these studies, and not a conversation about how many more true rape allegations there are out there than successful prosecutions. That is the conversation that the authors of the various studies used to create this infographic are urging us to have, and I think we should take them up on it.

      • Brooke

        David, one more note:

        In my reply, I re-state your argument in this way:

        “Your basic argument, if I understand you correctly, is that it is impossible to determine how many false reports there actually are, because only so many false reports are “provably” false. ”

        That is incorrect; I misspoke. It should read:

        “Your basic argument, if I understand you correctly, is that it is impossible to determine how many false reports there actually are, because only a small number of rape reports can be conclusively proven to be either false or true.”

        • DavidS

          Brooke

          Your second comment is short and sweet and entirely correct. I do indeed argue that it is impossible to determine how many false reports there actually are, because only a small number of rape reports can be conclusively proven to be either false or true. Furthermore I can’t see that anyone has said anything that would cast any doubt on that conclusion.

          Your earlier comment is less cogent, because it is very long, attributes to me opinions that I have not expressed, and says things which are either false, or which are true but which I have not disagreed with. I doubt that I can answer every single one of your points without writing something far too long to read, but I will have a go at dealing with the most important ones.

          “I’m having some trouble interpreting your comments around this issue of false reports”

          Your second response exactly summarises my position in very few words, so perhaps you aren’t having as much trouble as you think.

          “Your comments reflect a tactic I’ve seen used over and over again here in the U.S. in conversations about rape: nitpick its definition long enough that people are allowed to forget or overlook its prevalence.”

          Sorry, but where did I nitpick the definition of rape? So far as I am aware its definition is fairly clear and I have not suggested otherwise.

          “I find that argument disturbing in a number of ways. For one thing, it’s fundamentally hopeless. It’s the rhetorical version of throwing up one’s hands in despair.”

          If you cannot know something then it is indeed dispiriting to be forced to admit that you do not know it. However, it is considerably better than pretending that you do know it (and it is not “rhetorical”). As Wittgenstein says, “Whereof one cannot speak, thereof one must be silent.” This of course does not mean that we can say nothing at all about rape, or for that matter about false reports. We just cannot say what their prevalence is, and it would advance the argument hugely if both “men’s rights” and feminist activists stopped pretending that they do know.

          “The difference between true and provably true, or false and provably false, is not, as you put it, idiotic. It’s legal.”

          The words you are attributing to me are almost the exact opposite of what I said. I did not say that the distinction between false and provably false was idiotic; I said that the *conflation* of the two was idiotic. I’m not sure what’s gone on here, but maybe it is a difference between the way words are used on opposite sides of the Atlantic. In the UK, conflation means the *failure* to distinguish between two things.

          “Presumably you do not mean that only slightly more than 12% of reported rapes actually happened — you mean that only slightly more than 12% of them are successfully proven in a court of law in the U.K. By your own admission, then, there is a wide gulf between the number of rapes in your country and the number that are provably true in court. That is also the case in the U.S.”

          Yes, that’s exactly what I mean. I completely agree that there is a wide gulf between the number of rapes in your country and the number that are provably true in court. However, I don’t see why you phrase this as an “admission”. The fact that there is a huge difference between what is provably the case and what actually is the case is precisely the point that I am making.

          “But how wide is that gulf, exactly?”

          No one has the foggiest clue. That is precisely the point that I am making.

          “That’s what Emily’s sources are trying to do. They are not trying to determine which rape reports can be proven true or false in a court of law. They’re trying to determine a most likely estimate for how many reports are false”.

          Emily’s sources are trying to determine how many reports are false, but what they actually do is determine how many are provably false. They are not strictly determining how many could be proved false in court, but the standards of proof they require are in many cases as stringent as those that would be required by a court, and will inevitably exclude cases that are false but where the evidence of falsehood is lacking.

          “First, Emily did no calculation to determine a number of false reports. As she stated, she simply lifted the upper bound, 8%, from the 2%-8% range presented by Lonsway, Archambault and Lisak in their NCPVAW paper “False Reports,” (http://www.ndaa.org/pdf/the_voice_vol_3_no_1_2009.pdf) the source you mention in your comment.”

          That’s more or less the problem. She did indeed simply take the upper bound from that report, without any attempt to determine how sound it was. If you want to nitpick the word “calculate” then I won’t put up a huge argument with you, although I would say that the determination of the maximum of a set of figures was a calculation.

          “Lonsway, Archambault and Lisak do not present the Kelley study as their most authoritative reference. They do state that the Kelley study is the “largest and most rigorous study that is currently available in this area,””

          I’m slightly baffled as to what distinction you draw between saying that it is the “most authoritative” and saying that it is “the largest and most rigorous”. However, if you think there is a distinction there I am happy to let you make it.

          “The MAD Project is much more than a study of false reports. It collected comprehensive data on all reported sexual assaults”

          I’m happy to accept that, but it is a bit irrelevant to the argument. The MAD study may have calculated all sorts of figures besides the percentage of false reports, but the percentage of false reports is the one that has made its way into Emily’s infographic. It is clear from Lisak’s accounts of the MAD study that, like Kelly et al., they are actually measuring the number of allegations that are provably false.

          “In other words, Kelly collected as much data as possible about each case and then attempted to apply *the police department’s own standards* to those cases. So when you, in your argument, make a distinction between false reports as determined by Kelly and ‘provably false reports,’ you’re arguing that the criteria by which the police in the U.K. determine false reports are faulty.”

          I’m not arguing anything of the sort. The standards applied by the police are the standards they should apply in such cases. However what the police are trying to do is classify reports as false if they are provably false, not calculate the number that actually are false. There would quite rightly be an outcry if they classified reports as false without good evidence, just as there would be an outcry if defendants were found guilty without good evidence. However the number of defendants found guilty is not an indicator of the number who are guilty, and the number of reports classified as false is not a good indicator of the number that actually are false.

          “Of course, in reality, no one knows—and in fact no one can possibly know—exactly how many sexual assault reports are false.” But they don’t assert that because of that, we should all stop trying. Rather, they go on to assert that we *can* make a reasonable prediction:”

          Simply asserting this doesn’t really help unless you can suggest some way in which you can make a reasonable prediction, without coming into possession of a crystal ball, or inventing a lie detector that actually works. No one has done that.

          “Which leaves me wondering what, precisely, it is about their methodology that you so distrust.”

          You pretty much summarised the problem yourself in your second comment, so I’m wondering why you are wondering.

  • http://adamhefty.wordpress.com Adam Hefty

    I appreciate the further discussion. However, I’m not sure about the shift from graphing “falsely accused” to “false reports.” I agree with you the issue of false accusations is a “big ol’ misogynistic red herring.” It seemed to me that one of the main points of the original infographic was to visualize precisely this red-herring-ness. The revised infographic does not allow one to visualize this as easily – in part because false reports are not raised as a red herring; false accusations are, and in part because with the revised methodology the false reports aren’t clusters.

    I don’t want to use the original infographic if the data was misleading, but it seems to me that we have to speculate on false accusations (extrapolating from the false reports) for the data to have full force.

    Also, if the data on false reports ‘follow a “stranger rape by force” narrative’ this seems all the more reason not to use false reports rather than false accusations as our visualization, since stranger rape by force is far less common than rape by acquaintances, date rape, and rape in intimate family or romantic relationships.

    • http://adamhefty.wordpress.com Adam Hefty

      End of the first paragraph of my comment should read, “the false reports aren’t clustered” [that is, you've spread out the little gray figures, so they are hard to see].

      • http://www.circlesoffireproductions.com Emily Millay Haddad

        Hey Adam — I think you’re exactly right that the original graphic was created in order to make clear how spurious the problem of false accusations actually is in proportion to the larger problem of rape. I was following the data for false reports because it was the only version of that data that I could actually find that included (even remotely) real numbers — false accusations are among those false reports, but not broken out. I would have to assume that there are fewer false accusations than false reports, and I would have to guess at exactly how few. The data I could find referred to how a “majority” of false reports follow this narrative of “stranger rape by force,” but again it wasn’t delineated. Given the missteps that had already been identified in how the original graphic handled the data, I felt very, very uncomfortable with making similar leaps.

        That said, I agree that it thoroughly changes the focus of the graphic. In looking at the data and examining my own response to the original, I became much more interested in the proportions of unreported rapes to reported, to prosecuted, to convicted. That also seemed like the more solid data available. The false report indicators became more of a detail to me, especially given how hard (impossible?) that data is to verify. I agree that the emphasis of the design choice — the loss of the kind of visual “period” of the two black figures and how they capture your eye — substantially shifts the meaning of the graphic. My choice to marginalize the false reports (literally, putting them on the margins of the reported rapes instead of clustering them, and changing them from black to gray) produces a more subtle effect. And perhaps less successful as a provocation. For me, I want to talk about the immensity of rape as a demonstrable problem in the US — and not get too sidelined in arguments about false reports or false accusations. But given how prevalent this mythology of false accusations/reports is in this culture, I didn’t want to lose it entirely.

  • http://www.circlesoffireproductions.com Emily Millay Haddad

    Today (1/11/13), The Enliven Project posted a background piece on their original graphic, with links to their source material and the arguments behind their choices. I, obviously, disagree with their interpretation of the data — especially the 10% reporting statistic, which is an exaggeration of even their own sources — which is why I made this revised graphic. I also wish that their graphic had been contextualized with these sources in the first place. I firmly believe that the Enliven Project’s stated goal of creating “dialogue” around these issues must be placed within a rigorous, transparent framework which strives towards accuracy — no matter how complex that process may be. It’s not enough to say that the data is flawed, and then choose to skew that data towards your own ends. It undercuts that very dialogue we’re seeking to have, and creates acrimony between well-intentioned though disagreeing individuals. In any case, I am glad that the Enliven Project has posted these figures and background — even if I decry their missteps that got us here. [Also cross-updated in my post.]

    http://theenlivenproject.com/the-story-behind-the-infographic/

    • DavidS

      Emily

      I was wondering what the source is for your claim that 46% of rapes are reported to the police. This is very much higher than the percentage that get reported in the UK. Of course there’s no reason why US statistics shouldn’t be different from UK ones, but I can’t see how your 46% statistic is supported by the sources you cite at the end of your post.

      If I were the type to guess, I think I’d guess that Enliven got the rate of reporting more or less right, even though they got other things wrong.

      Incidentally you can find a very good, very recent, summary of UK data here, with comparisons drawn between statistics for sexual and non-sexual offences.

      http://www.justice.gov.uk/statistics/criminal-justice/sexual-offending-statistics

      Obviously it might not be of particular interest to Americans, but there is almost enough data there for someone to do a similar infographic for the UK (there is no estimate of the rate of false reporting though – the data were compiled by professional statisticians, not clairvoyants).

      • http://www.circlesoffireproductions.com Emily Millay Haddad

        David, thanks for that catch — I knew I had missed something. The source for that number is an analysis found in the U.S. Department of Justice’s report “Criminal Victimization, 2011” published in October 2012 (available here http://bjs.ojp.usdoj.gov/content/pub/pdf/cv11.pdf, see Table 7), which compares the FBI’s Uniform Crime Reports (police records) and the CDC’s National Intimate Partner and Sexual Violence Survey (a robust phone interview of a sizable sample) to attempt a reasonable estimate of reporting percentages. It is a messy bit of data (see the entire discussion of the problems in the sidebar surrounding Table 7), largely because the definitions of rape between the FBI data and the CDC data are substantially different (as indicated in the table footnotes). But I’m hoping that the 2012 reports will be more aligned, since the FBI has finally, finally updated their definition.

        I originally came across the 46% number through RAINN’s Reporting Infographic (http://www.rainn.org/get-information/statistics/reporting-rates), but found that they were using the Department of Justice’s report for 2006-2010, and when I went to check the research, I found that there was more recent data. What’s interesting is that, as I went back to check my data because of your question, I found that I had also, unfortunately, underreported. According to Table 7 of the DOJ report, I should have used 49.9% for 2011 (as I attempted to maintain 2011 data across the graphic), instead of 46%. Which would add an additional four figures to the graphic for reporting.

        But now here’s the totally bizarre part of this that I can’t quite figure out. If you look at the DOJ’s document and go down to Table 8, which specifically covers reporting rates and percentage changes from 2002 to 2010 to 2011, you’ll see a breakout of the percentage of reporting. They report a 55% reporting rate for “Rape/Sexual Assault” in 2002; a 49% rate in 2010; and a 27% rate in 2011. Without comment, without qualification. Given the disparity between this number in Table 8 and the number in Table 7, I basically had to assume it was a typo — or some totally problematic monkeying with the numbers. Given this bizarre disparity, I think I decided to go with the 2010 number (49%), and then made my own typo/error by reporting it (and re-designing the graphic) at 46%, referring again to RAINN’s 2006-2010-based numbers.
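
        To see how much the choice of reporting rate matters downstream, here is a quick sketch; the candidate rates are the ones discussed above, and the prosecution and conviction percentages are the ones used in the graphic.

        ```python
        # How the choice of reporting rate propagates through the graphic's cascade.
        # 27%, 46%, 49% and 49.9% are the candidate rates discussed above.
        for reporting_rate in (0.27, 0.46, 0.49, 0.499):
            reported = 1000 * reporting_rate
            prosecuted = reported * 0.37     # 37% of reports prosecuted
            convicted = prosecuted * 0.18    # 18% of prosecutions convicted
            print(f"{reporting_rate:.1%}: reported={reported:.0f}, "
                  f"prosecuted={prosecuted:.0f}, convicted={convicted:.0f}")
        ```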

        And thanks for the stats from the UK! While I appreciate the “common sense” assumption that the data shouldn’t be all that different from one country to another (it seems like a kindness), I actually think there are legitimate reasons behind how the data could be (and apparently is) substantially different between our two countries. It’s for that reason that I made every effort to NOT use UK data in my analysis, and to confine the statistics to US studies and conclusions.

        • DavidS

          Thanks for clearing this up.

          The thing that strikes me about the DoJ data to which you refer is that it reports an incidence of rape that is very much lower than the source that Amanda Marcotte uses to justify her claim that 1 in 5 women have been raped.

          Amanda’s article links to a New York Times report to justify this claim. If you follow the links back, it appears that the primary source is the National Intimate Partner and Sexual Violence Survey (NISVS), which is available here.

          http://www.cdc.gov/violenceprevention/nisvs/index.html

          The DoJ data that you use suggest that for every 1,000 people there will be approximately 1 rape or sexual assault per year. They don’t seem to separate rape from sexual assault, and they don’t seem to separate men and women. However, you can infer from their data that the incidence of rape in women cannot be greater than 2 per 1,000 women per year (that is the figure you get if you assume that all of the offences here were rapes of women – in other words, no woman suffered a sexual assault less serious than rape, and no men suffered any sexual assaults at all).

          The NISVS report suggests that the incidence of rape among women is 5 per 1,000 per year. This is over twice as big as the maximum value that you can possibly get out of the DoJ data.
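
          As a back-of-the-envelope check on that comparison: the rates below are the ones quoted just above, and the 50/50 population split is an assumption of mine.

          ```python
          # Consistency check between the two incidence figures quoted above.
          # Assumes women are roughly half the population; rates are per 1,000 per year.
          doj_rate_all_persons = 1.0       # DoJ: ~1 rape or sexual assault per 1,000 people

          # Worst case for the comparison: every DoJ incident is a rape of a woman.
          # Halving the population base doubles the per-1,000-women ceiling:
          doj_ceiling_per_1000_women = doj_rate_all_persons * 2   # 2.0

          nisvs_rate_per_1000_women = 5.0  # NISVS: ~5 rapes per 1,000 women per year

          print(nisvs_rate_per_1000_women / doj_ceiling_per_1000_women)  # 2.5x the ceiling
          ```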

          Amanda seems to get her data on incidence from the NISVS report, and her data on reporting rates from the DoJ one, without realising they are inconsistent.

          I’m not sure what is going on here, and will try to dig a bit deeper into the data if I get time.
