The Tufts Attitudes About Sexual Conduct survey lacks intellectual integrity

On Wednesday, Sept. 30, the results of the 2015 Tufts Attitudes About Sexual Conduct survey (TASCS) were released to the broader Tufts community, and they are quite concerning. Let’s assume that the Tufts definition of non-consensual intercourse is equivalent to the legal definition of rape, and that non-consensual sexual contact is likewise equivalent to sexual assault. Under those assumptions, Tufts’ rate of all types of sexual misconduct perpetrated against females in the 2014-2015 academic year, eight percent, is nearly 20 times higher than the national average.

According to the U.S. Bureau of Justice Statistics, 4.3 out of every 1,000 female college students aged 18 to 24 experienced a sexual assault or rape as reported in the National Crime Victimization Survey (NCVS) in 2013. The NCVS is a good baseline to compare against because it annually achieves an extremely high response rate (87 percent) with a large sample (approximately 160,000 people), and because its questionnaire is extremely comprehensive and well contextualized. This is an alarming discrepancy: either sexual criminals are running rampant on the Tufts campuses, or something has gone very wrong with the treatment of Tufts’ sexual conduct data. I would argue for the latter.
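
For readers who want to check the arithmetic behind "nearly 20 times," the comparison works out as follows. This is just a back-of-the-envelope sketch using the two figures cited above:

```python
# Back-of-the-envelope check of the "nearly 20 times" claim.
tascs_rate = 0.08        # 8 percent of female respondents (TASCS, 2014-2015)
ncvs_rate = 4.3 / 1000   # 4.3 per 1,000 female college students (NCVS, 2013)

print(tascs_rate / ncvs_rate)  # ~18.6, i.e. nearly 20 times the national figure
```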

Before I continue, I would like to make it clear that I do not mean to imply that the staff tasked with conducting the TASCS are in any way malicious or unintelligent. They are individuals doing the very difficult job of getting an idea of who is being victimized behind closed doors in our community. I have met some of them, and they obviously care about the wellbeing of the students and staff here. I believe the issue stems primarily from the 28.7 percent response rate of the TASCS. According to the report, this response rate is comparable to those of similar surveys at similar colleges, but this doesn’t change the fact that it is very low. We aren’t grading on a curve here. Surveys with low response rates are especially vulnerable to the effects of non-response bias, particularly participation bias: it is easy to imagine that certain groups are far more likely to respond to the survey than others and thus skew the data.
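
To see how participation bias distorts a result, consider a minimal simulation sketch. Every number in it (the population size, the true victimization rate, the response probabilities) is invented purely to illustrate the mechanism; none of it is TASCS data:

```python
import random

random.seed(0)

# All figures below are hypothetical, chosen only to illustrate the mechanism.
POPULATION = 10_000
TRUE_RATE = 0.02           # assumed true victimization rate
P_RESPOND_VICTIM = 0.60    # victims assumed far more motivated to respond
P_RESPOND_OTHER = 0.25     # everyone else assumed less motivated

respondents = []
for _ in range(POPULATION):
    is_victim = random.random() < TRUE_RATE
    p_respond = P_RESPOND_VICTIM if is_victim else P_RESPOND_OTHER
    if random.random() < p_respond:
        respondents.append(is_victim)

print(f"response rate:  {len(respondents) / POPULATION:.1%}")        # roughly 26%
print(f"estimated rate: {sum(respondents) / len(respondents):.1%}")  # roughly 4-5%, not 2%
```

The simulated survey "finds" a rate more than double the truth, simply because the people most affected were the most likely to answer.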

In this case, sexual assault and rape victims, friends/family members of victims and people who put a high value on politics related to sex would be extremely motivated to spend the time to respond to this survey. I was a respondent myself, and, yes, responding to the survey was a non-trivial investment of time and effort. People who have been unaffected by sexual violence, or those who simply see it as a non-issue, have little incentive to respond.

Clearly, there is a very high risk that the data is heavily skewed. Certain steps need to be taken before we can accept it as academically rigorous.

To the administration’s credit, the TASCS report acknowledges that this may present an issue. However, “...we developed a weighting scheme in order to make the survey sample look more comparable to the full population...” is the only explanation provided. This is a vague response, and it gave me the feeling that the administration did not want to be transparent about its methods. Taken as is, the data is not in any way academically rigorous. So, in search of answers, I attended the TASCS panel discussion on Oct. 8, intending to clear the muddy waters.

I asked the staff in charge of the TASCS directly about the weighting scheme, and their answer was that the data was simply adjusted to reflect the disparate response rates of different demographics. Apparently, this weighting did not change the results very much, perhaps by a decimal place in most cases. However, this kind of weighting does not address how skewed the data may be due to participation bias. I asked whether other treatment methods were applied, such as a short follow-up survey directed at non-responders to test how skewed the data is. Their response was that no further treatment was conducted.
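
For context, the kind of demographic weighting the staff described is a standard post-stratification adjustment. A simplified sketch, with two made-up groups and invented rates, shows both what it does and why it cannot correct for participation bias:

```python
# Hypothetical post-stratification example; all shares and rates are invented.
population_share = {"A": 0.50, "B": 0.50}   # true demographic makeup
sample_share     = {"A": 0.30, "B": 0.70}   # group B over-responded

# Observed victimization rate among respondents in each group.
observed_rate = {"A": 0.06, "B": 0.09}

# Unweighted estimate: the plain respondent average.
unweighted = sum(sample_share[g] * observed_rate[g] for g in observed_rate)

# Weighted estimate: each group pulled back to its population share.
weighted = sum(population_share[g] * observed_rate[g] for g in observed_rate)

print(f"unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")  # 8.1% vs. 7.5%
```

The adjustment nudges the estimate, consistent with the "decimal place" answer I received. But if victims over-respond within each demographic group, both numbers remain inflated, and weighting by demographics cannot even detect that.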

To be clear, it is standard practice in the social sciences to conduct a follow-up survey when the original survey had a low response rate. The follow-up survey usually contains only a handful of questions representing the salient topics of the full survey and is administered to a set of randomly selected non-respondents. It is kept short so that the non-respondents do not see it as a major time investment and are therefore more likely to answer, yielding a high response rate. If the follow-up results deviate significantly from the general results, the surveyors can conclude that their data is skewed by self-selection; if there is no deviation, the general results are much more likely to be accurate. We now know that no such test was applied to the TASCS data.
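
Here is a sketch of what that check might look like, with entirely hypothetical numbers: compare the rate from the main survey to the rate among a random sample of non-respondents, then ask whether the gap is larger than chance would allow. A simple two-proportion z-test (my choice for illustration; the source does not prescribe a particular test) makes the comparison concrete:

```python
import math

# All figures are hypothetical.
main_rate, main_n = 0.08, 3000          # estimate from the main survey
followup_rate, followup_n = 0.01, 200   # rate among sampled non-respondents

# Two-proportion z-test: pool the rates, compute the standard error of the gap.
p_pool = (main_rate * main_n + followup_rate * followup_n) / (main_n + followup_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / main_n + 1 / followup_n))
z = (main_rate - followup_rate) / se

print(f"z = {z:.1f}")  # here ~3.6; |z| well above 2 signals self-selection bias
```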

Given the extreme deviation from the national average and the poor treatment of the TASCS data, the only intellectually honest conclusion is that the TASCS results should be disregarded and that we still don’t know the real rates of sexual victimization at Tufts. It is very likely that the real victimization rates are much lower than the TASCS would indicate due to the effects of participation bias. The TASCS is nothing more than advocacy data, and publishing the results as is shows that the Tufts administration is willing to put forth a politically profitable narrative while neglecting academic rigor. The administration is not made up of stupid people; many of them have Ph.D.s. I won’t say whether they were intentionally blind to this glaring issue or whether they just allowed their critical thinking skills to take a short vacation, but the blind spot was there nonetheless.