A publisher of a national book of high school student biographies has just released the results of a survey of its membership, the students who appear in the book. The results, which appear to contain several findings of grave interest to the American public, are nevertheless scientifically suspect. How often do we hear the word validity used in relation to statistics proffered on the evening news or in the newspapers? Most people are left believing that all statistics are to be taken seriously and that their very presence in the media implies some assurance that they are well grounded and useful. The facts, in my opinion, do not warrant such trust, nor do the researchers warrant plaudits for their work in this instance.
A questionnaire designed to tap teens' beliefs on everything from sex to suicide was mailed to just over 8,000 students across the nation. Just over 3,000 returned their questionnaires, a response rate of under 40 percent, and from this sample the publisher made its findings public.
The number of teens who reported having thought about suicide turned out to be slightly more than 800, and the number who had tried to kill themselves was around 130. These figures had to be calculated, because only percentages were given in the original press release and discussed by the spokesperson on national TV. When questioned about the researchers' responsibility toward the students who had thought about killing themselves and those who had actually tried, the spokesperson sidestepped and answered a different question: the publisher intended to do something about the high percentage of students who reported rape or forced sexual contact.
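The back-calculation the press release forces on the reader is plain arithmetic. A minimal sketch of it follows; the release's exact percentages are not reproduced here, so the figures below are hypothetical values chosen only to be consistent with the counts derived above (roughly 800 and 130 out of about 3,000 respondents):

```python
# Hypothetical figures: the survey's exact reported percentages are not
# given in this essay, so these are assumed values consistent with the
# derived counts of ~800 and ~130 out of ~3,000 respondents.
respondents = 3_000                 # "just over 3,000" returned questionnaires
pct_thought_about_suicide = 27.0    # assumed reported percentage
pct_attempted_suicide = 4.3         # assumed reported percentage

thought_about = round(respondents * pct_thought_about_suicide / 100)
attempted = round(respondents * pct_attempted_suicide / 100)

print(f"Thought about suicide: ~{thought_about} students")  # ~810
print(f"Attempted suicide:     ~{attempted} students")      # ~129
```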
But where does this leave not only the kids who sent in the questionnaire and admitted to suicidal thoughts or actions, but all the kids who didn't reply yet had the same feelings or experiences? Where is the responsibility not only of the research company, but of the publisher and of the schools those students attend? What percentage of the almost 5,000 kids who didn't respond had tried to kill themselves or had thought about it? The study naively offers a glimpse into the seriousness of teen suicide while ignoring its own inadequacy in truly providing information. In this case, the concern is not only the kids who responded but, more important, the kids who didn't, because those kids may feel there is no point in reaching out and telling someone of their pain. I suspect those kids are truly in danger, and how is the study identifying them? For that matter, how is the study identifying the teens who admitted to trying to kill themselves?
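That rhetorical question has a quantitative edge worth making explicit: because nothing is known about the roughly 5,000 nonrespondents, the true rate in the full mailing can only be bounded, and the bounds are enormous. A sketch of that best-case/worst-case arithmetic, using the approximate counts cited in this essay rather than the survey's actual figures:

```python
# Approximate counts taken from the essay; the survey's true figures may differ.
mailed = 8_000     # questionnaires sent ("just over 8,000")
returned = 3_000   # questionnaires returned ("just over 3,000")
reported = 800     # respondents reporting suicidal thoughts (~800)

nonrespondents = mailed - returned  # ~5,000 students we know nothing about

# Best case: none of the nonrespondents had suicidal thoughts.
best_case = reported / mailed
# Worst case: all of them did.
worst_case = (reported + nonrespondents) / mailed

print(f"Rate among respondents: {reported / returned:.0%}")            # 27%
print(f"Possible range overall: {best_case:.0%} to {worst_case:.0%}")  # 10% to 72%
```

With under 40 percent of questionnaires returned, the data by themselves cannot distinguish between a 10 percent problem and a 72 percent one, which is precisely why the published percentages deserve the skepticism argued for here.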