As I indicated in an earlier issue of In Brief, I am making available various slideshows I use to teach about research. Finding Out What's Known is for an undergraduate lecture on what's good and bad about the various sources of information (anecdotes, popular media, websites, books, and scientific journal articles). It includes advice on how to read original-research articles and reviews in journals.
Pre and Post
I have condensed my previous resources on scientific writing into two slideshows. Writing Pre Data has general advice about scientific writing and specific advice about grants, proposals, and ethics applications. Writing Post Data is all about writing a thesis, journal article, and lay report. For more detailed information about writing papers, see earlier articles on Guidelines on Style for Scientific Writing (basically a summary of APA style), How to Write a Research Paper, and How to Write a Literature Review. There are also existing resources for the writing you will need to do when you give a talk or create a poster. See also Steve and Amanda Olivier's articles on ethics forms and comprehension in consent forms.
Update Feb 2018. Some of the above resources predate magnitude-based inference, so they contain now-irrelevant advice on p values and statistical significance. See the Progressive Statistics article for details on reporting inferential statistics and results generally. The number of decimal places or so-called significant digits to report was also the topic of a letter to the editor in the Scandinavian Journal of Medicine and Science in Sports.
Update and Put-Down
Shortly after I published the article on journal impact factors for 2001 in this issue, colleagues sent me the impact factors for 2002. I have now added these to the article and associated spreadsheet. In what follows, I argue that differences in impact factor between journals arise mainly from differences in the volume of research activity in a field. I also comment on recent relevant items in the journal Nature.
Another colleague (the reviewer of this article) and I recently discussed the relative magnitudes of impact factors in different disciplines. It became clear to us that the average impact factor of journals specializing in what we called pure biophysical sport and exercise science hovers around 1. The factors for sociological journals in sport and exercise are even smaller. In contrast, journals specializing in the generic fields of health, biochemistry, genomics, and physiology generally have impact factors of 3 or more. These differences in impact reflect mainly differences in the volume rather than the quality of research in the different fields. Why? Because it's impossible for an article in a particular field to attract a large number of citations if there aren't many researchers in that field. A journal that specializes in such a field will therefore never have a high impact factor. We're in a relatively small field, guys. Our promotion and appointment committees should take this factor into account when they assess our performance using journal impact factors.
If the bean-counting mentality continues to dominate academia to the detriment of sport scientists, one solution is to convince our journal editors to allow us to cite more articles in our papers. Doubling the number of references will double the average impact factor of our journals. Currently most journals cap the number of references in a paper, on the grounds of limited space. But most journals also now levy page charges, so the authors are paying for the space anyway. And one day soon, when there are no more paper journals, space will not be an issue. Editors, please remove the cap.
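The arithmetic behind that claim can be sketched as follows. This is a minimal illustration with invented numbers, assuming the standard definition of the impact factor (citations in a given year to items published in the two preceding years, divided by the number of citable items published in those years); none of the figures come from the article.

```python
# Hypothetical sketch of the impact-factor arithmetic (invented numbers).

def impact_factor(citations_to_recent: int, citable_items: int) -> float:
    """Citations this year to items from the two preceding years,
    divided by the number of citable items from those years."""
    return citations_to_recent / citable_items

# A small field: few researchers, so few citing papers.
small_field = impact_factor(citations_to_recent=120, citable_items=100)

# If every paper cited twice as many references, total citations in the
# field would roughly double while the article count stayed fixed,
# doubling the average impact factor of the field's journals.
cap_removed = impact_factor(citations_to_recent=240, citable_items=100)

print(small_field, cap_removed)  # 1.2 2.4
```

The point of the sketch is simply that the numerator (citations) scales with how many references each paper carries, while the denominator (articles) does not, so lifting reference caps would inflate every journal's factor proportionally.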
Coincidentally, a commentary on the politics of publication appeared recently in Nature, followed by several letters and another letter. Regrettably these items all lacked summaries, so it's hard to glean the main points. The most pertinent point from the commentary appeared to be a call "not to be so desperate to push our papers into the leading [high-impact] journals", on the surprising and to me unconvincing grounds that publishing in such journals can compromise the quality of the science. I was also not convinced by the following assertion from the correspondence: "Any selection or promotion committee that asks you for impact factors is probably a second-rate organization. A good place will want to know about the quality of what you have written, not where you published it — and will be aware that the two things are uncorrelated." Sure, but we should still aim to publish in the best journals in our field, and selection and promotion committees should take journal quality into account. What the committees need to realize is that impact factors are only a rough guide to journal quality. For example, the Journal of Strength and Conditioning Research (current impact factor 0.8) seems to me to be every bit as good as Medicine and Science in Sports and Exercise (current impact factor 2.6). MSSE's higher impact probably reflects a higher proportion of papers in the generic field of population health and a higher proportion of review-type papers, which get cited more frequently than original-research articles.
Links to the commentary and correspondence in Nature point to copies at the Sportscience site. You may wish to download them before Nature insists I remove them.
Examine the fingers of your hand. Which is longer: the index finger (the finger you use to point with; technically the second digit, counting the thumb) or the ring finger (the fourth digit)? The ring finger in males is typically longer than the index finger, whereas the fingers are about the same length in females. There is some indirect evidence that the ratio of the lengths of the fingers is determined during early fetal development by testosterone (Manning, 2000): the more testosterone the fetus produces, the longer the ring finger, so the smaller the index/ring finger ratio.
Testosterone is, of course, the natural steroid hormone that enhances athletic performance, so are men with smaller finger ratios better athletes? In general, yes: they are more likely to become athletes and to reach higher competitive levels in a range of sports (Manning & Taylor, 2001). For example, professional football players tend to have lower finger ratios than non-athletes, first-team players have lower ratios than reserve or youth-team players, footballers who have played for their country have lower ratios than those who haven't, and men with lower ratios run substantially faster over 800 m and 1500 m. So, would measurement of the finger ratio help to identify athletic talent? It might be worth doing some research to find out.
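The ratio described above is simply the index-finger length divided by the ring-finger length (the 2D:4D ratio). Here is a minimal sketch with invented measurements, just to make the direction of the effect concrete:

```python
# Hypothetical sketch of the 2D:4D digit ratio (measurements invented).

def digit_ratio(index_mm: float, ring_mm: float) -> float:
    """2D:4D ratio: index-finger length divided by ring-finger length."""
    return index_mm / ring_mm

# Typical male pattern from the text: ring finger longer -> ratio below 1.
ratio_a = digit_ratio(index_mm=72.0, ring_mm=75.0)

# Fingers of about equal length -> ratio close to 1.
ratio_b = digit_ratio(index_mm=74.0, ring_mm=74.0)

print(ratio_a, ratio_b)  # 0.96 1.0
```

On the evidence cited above, the lower the ratio, the more prenatal testosterone is inferred, and the better the expected athletic performance.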
Manning JT (2000). Digit ratio: a pointer to fertility, behavior and health. New Jersey: Rutgers University Press.
Manning JT, Taylor RP (2001). Second to fourth digit ratio and male ability in sport: implications for sexual selection in humans. Evolution and Human Behavior 22, 61-69.