A Case Study on Interpreting Research: Reading the fine print!
A quick browse through the study 'A pilot randomised controlled trial of eccentric exercise to prevent hamstring injuries in community-level Australian Football' would lead one to believe that this is yet another study demonstrating the effectiveness of the Nordic Hamstring Exercise (NHE) in increasing hamstring strength and reducing injury risk. In fact, this study is a good example of why the fine details of a research paper must be read before any weight is given to its results and suggested implications.
At the outset, the study layout seems fine - comparing eccentric exercise in the form of the NHE to stretching, and their effects on the rate of hamstring injury in Australian football players over the course of a year. A standard NHE study - comparing something to nothing - but that's another issue! However, in a training study where the effects of an exercise are assessed, you would expect a minimal number of exercise sessions and exposures to the exercise in order to assess a response - right? If we look closely at this study, we see that it required participants to complete only five exercise sessions over a 12-week period - three during the pre-season and two during the first six weeks of the season. The sessions were spaced two weeks apart during the pre-season and three weeks apart during the in-season period. To begin with, this represents a very low level of exposure to the exercise, so even if compliance from participants had been 100%, the validity of the results and their transfer to real-world situations should be questioned. Five exercise sessions in a three-month period cannot be seen as a true reflection of the effects of an exercise. With sessions spaced so far apart, adaptation to the exercise is unlikely at best.
In addition to the poorly constructed study outline, the compliance of those involved was extremely low. Of the 'participants' who agreed to take part in the study, 30% did not complete even one of the five exercise sessions. There was a 50% fall in participation from the first session to the second in the intervention group. Only 46% of participants completed a minimum of two exercise sessions, and the study does not say which two sessions these were - so they could in fact have been as far as three months apart! Only 10% of participants took part in the two in-season exercise sessions, so it becomes evident that (regardless of there being only five total exercise sessions) player exposure to the exercise was not maintained throughout the season (as infrequent as that would have been had it materialised!). When all of this is taken into consideration, it seems a little misleading that individuals who completed only two exercise sessions in a three-month period still contributed to the results of the study and the recommendations that the authors derive from it.
Although the authors duly mention the obvious limitations of the study, this research remains a fine example of the importance of reading the smaller details of a study. When we do so, we realise there is little to take from it. The authors report a trend towards a protective effect of the NHE on hamstring injury, yet little weight can be given to these results because too many crucial details are omitted. The number in each group (control and intervention) completing each session is not provided. Which sessions (1-5) were completed by each group? Did participants complete session one and then return three months later to complete session five? And if so, did that particular player play the entirety of the season, or did he see limited game time, thus limiting his chances of sustaining an injury? Did the participants even partake in their sports training on a regular basis? Failure to train regularly may itself have been the cause of injury, through a lack of exposure to running and, more specifically, high-speed running. These are the details that are needed if exercise practitioners are to take anything from this study and use it as a guide to aid exercise selection when programming. For example, (for all the reader knows) a player may have completed session one, missed sessions two, three and four, completed session five, and played in only two games for his team that year, months later, in which he suffered no hamstring injuries - yet this sort of data was still included in the results, and (even worse) recommendations on exercise selection and injury prevention are meant to be based on this level of research? Surely a higher standard of research should be the bar!
Below are some suggestions of what to look for in a training study of this nature, i.e. one examining the effects of a resistance exercise on the rate of injury:
Training sessions (intervention) performed a minimum of once per week, but ideally twice per week, for a minimum of 6-8 weeks.
The intervention is performed throughout the entire season as opposed to solely in the pre-season and first few weeks of the playing season.
A control group performing a similar volume of sport-specific training as the intervention group.
The total training load of both groups is monitored and presented in the study.
The number of games and training sessions completed for the remainder of the season (after study completion) is recorded, so that player exposure can be accurately gauged.
When an injury does occur, the severity of the injury and the total time lost as a result of the injury are reported.
When an injury does occur, the injured player's return to play and return to training are monitored and recorded, so that the rate of re-injury can be assessed.