The Era of “Science” (Mis)communication on Social Media: 3 ways baseball research is misinterpreted and misrepresented.

I want to pivot away from our regular blog posts on coaching to highlight something I feel is just as important: identifying misrepresented and misinterpreted baseball research on social media. There are a plethora of coaches on Instagram who use research to support their views - some of these views are supported by the literature, whereas others are misguided (at best).

When done correctly, peer-reviewed research can be a valuable tool used to inform training methods and enhance coaching. For example, empirical research, systematic reviews and meta-analyses can be used to improve a coach’s communication style and teaching method so as to effectively teach complex concepts to a variety of athletes ranging in levels of expertise (e.g., novice, intermediate, expert).

However, too often I’ve seen baseball Instagram accounts use findings from the abstract of a single study as “proof” that their perspective is true and their competitors’ views are false. It’s a lazy attempt to seem credible and, at worst, can be harmful to their own athletes.

Research shouldn’t be used as a polarizing tool. Each study is a piece in the puzzle and significant findings from a study should be taken as “interesting” and worthy of further investigation. 

RESEARCH COMMUNICATION TRAPS

With that said, here are a few things you should be aware of - and suspicious of - when reading posts on social media that highlight significant findings:

1. They only present significant findings from the abstract and do not highlight the authors’ interpretation from the discussion section

This is pretty common and it drives me up the wall. Think of an abstract as the SparkNotes version of a study. Abstracts are generally limited to 150-350 words, so the authors include only bare-bones results and usually 1-2 sentences of interpretation and conclusion. It goes without saying that 1-2 sentences cannot capture the complexity of the findings, acknowledge the limitations, and provide alternative explanations for the results.

Generally speaking, the discussion section is where the authors frame their findings within the context of the literature, which may stand in contrast to other findings or be supported by prior research. If there is very little support for the findings of a study, they should be taken with some caution.

2. There is only one study, or a limited number of studies, which support the claims made.

This is another popular one, typically done by those who have read a small proportion of the literature on a topic or by someone with ulterior motives. The Instagram coach finds one study that supports their view and advertises it, but ignores the litany of research that refutes it.

For example, the image below is an instance of an Instagram coach who (periodically) posts research which refutes the efficacy of weighted training implements that their direct competitor is known for using.

[Image: screenshot of the Instagram post]

It should be noted that there is also research out there that contradicts this evidence.
Don’t drink the Kool-Aid. If you think something is worth implementing in your training, then it’s worth taking the time to find supporting evidence. Systematic reviews are generally good for this - they give you an idea of what the body of literature agrees upon.

3. The sample size is small and the sample characteristics are not considered. 

Baseball research is notorious for having small sample sizes and large variances in the age of the participants. Without getting too deep into the statistical importance of a large sample size, when the sample is small a few extremely large or small values can have an outsized impact on the results.

Example:

Let’s say you have 20 participants (ages 12-18), and you hypothesize that a weighted-ball throwing program significantly improves throwing velocity over 6 weeks relative to a throwing program without weighted baseballs. You split them into two groups of 10 for comparison: one uses weighted balls while the other does not.

After the 6-week period, three of the 10 participants in the weighted-ball group gained 6 mph whereas the other seven gained 2 mph (for an average gain of 3.2 mph, overall), and the other group gained 1.5 mph on average (5 athletes gained 1 mph while the other 5 gained 2 mph). 

On the surface, weighted balls may seem like the way to go (a 3.2 mph vs. 1.5 mph gain), and this would further be supported by a statistically significant value of p = 0.0076 (the significance threshold is usually 0.05 or 0.01). However, this finding is far less meaningful given that only 30% of participants had a substantial increase in velocity, while the other 70% performed just as well as half of those throwing a regulation-weight baseball.
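You can check this arithmetic yourself. The sketch below hard-codes the hypothetical gains from the example above and shows how the group means and t-statistic look impressive while the medians tell a humbler story (the exact p-value will depend on which test you run; this is just an illustration):

```python
from statistics import mean, median, stdev

# Hypothetical velocity gains (mph) from the example above
weighted = [6, 6, 6] + [2] * 7   # 3 big responders, 7 modest ones
control = [1] * 5 + [2] * 5      # regulation-weight baseball group

print(mean(weighted), mean(control))   # 3.2 vs 1.5 mph - looks like a big edge

# Pooled two-sample t-statistic (equal-variance form)
n1, n2 = len(weighted), len(control)
sp2 = ((n1 - 1) * stdev(weighted) ** 2
       + (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2)
t = (mean(weighted) - mean(control)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5
print(round(t, 2))  # ≈ 2.68 on 18 degrees of freedom -> "statistically significant"

# But the medians show that the typical weighted-ball athlete gained
# no more than the best half of the control group.
print(median(weighted), median(control))   # 2.0 vs 1.5 mph
```

The mean is pulled up by the three big responders; the median is not, which is exactly why a single significant mean difference from a small sample deserves scrutiny.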

Sample Characteristics

It is especially important to consider the ages and physical maturity of athletes involved in the study; it is not uncommon in baseball research to see participants ranging in age from 12-18 years old. The problem here is that physical strength, athleticism and skill level are typically lower at younger ages and younger athletes may be likely to respond more drastically to weighted implements, therefore biasing results for one group over the other. 

The sample characteristics should be representative of the population about which conclusions are being drawn. A better explanation for the results may be that weighted implements are more effective at increasing velocity for younger athletes than for older, more physically mature athletes. For these reasons, you couldn’t definitively say that the results of a study which included youth or high school-aged athletes would be the same in a college or professional population.

CONCLUSION

Research is important: it progresses our knowledge toward better training techniques and guides coaches away from ineffective or harmful training practices. However, I want to stress that interpreting research is complex, and you should approach grand or polarizing claims with a healthy dose of skepticism.

If something seems too good to be true, it probably is. If you see someone who claims to know “the secret” to success, use your critical thinking skills. Think about that person’s agenda and what they stand to gain by making such claims - in most cases, they’re trying to sell you something (e.g., coaching services, their book, etc.).

“We are drowning in information but starved for knowledge.”  - John Naisbitt

Be sure to look for the red flags listed in this article while on social media: (1) using an abstract or summary as proof without the authors’ interpretation, (2) using research that has not been replicated or is weakly supported, or (3) using research with a small or unrepresentative sample as support for their views.

Lastly, the problem of misreporting and misrepresenting research extends beyond social media. It is common in news reporting (Resnick, 2017; Dumas-Mallet, Smith, Boraud, & Gonon, 2017) and even in research itself (Brock, 2019; Boutron & Ravaud, 2018; Letrud & Hernes, 2019). The point is, if researchers who spend months reviewing the literature on their respective topics can make the mistake of misreporting information, you should definitely be skeptical of the Instagram coach who posts single studies as “proof.”



About the Author

Graham Tebbit is one of the lead throwing trainers with Velo Baseball as well as our head data analyst. He earned his bachelor’s degree in psychology at Hofstra University and later his MSc in Kinesiology at the University of Toronto, where he studied sub-concussive head impacts in catchers. Contact Graham here.


 

References

  1. Brock, J. (2019, October 21). Careless citations don't just spread scientific myths – they can make them stronger. Nature Index. https://www.natureindex.com/news-blog/misciting-scientific-myths-spread-strengthen-hawthorne-effect

  2. Boutron, I., & Ravaud, P. (2018). Misrepresentation and distortion of research in biomedical literature. PNAS, 115(11), 2613–2619. https://doi.org/10.1073/pnas.1710755115

  3. Dumas-Mallet, E., Smith, A., Boraud, T., & Gonon, F. (2017). Poor replication validity of biomedical association studies reported by newspapers. PLoS ONE, 12(2), e0172650. https://doi.org/10.1371/journal.pone.0172650

  4. Letrud, K. & Hernes, S. (2019). Affirmative citation bias in scientific myth debunking: A three-in-one case study. PLoS ONE 14(9): e0222213. https://doi.org/10.1371/journal.pone.0222213

  5. Resnick, B. (2017, March 3). Study: half of the studies you read about in the news are wrong. Vox. https://www.vox.com/science-and-health/2017/3/3/14792174/half-scientific-studies-news-are-wrong
