The National Center for Complementary and Integrative Health created an excellent series of interactive modules and videos on understanding the science behind health information.
Evidence-based medicine (EBM) is a favorite phrase in healthcare today - but what does it mean? The Centre for Evidence-Based Medicine defines it as:
"the conscientious, explicit, and judicious development and use of current best evidence in making decisions about the care of individual patients".
In other words, treatment based on scientifically proven research - as up to date as possible, while still thoroughly checked. EBM relies heavily on systematic reviews and meta-analyses to determine the best evidence.
Sometimes health information presents the likelihood of something happening (or not happening) as an absolute risk. We've all seen these statements ("1 in 10...", "20%...", and so on).
Medical research, however, more often talks about whether factors increase or decrease the relative risk. This can get confusing, especially if the story doesn't state the absolute risk numbers to start with. If a new drug lowers the risk of heart attack by 50%, but the absolute risk of a heart attack was only 2% to begin with (2 people out of 100), the drug prevents just 1 heart attack per 100 people - a 1% absolute reduction. Really understanding the importance of research findings may take a little arithmetic.
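The arithmetic in the example above can be sketched in a few lines of Python. The function name and the "number needed to treat" figure are illustrative additions, not from the original text; the 2% baseline and 50% reduction are the numbers used in the example.

```python
def absolute_risk_reduction(baseline_risk, relative_reduction):
    """Convert a headline relative risk reduction into absolute terms.

    baseline_risk: risk without treatment (e.g. 0.02 = 2 in 100)
    relative_reduction: the advertised cut (e.g. 0.50 = "lowers risk by 50%")
    """
    treated_risk = baseline_risk * (1 - relative_reduction)
    return baseline_risk - treated_risk

# Numbers from the example in the text: a 2% baseline heart-attack risk
# and a drug that cuts that risk by 50% (a relative reduction).
arr = absolute_risk_reduction(0.02, 0.50)
print(f"Absolute risk reduction: {arr:.0%}")        # 1% -- 1 fewer event per 100 people
print(f"People treated per event avoided: {1/arr:.0f}")  # 100
```

The second printed figure is what researchers call the "number needed to treat": with a 1% absolute reduction, 100 people must take the drug for 1 of them to avoid a heart attack - a much less dramatic picture than "lowers risk by 50%".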
The National Institutes of Health created this guide to the different types of medical clinical studies - and why / when they are important.
While the study types described below can be perfectly valid methods for scientific research, they usually are not considered sufficient grounds to recommend a course of treatment for a patient. Look for reports of a clinical trial or a much larger study (thousands of participants) before seriously considering that information.
In vitro or animal studies: research that uses cells grown under controlled conditions, or laboratory animals, as a "stand-in" for humans. These studies are done to decide whether an approach appears both promising AND safe before attempting research involving humans. If the research has not yet progressed to human clinical trials, it is still too experimental to be used as treatment.
Studies with a small population, or case studies: research based on only a small number of subjects. A group of 150 can be quite large for some purposes, but it is still too small to represent a true sample of all people with XYZ condition. It is too easy for unexamined factors (such as age, co-existing conditions, or environment) to skew the research results.
Studies without any control group: research that does not include a separate group of people - matched as closely as possible to the study group - who did not receive the experimental treatment. Without that comparison, it is extremely difficult to demonstrate causation; the best such a study can do is suggest a correlation, and even that can be questionable.
If there is a control group, it's called a "controlled study". Comparing the study group to the control group is one of the gold standards for modern scientific research. If the individual people in the study don't know which group they're in, it's called a "blind" study; and if the researchers conducting the study don't know either, it's called a "double-blind" study.
And no matter what research method was used, be cautious of any preprint article, and follow up to see if it has appeared in a peer-reviewed source.
News reports on medical research are often misleading, for any number of reasons. To make a story more appealing to readers, a report may overstate the research's importance, or boil complex findings down into a short sound bite. News reports may even be intentionally misleading - a study of Google Health News articles published in 2013 found that almost 88% of the reporting on medical studies in its sample "had at least some type of spin, such as misleading reporting or interpretation, omitting adverse events, suggesting animal study results apply to humans or claiming causation in studies that only reported associations."
So, readers, be skeptical! These critical thinking tools can help you determine the amount of spin in a health news story.
Think of peer review as quality assurance for scientific research. It doesn't guarantee that the research conclusions are right, but reviewers check to make sure that the science was done correctly. That process takes time, though - months, sometimes even years.
Most scientific research articles are organized in the same way - and they make the most sense if you read them out of order, section by section, rather than from start to finish. Start with the abstract (the authors' summary of their paper).
Then skip to the conclusions and discussion sections to learn what the research study found.
Third - if the article still looks useful - read the introduction. It should give you the background context for the research study: why it matters and how it relates to other studies.
And only then, when you think this article is still relevant to your question, go back to skim the description of the research project and its methods. Be sure to note the size of the study (just a few? or thousands?) and if it was a controlled study - see the nearby box for more details.
Image credits: Kapi'olani Community College's Library & Learning Resources webpage on how to read a scholarly article.
PlaneTree Health Library's mission is to guide the public to trustworthy, accurate, and free health and medical information. In operation since 1989, it is a free, public, patient and consumer health library and 501(c)3 nonprofit organization. It does not accept advertisements; it has no commercial relationship with the sources of information on these webpages. Visit our online information guides linked from our main website at: www.planetree-sv.org
The text on this page is copyright PlaneTree Health Library, licensed under Creative Commons CC BY-SA. Linked contents are the responsibility of their creators / copyright holders.