When it comes to getting solid advice on health, it can sometimes feel a little like entering a lucky dip. While we all have a rough idea of what constitutes healthy versus unhealthy activities, the official word on the specifics always seems to be changing. Worse, sometimes even the big aspects of health seem to change. For instance, until recently we were told to avoid saturated fats because they increase bad cholesterol, whereas now studies are starting to suggest that this doesn’t happen and that saturated fats may in fact be quite good for us.
Resveratrol is a substance found in red grapes and red wine, and until recently it was believed that it could combat cancer and ageing while increasing energy. People rushed out in droves to spend money on expensive supplements, and then we subsequently learned that those studies were carried out on mice and probably don’t apply to humans – we’re so much bigger that resveratrol would be needed in ridiculous quantities to have the same effect (1).
It’s enough to make you wonder whether you can really trust anyone when it comes to your health. And it certainly raises the question of why researchers keep changing their minds or often just don’t agree… what’s going on?
Read on and we’ll break down the processes that good researchers go through when conducting a study. In doing so, we’ll see why mistakes happen, why scientists keep ‘changing their mind’ and how to know whether you can trust a given study.
First, let’s consider the nature of science itself. Because actually, you’ll see that in some ways it’s a good sign that scientists keep contradicting themselves.
The point of science is to disprove theories. A study that sets out to confirm the established status quo is an example of bad science: it is likely to be biased and designed in such a way as to confirm the hypothesis. A good study should instead seek to disprove a theory in order to test its validity. Scientists come up with theories and then they put those theories to the test. Eventually, a study normally comes along that demonstrates a theory to be incorrect, and the theory gets adjusted. In the meantime though, it might look for a while as though the idea is sound if the first few studies corroborate it.
If science didn’t work like this, then we would never have made many of the discoveries we have today and we would probably still believe that the world was flat. In the meantime, we’d likely still be using leeches to treat flu symptoms.
If you’re always seeking to disprove a theory, it stands to reason that you can never be 100% sure about anything. This means in turn that we can never be 100% sure that the conclusions of a study are accurate or would hold true outside of laboratory settings. Apart from anything else, there is always the risk of chance, which is almost impossible to eliminate. If you give 1,000 people a magic cure for the common cold, there is a very small possibility that those thousand people would get healthier at the same time by pure coincidence. It’s very unlikely, sure, but it cannot be completely discounted. Thus, the results of studies are actually measured in terms of probabilities – specifically, the probability of the conclusion being accurate. If one group in a study sees a change of some sort, then the change must be great enough for it to be considered statistically ‘significant’ – meaning that it was large enough and unusual enough that we can safely rule out chance. Even then, a result that is found to be ‘significant’ can only claim that it is ‘highly unlikely’ to be the result of chance.
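To make the idea of ‘ruling out chance’ concrete, here is a minimal sketch in Python – with entirely made-up data – of a permutation test, one common way of estimating how often chance alone could produce a gap as large as the one a study observed:

```python
import random

random.seed(42)

# Hypothetical recovery scores for a treated group and a control group.
treated = [random.gauss(5.5, 2.0) for _ in range(100)]
control = [random.gauss(5.0, 2.0) for _ in range(100)]

observed = sum(treated) / len(treated) - sum(control) / len(control)

# Permutation test: shuffle the group labels many times and count how
# often chance alone produces a gap at least as large as the observed one.
pooled = treated + control
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    fake_gap = (sum(pooled[:100]) - sum(pooled[100:])) / 100
    if abs(fake_gap) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"observed gap: {observed:.2f}, p-value: {p_value:.3f}")
# A small p-value says the gap is *unlikely* to be chance - never impossible.
```

A low p-value is exactly the ‘highly unlikely to be chance’ verdict described above: a probability, never a certainty.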
Controls and Confounding Variables
Chance is one factor that can skew a study’s results, but there are many others besides. A confounding variable is essentially any factor that might influence the outcome of a study and hasn’t been effectively ruled out.
For instance, a study might look at people who eat bacon and find that they are more likely to get heart disease. However, further analysis may reveal that the study was observational (meaning it wasn’t conducted under laboratory conditions) and that the rest of the participants’ diets weren’t controlled or observed. When you consider that someone who eats lots of bacon is also more likely to eat many other ‘unhealthy’ foods, you have a confounding variable that might upset the results.
Likewise, you might be testing a supplement and getting your participants to take it with milk. How can you be sure that the effects aren’t coming from the extra milk? A good study thus has to find ways to control and account for as many confounding variables as possible, often by using control groups. A control group is a secondary group kept under conditions as similar as possible to the experimental group, so that any difference in response can be attributed to the one thing that varies between them.
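As a rough illustration (the participant names here are invented), randomly assigning people to the two groups is the standard way to spread unknown confounders – diet, age, habits – roughly evenly across both:

```python
import random

random.seed(7)

# Hypothetical participant pool (names made up for illustration).
participants = [f"participant_{i}" for i in range(1, 21)]

# Random assignment spreads unmeasured traits roughly evenly across
# both groups, limiting the influence of confounding variables.
random.shuffle(participants)
half = len(participants) // 2
experimental = participants[:half]   # takes the supplement (with milk)
control = participants[half:]        # identical conditions, no supplement

print(len(experimental), len(control))  # prints: 10 10
```

With randomisation, any confounder you failed to think of is at least as likely to land in one group as the other.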
The biggest confounding variable of all in health studies, though, is the placebo effect. The very fact that you know you’re taking a supplement intended to make you better means that you often will feel better – and this belief can even bring on physical changes in the body. The placebo effect is so strong that it can even work when you know you are receiving the placebo dose, and there is even a ‘negative’ placebo response called the ‘nocebo’ effect. Placebo effects account for many studies that appear to ‘support’ the effectiveness of ‘alternative’ medicines (2).
The best we can currently do to limit the impact of the placebo effect is to make sure that participants don’t know whether they’re in the control group or the experimental group – which makes the study a ‘blind’ study. It’s important to take this one step further wherever possible by ensuring that the researchers themselves don’t know which group is which either, making it a ‘double-blind’ study. This helps to prevent the researchers from accidentally giving subtle clues or acting in ways that could bias the outcome of the study.
Some studies use ‘self-reporting’ to collect their data, meaning that they rely on the truthful and accurate accounts of their participants in order to draw their conclusions. For instance, this might mean asking smokers and non-smokers to report how they feel after climbing a flight of stairs.
Sometimes self-reporting is one of the only real methods we have for collecting data, but as you can perhaps imagine, it isn’t the most accurate method by any stretch and is open to misremembering, bias and many other problems.
When choosing a group of participants (called your ‘sample’), it’s crucial for researchers to select individuals they think are representative of the general public, or of the people they are studying. A study might be flawed, for instance, if it is conducted in a university using students who attend that institution (a common occurrence). Why? Because the sample is likely to be made up predominantly of participants of a similar age, race, geographical location and even educational background. What’s true for them might not be true for an old Jamaican man living halfway around the world. The sample also needs to be big enough for the results to reach statistical significance.
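A quick sketch shows why sample size matters. Simulating a hypothetical treatment with a true 30% response rate, small samples bounce around wildly while large ones settle near the truth:

```python
import random

random.seed(1)

# Hypothetical population: 30% of people truly respond to a treatment.
def sample_response_rate(n):
    """Estimate the response rate from a random sample of n people."""
    return sum(random.random() < 0.30 for _ in range(n)) / n

# Tiny samples swing far from 30%; bigger samples converge on it.
for n in (10, 100, 10_000):
    rates = [sample_response_rate(n) for _ in range(5)]
    print(n, [round(r, 2) for r in rates])
```

A ten-person sample can easily suggest a 0% or 60% response rate by luck alone, which is why small studies so often fail to replicate.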
When looking at health studies, then, it’s important to look at the sample that was used. There are different types of sample, including ‘random’ samples, ‘purposive’ samples and ‘opportunity’ samples.
A ‘longitudinal’ study is one conducted over the course of several years or even decades. Assuming the rest of the study is well designed, this is the ideal scenario for a health study because it tells us whether or not the effects continue over the long term. It’s not always wise to take a new supplement that hasn’t been tested for more than a few months at a time, because it might have long-term side effects that weren’t observed in those studies.
As mentioned, observational studies don’t involve the controlled conditions of a laboratory experiment. In some ways this can be beneficial, as a lab setting itself can bias results. It becomes a problem, however, when it means that other factors are not accounted for.
One big mistake that still sometimes gets made in studies is assuming causality. This means that a correlation is witnessed in a natural setting and, from there, the researchers assume or imply that one variable caused the other.
This is a particularly big issue in studies looking at psychology. Say we were to use a combination of self-reporting, testing and observation, and found that a drop in dopamine was correlated with a lower mood. We might then be tempted to say that dopamine regulates mood and that a dopamine supplement must therefore be a good tool for treating depression or anxiety.
The reality, however, might in fact be the reverse. Perhaps external factors made the participants feel low in mood, and this in turn led to them having less dopamine. This would be the view of most cognitive behavioral therapists.
While it might be tempting to jump to conclusions, we must never establish causality from correlation alone.
Sometimes a bias will be even more apparent because the study was funded by a company with ulterior motives. Many studies on heart health, for instance, have historically been funded by the tobacco industry, and their results must be called into question (3). Studies looking at ‘Type A’ personalities are a prime example: here it appears that tobacco companies were trying to find a scapegoat for heart problems associated with smoking, the claim being that Type A personalities were more likely to have heart problems and more likely to smoke as well (4). This campaign was so successful that the concept is now commonly understood and often referred to in pop culture.
Does outside funding instantly mean a study should be disregarded? Not at all – it may simply be that a company wanted something looked into. What it does mean, though, is that you should be a little more cynical and careful in assessing it.
All of these factors play into the reliability of a study, and any researcher will be familiar with these concepts from their training. Yet the same mistakes consistently occur because, at the end of the day, most researchers are somewhat biased, they are working within time constraints, and they may forget some of the important controls they need.
Our mistake is in putting scientists and researchers on a pedestal and acting as though everything they publish is gospel: it’s not. Did you know that science has only just confirmed the existence of people with no sense of rhythm? They call this ‘beat deafness’ and classify it as a type of ‘amusia’ (5). The study is new and the community is treating it like a new discovery… and yet I personally have known people with no sense of rhythm for years. You probably have too. Scientists are only discovering it now because there are only so many of them and only so many studies they can carry out. They’re only human.
If you want more proof that ‘scientists’ sometimes rush their studies and force their results, consider that the studies published in journals are often actually the work of postgraduate students trying to earn their PhD, or even undergraduate students working on their dissertation. I know this because my own psychology dissertation was almost published – and I can tell you first hand that I fudged the numbers a little in a bid to pass my exam. Nothing major, but I got people to take part twice under different names when I couldn’t get a big enough sample, for instance – and that’s hardly uncommon.
So Can You Trust Any Studies? Uncovering the Truth About Health
With all this in mind, the question you might be asking now is: can any scientific studies be trusted?
The truth of the matter is that you have to use your own judgment and be savvy when doing research. If you head over to Google Scholar, you’ll be able to find many studies available to read for free on almost any subject (bear in mind that sometimes only the abstract will be available – but this is a summary of the study and will often be enough on its own to draw your conclusions). Look at whether the results are statistically significant (and by how much), look at the sample size, look at who funded the research and consider any confounding variables. If the studies are well designed and there are many that all reach the same conclusions, then it’s generally safe to assume the information is accurate.
Most research isn’t perfect by any means, but we should at least be glad that it is being carried out and constantly analyzed and discussed. All a study is, is a test of a hypothesis. If someone is selling you a health supplement or product and they aren’t willing to test it, then that is when you should be most concerned…