Correlation vs Causation
This is top of the list for a very good reason. So much of the science quoted in the media is correlational, yet the headlines would lead you to believe otherwise. So what does correlation mean? A correlation is nothing more than an association between two things. We could say that red meat is correlated with cancer, and we could say that ice cream consumption is correlated with shark attacks. The level of science involved is the same in both statements. Causation, on the other hand, shows cause and effect. You can't say that eating red meat causes cancer, as no study has ever shown this; likewise with ice cream and shark attacks.
The general idea in research is that you perform an epidemiological study to identify correlations in whatever field you are researching. Once you have identified some associations, you can use these to inform hypotheses which you then test with controlled studies. Unfortunately this latter step is largely ignored, and there are many well-ingrained health messages that are based on nothing more than associations.
To demonstrate the problem with basing policy on correlational data, let me give you an example. It is a widely held belief that breakfast is important for school children. Correlations were indeed found between children eating breakfast and getting higher grades. But before this advice is given to parents, we should look at the association in a controlled study to rule out any confounding factors. Confounds are other variables which may affect the relationship between the variables you are studying. When our breakfast example was studied in a controlled fashion, the relationship between eating breakfast and getting higher grades disappeared, except in cases where children were malnourished. So what happened to our initial relationship? Further analysis showed that not eating breakfast was linked to absenteeism, and it was this that had a negative impact on grades. If only this study had been performed before advising school kids to eat a sugary cereal breakfast.
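To see how a confounder can manufacture a relationship like this, here's a toy simulation (my own sketch, with entirely invented numbers) in which absenteeism drives both skipped breakfasts and lower grades, and breakfast itself does nothing:

```python
# Toy model of the breakfast example: absenteeism is the confounder that
# drives both skipping breakfast and getting lower grades.
import random

random.seed(0)

students = []
for _ in range(10_000):
    days_absent = random.randint(0, 20)                   # the confounder
    eats_breakfast = random.random() > days_absent / 25   # absent kids skip breakfast more often
    grade = 70 - days_absent + random.gauss(0, 5)         # only absence actually hurts grades
    students.append((eats_breakfast, days_absent, grade))

def mean_grade(group):
    return sum(grade for _, _, grade in group) / len(group)

eaters = [s for s in students if s[0]]
skippers = [s for s in students if not s[0]]
print(f"Breakfast eaters:   {mean_grade(eaters):.1f}")    # noticeably higher...
print(f"Breakfast skippers: {mean_grade(skippers):.1f}")  # ...than skippers

# Control for the confounder: compare only students who are rarely absent.
low = [s for s in students if s[1] <= 2]
print(f"Rarely-absent eaters:   {mean_grade([s for s in low if s[0]]):.1f}")
print(f"Rarely-absent skippers: {mean_grade([s for s in low if not s[0]]):.1f}")
```

The raw comparison shows a healthy-looking gap between eaters and skippers; once we compare only students with similar attendance, the gap all but disappears, exactly as in the real controlled study.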
Bullshit Spotter - Headlines saying 'eating X could reduce your risk of Y'
Relative vs Absolute Risk
For this we need to get a little bit statty. I apologise, but it will be worthwhile, as it'll help you blow holes in many stories as well as making you sound clever (or boring) in front of your friends. Often we see headlines stating that a certain drug can decrease the risk of a disease by a large percentage. This sounds great and you want your doctor to prescribe you this drug. The issue is that the headline is using the relative risk, which is nothing more than a comparison of risk between two groups. It doesn't tell you anything about the absolute risk, which is the overall chance of you getting that disease.
In many cases the relative risk figure seems to convey a massive benefit, yet when we look at the absolute risk it is a tiny number. To give an example, a few weeks ago there was a story claiming that eating two peaches a week could reduce your risk of breast cancer by 41%. I hope your bullshit detector has immediately sounded given the correlational nature of this research, but let's look past that for now. When you look at the absolute risk of getting breast cancer, you'll see that in the highest-risk group (those eating no peaches) the risk was 0.062%, whereas in the lowest-risk group it was 0.038%. Hardly mind-blowing, is it?
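If you want to see the trick in numbers, here's a minimal sketch (using the peach figures quoted above) showing how the same data produce both the scary headline number and the rather less exciting one:

```python
# Relative vs absolute risk, using the figures from the peach story above.

no_peaches = 0.00062   # absolute risk in the highest-risk group (0.062%)
two_peaches = 0.00038  # absolute risk in the lowest-risk group (0.038%)

# Relative risk reduction: the headline-friendly comparison between groups.
relative_reduction = (no_peaches - two_peaches) / no_peaches

# Absolute risk reduction: the change in your overall chance of disease.
absolute_reduction = no_peaches - two_peaches

print(f"Relative risk reduction: {relative_reduction:.0%}")   # ~39% -- headline material
print(f"Absolute risk reduction: {absolute_reduction:.3%}")   # ~0.024% -- the number that matters
```

The headline's 41% and our 39% differ slightly because the published figures will have been rounded; the point stands either way.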
For more on this and other issues with the peach research check out this blog from Zoe Harcombe: http://www.zoeharcombe.com/2014/09/two-peaches-lower-the-risk-of-breast-cancer-by-41-puh-lease/
Bullshit Spotter - If a headline compares the risk between 2 groups rather than your overall risk
Shoddy Methods
There are many, many shoddy methods which I could discuss here, but I will focus on one which is the scourge of good nutrition research. The Food Frequency Questionnaire (FFQ) is the epidemiological nutrition researcher's best friend, as it is dirt cheap, really easy to administer and really easy to report on. Essentially, the researcher categorises food into certain groups and asks participants to record how many items of food in each group they ate over a certain time period.
The first issue here is the groupings used. Looking at our red meat example from earlier, we'd like to think that researchers focused on quality sources of meat. You may be surprised to learn that many researchers include pepperoni pizzas and burgers, buns included, in this category. It isn't a huge leap of common sense to see that a pizza and a grass-fed steak offer rather different nutritional profiles and maybe shouldn't share a grouping.
Now imagine I asked you to fill in an FFQ for the past week. How accurate do you think your recall would be? How about for last month? For last year? For the third week in December five years ago? This may sound ridiculous but it is relatively common practice. Again, it is fairly common sense to assume that there may be some mistakes in the reporting of a participant's diet.
The final big issue with the FFQ is the static way in which it is so often applied. There are several large follow-up studies currently running from which research is routinely published. The common design here is to get a large group of people to fill out an FFQ and some lifestyle questionnaires and then track them for many years, recording who dies and of what. This is what allows researchers to make statements such as the red meat and cancer correlation (when perhaps the confounders of smoking, sedentary lifestyle and carb intake would have been better shots). Some of these studies ask participants to complete an FFQ every year. Some just use the original from the start of the study. How many people do you think have the same eating habits they did last year? What about 10 years ago? More common sense needed!
Bullshit Spotter - Any mention of the FFQ in longitudinal, epidemiological research
Conflicts of Interest
Any time you see a study claiming to show a positive effect of a certain drug or foodstuff, it is always worth looking up who funded it. There is a lot of money to be made in drugs and food, and if you can get some positive studies out there about your particular product then it will certainly help profits. Whilst you may think that the funder of the research surely can't have that much influence over results, you'd be surprised. There are many clever techniques available to them. To give an example from drug trials: run-in periods are common in the intervention group, meaning that participants take the drug but their data are not recorded for the start of the trial. This allows the researchers to weed out anyone who experiences side effects or doesn't respond well to the drug, leaving just the positive responders to collect data from.
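To see how powerful this is, here's a toy simulation (my own illustration with invented numbers, not data from any real trial) of how a run-in period can inflate a drug's apparent effect:

```python
# Toy model: 30% of people are 'non-responders' who get no benefit (or drop
# out with side effects); the rest improve by roughly 1.0 units. A run-in
# period quietly removes the non-responders before data collection starts.
import random

random.seed(42)

def trial(run_in: bool, n: int = 10_000) -> float:
    """Average measured benefit across the participants who get counted."""
    benefits = []
    for _ in range(n):
        responder = random.random() < 0.7
        benefit = random.gauss(1.0, 0.2) if responder else 0.0
        if run_in and not responder:
            continue  # excluded during the run-in period, never counted
        benefits.append(benefit)
    return sum(benefits) / len(benefits)

print(f"True average effect (everyone counted): {trial(run_in=False):.2f}")  # ~0.70
print(f"Reported effect after run-in screening: {trial(run_in=True):.2f}")   # ~1.00
```

Same drug, same people; the only difference is who gets counted.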
The most recent high-profile example of a conflict of interest relates to statins. You can read more about this in Malcolm Kendrick's or Ben Goldacre's writings, but to give a taster: there is a 'Trials Unit' which runs all statin studies and holds all the data for these trials. They regularly publish research showing positive benefits of statin use, yet they refuse to release any of the raw data for others to analyse, particularly the data on side effects, which are becoming an increasing concern in statin prescribing. This trials unit has been set up to give the illusion of impartiality, yet it receives millions of pounds of funding from drug companies.
Another trick, which the trials unit above uses, is to receive part of the trial funding from honourable (in theory) institutions such as the British Heart Foundation or Diabetes UK. Such sources of funding are often quoted in the press releases, yet the majority of the funding comes from big food/pharma.
Bullshit Spotter - A story showing benefit of X drug/food/thing where the research is funded by the manufacturer of that drug/food/thing
Saving Lives
This final point looks comical when written down in plain English, yet it is so often overlooked. Now, I hate to break it to you, but everybody dies. It is impossible to save a life; you can, however, delay death. Not the most cheerful outlook, I know, but certainly worth bearing in mind when looking at the research. So imagine a story claiming that Drug X can save 1,000 lives a year. The claim is based on the difference in death rates between the experimental groups at the end of the trial. Whilst that may seem to make sense, remember that all those participants who are still alive will eventually die. It is the difference in life extension that is the crucial bit; you can't claim to 'save lives'!
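Here's a toy illustration (invented numbers, nothing to do with any real drug) of why 'lives saved' really means 'deaths postponed':

```python
# Suppose the drug simply delays everybody's death by one year. Over a
# five-year trial, that shows up as fewer deaths -- the 'lives saved'.
import random

random.seed(1)

TRIAL_YEARS = 5

# Hypothetical years-until-death for 10,000 untreated people.
untreated = [random.expovariate(1 / 40) for _ in range(10_000)]
treated = [t + 1 for t in untreated]  # the drug buys everyone one extra year

deaths_untreated = sum(t < TRIAL_YEARS for t in untreated)
deaths_treated = sum(t < TRIAL_YEARS for t in treated)

print(f"Deaths during trial, untreated: {deaths_untreated}")
print(f"Deaths during trial, treated:   {deaths_treated}")
print(f"Headline: 'Drug X saves {deaths_untreated - deaths_treated} lives!'")
# In reality every one of those 'saved' lives still ends; each death was
# merely postponed by a year. Life extension is the honest measure.
```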
For another (much better) discussion of this point please read this:
Bullshit Spotter - Any claim of potential lives saved in a certain time period
I hope that this quick run-through has given you the tools to start shouting BULLSHIT at the newspaper! If you can think of any tricks I've missed then please comment below! None of this means that all research is futile or that randomised controlled trials are the only things we should look at. Far from it. Research is unfortunately imperfect, and all I aim to achieve with this post is to make people step back and look at the news story first rather than taking it as fact purely because it is 'science'.