Another day, another low quality meta-analysis on red meat and cancer

As luck would have it, a perfectly good Friday draws to a close with this gem of a meta-analysis dropped into my cybersphere, titled “Consumption of red and processed meat and breast cancer incidence: A systematic review and meta‐analysis of prospective studies.”

It’s a bit…uh…“fun,” so I’m going to go over it in some detail for my science friends.

Disclosure: I follow a carbohydrate-restricted diet and tend to eat a lot of these foods which are purported to cause cancer. I tend to view most published literature with an air of skepticism, especially weak non-RCT statistical analyses based on Food Frequency Questionnaires (FFQs).

What is this meta-analysis?

When a statistical analysis or trial is conducted, there are a number of sources of potential error. Meta-analyses can try to correct for a few of these sources of error, but not others:

  • We can correct for random error: pooling larger volumes of data shrinks statistical noise.
  • We cannot correct for systematic bias. If that bias is present in all the underlying studies, it will be present in the meta-analysis (the sketch after this list illustrates the difference).
  • We cannot get better data than the individual trials. If the data in the trials are low quality, the resulting output will be low quality.
  • If meta-analysis authors harbor the same bias as the original trials, they can also exert subtle influence on the conclusion via exclusion criteria and wording. (So if the same author writes the studies and then authors the meta-analysis, we might expect…)
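To make those first two bullets concrete, here is a minimal simulation; the true effect, the shared bias, and the noise level are all invented numbers for illustration:

```python
# Pooling many noisy studies shrinks random error, but a bias shared by
# every study survives the averaging completely intact.
import random

random.seed(0)
true_effect = 0.0   # there is genuinely nothing to find
shared_bias = 0.3   # a flaw common to every study, e.g. in how diet is measured

# 200 simulated study estimates: truth + shared bias + per-study random noise
estimates = [true_effect + shared_bias + random.gauss(0, 0.5) for _ in range(200)]
pooled = sum(estimates) / len(estimates)

print(f"pooled estimate: {pooled:.2f}")  # lands near 0.3 (the bias), not 0.0
```

The pooled estimate converges confidently to the shared bias, with a reassuringly tight standard error. More studies just make you more certain of the wrong answer.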

Let’s dig in

In general, I follow this algorithm:

  1. What do? What are the inputs and endpoints of the paper?
  2. How strong? Did the paper find a statistically significant association between input and endpoint?
  3. How quality? Look for clear bullshit, overstatement of the data (low hazard ratios of 1.1 or so that just barely achieve p = 0.05), the quality of the input data, author conflicts of interest, and obvious errors.

We see this Abstract:

…We identified 13 cohort, 3 nested case–control and two clinical trial studies. Comparing the highest to the lowest category, red meat (unprocessed) consumption was associated with a 6% higher breast cancer risk (pooled RR,1.06; 95% confidence intervals (95%CI):0.99–1.14; I2 = 56.3%), and processed meat consumption was associated with a 9% higher breast cancer risk (pooled RR, 1.09; 95%CI, 1.03–1.16; I2 = 44.4%).

We have some really fun vernacular in there, but first let’s examine this quote:

red meat (unprocessed) consumption was associated with a 6% higher breast cancer risk (pooled RR,1.06; 95% confidence intervals (95%CI):0.99–1.14; I2 = 56.3%)

So as you statisticians may know, when a relative-risk confidence interval includes 1.0, the result is not statistically significant.

This makes me irate.

It means that, by the study’s own statistics, results at least this extreme would arise from pure noise more than 5% of the time. If there is selection bias or data-dredging, the chance that we are looking at noise is moderately higher still: if each study lead rejiggers the statistical model via the readily available shenanigans to obtain statistically significant results, this has the effect of amplifying the noise.
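You can sanity-check this from the abstract’s own numbers. A rough sketch, using the standard normal approximation on the log-RR scale (which is how these confidence intervals are typically constructed):

```python
# Back-solve the approximate p-value from the reported point estimate and CI.
import math

rr, lo, hi = 1.06, 0.99, 1.14  # unprocessed red meat, straight from the abstract

# On the log scale, a 95% CI spans roughly +/- 1.96 standard errors.
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
z = math.log(rr) / se
p = math.erfc(z / math.sqrt(2))  # two-sided p under the normal approximation

print(f"z = {z:.2f}, p = {p:.2f}")  # z = 1.62, p = 0.11
```

By their own numbers, then, the headline red meat association sits at roughly p ≈ 0.11.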

For example, if we were looking at noise, 1 out of 20 regressions of said noise, on average, would find statistically significant results. If we decided only to publish positive statistically significant results because the negative results “seemed wrong,” then averaged all these positive results, it would appear that we have a large body of evidence supporting a correlation that is, in fact, pure noise.
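Here is a toy version of that file-drawer scenario, with everything invented: 1,000 simulated “studies” where both groups are drawn from the same distribution, so the true effect is exactly zero:

```python
# Simulate pure-noise studies; "publish" only the positive, significant ones.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
published = []
for _ in range(1_000):
    treated = rng.normal(0.0, 1.0, 100)  # both groups share one distribution,
    control = rng.normal(0.0, 1.0, 100)  # so any "effect" is noise by construction
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05 and t > 0:               # the file drawer eats everything else
        published.append(treated.mean() - control.mean())

print(f"published: {len(published)} of 1000")
print(f"mean published effect: {np.mean(published):.2f}")  # consistently nonzero
```

Only a small fraction of the noise studies clear the bar, but because the rest vanish into the file drawer, the surviving literature shows a consistent, replicable-looking positive effect.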

Low relative risk on a purported cause

If A causes B, as we now believe smoking cigarettes causes lung cancer, we would find that the association is quite strong: an RR around 20 with a p-value of essentially zero, meaning your likelihood of getting cancer is 20 times greater if you smoke, and it is essentially impossible that the difference is due to noise. If cigarettes were the only cause of lung cancer, we would expect that number to be infinite instead of 20, but it’s possible that 1) secondhand-smoke-exposed people reported not smoking, 2) there are some other causes of lung cancer, like coal mining, or 3) data collection was not 100% accurate.
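For concreteness, here is what an RR of 20 means mechanically; the counts are made up purely for illustration:

```python
# Relative risk from a hypothetical 2x2 table of smoking and lung cancer.
smokers_cancer, smokers_total = 200, 10_000        # 2.0% incidence
nonsmokers_cancer, nonsmokers_total = 10, 10_000   # 0.1% incidence

rr = (smokers_cancer / smokers_total) / (nonsmokers_cancer / nonsmokers_total)
print(rr)  # 20.0: smokers develop lung cancer at 20x the baseline rate
```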

Here we find that the RR is 1.09 (95% CI 1.03–1.16). What this means is that there was almost no difference in cancer rates between the lowest and highest categories of processed meat eaters. The lower bound of the confidence interval barely clears 1.0 by their own analysis, and it’s pretty easy to argue that the real uncertainty is wider given selection bias.

In practice, this means that processed red meat is not a cause of breast cancer the way cigarettes are a cause of lung cancer; the relative risk is simply too low (near-vegetarians seem to get breast cancer just about as often as those who gorge on processed meat). What this study could argue is that processed red meat mildly perturbs the rate of breast cancer, but epidemiology is not well suited to detecting very subtle effects with any degree of confidence.

Few people churning out research in this field seem to disagree with the idea that processed meat causes cancer, and the lead author certainly appears to have an agenda (see below). From a probability standpoint, if these were honest, independent looks at a real effect, we would expect a spread of results: some regressions null, some strongly significant. A very tight cluster around p = 0.05 and RR 1.05–1.4 should be suspicious. It suggests that the average researcher tweaks statistical models until they cross the threshold of significance, then stops.
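How much does “tweak until significant, then stop” buy a researcher? A toy model, assuming (generously for the researcher) that each model variant is an independent look at pure noise:

```python
# Under the null, each analysis yields p ~ Uniform(0, 1). Trying up to 20
# model specifications and stopping at the first p < 0.05 inflates the
# false-positive rate from the nominal 5% to about 1 - 0.95**20, i.e. ~64%.
import random

random.seed(0)
trials = 100_000
hits = sum(
    any(random.random() < 0.05 for _ in range(20))  # 20 bites at the apple
    for _ in range(trials)
)

print(hits / trials)  # ~0.64
```

Correlated model variants inflate the rate less than fully independent ones, but the direction is the same, and the published p-values end up clustered just under 0.05, which is exactly the pattern described above.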

Clinical Trial Evidence

One of the major points that caught my eye was the inclusion of clinical trial evidence in this meta-analysis. Clinical trial evidence should do a better job of elucidating reality, since randomization controls for individual selection biases and other confounders.

I foolishly assumed that because the paper said:

We identified 13 cohort, 3 nested case–control and two clinical trial studies.

that I might find clinical trial studies on the relationship between processed red meat consumption and breast cancer rates. Instead:

  1. Clinical trial in citation 16: did not appear to be a clinical trial of anything. Used a Food Frequency Questionnaire. All endpoints had p = 0.05 (magic, right?).
  2. Clinical trial in citation 33: SUVIMAX randomized participants to an antioxidant supplement and used a Food Frequency Questionnaire to measure dietary intake. It had this gem:

Breast cancer risk was directly associated with processed meat intake [hazard ratio (HR)Q4vsQ1=1.45 (0.92-2.27), Ptrend=0.03] and this association was stronger when excluding cooked ham [HRQ4vsQ1=1.90 (1.18-3.05), Ptrend=0.005].

This is literally the definition of p-hacking: the original endpoint was null (the HR’s 95% CI crosses 1.0), and the association only “strengthened” after excluding cooked ham post hoc. Taken at face value, it would also suggest that cooked ham has a protective effect. Shall we go championing this new conclusion to the world?

So, tl;dr: no RCT data. Just more FFQ data, collected during an unrelated RCT.

Selection Bias on SUVIMAX

I’ll just explain what happened. Referencing figure 3:

Notice that the study with the largest effect size in the forest plot is SUVIMAX. But remember, a few paragraphs ago we saw that the SUVIMAX data showed a null overall result. How did they get such a large RR?

Simple: they used only the placebo arm from the stratified analysis, rather than the overall estimate, which was null:

In stratified analyses, processed meat intake was directly associated with breast cancer risk in the placebo group only [HRQ4vsQ1=2.46 (1.28-4.72), Ptrend=0.001], but not in the supplemented group [HRQ4vsQ1=0.86 (0.45-1.63), Ptrend=0.7].

  1. In selecting this metric, they “created” the largest outlier in the study data.
  2. It does not strike me as common practice to perform this sort of exclusion when the overall analysis (FFQ-measured intake vs. breast cancer incidence) was exactly aligned with the question at hand.
  3. If we take the SUVIMAX study at face value, any and all of this horrifying effect can be negated with a low-dose antioxidant supplement. For those who took the supplement, processed meat had no effect; or, to use this meta-analysis’s parlance, the rate of cancer decreased, non-significantly, by 14%.

I don’t like this. Had they used the overall SUVIMAX result, the pooled estimate would have landed even closer to the threshold of statistical significance.

Redoing their average, it may also be that the chart is incorrect: I get 1.10 when I plug in the numbers as shown. Either the data are wrong or they’re using some weighting I’m not reproducing. Maybe someone can help me here.
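For anyone who wants to check the arithmetic, this is the standard fixed-effect (inverse-variance) pooling; the three rows below are placeholders, not the actual Figure 3 values, so substitute those before drawing conclusions:

```python
# Fixed-effect meta-analysis: weight each study's log-RR by 1/SE^2, where
# the SE is recovered from the reported 95% confidence interval.
import math

studies = [            # (rr, ci_low, ci_high): hypothetical rows only
    (1.45, 0.92, 2.27),
    (1.05, 0.95, 1.16),
    (0.98, 0.84, 1.14),
]

num = den = 0.0
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    w = 1.0 / se**2
    num += w * math.log(rr)
    den += w

print(f"pooled RR: {math.exp(num / den):.2f}")
```

One caveat: given the nonzero I² they report, the authors likely used a random-effects model (e.g., DerSimonian-Laird), which adds a between-study variance term to each weight; that alone could account for a small discrepancy like 1.09 vs. 1.10.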

Conflict of interest or agenda?

As I was showing this to a friend, I said “also want to bet that the lead author works for the Harvard School of Public Health?”

Sure enough, a Google search indicates the lead author is (drumroll please) Maryam Farvid of the Harvard TH Chan School of Public Health.

Just a quick glance over the titles in her publication history:

  • Farvid MS, Eliassen AH, Cho E, Liao X, Chen WY, Willett WC. Dietary fiber intake in young adults and breast cancer risk. Pediatrics. 2016;137:1-11.
  • Farvid MS, Chen WY. Adolescent diet and breast cancer risk. Curr Nutr Rep. 2016.
  • Farvid MS, Eliassen AH, Cho E, Chen WY, Willett WC. Adolescent and early adulthood dietary carbohydrate quantity and quality in relation to breast cancer risk. CEBP. 2015;24:1111-20.
  • Farvid MS, Cho E, Chen WY, Eliassen AH, Willett WC. Adolescent meat intake and breast cancer risk. IJC. 2015;136:1909-20.

Does this look like a neutral observer well qualified to make an independent assessment in a meta-analysis? Is it possible she has…an agenda? That maybe her career has been built around this assumption?

The authors did address publication bias:

A few limitations of our study should be considered. As in any meta‐analysis, publication bias is possible. However, we did not observe significant publication bias for either red meat or processed meat.

Not entirely sure what this is supposed to mean…

Although most of the studies adjusted for major breast cancer risk factors, as with most observational studies, we cannot exclude the possibility of residual confounding.

OK, doing well: admitting that we can’t really be sure of these results. This really should be featured more prominently when p ≈ 0.05.

In the majority of studies, because diet was assessed using an FFQ, the under‐ or over‐reporting of the amount of food groups could cause measurement error. However, since this equally may affect cases and noncases, likely estimates will be biased toward the null, so actual effect sizes might be larger than we observed here.

What? I guess they’re trying to argue that the measurement error introduced by the FFQ biases results toward the null… OK… I don’t buy it. This asserts characteristics of FFQ error (that it is non-differential between cases and non-cases) without justification. Smells like bullshit.

Moreover, we compared the highest level of intake vs. the lowest, but levels of intake do not match sometimes. In some studies, processed poultry was included in processed meat and total red meat.

Oh lord…definitely no misclassification here that could change our conclusions!

Conclusions

I’m thoroughly impressed at how bad science is these days.

Godspeed.
