Editor – Hackshaw et al1 raise some methodologic concerns regarding our re-analysis and suggest, on the basis of these, that their original analysis2 leads to a more valid conclusion than our re-analysis.3 Their first point was that our re-analysis of the intervention studies should have included only randomized controlled trials, implying that because the randomized controlled trial is a more valid design for causal inference than the observational study, the inclusion of the latter in our re-analysis is likely to be affected by inherent biases and confounding.1 The problem with this argument is that confounding can occur either by chance or by indication.4 When confounding occurs by chance, it does so with the same probability in randomized controlled trials and observational studies because it is, by definition, due to chance. In both designs, the P-value automatically incorporates the uncertainty due to confounding by chance.4 Furthermore, in the context of a meta-analysis, confounding by chance in one direction in one study is expected to be matched by confounding by chance in the other direction in another study.4
The more important problem with observational studies is confounding by indication. If radioiodine is allocated in an observational study, we can envisage three scenarios.4 First, the allocator is not knowledgeable about important confounders, in which case these are highly unlikely to be causally related to exposure to treatment, and any unequal distribution must have occurred solely by chance. Second, radioiodine could have been allocated by someone who does not know the risk status of the patient, a situation similar to the first scenario. Third, if radioiodine was allocated by someone who knew there was a high versus low chance of ablation (for example, had surgical status information and understood its implications) and could thus influence treatment allocation, bias can arise. In the case of radioiodine, however, this sort of influence is likely to mean a bias towards higher doses in high-risk patients and towards lower doses in low-risk patients, leading to non-differential bias.5 This again suggests, under the constraints of our re-analysis, that the risk of confounding in an observational study of this sort should be minimal, and that including observational studies in this meta-analysis would only increase the precision of the estimate.4 Therefore, the statement by Hackshaw et al1 that our “meta-analyses of observational studies need to be interpreted with great care because the effect of potential confounding in each study could be magnified when the studies are combined, thus producing a spuriously precise effect” may be true for confounding by indication in other observational studies, but not in this series of studies on radioiodine.
The second point Hackshaw et al1 make is that when our re-analysis was restricted to the six randomized trials (in which they correctly indicate that no comparison was based on more than 150 patients in total), the pooled relative risk was 0.68 (95% confidence interval [CI] 0.43–1.07), which was not statistically significant (P = 0.093). They claim that the correct interpretation is that while there is some evidence of a lower ablation rate with a low dose, this effect could be due to chance, and they question why we did not comment on this. We did not think this important because that line of reasoning assumes that randomization removes the chance of confounding; as we have just pointed out, however, including information from observational studies may actually improve the inference based only on randomized controlled trials. Furthermore, a review of empirical studies suggests that meta-analyses based on observational studies can produce estimates of effect similar to those from meta-analyses based on randomized controlled trials.6,7 Indeed, these and other authors4,7,8 have found that the discrepancies between observational studies and randomized controlled trials are strongly influenced by the quality of the studies.
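The reported figures can be checked against one another from the CI alone. The sketch below (Python, our illustration rather than either paper's calculation) recovers a Wald-type two-sided P-value from the pooled relative risk 0.68 (95% CI 0.43–1.07) on the log scale; it yields roughly 0.10, consistent with the reported non-significant P = 0.093, the small discrepancy reflecting rounding of the published estimates.

```python
import math

def p_from_rr_ci(rr, lo, hi):
    """Recover an approximate two-sided P-value from a relative risk
    and its 95% CI, working on the log scale: the CI half-width divided
    by 1.96 estimates the standard error of log(RR)."""
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    z = math.log(rr) / se
    # two-sided P from the standard normal CDF via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

p = p_from_rr_ci(0.68, 0.43, 1.07)  # roughly 0.10, i.e. P > 0.05
```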
What is interesting to note is that Hackshaw et al2 appear to have made no attempt to assess the methodologic quality of the studies in their meta-analysis, even though there is now empiric evidence mandating some form of quality assessment of primary studies in meta-analysis.9,10 Indeed, a minimum requirement in the face of heterogeneity is to examine the list of studies to find out why they differ.11 We did assess the quality of every study, and more importantly, we did not simply link it to the interpretation of results or use it to limit the scope of the review.12 Instead, we created a new and robust method of meta-analysis that did justice to the quality information in this synthesis,12,13 and our conclusions remained unchanged.
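To illustrate how quality information can enter the pooling step itself rather than only the interpretation, the sketch below (hypothetical data; a simplified redistribution scheme in the spirit of the quality-effects model,12 not the published estimator) lets each study keep a share of its inverse-variance weight proportional to a quality score on [0, 1] and redistributes the released weight equally, so that total weight, and hence overall precision, is preserved:

```python
import math

# Hypothetical per-study log relative risks, standard errors,
# and quality scores in [0, 1] -- for illustration only.
log_rr = [-0.40, -0.25, -0.55, -0.10]
se     = [0.30, 0.25, 0.40, 0.35]
q      = [0.9, 0.6, 0.8, 0.5]

w = [1 / s**2 for s in se]  # conventional inverse-variance weights
released = sum((1 - qi) * wi for qi, wi in zip(q, w))
# each study keeps a quality-proportional share of its weight;
# the released weight is redistributed equally across studies,
# so sum(w_adj) == sum(w) and no precision is discarded
w_adj = [qi * wi + released / len(w) for qi, wi in zip(q, w)]

pooled = sum(wa * y for wa, y in zip(w_adj, log_rr)) / sum(w_adj)
pooled_rr = math.exp(pooled)  # quality-adjusted pooled relative risk
```

The design choice worth noting is the redistribution: simply multiplying weights by quality scores would throw information away, whereas redistributing the subtracted weight keeps the synthesis as precise as the conventional fixed-effect pool while shifting influence towards the better studies.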
Third, Hackshaw et al suggest that the “dose level” (activity of I-131) that defines ‘low’ and ‘high’ differs greatly between these trials. For example, a low dose could be 30 or 50 mCi (a difference of two-thirds), and a high dose could be 50 or 100 mCi (a difference of two-fold). The fundamental error here is that radiation dosimetry is not an exact science and involves huge approximations; thus a difference of two-thirds or two-fold does not translate into an effect change of an exact magnitude.14 Also, the use of a meta-analysis in this regard is only problematic (more so for observational studies) if we need to estimate the effect of a particular dose of radioiodine, as opposed to showing that there is, on average, an effect of a higher versus a lower dose. The same argument stands for the combination of observational and randomized controlled trial data, because here we need only show that the observational studies agree with the randomized controlled trials on average.15
Finally, we have to disagree with Hackshaw et al’s conclusions for a second time. Once these considerations have been taken into account, it is misleading to rule out the potentially greater effectiveness of a higher (as opposed to a lower) dose based on current evidence. Based on the majority of studies, this would be in the range of 2775–3700 MBq. We look forward to the results of future randomized controlled trials that will help determine whether a low dose should be avoided or could be used instead of a high dose.
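For readers moving between the units used in the trials and in our discussion, activity in mCi and MBq interconverts by an exact fixed factor (1 mCi = 37 MBq); a minimal check, our illustration only:

```python
MBQ_PER_MCI = 37  # 1 millicurie = 37 megabecquerels, an exact conversion

def mci_to_mbq(mci):
    """Convert an I-131 activity from millicuries to megabecquerels."""
    return mci * MBQ_PER_MCI

# the low and high activities discussed in the trials:
# 30 mCi -> 1110 MBq, 50 mCi -> 1850 MBq, 100 mCi -> 3700 MBq;
# the 2775-3700 MBq range corresponds to 75-100 mCi
doses_mbq = [mci_to_mbq(mci) for mci in (30, 50, 75, 100)]
```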
Footnotes
- Current affiliation: School of Population Health (Clinical Epidemiology), University of Queensland, Brisbane, Australia. Email: [email protected]
- Received January 22, 2009.
- Revision received February 16, 2009.
- Accepted March 4, 2009.
References
- 1. Hackshaw A, Mallick U. Low versus high I-131 dose for remnant ablation in differentiated thyroid cancer. Clin Med Res 2009;7:1.
- 2. Hackshaw A, Harmer C, Mallick U, Haq M, Franklyn JA. 131I activity for remnant ablation in patients with differentiated thyroid cancer: a systematic review. J Clin Endocrinol Metab 2007;92:28–38.
- 3. Doi SA, Woodhouse NJ, Thalib L, Onitilo A. Ablation of the thyroid remnant and I-131 dose in differentiated thyroid cancer: a meta-analysis revisited. Clin Med Res 2007;5:87–90.
- 4. Shrier I, Boivin JF, Steele RJ, Platt RW, Furlan A, Kakuma R, Brophy J, Rossignol M. Should meta-analyses of interventions include observational studies in addition to randomized controlled trials? A critical examination of underlying principles. Am J Epidemiol 2007;166:1203–1209.
- 5. Doi SA, Woodhouse NJ. Ablation of the thyroid remnant and 131I dose in differentiated thyroid cancer. Clin Endocrinol (Oxf) 2000;52:765–773.
- 6. Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med 2000;342:1887–1892.
- 7. Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med 2000;342:1878–1886.
- 8. Furlan AD, Tomlinson G, Jadad AA, Bombardier C. Methodological quality and homogeneity influenced agreement between randomized trials and nonrandomized studies of the same intervention for back pain. J Clin Epidemiol 2008;61:209–231.
- 9. Jüni P, Altman DG, Egger M. Systematic reviews in health care: assessing the quality of controlled clinical trials. BMJ 2001;323:42–46.
- 10. Doi SA. The influence of methodologic quality on the conclusion of a landmark meta-analysis on thrombolytic therapy. Int J Technol Assess Health Care 2009;25:107–109.
- 11. Senn S. Trying to be precise about vagueness. Stat Med 2007;26:1417–1430.
- 12. Doi SA, Thalib L. A quality-effects model for meta-analysis. Epidemiology 2008;19:94–100.
- 13. Doi SA, Thalib L. An alternative quality adjustor for the quality effects model for meta-analysis. Epidemiology 2009;20:314.
- 14. Doi SA, Loutfi I, Al-Shoumer KA. A mathematical model of optimized radioiodine-131 therapy of Graves’ hyperthyroidism. BMC Nucl Med 2001;1:1.
- 15. Deeks JJ, Dinnes J, D’Amico R, Sowden AJ, Sakarovitch C, Song F, Petticrew M, Altman DG; International Stroke Trial Collaborative Group; European Carotid Surgery Trial Collaborative Group. Evaluating non-randomised intervention studies. Health Technol Assess 2003;7:iii–x, 1–173.