HOW TO SURVIVE THE MEDICAL MISINFORMATION MESS
John P. A. Ioannidis*,†,‡, Michael E. Stuart§,¶, Shannon Brownlee**,†† and Sheri A. Strite¶
*Departments of Medicine, Health Research and Policy, and Biomedical Data Science, Stanford University School of Medicine, Stanford, CA, USA; †Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA, USA; ‡Department of Statistics, Stanford University School of Humanities and Sciences, Stanford, CA, USA; §Department of Family Medicine, University of Washington School of Medicine, Seattle, WA, USA; ¶Delfini Group LLC, Seattle, WA, USA; **Lown Institute, Brookline, MA, USA; ††Department of Health Policy, Harvard T.H. Chan School of Public Health, Cambridge, MA, USA

ABSTRACT
Most physicians and other healthcare professionals are unaware of the pervasiveness of poor-quality clinical evidence that contributes considerably to overuse, underuse, avoidable adverse events, missed opportunities for right care and wasted healthcare resources. The Medical Misinformation Mess comprises four key problems. First, much published medical research is not reliable or is of uncertain reliability, offers no benefit to patients, or is not useful to decision makers. Second, most healthcare professionals are not aware of this problem. Third, they also lack the skills necessary to evaluate the reliability and usefulness of medical evidence. Finally, patients and families frequently lack relevant, accurate medical evidence and skilled guidance at the time of medical decision-making. Increasing the reliability of available, published evidence may not be an imminently reachable goal. Therefore, efforts should focus on making healthcare professionals more sensitive to the limitations of the evidence, training them to do critical appraisal, and enhancing their communication skills so that they can effectively summarize and discuss medical evidence with patients to improve decision-making. Similar efforts may also need to target patients, journalists, policy makers, the lay public and other healthcare stakeholders.

Currently, there are approximately 17 million articles in PubMed tagged with ‘human(s)’, with >700 000 articles identified as ‘clinical trials’, and >1.8 million as ‘reviews’ (approximately 160 000 as ‘systematic reviews’). Nearly one million articles on humans are added each year [1]. Popular media also abound with medical stories and advice for patients. Unfortunately, much of this information is unreliable or of uncertain reliability. Most clinical trial results may be misleading or not useful for patients [2,3].
Most guidelines (which many clinicians rely on to guide treatment decisions) do not fully acknowledge the poor quality of the data on which they are based [4]. Most medical stories in mass media do not meet criteria for accuracy [5], and many stories exaggerate benefits and minimise harms. Clinicians and patients often do not recognise how pervasive this problem is and how profoundly it affects the care they deliver or receive. Twenty to 50 per cent of all healthcare services delivered in the United States are inappropriate, wasting resources and/or harming patients [6–10]. Much of this waste is due to overuse of medical interventions, resulting in an unknown amount of preventable harm. Underuse of effective and safe interventions further compounds the system’s failure to meet patients’ needs [11–13]. While there are many causes of inappropriate care and waste, much of it may be attributed to the poor quality of the information that clinicians and patients rely on to make decisions about the services they deliver or receive. We use the term ‘Medical Misinformation Mess’ to encompass the set of issues that relate to the low quality of medical information deeply embedded in clinical processes and decisions. Although the Medical Misinformation Mess affects multiple stakeholders – clinicians, patients, researchers, medical information content developers (e.g. producers of guidelines and decision aids), health journalists, professional associations, policymakers, politicians, hospitals, insurers, drug companies, healthcare advocates and others – here, our focus is mainly on clinician and patient issues, and on remedies for those aspects. The Medical Misinformation Mess comprises four key problems:
- Much published medical research is not reliable or is of uncertain reliability, offers no benefit to patients, or is not useful to decision makers.
- Most healthcare professionals are not aware of this problem.
- Even if they are aware of this problem, most healthcare professionals lack the skills necessary to evaluate the reliability and usefulness of medical evidence.
- Patients and families frequently lack relevant, accurate medical evidence and skilled guidance at the time of medical decision-making.
Reference | Study design/size/population | Intervention | Outcomes | Effect size | Risk of bias |
Linzer et al. (33) | 44 internal medicine interns at Duke University who volunteered | General medicine journal club that emphasized epidemiologic methods and critical appraisal of medical evidence; five journal club sessions (mean); conducted over an average of 9.5 months, led by general medicine faculty; control group received seminars dealing with ambulatory medicine issues. | Per cent improvement in knowledge using a test instrument developed by the Delphi method. | 26% improvement in the intervention group compared with 6% improvement in the control group (P = 0.02). | Unclear risk of bias. Small trial lacking details of randomisation and concealment of allocation; minimal loss to follow-up; assessors were blinded. |
MacRae et al. (34) | 81 members of the Canadian Association of General Surgeons who volunteered for a 6-month internet-based study; included surgeons from most provinces. | Internet curriculum in critical appraisal skills; included a clinical and a methodologic article, a listserv discussion of methodology and methodologic critiques; 16 articles assessed with a critical appraisal guide; control group received articles to read and had access to online critical appraisal articles. | Primary outcome measure: locally developed 51-item test to assess validity assessment and applicability skills. | Intervention group score on examination: 58.8% vs. control group score of 50% (P < 0.001). | Unclear risk of bias. Lacking details of randomisation and concealment of allocation; attrition unbalanced and >20%; adequate blinding of assessors. |
Taylor et al. (35) | 145 self-selected general practitioners, hospital physicians, allied health professionals, and healthcare managers/administrators from the south-west of England. | Half-day skills training based on the Critical Appraisal Skills Programme (CASP), developed from the educational methods of McMaster University. Control group: waiting list for workshop. | Knowledge: validated tool – 18 multiple choice questions focused on knowledge of principles for appraising evidence. Skills assessment: appraisal of a systematic review. | Knowledge score: mean difference 2.6 (95% CI: 0.6–4.6). Skills assessment: mean difference 1.2 (95% CI: 0.01–2.4). | Unclear risk of bias. Computer-generated randomisation codes; unclear concealment of allocation; balanced groups; attrition incompletely reported; adequate blinding of assessors. |