HOW TO SURVIVE THE MEDICAL MISINFORMATION MESS

John P. A. Ioannidis*,†,‡, Michael E. Stuart§,¶, Shannon Brownlee**,†† and Sheri A. Strite¶

*Departments of Medicine, Health Research and Policy, and Biomedical Data Science, Stanford University School of Medicine, Stanford, CA, USA; †Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA, USA; ‡Department of Statistics, Stanford University School of Humanities and Sciences, Stanford, CA, USA; §Department of Family Medicine, University of Washington School of Medicine, Seattle, WA, USA; ¶Delfini Group LLC, Seattle, WA, USA; **Lown Institute, Brookline, MA, USA; ††Department of Health Policy, Harvard T.H. Chan School of Public Health, Cambridge, MA, USA

ABSTRACT

Most physicians and other healthcare professionals are unaware of the pervasiveness of poor-quality clinical evidence that contributes considerably to overuse, underuse, avoidable adverse events, missed opportunities for right care and wasted healthcare resources. The Medical Misinformation Mess comprises four key problems. First, much published medical research is not reliable or is of uncertain reliability, offers no benefit to patients, or is not useful to decision makers. Second, most healthcare professionals are not aware of this problem. Third, they also lack the skills necessary to evaluate the reliability and usefulness of medical evidence. Finally, patients and families frequently lack relevant, accurate medical evidence and skilled guidance at the time of medical decision-making. Increasing the reliability of available, published evidence may not be an imminently reachable goal. Therefore, efforts should focus on making healthcare professionals more sensitive to the limitations of the evidence, training them in critical appraisal, and enhancing their communication skills so that they can effectively summarise and discuss medical evidence with patients to improve decision-making. Similar efforts may also need to target patients, journalists, policy makers, the lay public and other healthcare stakeholders.

Currently, there are approximately 17 million articles in PubMed tagged with ‘human(s)’, with >700 000 articles identified as ‘clinical trials’ and >1.8 million as ‘reviews’ (approximately 160 000 as ‘systematic reviews’). Nearly one million articles on humans are added each year [1]. Popular media also abound with medical stories and advice for patients.
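These counts can be reproduced, at least approximately, through the NCBI E-utilities interface. The short sketch below is ours, not part of reference [1]; the exact filter tags ('humans[MeSH Terms]', the publication-type filters and the 'systematic[sb]' subset) are our assumptions about the queries behind the figures above, and the numbers returned today will exceed the February 2017 snapshot because PubMed grows by roughly a million human studies per year.

```python
# Sketch: fetch article counts from PubMed via the NCBI E-utilities esearch
# endpoint. Filter tags are assumptions about the searches in reference [1].
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

QUERIES = {
    "humans": "humans[MeSH Terms]",
    "clinical trials": "humans[MeSH Terms] AND clinical trial[Publication Type]",
    "reviews": "humans[MeSH Terms] AND review[Publication Type]",
    "systematic reviews": "humans[MeSH Terms] AND systematic[sb]",
}

for label, term in QUERIES.items():
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "rettype": "count", "retmode": "json"}
    )
    with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
        count = json.load(resp)["esearchresult"]["count"]
    print(f"{label}: {count} articles")
```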

Unfortunately, much of this information is unreliable or of uncertain reliability. Most clinical trial results may be misleading or not useful for patients [2,3]. Most guidelines (on which many clinicians rely to guide treatment decisions) do not fully acknowledge the poor quality of the data on which they are based [4]. Most medical stories in the mass media do not meet criteria for accuracy [5], and many stories exaggerate benefits and minimise harms.

Clinicians and patients often do not recognise how pervasive this problem is and how profoundly it affects the care they deliver or receive. Twenty to fifty per cent of all healthcare services delivered in the United States are inappropriate, wasting resources and/or harming patients [6–10]. Much of this waste is due to overuse of medical interventions, resulting in an unknown amount of preventable harm. Underuse of effective and safe interventions further compounds the system’s failure to meet patients’ needs [11–13]. While there are many causes of inappropriate care and waste, much of it may be attributed to the poor quality of the information on which clinicians and patients rely to make decisions about the services they deliver or receive.

We use the term ‘Medical Misinformation Mess’ to encompass the set of issues that relate to the low quality of medical information deeply embedded in clinical processes and decisions.

Although the Medical Misinformation Mess affects multiple stakeholders – clinicians, patients, researchers, medical information content developers (e.g. producers of guidelines and decision aids), health journalists, professional associations, policymakers, politicians, hospitals, insurers, drug companies, healthcare advocates and others – here, our focus is mainly on clinician and patient issues, and on remedies for those aspects.

The Medical Misinformation Mess comprises four key problems:

  1. Much published medical research is not reliable or is of uncertain reliability, offers no benefit to patients, or is not useful to decision makers.
  2. Most healthcare professionals are not aware of this problem.
  3. Even if they are aware of this problem, most healthcare professionals lack the skills necessary to evaluate the reliability and usefulness of medical evidence.
  4. Patients and families frequently lack relevant, accurate medical evidence and skilled guidance at the time of medical decision-making.

Problem 1. Much published medical research is not reliable or is of uncertain reliability, offers no benefit to patients or is not useful to decision makers

With the ever-increasing number of publications, there is a growing need for well-designed and well-conducted systematic reviews and meta-analyses to provide valid, cumulative evidence on relevant topics. This need is not easy to meet. Although systematic reviews and secondary sources may accelerate evidence uptake [14], most systematic reviews and meta-analyses appear to be either not useful or of uncertain utility [3]. The majority are unnecessary (duplicative), inaccurate or misleading due to methodological biases and selective reporting of results, or they address questions that have no clinical value.

Underlying concerns about the methodology and bias of systematic reviews and meta-analyses is the quality of the published medical research on which they are based [2,3]. Usefulness of clinical research [15] requires the existence of a real problem to address, proper context placement, sufficient information gain, patient-centredness, pragmatism, reasonable value for money, nonfutility and transparency. Very few clinical studies meet at least six of these eight criteria [15]. In one survey of 60 352 studies, a meagre 7% met criteria for high-quality methods and clinical relevance [16], and fewer than 5% passed a validity screening for an evidence-based journal [17]. Uncertain or poor-quality evidence leaves clinicians, often under pressure, without definitive information regarding possible treatments. For brevity, our focus here is on therapeutic interventions, but similar problems are found in publications dealing with diagnosis, prevention and other areas of medical care.

Many solutions have been proposed for making medical research more reliable [18,19] and more useful [15]; we do not discuss them in detail here, as they have been covered elsewhere [15,18,19], but this problem is unlikely to be fixed imminently. Moreover, the large amount of accumulated misleading information is difficult to purge from the literature. Meanwhile, we need to work on the other three components of the misinformation mess to prevent misleading evidence from flowing downstream into clinical decisions.

Problem 2. Most healthcare professionals are not aware of this problem

Based on training thousands of attendees at our educational programmes and on professional interactions with colleagues at all levels, from young trainees to top clinical and academic leadership, we are convinced that very few healthcare professionals are aware of the pervasiveness of biased and inaccurate medical literature. It is our combined experience that ignorance of this problem, even at the highest levels of academic and clinical leadership, is profound.

Evidence for this ignorance also emerges in several studies and surveys. In a study of journal reading habits, internists (approximately half of whom were alumni of the Robert Wood Johnson Clinical Scholars Program) reported that they obtained information mostly from abstracts rather than full articles, stating that they relied on editors to ensure rigour and study quality [20]. Such trust may be misplaced. For example, a recent study showed that several editors of peer-reviewed journals could not tell whether a trial was randomised without a special checklist. Even then, of the 324 studies that editorial staff considered to be randomised trials, 127 (39%) were actually not randomised [21].

Many healthcare professionals put too much trust in abstracts for filtering the literature or expect that systematic reviews or guidelines will eliminate the problems of unreliability and nonutility. Clearly, the reliability of a study’s results cannot be assessed solely by reading the abstract. One study found that nearly half of the abstracts of randomised controlled trials contained biased reporting of study results, implying benefit when there was no statistically significant difference in the primary endpoint between study arms [22]. Flawed primary studies are compounded by flawed systematic reviews and lead to flawed clinical guidelines that make conflicting recommendations unsupported by reliable evidence [23,24]. Most healthcare professionals are not even minimally aware of these issues.

Problem 3. Most healthcare professionals lack the skills to evaluate the reliability and usefulness of evidence

In our encounters with students, clinicians and others working in the healthcare industry (including academicians, researchers, editors, peer reviewers, pharmacists, regulators, politicians and employees of insurance companies, hospitals, the pharmaceutical industry and new technology companies), we have found a lack of the basic skills required for determining a study’s reliability and applicability. For example, in a pretest administered to a sample of more than 500 physicians, clinical pharmacists and other healthcare professionals attending evidence-based medicine (EBM) training programmes in 2002 and 2003, 70 per cent failed a simple three-question critical appraisal test. The three pretest questions were designed to determine whether attendees could recognise the absence of a control group, understand the issue of overestimating benefit when provided with relative risk reduction information without absolute difference information and determine whether an intention-to-treat analysis was performed. Surprisingly, among those who reported feeling confident to evaluate the medical literature, 72 per cent failed the test, even with generous criteria for correct answers [25]. We have repeated the same pretest with various groups each year with similar results. A well-designed and conducted trial reported similar findings: clinicians without formal EBM training scored poorly on the 15-question Berlin Questionnaire (mean score 4.2 correct answers, compared with EBM experts’ mean score of 11.9) [26].
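The second pretest concept deserves a worked illustration. The sketch below uses hypothetical numbers of our own (they are not taken from the pretest): a treatment that halves the event rate yields an impressive-sounding relative risk reduction even when the absolute benefit, and hence the number needed to treat, is modest.

```python
# Hypothetical event rates illustrating relative vs. absolute risk reduction.
control_event_rate = 0.02    # 2 in 100 untreated patients have the event
treated_event_rate = 0.01    # 1 in 100 treated patients has the event

arr = control_event_rate - treated_event_rate  # absolute risk reduction
rrr = arr / control_event_rate                 # relative risk reduction
nnt = 1 / arr                                  # number needed to treat

print(f"RRR = {rrr:.0%}")  # '50%' sounds dramatic in an abstract...
print(f"ARR = {arr:.0%}")  # ...but only 1 extra patient in 100 benefits
print(f"NNT = {nnt:.0f}")  # 100 patients treated to prevent 1 event
```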

Critical appraisal skills matter greatly for assuring optimal patient care. When practising clinicians cannot distinguish between valid and false results, they are at risk of delivering useless treatments or, worse, harming their patients. For example, evidence of a fourfold increased risk of myocardial infarction in patients receiving rofecoxib (Vioxx, Merck, Whitehouse Station, NJ, USA) as compared with naproxen (Novopharm Biotech, Toronto, Canada) was plainly available in the abstract of the VIGOR trial. However, peer reviewers, editors and readers of the New England Journal of Medicine accepted the spurious argument that naproxen was cardioprotective. The VIGOR investigators concluded that rofecoxib did not increase the risk of myocardial infarction, stating without any supporting evidence that the ‘. . . results are consistent with the theory that naproxen has a coronary protective effect’. Millions of prescriptions were written before the drug was withdrawn from the market in 2004, after several studies reported significantly increased risks of cardiovascular events and death [27].

The potential risks of delivering poor care might be mitigated if healthcare professionals followed trustworthy clinical guidelines or based their actions on reliable systematic reviews and meta-analyses, which ought to weed out false results. However, lack of critical appraisal skills on the part of reviewers and guideline creators routinely leads to flawed systematic reviews and guidelines, leaving clinicians with few resources for sorting fact from fiction [3]. The teaching of appraisal skills in medical and other schools and in other training programmes, such as residencies, appears at first glance to be fertile ground for providing clinicians with the needed skills. However, studies assessing medical student competencies suggest that students frequently do not see, or are not taught, the relevance of EBM to clinical care and are neither motivated nor prepared to apply EBM skills. Upon entry to residency programmes, their ability to critically appraise the medical literature is extremely limited [28].

Currently, strong evidence is lacking on the most effective training approaches for equipping healthcare professionals with the knowledge and skills required to consistently apply valid research evidence in their daily work. Studies of the effectiveness of teaching EBM and critical appraisal of medical evidence are heterogeneous in study design, population, intervention components, outcome measures, study setting, duration and other factors. Several systematic reviews have reported that teaching EBM is effective, but study details and methodological quality vary widely [29–31]. An overview of reviews [29] found 16 systematic reviews that have tried to cover this topic, and more reviews have been published since then [31]. Most systematic reviews have concluded in favour of the effectiveness of EBM teaching, but outcomes vary and focus mostly on knowledge and skills rather than practical application, while randomised trials are relatively few. For example, a Cochrane review of EBM teaching effectiveness [32] concluded that EBM teaching does have a positive impact on the knowledge and skills of physicians. This conclusion rests on only three RCTs [33–35] (total sample size n = 270; shown, along with risk of bias assessments [36], in Table 1) that met the investigators’ criteria after review of 11 057 titles and abstracts yielding 148 potentially relevant studies. Another systematic review [31] of teaching EBM to healthcare professionals other than physicians and medical students found only 13 eligible studies with a total of 1120 participants, of which only four (with 168 participants) were randomised. The durability of the effects and the optimal ways of maintaining acquired knowledge and skills are even less studied.

Problem 4. Patients and families frequently lack relevant, accurate medical evidence and skilled guidance at the time of medical decision-making

People are bombarded with medical news stories, television and radio talk shows, social media, pop culture magazines, spurious websites, direct-to-consumer drug and medical device ads, hospital marketing messages and other media sources, much of which is incomplete or wildly inaccurate [37]. Some television shows hosted by physicians amount to hucksterism. Today, more media articles have begun to note problems in medical science: instances of biased medical research, a lack of evidence for both alternative and allopathic treatments and the problem of conflicts of interest. But many healthcare and medical journalists appear to remain largely unaware of the degree to which the ‘information’ they gather for stories has been shaped by the interests of manufacturers and research universities. Mass media consumers have few means of determining the accuracy of any given news item and thus often view evidence through the lens of the mass media. We need to educate the public in how to deal with these sources of misinformation [38–40].

Table 1 Randomised trials assessing the effectiveness of teaching evidence-based medicine or critical appraisal of medical evidence to physicians [32]

Linzer et al. (33)
Study design/size/population: 44 internal medicine interns at Duke University who volunteered.
Intervention: General medicine journal club that emphasised epidemiologic methods and critical appraisal of medical evidence; five journal club sessions (mean) conducted over an average of 9.5 months, led by general medicine faculty; control group received seminars dealing with ambulatory medicine issues.
Outcomes: Per cent improvement in knowledge using a test instrument developed by the Delphi method.
Effect size: 26% improvement in the intervention group compared with 6% improvement in the control group (P = 0.02).
Risk of bias: Unclear. Small trial lacking details of randomisation and concealment of allocation; minimal loss to follow-up; assessors were blinded.

MacRae et al. (34)
Study design/size/population: 81 members of the Canadian Association of General Surgeons who volunteered for a 6-month internet-based study; included surgeons from most provinces.
Intervention: Internet curriculum in critical appraisal skills, including a clinical and a methodologic article, a listserve discussion of methodology and methodologic critiques; 16 articles assessed with a critical appraisal guide; control group received articles to read and had access to online critical appraisal articles.
Outcomes: Primary outcome measure: locally developed 51-item test to assess validity assessment and applicability skills.
Effect size: Intervention group examination score 58.8% vs. control group score 50% (P < 0.001).
Risk of bias: Unclear. Lacking details of randomisation and concealment of allocation; attrition unbalanced and >20%; adequate blinding of assessors.

Taylor et al. (35)
Study design/size/population: 145 self-selected general practitioners, hospital physicians, allied health professionals and healthcare managers/administrators from the south-west of England.
Intervention: Half-day skills training based on the Critical Appraisal Skills Programme (CASP), developed from the educational methods of McMaster University; control group placed on a waiting list for the workshop.
Outcomes: Knowledge: validated tool of 18 multiple-choice questions focused on knowledge of principles for appraising evidence. Skills assessment: appraisal of a systematic review.
Effect size: Knowledge score: mean difference 2.6 (95% CI 0.6–4.6). Skills assessment: mean difference 1.2 (95% CI 0.01–2.4).
Risk of bias: Unclear. Computer-generated randomisation codes; unclear concealment of allocation; balanced groups; attrition incompletely reported; adequate blinding of assessors.

*Risk of bias ratings are based on the Cochrane Collaboration’s ‘Risk of bias’ assessment tool [36], which examines six criteria: sequence generation; allocation concealment; blinding of participants, personnel and outcome assessors; incomplete outcome data; selective outcome reporting; and other sources of bias.
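To make concrete how the six criteria in the footnote translate into the ‘unclear risk of bias’ ratings shown in Table 1, the sketch below encodes a per-domain assessment and derives an overall rating. The domain names follow the Cochrane tool [36], but the individual judgements (loosely modelled on the Linzer et al. row) and the summary rule are our own illustrative conventions, not prescriptions of the tool.

```python
# Hypothetical encoding of a Cochrane 'Risk of bias' assessment [36].
DOMAINS = [
    "sequence generation",
    "allocation concealment",
    "blinding of participants, personnel and outcome assessors",
    "incomplete outcome data",
    "selective outcome reporting",
    "other sources of bias",
]

def overall_risk(judgements: dict) -> str:
    """Summarise per-domain judgements ('low'/'unclear'/'high') into one
    rating: any 'high' domain dominates, then any 'unclear' domain.
    This summary rule is a common convention, not mandated by the tool."""
    levels = [judgements.get(domain, "unclear") for domain in DOMAINS]
    if "high" in levels:
        return "high"
    if "unclear" in levels:
        return "unclear"
    return "low"

# Illustrative judgements loosely based on the Linzer et al. row of Table 1.
linzer_1988 = {
    "sequence generation": "unclear",     # randomisation details not reported
    "allocation concealment": "unclear",  # concealment not reported
    "blinding of participants, personnel and outcome assessors": "low",
    "incomplete outcome data": "low",     # minimal loss to follow-up
    "selective outcome reporting": "low",
    "other sources of bias": "low",
}
print(overall_risk(linzer_1988))  # -> 'unclear', matching Table 1
```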

Informed or misinformed, patients are ultimately at the core of medical decision-making [41]. The legal doctrine of informed consent requires that patients understand that they have treatment choices and the potential benefits and harms of each choice [42], while medical ethics recognises that their values and preferences must be honoured [41]. Shared decision-making (SDM) involves clinicians sharing medical evidence with patients, eliciting their values and preferences and deciding with their patients the best course of treatment. Ensuring that patients are adequately prepared to make decisions usually requires professional assistance to explore both the treatment options and the medical evidence, so that the potential outcomes that matter most to the patient can be accurately determined. This process depends on sufficient, relevant and valid information and on clarifying discussion that confers ‘agency’ – the capacity of an individual to make free choices [43,44].

Use of SDM and decision support materials (often called patient decision aids) improves decision-making around many different ‘preference-sensitive’ clinical choices. A systematic review of 115 randomised controlled trials, involving more than 34 000 patients, of the effects of SDM and exposure to decision aids (in written, electronic, audiovisual or web-based formats) reported that patients had greater knowledge gain, felt more confident about what mattered to them and had more accurate expectations of risks and benefits than patients who received usual care. Participants in the experimental arms took part more actively in the decision-making process and were on average ~20% more likely to make conservative choices when facing difficult decisions regarding surgical and nonsurgical interventions, with no known adverse effects on health outcomes, satisfaction or anxiety [45].

Given the power of patient decision aids and clinician–patient dialogue, both need to be accurate if patients are to make properly informed decisions. Accuracy of decision aids depends upon the critical appraisal skills of their producers [46], while the effectiveness of clinician–patient conversations requires clinicians who are willing and able to engage and know the evidence. Barriers to implementing effective SDM include pervasive professional indifference, organizational inertia, lack of physician comfort with decision aids, time constraints, competing priorities, lack of training, lack of reimbursement and perceived work burden and cost [47]. Patients’ preferences for a treatment often differ from those of clinicians [48,49], yet clinicians often underestimate patients’ desires for information [50].

Not surprisingly, discussions with patients infrequently fulfil the criteria considered integral to informed decision-making and informed consent. A study of outpatient visits in primary care clinics assessed six elements of informed decision-making: description of the nature of the decision; discussion of alternatives; discussion of risks and benefits; discussion of related uncertainties; assessment of the patient’s understanding; and elicitation of the patient’s preference. No discussion fulfilled all criteria. Physicians frequently described the nature of the decision (83%) but infrequently elicited patients’ preferences (19%), discussed alternatives (14%), risks and benefits (9%) or uncertainties (5%), and rarely (2%) assessed the patient’s understanding of the decision [51]. In a similar study, physicians only rarely (1.1–16.6% of the time) conveyed to patients the uncertainty of the evidence surrounding recommended treatments [52]. The combination of unreliable medical evidence, a tsunami of misleading reports in the media, inadequate discussions between clinicians and patients, and a culture of patient trust in providers’ recommendations and expectation that something will be done together produces massive medical misinformation, with suboptimal, nonpatient-centred decision-making.

Moving forward

We think that all healthcare professionals involved in medical decision-making should possess basic critical appraisal skills and be knowledgeable about which sources of information are likely to be accurate and relevant. As Glasziou et al. [53] have stated, ‘a 21st century clinician who cannot critically read a study is as unprepared as one who cannot take a blood pressure or examine the cardiovascular system’. Such illiteracy is common, and clinicians thus foster unrealistic expectations about medicine. A systematic review of 48 studies of clinicians’ expectations of the benefits and/or harms of treatments, tests or screening showed that in most studies most physicians had inaccurate expectations. Moreover, it was far more common for clinicians to overestimate than to underestimate benefits and to underestimate rather than overestimate harms [54]. Their inability to assess evidence further contributes to skewed views among patients, the media, policy makers and others.

The problem of having so much unreliable and nonuseful published medical research may be attacked at its root, that is, by funding, conducting, publishing and disseminating more true and useful research. However, it is important in the meantime to make healthcare professionals, patients, journalists and others aware of the problem, to provide them with critical appraisal skills and to ensure that the best available evidence is included in clinician–patient discussions about treatment choices.

How to accomplish those three goals is neither obvious nor simple. We need additional high-quality RCTs on the effectiveness of specific interventions to teach EBM. Assessed interventions may include both fixed components (e.g. basic EBM concepts and skills) and variable components (e.g. contextual elements such as settings, leadership support, involvement of opinion leaders and other details regarding employed implementation strategies) [55]. It has not been decisively shown which implementation strategies are optimal for a given clinical practice change. Important barriers and considerations for successful clinical practice change may include personal factors (e.g. motivation, time, skills required to evaluate the relevance and validity of medical information), recommendation-related factors (access, complexity) and external factors (e.g. local clinical culture) [56].

We should also caution that any of the EBM critical appraisal tools can be subverted. For example, industry-sponsored trials may be performed and presented in a way that ticks all the boxes on the CONSORT checklist and on risk of bias tools, even as some fundamental aspects of their design (for example, the question asked and how it is asked and answered: what comparators, outcomes or follow-up are used) may still be highly misleading. There is no standard package or automated training tool to substitute for thinking and some healthy scepticism. Similarly, while decision tools can enhance SDM, automated tools alone cannot address some additional fundamental challenges that weaken the position and involvement of patients in decision-making. For example, patients typically have had little or no input into the design of the research that produced the available evidence, power imbalances may exist in the clinical consultation, and many people do not seek or cannot access care [57]. In addition, journalists must be trained to bring greater scepticism and some critical appraisal skills to reporting on medical research. Addressing these challenges requires rethinking medical research and care at large.

Acknowledging these broader challenges, agents of change could include journals, government agencies, professional groups, schools for healthcare professionals, payers and accreditation bodies, as well as fellow healthcare providers who can reinforce the importance of mastering critical appraisal and communication skills in everyday practice. The mass media have a special role to play in this regard, as all players in healthcare, from journals to clinicians to government agencies, may respond to criticism in the press.

Critical appraisal skills may have a short half-life and need continuous use and reinforcement. Moreover, given the vast and rapidly expanding nature of the literature and the limited time available to healthcare professionals, it may be easier to focus on using critically pre-appraised evidence, for example from well-done evidence synthesis efforts or guidelines, rather than try to appraise every single article. However, even systematic reviews, meta-analyses and guidelines are currently so numerous and often so poor, biased, conflicted or useless that building and maintaining skills to appraise them is not an easy task. Moreover, becoming proficient in dissecting the caveats of higher-level syntheses requires understanding the problems of primary studies.

As more journal editors recognise the Medical Misinformation Mess as an issue, they can promote awareness by publishing articles, commentaries and editorials on the subject. It seems astonishing that there is a need to point out that investigators should understand what constitutes good design, methodology, execution, performance and reporting in research; nevertheless, the need exists. Journals should require manuscripts to provide all information required for their critical appraisal. Government agencies and professional groups may also be influential stakeholders in ensuring that investigators possess key EBM skills. The press also needs training in critical thinking [58]. Schools of journalism should include basic epidemiology and statistics in their coursework for future healthcare and medical writers. Journalists and editors should also be aware of the evidence-based critiques of mass media stories, such as those offered by HealthNewsReview.org [38].

Schools for healthcare professionals could do a better job of ensuring that training in critical appraisal of the medical literature is integrated into the curricula and clinical care. Encouraging reports suggest that attitudes, knowledge and critical appraisal skills can improve through tightly integrated EBM teaching programmes [59,60]. Payers and accreditation bodies, such as the Accreditation Council for Graduate Medical Education, involved in the delivery of healthcare, could also require skills in critical appraisal of medical evidence.

Eventually, successful initiatives should become part of everyday clinical experience, not be seen as an artificial, formally imposed requirement. Teachers and trainers need ever-sharper skills in critical appraisal of the medical literature [59]. Furthermore, all healthcare professionals can take up the responsibility to master these skills and to become teachers and trainers, for themselves and for others, during encounters with patients and decision-making.

Acknowledgement

METRICS is funded by a grant from the Laura and John Arnold Foundation. The work of John Ioannidis is supported by an unrestricted gift from Sue and Bob O’Donnell.

Address

Departments of Medicine, of Health Research and Policy, and of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA 94305, USA (J. P. A. Ioannidis); Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA 94305, USA (J. P. A. Ioannidis); Department of Statistics, Stanford University School of Humanities and Sciences, Stanford, CA 94305, USA (J. P. A. Ioannidis); Department of Family Medicine, University of Washington School of Medicine, Seattle, WA 98115, USA (M. E. Stuart); Delfini Group LLC, Seattle, WA 98229, USA (M. E. Stuart, S. A. Strite); Lown Institute, Brookline, MA 02446, USA (S. Brownlee); Department of Health Policy, Harvard T.H. Chan School of Public Health, Cambridge, MA 02115, USA (S. Brownlee).

Correspondence to: John P. A. Ioannidis, Department of Medicine, Meta-Research Innovation Center at Stanford and Stanford Prevention Research Center, 1265 Welch Rd, MSOB X306, Stanford CA 94305, USA. Tel.: +1 650 7045584; e-mail: jioannid@stanford.edu

Received 1 September 2017; accepted 1 September 2017

References

1 PubMed search terms: “humans”; “humans” with Clinical Trial article type; “humans” with Review article type; “humans” with Systematic Reviews article type. PubMed database. Available at: https://www.ncbi.nlm.nih.gov/pubmed. Accessed on 13 February 2017.

2 Ioannidis JP. Why most published research findings are false. PLoS Med 2005;2:e124.

3 Ioannidis JP. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q 2016;94:485–514. https://doi.org/10.1111/1468-0009.12210.

4 Lenzer J, Hoffman JR, Furberg CD, Ioannidis JP; Guideline Panel Review Working Group. Ensuring the integrity of clinical practice guidelines: a tool for protecting patients. BMJ 2013;347:f5535.

5 Schwitzer G. A guide to reading health care news stories. JAMA Intern Med 2014;174:1183–6.

6 Chassin MR, Galvin RW; the National Roundtable on Health Care Quality. The urgent need to improve quality: Institute of Medicine National Roundtable on Health Care Quality [consensus statement]. JAMA 1998;280:1000–5.

7 Kerr EA, McGlynn EA, Adams J, Keesey J, Asch SM. Profiling the quality of care in twelve communities: results from the CQI study. Health Aff (Millwood) 2004;23:247–56.

8 McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003;348:2635–45.

9 Skinner J, Fisher ES, Wennberg JE; for the National Bureau of Economic Research. The efficiency of Medicare. Working Paper No. 8395. Cambridge, MA: National Bureau of Economic Research; July 2001.

10 Olsen LA, Saunders RS, Yong PL, eds. The Healthcare Imperative: Lowering Costs and Improving Outcomes: Workshop Series Summary. Washington, DC: National Academies Press; 2010.

11 Brownlee S, Chalkidou K, Doust J, Elshaug AG, Glasziou P, Heath I et al. Evidence for overuse of medical services around the world. Lancet 2017;390:156–68.

12 Saini V, Brownlee S, Elshaug AG, Glasziou P, Heath I. Addressing overuse and underuse around the world. Lancet 2017;390:105–7.

13 Glasziou P, Straus S, Brownlee S, Trevena L, Dans L, Guyatt G et al. Evidence for underuse of effective medical services around the world. Lancet 2017;390:169–77. https://doi.org/10.1016/s0140-6736(16)30946-1.

14 Guyatt GH, Meade MO, Jaeschke RZ, Cook DJ, Haynes RB. Practitioners of evidence based care. Not all clinicians need to appraise evidence from scratch but all need some skills. BMJ 2000;320:954–5.

15 Ioannidis JP. Why most clinical research is not useful. PLoS Med 2016;13:e1002049. https://doi.org/10.1371/journal.pmed.1002049.

16 McKibbon KA, Wilczynski NL, Haynes RB. What do evidence-based secondary journals tell us about the publication of clinically important articles in primary healthcare journals? BMC Med 2004;2:33.

17 Glasziou P. The EBM journal selection process: how to find the 1 in 400 valid and highly relevant new research articles. Evid Based Med 2006;11:101.

18 Ioannidis JP. How to make more published research true. PLoS Med 2014;11:e1001747. https://doi.org/10.1371/journal.pmed.1001747.

19 Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, du Sert NP et al. A manifesto for reproducible science. Nat Hum Behav 2017;1:0021. https://doi.org/10.1038/s41562-016-0021.

20 Saint S, Christakis DA, Saha S, Elmore JG, Welsh DE, Baker P et al. Journal reading habits of internists. J Gen Intern Med 2000;15:881–4.

21 Hopewell S, Boutron I, Altman DG, Barbour G, Moher D, Montori V et al. Impact of a web-based tool (WebCONSORT) to improve the reporting of randomised trials: results of a randomised controlled trial. BMC Med 2016;14:199.

22 Vera-Badillo FE, Napoleone M, Krzyzanowska MK, Alibhai SM, Chan AW, Ocana A et al. Bias in reporting of randomized clinical trials in oncology. Eur J Cancer 2016;61:29–35. https://doi.org/10.1016/j.ejca.2016.03.066.

23 Shaneyfelt T. In guidelines we cannot trust. Arch Intern Med 2012;172:1633–4.

24 Kung J, Miller RR, Mackowiak PA. Failure of clinical practice guidelines to meet Institute of Medicine standards: two more decades of little, if any, progress. Arch Intern Med 2012;172:1628–33.

25 Program pre-test scores: “Using the Medical Literature”. June 6, 2003. Available at: http://www.delfini.org/Delfini_Pre-Test_Report_0306.pdf. Accessed on 1 February 2017.

26 Fritsche L, Greenhalgh T, Falck-Ytter Y, Neumayer HH, Kunz R. Do short courses in evidence based medicine improve knowledge and skills? Validation of Berlin questionnaire and before and after study of courses in evidence based medicine. BMJ 2002;325:1338–41.

27 James MJ, Cook-Johnson RJ, Cleland LG. Selective COX-2 inhibitors, eicosanoid synthesis and clinical outcomes: a case study of system failure. Lipids 2007;42:779–85.

28 Smith AB, Semler L, Rehman EA, Haddad ZG, Ahmadzadeh KL, Crellin SJ et al. A cross-sectional study of medical student knowledge of evidence-based medicine as measured by the Fresno Test. J Emerg Med 2016. https://doi.org/10.1016/j.jemermed.2016.02.006.

29 Young T, Rohwer A, Volmink J, Clarke M. What are the effects of teaching evidence-based health care (EBHC)? Overview of systematic reviews. PLoS ONE 2014;9:e86706. https://doi.org/10.1371/journal.pone.0086706.

30 Flores-Mateo G, Argimon JM. Evidence based practice in postgraduate healthcare education: a systematic review. BMC Health Serv Res 2007;7:119.

31 Hecht L, Buhse S, Meyer G. Effectiveness of training in evidence-based medicine skills for healthcare professionals: a systematic review. BMC Med Educ 2016;16:103. https://doi.org/10.1186/s12909-016-0616-2.

32 Horsley T, Hyde C, Santesso N, Parkes J, Milne R, Stewart R. Teaching critical appraisal skills in healthcare settings. Cochrane Database Syst Rev 2011;(11):CD001270. https://doi.org/10.1002/14651858.cd001270.pub2.

33 Linzer M, Brown JT, Frazier LM, DeLong ER, Siegel WC. Impact of a medical journal club on house-staff reading habits, knowledge, and critical appraisal skills. A randomized control trial. JAMA 1988;260:2537–41.

34 MacRae HM, Regehr G, McKenzie M, Henteleff H, Taylor M,  Barkun J et al. Teaching practicing surgeons critical appraisal skills with an Internet-based journal club: a randomized, controlled trial. Surgery 2004;136:641–6.

35 Taylor RS, Reeves BC, Ewings PE, Taylor RJ. Critical appraisal skills training for health care professionals: a randomized controlled trial. BMC Med Educ 2004;4:30.

36 Higgins JPT, Altman DG (editors). Chapter 8: assessing risk of bias in included studies. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration. Available at: www.cochranehandbook.org. Accessed on 10 June 2017.

37 Schwitzer G. Pollution of health news: time to drain the swamp. BMJ 2017;356:j1262.

38 Walsh-Childers K, Braddock J, Rabaza C, Schwitzer G. One step forward, one step back: changes in news coverage of medical interventions. Health Commun 2016:1–14. https://doi.org/10.1080/10410236.2016.1250706.

39 Schwitzer G. Trying to drink from a fire hose: too much of the wrong kind of health care news. Trends Pharmacol Sci 2015;36:623–7.

40 Schwitzer G. How can journalists do a better job reporting on the principles of shared decision making? Chapter 41. In: Elwyn G, Edwards A, Thompson R, editors. Shared Decision Making in Health Care: Achieving Evidence-Based Patient Choice, 3rd edn. Oxford: Oxford University Press; 2016: 270–6.

41 Mulley AG, Trimble C, Elwyn G. Stop the silent misdiagnosis: patients’ preferences matter. BMJ 2012;345:e6572. https://doi.org/10.1136/bmj.e6572.

42 King JS, Moulton BW. Rethinking informed consent: the case for shared medical decision-making. Am J Law Med 2006;32:429–501.

43 Makoul G, Clayman ML. An integrative model of shared decisionmaking in medical encounters. Patient Educ Couns 2006;60:301–12.

44 Elwyn G, Frosch D, Thomson R, Joseph-Williams N, Lloyd A, Kinnersley P et al. Shared decision-making: a model for clinical practice. J Gen Intern Med 2012;27:1361–7.

45 Stacey D, Légaré F, Col NF, Bennett CL, Barry MJ, Eden KB et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev 2014;(1):CD001431. https://doi.org/10.1002/14651858.cd001431.pub4.

46 Montori VM, LeBlanc A, Buchholz A, Stilwell DL, Tsapas A. Basing information on comprehensive, critically appraised, and up-to-date syntheses of the scientific evidence: a quality dimension of the International Patient Decision Aid Standards. BMC Med Inform Decis Mak 2013;13(Suppl 2):S5. https://doi.org/10.1186/1472-6947-13-S2-S5.

47 Elwyn G, Scholl I, Tietbohl C, Mann M, Edwards AGK, Clay C et al. “Many miles to go . . .”: a systematic review of the implementation of patient decision support interventions into routine clinical practice. BMC Med Inform Decis Mak 2013;13(Suppl 2):S14. https://doi.org/10.1186/1472-6947-13-S2-S14.

48 Sepucha KR, Simmons LH, Barry MJ, Edgman-Levitan S, Licurse AM, Chaguturu SK. Ten years, forty decision aids, and thousands of patient uses: shared decision making at Massachusetts General Hospital. Health Aff (Millwood) 2016;35:630–6. https://doi.org/10.1377/hlthaff.2015.1376.

49 Montgomery AA, Harding J, Fahey T. Shared decision making in hypertension: the impact of patient preferences on treatment choice. Fam Pract 2001;18:309–13.

50 Say RE, Thomson R. The importance of patient preferences in treatment decisions–challenges for doctors. BMJ 2003;327:542–5.

51 Braddock CH 3rd, Fihn SD, Levinson W, Jonsen AR, Pearlman RA. How doctors and patients discuss routine clinical decisions. Informed decision making in the outpatient setting. J Gen Intern Med 1997;12:339–45.

52 Braddock CH 3rd, Edwards KA, Hasenberg NM, Laidley TL, Levinson W. Informed decision making in outpatient practice: time to get back to basics. JAMA 1999;282:2313–20.

53 Glasziou P, Burls A, Gilbert R. Evidence based medicine and the medical curriculum. BMJ 2008;337:a1253. https://doi.org/10.1136/bmj.a1253.

54 Hoffmann TC, Del Mar C. Clinicians’ expectations of the benefits and harms of treatments, screening, and tests: a systematic review. JAMA Intern Med 2017;177:407–19. https://doi.org/10.1001/jamainternmed.2016.8254.

55 Dizon JM, Grimmer-Somers K. Complex interventions required to comprehensively educate allied health practitioners on evidence based practice. Adv Med Educ Pract 2011;2:8. https://doi.org/10.2147/AMEP.S19767.

56 Fischer F, Lange K, Klose K, Greiner W, Kraemer A. Barriers and strategies in guideline implementation – a scoping review. Healthcare (Basel) 2016;4:E36. https://doi.org/10.3390/healthcare4030036.

57 Greenhalgh T, Snow R, Ryan S, Rees S, Salisbury H. Six ‘biases’ against patients and carers in evidence-based medicine. BMC Med 2015;13:200. https://doi.org/10.1186/s12916-015-0437-x.

58 PLoS Medicine Editors. False hopes, unwarranted fears: the trouble with medical news stories. PLoS Med 2008;5:e118.

59 Shaughnessy AF, Gupta PS, Erlich DR, Slawson DC. Ability of an information mastery curriculum to improve residents’ skills and attitudes. Fam Med 2012;44:259–64.

60 Coomarasamy A, Khan KS. What is the evidence that postgraduate teaching in evidence based medicine changes anything? A systematic review. BMJ 2004;329:1017.