What kind of evidence is it that Evidence-Based Medicine advocates want health care providers and consumers to pay attention to?

Abstract

Background

In 1992, Evidence-Based Medicine advocates proclaimed a "new paradigm", in which evidence from health care research is the best basis for decisions for individual patients and health systems. Hailed in the New York Times Magazine in 2001 as one of the most influential ideas of the year, this approach was initially and provocatively pitted against the traditional teaching of medicine, in which the key elements of knowing for clinical purposes are understanding of basic pathophysiologic mechanisms of disease coupled with clinical experience. This paper reviews the origins, aspirations, philosophical limitations, and practical challenges of evidence-based medicine.

Discussion

EBM has long since evolved beyond its initial (mis)conception that it might replace traditional medicine. EBM now attempts to augment rather than replace individual clinical experience and understanding of basic disease mechanisms. It must continue to evolve, however, to address a number of issues, including its scientific underpinnings, its moral stance and consequences, and practical matters of dissemination and application. For example, attempts to accelerate the transfer of research findings into clinical practice often rest on incomplete evidence from selected groups of people who experience a marginal benefit from an expensive technology, raising questions about the generalizability of the findings and about how many people, and which ones, can afford the new innovations in care.

Summary

Advocates of evidence-based medicine want clinicians and consumers to pay attention to the best findings from health care research that are both valid and ready for clinical application. Much remains to be done to reach this goal.

Background

Evidence-Based Medicine (EBM) is based on the notion that clinicians, if they are to provide, and continue to provide, optimal care for their patients, need to know enough about applied research principles to detect studies published in the medical literature that are both scientifically strong and ready for clinical application. This opportunity for continuing to improve the quality of medical care stems from the huge ongoing public and private investment in biomedical and health research.

The challenges in applying new knowledge, however, are considerable, and EBM does not address them all. Two that EBM tries to address are as follows. First, the advance of knowledge is incremental, with many false steps and with breakthroughs few and far between, so that only a tiny fraction of the reports in the medical literature signal new knowledge that is both adequately tested and important enough for practitioners to depend upon and apply. Second, practitioners have limited time and little understanding of research methods.

To help practitioners meet these challenges, EBM advocates have created procedures to objectively identify and summarize evidence as it accumulates on clinical topics, and resources that allow users to find the current best evidence when and where it is needed for decisions concerning health and health care [1]. This paper reviews the origins, aspirations, philosophical limitations, and practical challenges of evidence-based medicine.

Discussion

The history and precepts of EBM

Evidence-Based Medicine (EBM), the term and current concepts, originated from clinical epidemiologists at McMaster University [2, 3]. Although the term has been adopted by many disciplines and adapted to their use (eg, as Evidence-Based Nursing, Evidence-Based Clinical Practice, Evidence-Based Pharmacy, and so on), the objectives of these congeners are the same and I will use the generic term in this essay.

EBM advocates want patients, practitioners, health care managers and policy makers to pay attention to the best findings from health care research that meet the dual requirements of being both scientifically valid and ready for clinical application.

In doing so, EBM advocates proclaimed a new paradigm and seemingly pitted EBM against the traditional knowledge foundation of medicine, in which the key elements are understanding of basic mechanisms of disease coupled with clinical experience. The latter is epitomized by the individual authority ("expert"), or, better still, collective medical authority, such as a panel of experts convened by a professional society to provide practice guidelines based on collective expert opinion. EBM claims that experts' recommendations (of what works and what doesn't work in caring for patients) are more fallible than evidence derived from sound systematic observation (that is, health care research). This is especially so during recent decades, when applied research methods have been developed for observation and experimentation in increasingly naturalistic and complex clinical settings.

Furthermore, because applied research methods are based on assessing probabilities for relationships and the effects of interventions, rather than underlying mechanistic explanations, EBM posits that practitioners must be ready to accept and deal with uncertainty (rather than seeking the reductionist allure of basic science), and to acknowledge that management decisions are often made in the face of relative ignorance of their underlying nature or true impact for individual patients.

A fundamental assumption of EBM is that practitioners whose practice is based on an understanding of evidence from applied health care research will provide superior patient care compared with practitioners who rely on understanding of basic mechanisms and their own clinical experience. So far, no convincing direct evidence exists that shows that this assumption is correct. Nevertheless, the New York Times Magazine "Year in Review" included EBM as one of the most influential ideas of 2001 [4].

Basic versus applied health research

Scientific approaches to studying health care problems developed at a leisurely pace until the end of World War II, when some of the public funding that had been dedicated to killing was reallocated to saving lives through health research. Initial investments were directed first to basic research, to better understand the determinants and pathophysiology of disease, and medical schools reflected this stage of development in their teaching of the basic sciences of biology, pathology, physiology and biochemistry as the foundation of medical knowledge. Increasing shares of investment were then allocated to the development and applied testing of innovations in clinical settings. Although these applied research methods were initially rooted in the observational techniques of epidemiology, clinical epidemiologists such as Archie Cochrane in the UK, Alvan Feinstein in the US, and David Sackett in Canada pioneered and legitimized the use of experimentation in clinical settings, leading to the randomized controlled trial (RCT) becoming the hallmark of testing. It is important to recognize that experimental designs were added to observational designs, not substituted for them. Different methods, observational or experimental, are needed for exploring different questions.

The first trial in which randomization was formally described and applied was published in the British Medical Journal in 1948 [5] and heralded a new era of antibiotic treatment, streptomycin for tuberculosis. Today, methodologies from other scientific disciplines have been added. For example, nonexperimental and qualitative research methods have been adopted from the social sciences. Thus, the research methods of medical science are pluralistic and expanding, driven by attempts to address a broader range of questions, and undoubtedly by the priority that people place on personal health, the obvious benefits that biomedical research has already brought, and the prospect that these benefits are just the beginning.

EBM does not clearly address the role of basic science in medical discovery, except to indicate that, in most circumstances of relevance to individual patient care, basic science alone does not provide valid and practical guidance. There are some exceptions, such as certain deficiency disorders, type 1 diabetes mellitus for example. However, even though basic science provides definitive evidence that insulin deficiency is the underlying problem in this disorder, determining which of many possible ways of delivering exogenous insulin therapy results in the best care for patients has required myriad applied research studies, with clear evidence concerning the benefit of multiple dose insulin regimens coming less than a decade ago [6]. In many such situations, empirical solutions, tested by applied research methods, are "holding the fort" until basic understanding – of mechanisms and interventions – is forthcoming. This will continue to be the case for the foreseeable future, the marvelous advances in genomics notwithstanding.

This schism between basic and applied research is, however, more rhetoric than reality. Rather, basic and applied research are different ends of a spectrum of health research, progressing from "bench" to "bedside". Applied research is a complementary way of knowing, not a participant in a scientific turf war over the best way of knowing: the best applied research studies are often founded on excellent basic science findings, even though basic research is neither necessary nor sufficient for the management of most medical problems. Nevertheless, from a pragmatic, clinical focus, applied research provides evidence to practitioners and patients that is often better suited to the specific problems they must deal with. Confusion between the objectives of science and those of the practice of medicine has perhaps led to much of the misunderstanding and criticism leveled at EBM.

An example of the interplay between basic and applied clinical research

An example illustrates the complex relationship between basic and applied research, and in turn, between both of these and clinical practice. Narrowing of the arteries to the front part of the brain (the internal carotid artery (ICA) and its branches, the anterior and middle cerebral arteries) is associated with stroke, in which a part of the brain dies when it loses its blood supply. Narrowings of the internal carotid artery above the level of the neck can be bypassed by connecting the superficial temporal artery (STA), on the outside of the head, with a branch of the middle cerebral artery (MCA), just inside the skull. STA-MCA bypass (also known as extracranial-intracranial (EC/IC) bypass) is an elegant (and expensive) surgical procedure that is both technically feasible in a high proportion of cases and leads to increased blood supply to the part of the brain beyond the narrowing of the ICA. This increased blood supply was thought, on physiological grounds, to be exactly what the brain needed to prevent future strokes in people who had had minor strokes in this vascular distribution.

About 200 case series of patients undergoing STA-MCA bypass were published in the medical literature up to 1985, almost all of them interpreted by their surgeon authors as indicating benefits for patients. In these case series studies, patients are described before and after undergoing the procedure, and sometimes compared with findings in previous reports ("historical controls"), with and without the procedure. In 1985, a large randomized controlled trial was reported [7]. This showed no reduction in the subsequent rate of stroke with the procedure when compared with the rate for patients who did not have the procedure. On further analysis, it was found that patients with STA-MCA bypass who had higher rates of blood flow were actually worse off, and that surgery blunted the natural rate of recovery from the initial stroke that led to selection of patients for surgery [8]. Dissemination of these findings was rapid and led to the elimination of this procedure for preventing stroke recurrence.

Subsequently, randomized controlled trials were conducted for patients with narrowing of the internal carotid artery in the neck. Removing this narrowing surgically, a procedure called carotid endarterectomy, had been practiced for longer than STA-MCA bypass at the time but had not been adequately tested in controlled trials. Its use was brought into question because of the negative findings of the STA-MCA bypass trial, which appeared to undermine the physiological rationale for the procedure, namely, that opening a partially blocked internal carotid artery would reduce the risk for subsequent stroke. As it happens, several randomized controlled trials of carotid endarterectomy that ensued showed that it has substantial benefit for symptomatic patients with severe narrowing of the carotid artery, but not for those who had mild narrowing or who had no symptoms associated with the narrowing [9].

These trials of STA-MCA bypass and carotid endarterectomy have led to better understanding of the basic mechanisms of stroke, elimination of a harmful surgical procedure, promotion of another procedure, and provision of evidence for tailoring the findings to individual patients [10]. These advances in knowledge have benefited many patients. Unfortunately, surveys of patient care also show that some patients receive endarterectomy when they are unlikely to benefit from it, while others who might benefit are not offered it [11]. In fact, there are numerous examples of underapplied evidence of both the benefits and harms of treatments [12]. Eliminating this mismatch between who could benefit and who is offered health care interventions is the prime objective of EBM.

The nuts and bolts of EBM

A current definition of EBM is "the explicit, judicious, and conscientious use of current best evidence from health care research in decisions about the care of individuals and populations" [1]. A more pragmatic definition is a set of tools and resources for finding and applying current best evidence from research for the care of individual patients. This practical definition reflects the fact that there are now many information resources in which evidence from health care research has been pre-graded for validity by people with expertise in research methods, and, better still, also assessed by experienced practitioners for clinical relevance. Thus, the user's task is changing from the largely hopeless one of reading the original medical literature to find out about current best care, to one of finding the right pre-assessed research evidence, judging whether it applies to the health problem at hand, and then working the evidence into the decision that must be made.

Grades of the quality of evidence are derived from scientific principles of epidemiology and its offspring, clinical epidemiology. The grades are based on several notions, the most elementary of which are as follows. First, studies that take more precautions to minimize the risk of bias (for example, through using reliable and valid measures of health care outcomes) are more likely to reveal useful truths than those that take fewer precautions. Second, studies based in patient populations that more closely resemble those that exist in usual clinical practice are more likely to provide valid and useful information for clinical practice than studies based on organisms in test tubes, creatures in cages, very select human populations, or unachievable clinical circumstances (such as extra staff to provide intensive follow up, far beyond the resources in usual clinical settings). Third, studies that measure clinical outcomes that are more important to patients (eg, mortality, morbidity and quality of life, rather than liver enzymes and serum electrolytes) are more likely to provide evidence that is important to both practitioners and patients.

Simple guidelines for critical appraisal of health care research evidence are widely available in print (for example, [1, 13]) and on the internet (for example, http://www.cche.net/usersguides/main.asp). Optimal study designs differ for determining the cause, course, diagnosis, prognosis, prevention, therapy and rehabilitation of disease, so the rules for assessing validity differ for these different questions. For example, randomized allocation of participants to an intervention and control group is held to be better than non-random allocation for controlling bias in intervention studies. This is not merely a matter of logic, common sense or faith: non-random allocation usually results in more optimistic differences between intervention and control groups than does random allocation [14]. Similarly, in observational study designs for assessing the accuracy of diagnostic tests, independent interpretation of the tests that are being compared is known to result in less optimistic reports of test performance [15].
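To make the idea of question-specific appraisal criteria concrete, the sketch below encodes a simplified checklist of the kind found in the Users' Guides [1, 13]. It is a minimal illustration only: the particular criteria, their grouping by question type, and the scoring rule are assumptions for demonstration, not a validated appraisal instrument.

```python
# Illustrative sketch: question-specific critical appraisal checklists.
# The criteria below are simplified assumptions for demonstration,
# loosely modeled on commonly cited appraisal questions; they are not
# a complete or authoritative instrument.

APPRAISAL_CRITERIA = {
    "therapy": [
        "Was allocation to intervention and control groups randomized?",
        "Was allocation concealed?",
        "Was follow-up sufficiently complete?",
        "Were patients analyzed in the groups to which they were randomized?",
    ],
    "diagnosis": [
        "Was the test compared independently and blindly with a reference standard?",
        "Did the study include an appropriate spectrum of patients?",
        "Was the reference standard applied regardless of the test result?",
    ],
    "prognosis": [
        "Was a representative, well-defined sample assembled early in the course of disease?",
        "Was follow-up sufficiently long and complete?",
    ],
}


def appraise(question_type, answers):
    """Return a rough verdict given yes/no answers to the checklist items."""
    criteria = APPRAISAL_CRITERIA[question_type]
    if len(answers) != len(criteria):
        raise ValueError("one answer is needed per criterion")
    unmet = len(criteria) - sum(answers)
    # Crude rule of thumb: only a study meeting every criterion passes the screen.
    return "passes initial screen" if unmet == 0 else f"caution: {unmet} criteria not met"


if __name__ == "__main__":
    # Example: a therapy study that was randomized with concealed allocation,
    # but with incomplete follow-up and no intention-to-treat analysis.
    print(appraise("therapy", [True, True, False, False]))
```

A checklist like this is only a first filter: passing it says nothing about whether the results apply to the patient at hand, which remains a matter of clinical judgment.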

Although most guidelines for critical appraisal are not comprehensive or fully rigorous, they provide an effective filter for the reliability and validity of health care research that screens out about 98% or more of the medical research literature as not being ready for clinical use [16]. Of those studies that make it through the filter, systematic reviews provide the firmest base for the application of evidence in practice [17]; the past decade has seen the Cochrane Collaboration forging a worldwide effort to summarize evidence concerning the effects of health care interventions [18].

Once individual studies have been assembled and graded for quality, the collected evidence can then be used to make recommendations for practice, preferably with each recommendation being labeled according to the level of evidence that supports it. Various systems for indicating the level of evidence for collected evidence are available, for example, from the Centre for Evidence-Based Medicine in Oxford http://cebm.jr2.ox.ac.uk/docs/levels.html and in EBM books [1, 13].
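As a simple illustration of what such a levels-of-evidence scheme looks like, the sketch below maps common study designs to approximate levels in the spirit of the Oxford hierarchy just mentioned. The exact labels, the set of designs, and the lookup function are simplifications assumed here for demonstration; any real grading system is more nuanced.

```python
# Illustrative, simplified mapping of study design to level of evidence,
# in the spirit of (but not identical to) the Oxford CEBM levels.
# The labels and ordering are assumptions for demonstration only.

EVIDENCE_LEVELS = {
    "systematic review of randomized trials": "1a",
    "individual randomized trial": "1b",
    "systematic review of cohort studies": "2a",
    "individual cohort study": "2b",
    "case-control study": "3b",
    "case series": "4",
    "expert opinion or mechanistic reasoning": "5",
}


def grade(study_design):
    """Look up the approximate level of evidence for a given study design."""
    return EVIDENCE_LEVELS.get(study_design.lower(), "ungraded")


if __name__ == "__main__":
    # The EC/IC bypass literature before 1985 consisted largely of case series...
    print(grade("case series"))                  # -> "4"
    # ...whereas the 1985 EC/IC trial was an individual randomized trial.
    print(grade("individual randomized trial"))  # -> "1b"
```

Labeling recommendations this way makes explicit how far each one is from the firmest evidence, which is the point of grading: a recommendation resting on level 4 or 5 evidence should invite more caution than one resting on level 1 evidence.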

Some difficulties with the term Evidence-Based Medicine

Many objections to EBM are based on the notion that it advocates cook-book medicine, that is, treating patients strictly according to a formula or algorithm derived from a research study. In fact, this was never intended by the advocates of EBM, but it was perhaps not initially clearly emphasized that evidence from research can be no more than one component of any clinical decision. Other key components are the circumstances of the patient (as assessed through the expertise of the clinician), and the preferences of the patient (Figure 1) [19]. Just how research evidence, clinical circumstances, and patients' wishes are to be combined to derive an optimal decision has not been clearly stated, except that "clinical judgment and expertise" are viewed as essential to success [20].

Figure 1. Basic elements of clinical decision making

Even more problematic, the term evidence is commonly used for many types of evidence of relevance to clinical practice, not just health care research evidence. For example, clinicians collect evidence of patients' circumstances and wishes. Thus, it is hardly surprising that the term evidence-based medicine is confusing to many, who do not appreciate that its evidence is narrowly defined as having to do with systematic observations from certain types of research. The very name has been an impediment to getting across its main objective, namely, that health care research is nowadays producing important results that, if applied, can benefit patients more than the treatments that clinicians are experienced in recommending. Using the technical definition of EBM, evidence from health care research is a modern, never-before-available complement to traditional medicine. Perhaps a better name would be "certain-types-of-high-quality-and-clinically-relevant-evidence-from-health-care-research-in-support-of-health-care-decision-making"...an accurate but mind-numbing descriptor.

Philosophical issues – from a lay perspective

The main original paper on EBM [2] proposed EBM as a paradigm shift, based on Thomas Kuhn's definition of paradigms: ways of looking at the world that define both the problems that can be legitimately addressed and the range of admissible evidence that may bear on their solution. Nevertheless, it is fair to say that the originators of EBM paid little attention to the philosophy of science. It is also easy to agree with Alan Chalmers [21] that most scientists and EBM advocates are ignorant of the philosophy of science and give little or no thought to constructing a philosophical basis for their activities.

According to Guba and Lincoln [22], in the basic science that underpins traditional medicine, the workings of the human body and basic mechanisms of disease can be discovered by observations of an individual human or organism using instruments that are objective and bias free. These mechanisms then can be discerned by inductive logic and known to a certainty. By contrast, applied research deals with more complex phenomena than disease mechanisms; often relies on experimentation rather than (just) observation; recognizes that observations of complex phenomena can be biased and takes measures to reduce bias; has groups of patients as the basis of observation; uses probabilities to judge truth, rather than expecting certainty; and uses deductive and Bayesian logic to progress. Certainly, there are differences between the approaches of basic and applied research, but are they mutually exclusive, as in a paradigm shift, or complementary ways of knowing, as in a pluralistic version of epistemology? The latter view seems to be more tenable.

The expectation of EBM that doctors should keep abreast of evidence from (certain-types-of-health-care-) research raises many issues. First, what is "valid" health care research? Second, what are the "best" findings from this research? Third, when is health care research "ready" for application? Fourth and fifth, to whom and how does one apply valid and ready evidence from health care research? EBM provides a set of increasingly sophisticated tools for addressing these questions, but, at present, the result is only partly as good as EBM advocates hope it will become.

Meanwhile there is much to criticise about EBM from both philosophical (AV Kulkarni, personal communication, 2000) and practical perspectives. For example, it is difficult to be smug about the superiority of the research methods advocated by EBM when the results of methodologically similar studies not infrequently disagree with one another. Moreover, it has been shown that the findings of observational studies agree more often than not with the findings of allegedly more potent RCTs [23, 24]. While holes can be picked in these arguments against the ascendancy of RCTs [25], there is no way to win the argument without a universal standard of truth.

Furthermore, the issue of when a research finding is ready for clinical application remains mired in the lack of a satisfactory resolution of how findings from groups can be applied to individuals. For one thing, our understanding of how to determine what patients want is primitive. Also problematic, the circumstances in which patients are treated can vary widely from location to location (including locations that are right across the street from one another): the resources, expertise and patients are often quite different, so the same research evidence cannot be applied in the same way, or sometimes cannot be applied at all.

Finally, we do not have convincing studies showing that patients whose clinicians practise EBM are better off than those whose clinicians do not practise EBM: no one has done a randomized controlled trial of EBM with patient outcomes as the measure of success. Such a trial would be impossible to do as the control group could not be effectively isolated from the research that EBM is attempting to transfer, and it would be regarded as unethical to attempt to do so. This situation is unfortunate in the sense that, even if it is accepted that current research is generating valuable findings for health care, there are many questions about whether the EBM movement is doing anything useful to accelerate the transfer of these findings into practice. Nevertheless, we do have limited evidence that the concepts of EBM are teachable [26].

David Hume and his followers took pains to point out the differences between is and ought. The is of EBM is that science is producing new and better ways of predicting, detecting and treating disease than were imaginable at the middle of the past century. The ought of the EBM movement, which annoys many practitioners, and would perturb Hume and his followers, is that EBM advocates believe that clinicians ought to be responsible for keeping up to date with these advances and ought to be prepared to offer them to patients. Thus, EBM has taken on the tones of a moral imperative. But it is premature to get very preachy about the ought of EBM, not that this has stopped EBM's more ardent advocates.

Worse still, the interventions that advocates of EBM insist ought to be provided in all appropriate individual circumstances would undoubtedly have some important adverse effects. For one, full implementation would cost much more than the resources currently available for health care, even accounting for some cost effective innovations and deletion of existing but ineffective practices. The increased costs of care would lead to unaddressed (let alone resolved) dilemmas in distributive justice. Second, interventions that save lives and reduce suffering in the short term may end up prolonging life beyond the point of senescence and misery.

EBM advocates try to ameliorate the latter problem by declaring that patients' values ought to be incorporated into clinical decisions, but without assuring that we know how to do this. Indeed, there is a continuing tension here between the consequentialist, population-based origins of epidemiology (doing the greatest good for the greatest number), which generates most of the best evidence that EBM advocates hope to convince practitioners and patients to pay attention to, and the deontological or individualistic approach of medicine, doing the greatest good for the individual patient, which practitioners are sworn to do. Although some components of EBM have been derided as representing ultrapragmatic utilitarianism, EBM doesn't offer a credible solution to this tension, nor even take a clear stance on it, perhaps reflecting the dual origins of many EBM advocates: most of the leaders are trained in both epidemiology and a clinical discipline, and do both research and clinical practice.

In weighing the philosophical issues raised by EBM, many epistemological issues certainly merit intense discussion. However, it is the ethical issues that I believe to be of highest concern. Will the proceeds of the new science of medicine be fairly distributed in society? Given the already stupendous and wildly escalating costs of health care, driven particularly by newer diagnostic and therapeutic interventions, how can resources be optimally and fairly allocated within the health care sector and across all sectors of public expenditure? Can the long-term consequences (for example, unproductive and miserable longevity) of the short-term gains that are regularly documented by health care research continue to be ignored? How can patients' wishes be informed, determined and taken into account in health care decision making? Should some of the funds for health research be diverted into some other sector (continuing education?) so that the health care system can catch up to the current state of knowledge? Is EBM a waste of time if we lack adequate understanding of practical methods of changing practitioner and patient actions [27]?

One hopes that the attention of philosophers will be drawn to these questions, as well as to the continuing debate about whether EBM is a new paradigm and whether applied health care research findings are more valid for reaching practical decisions about health care than basic pathophysiological mechanisms and the unsystematic observations of practitioners.

Summary

Evidence-Based Medicine has evolved substantially from its origins a decade ago, becoming less pretentious and more practical. Nonetheless, it must continue to evolve and address several important issues that will otherwise limit its value as an adjunct to health care decisions. Pressing matters include agreement on what constitutes "best" evidence; appropriate generalization beyond research projects; accurate and efficient communication with practitioners, patients and policy makers; and moral issues including distributive justice and individual autonomy. Given the substantial investment of society and commerce in fundamental and applied health research, and the high expectations of society for reducing the burden of illness, attention to these matters should have high priority.

References

  1. Sackett DL, Straus S, Richardson SR, Rosenberg W, Haynes RB: Evidence-Based Medicine: How to Practice and Teach EBM. London, Churchill Livingstone. 2000, 2

  2. Guyatt GH: Evidence-based medicine [editorial]. ACP J Club. 1991, 114: A-16.

  3. Evidence-based Medicine Working Group: Evidence-based medicine: a new approach to teaching the practice of medicine. JAMA. 1992, 268: 2420-2425.

  4. Hitt J: Evidence-Based Medicine. New York Times Magazine. 2001

  5. Medical Research Council: Streptomycin treatment of pulmonary tuberculosis. BMJ. 1948, 2: 769-782.

  6. The Diabetes Control and Complications Trial Research Group: The effect of intensive treatment of diabetes on the development and progression of long-term complications in insulin-dependent diabetes mellitus. N Engl J Med. 1993, 329: 977-986. 10.1056/NEJM199309303291401.

  7. The EC/IC Bypass Study Group: Failure of extracranial-intracranial arterial bypass to reduce the risk of ischemic stroke: results of an international randomized trial. N Engl J Med. 1985, 313: 1191-1200.

  8. Haynes RB, Mukherjee J, Sackett DL, Taylor DW, Barnett H, Peerless SJ: Functional status changes following medical or surgical treatment for cerebral ischemia: results in the EC/IC Bypass Study. JAMA. 1987, 257: 2043-2046. 10.1001/jama.257.15.2043.

  9. Cina C, Clase C, Haynes RB: Carotid endarterectomy for symptomatic carotid stenosis (Cochrane Review). In: The Cochrane Library, Issue 3. 1999, Oxford: Update Software.

  10. Rothwell PM, Slattery J, Warlow CP: Clinical and angiographic predictors of stroke and death from carotid endarterectomy: systematic review. BMJ. 1997, 315: 1571-1577.

  11. Goldstein LB, Bonito AJ, Matchar DB, Duncan PW, Samsa GP: US National Survey of Physician Practices for the Secondary and Tertiary Prevention of Ischemic Stroke. Carotid endarterectomy. Stroke. 1996, 27: 801-806.

  12. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers T: A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. JAMA. 1992, 268: 240-248. 10.1001/jama.268.2.240.

  13. Guyatt G, Rennie D, eds: Users' Guides to the Medical Literature. A Manual for Evidence-Based Clinical Practice. Chicago: AMA Press. 2002

  14. Schulz KF, Chalmers I, Hayes RJ, Altman DG: Empirical evidence of bias: dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995, 273: 408-412.

  15. Lijmer JG, Mol BW, Heisterkamp S, Bonsel GJ, Prins MH, van der Meulen J, Bossuyt PM: Empirical evidence of design-related bias in studies of diagnostic tests. JAMA. 1999, 282: 1061-1066. 10.1001/jama.282.11.1061.

  16. Haynes RB: Where's the meat in clinical journals? [editorial]. ACP Journal Club. 1993, A16 (Ann Intern Med 115, suppl 3).

  17. Clarke M, Chalmers I: Discussion sections in reports of controlled trials published in general medical journals: islands in search of continents? JAMA. 1998, 280: 280-282.

  18. Jadad AR, Haynes RB: The Cochrane Collaboration – advances and challenges in improving evidence-based decision making. Med Decision Making. 1998, 18: 2-9.

  19. Haynes RB, Sackett DL, Gray JRM, Cook DL, Guyatt GH: Transferring evidence from research into practice: 1. the role of clinical care research evidence in clinical decisions. ACP Journal Club. 1996, 125: A-14-16.

  20. Haynes RB, Devereaux PJ, Guyatt GH: Clinical expertise in the era of evidence-based medicine and patient choice. ACP J Club. 2002, 136: A-11-14.

  21. Chalmers AF: What is this thing called science?. Indianapolis: Hackett Publishing Co. Inc. 1999, 3

  22. Guba E, Lincoln Y: Competing paradigms in qualitative research. In: Handbook of Qualitative Research. Edited by: Denzin N, Lincoln Y. 1994, Thousand Oaks, CA: Sage Publications

  23. Concato J, Shah N, Horwitz RI: Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med. 2000, 342: 1887-1892. 10.1056/NEJM200006223422507.

  24. Benson K, Hartz AJ: A comparison of observational studies and randomized, controlled trials. N Engl J Med. 2000, 342: 1878-1886. 10.1056/NEJM200006223422506.

  25. Pocock SJ, Elbourne DR: Randomized trials or observational tribulations?. N Engl J Med. 2000, 342: 1907-1909. 10.1056/NEJM200006223422511.

  26. Norman GR, Shannon SI: Effectiveness of instruction in critical appraisal (evidence-based medicine) skills: a critical appraisal. Can Med Assoc J. 1998, 158: 177-81.

  27. Oxman AD, Thomson MA, Davis DA, Haynes RB: No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. CMAJ. 1995, 153: 1423-1431.

Acknowledgements

Many people reviewed and provided helpful comments on this essay at various stages of its evolution. I thank them all without "naming names", as the historical perspective and views I've expressed are mine.

Author information

Corresponding author

Correspondence to R Brian Haynes.

Additional information

Competing interests

The author is one of the originators of the concepts of Evidence-Based Medicine, and is the developer and editor of a number of evidence-based publications, including ACP Journal Club, acpjc.org and Best Evidence; co-editor for Evidence-Based Medicine; coordinating editor for Evidence-Based Mental Health, Evidence-Based Nursing; associate editor for WebMD's Scientific American Medicine; and advisory board member for Clinical Evidence.


Cite this article

Haynes, R.B. What kind of evidence is it that Evidence-Based Medicine advocates want health care providers and consumers to pay attention to?. BMC Health Serv Res 2, 3 (2002). https://doi.org/10.1186/1472-6963-2-3
