J. Cancer Education Vol. 1, No. 2, pp. 141-151, 1986. © 1986 American Association for Cancer Education.

EVALUATING THE CANCER EDUCATION PROGRAM: EXAMINING A RANGE OF APPROACHES

RICHARD E. GALLAGHER, PhD,* PATRICIA MULLAN SCALZI, PhD,** CAROLYN NORRIS-BAKER, PhD,† RICHARD F. BAKEMEIER, MD‡

* Department of Family Medicine, Wayne State University School of Medicine. ** Office of Educational Resources and Research, The University of Michigan Medical School. † Department of Architecture, Kansas State University. ‡ Cancer Center, University of Rochester School of Medicine and Dentistry.

This work was supported in part by contract NO1-CO-95464-09 NCI, U.S. Dept. of HHS. Reprint requests to: Richard E. Gallagher, PhD, Dept of Family Medicine, U.H.C.-4J, Wayne State University School of Medicine, 4201 St. Antoine, Detroit, Michigan 48201.

Abstract—The objectives of this article are to provide people who are responsible for cancer education programs with a broadened perspective of the rich array of evaluation strategies available and responsive to the evaluation needs of cancer education programs, and to summarize selected literature resources useful for the development of an evaluation plan. This article reviews limitations found in previous evaluations of medical education programs and provides a descriptive survey of diverse strategies for the evaluation of cancer education programs. The characteristics, requirements, utility, audience, constraints, and resources of each strategy are presented. Two cancer-related case studies are discussed in detail as illustrations of the diversity of strategies available to evaluate cancer education programs.

INTRODUCTION

The intent of this article is to share with those who are or will be responsible for cancer education programs a descriptive organizational framework of selected strategies for the evaluation of such programs, some commonly found deficiencies in evaluation proposals, and several strategies for avoiding or overcoming such deficiencies. The descriptive framework presents a diverse range of evaluation approaches, defining and elaborating their requirements, utility, intended audience, criteria for judging achieved rigor, constraints, resources required for implementation, and form of evaluative product. In addition, health-related examples of each approach are identified.

The objectives of this paper are to review implications for the evaluation of cancer education programs which are drawn from the evaluation literature, and to enable people who are or will be responsible for cancer education programs to use the descriptive model provided in this paper for selecting an approach or approaches to the evaluation of a specific program. The emphasis within this article is on the rich array of evaluation strategies available which are capable of responding to evaluation needs, and on the requirements for the informed selection from among them.

SHORTCOMINGS OF PROGRAM EVALUATIONS IN CANCER EDUCATION

This organizational framework was developed as a response to two key issues in the field of educational evaluation: the differential sensitivities of evaluation strategies and the utilization of findings. The first issue concerns the extent to which unnecessarily limited or inappropriately selected evaluation strategies have constrained the ability of evaluations to detect and portray a broad range of impacts of educational interventions. As indicated in a review of past evaluations in medical education,1 the questions raised as appropriate foci of study, the methodologies considered rigorous and relevant, and the sources of evidence for answering the evaluation questions raised are bound, to a large extent, in the values, expectations, and perspectives of the evaluators' professions. That is, the questions identified as germane in cancer education have reflected evaluators' interpretations of the expectations of external sponsors (federal agencies or other funding sources), as well as the evaluators' own needs to maintain their professional identities, often within the academic disciplines of their training. Engel and Filling's2 review of medical education conference proceedings indicated that psychology and education constituted the professional background of the overwhelming majority of professional medical education evaluators. One implication of this finding is that the literature of both education and psychology, until very recently in the history of program evaluation, tended to emphasize a relatively limited perspective on the nature and methods of such evaluations. Concurring with previous findings,3-5 Engel and Filling2 concluded that overdependence on a single perspective for study constrained evaluation to gathering evidence on questions that are necessarily narrow and often of minor consequence. This issue of choice of evaluation strategy has serious implications for the validity of evaluation products.

The second issue relevant to those involved in the evaluation of cancer education programs is the utilization of evaluation findings. Many evaluators6-12 have criticized traditional evaluation research for its failure to detect or explain the impact of interventions.
In other cases, evaluators have noted the difficulty of elucidating unintended consequences to program participants and sponsors.13 This literature reflects the assumptions that utilization is the intended and appropriate goal of evaluation14-16 and that evaluation findings are underutilized.17,18 The attention to utilization in the evaluation literature also has contributed to the impetus for a more deliberate attempt to identify a wider range of evaluation strategies, on the premise that an alternative evaluation strategy might more compellingly direct the future actions of decision makers. In his review of the impact of evaluation findings on policy, Mitchell19 expressed concern with the apparent impotence of these findings to guide decision-making. The emerging field and literature of program evaluation is now actively incorporating concepts and methods from many diverse disciplines and is encouraging evaluation approaches that lead to an assessment of a broader range of program impact.

CHARACTERISTICS OF ALTERNATE EVALUATION STRATEGIES

The issues of differential sensitivity of evaluation strategies and the utilization of the findings that evaluation efforts generate are not independent issues. The following discussion (based on Table 1) will show how the selection of particular evaluation strategies can help to frame a wide range of different evaluation questions appropriate to cancer education programs. This table organizes selected strategies for program evaluation in a format designed to facilitate the identification and description of various approaches, the frameworks within which the evaluations occur, and the comparison of different strategies. The reader should note that the tabular comparison oversimplifies the actual models and implies a higher degree of mutual exclusiveness than actually is the case among some of the strategies. In order to make use of this table efficiently in preparing a plan for evaluation, it is critical that the central issues, form of data, and resources an evaluation methodology requires for data gathering, analysis, and dissemination be congruent with the cancer education program's focus and activities. This consideration is essential to allow an informed choice of appropriate evaluation strategies that will be consistent with the type of program and evaluation proposed. The table should assist the reader (a) in identifying and selecting appropriate strategies and methods for addressing specific problems in the evaluation of cancer education programs; (b) in gaining access to the formal evaluation literature related to each strategy; and (c) in choosing and collaborating with evaluation personnel involved in the evaluation of those programs.

The table highlights essential characteristics of 11 approaches or models for evaluation. The models included for presentation were chosen from a larger number of formal strategies found in the evaluation literature and were selected to portray the wide range of strategies available. The content and organization of Table 1 draw extensively from personal communications from Stake, Hogan, and others (see the Acknowledgement section) as well as the work of Worthen and Sanders20 and House.21 All of the models in Table 1 were included on the basis of the judgment that they are potentially useful for evaluating aspects of cancer education.
The models, in some instances, may require adaptation to the specific requirements of a given program.

For each strategy, 11 characteristics, resources required, and references are tabulated across the row. The table first identifies a person or persons (proponents) who have been instrumental in the development and dissemination of the approach, and provides references to several seminal articles or books focusing on that particular strategy, its techniques, and its implementation. The additional references, shown in the last row of Table 1, identify evaluations or evaluation research conducted using each strategy in oncology or other medically related settings. Full citations for these references are found in the reference list. The table provides a brief statement summarizing the purpose of this type of evaluation (especially in comparison with the other strategies described), the key elements, concepts, and techniques associated with the strategy, and its underlying values (epistemological assumptions). The next two columns identify the characteristics of persons associated with the strategy as audience or user (e.g., administrator, instructor, client, or the public for whom the evaluation information typically is prepared), and the characteristics and roles of evaluators (e.g., the resources and skills needed by the evaluators to implement that particular strategy, how the evaluation would function within the program setting, the sources of information on which evaluators would rely). The next columns briefly summarize the assessment techniques, tools, and procedures typically associated with the strategy in question (methodology), and the criteria employed to judge the quality of the methods and products of evaluations using this approach. The types of evaluative products, ranging from quantitative information, such as test scores, to more qualitative assessments such as descriptive case studies or interpretation, are identified in the evaluative product column. The next column provides a brief critical assessment of the kinds of activities for which the particular strategy is most appropriate, and some of the important constraints and limitations associated with each. Finally, the implementation and resources column identifies the sequence of activities in the evaluation and the evaluation resources required. This information will assist the reader in coordinating the timing of program planning and program activities with evaluation activities, and in selecting and working with an evaluation specialist whose skills are best suited to a particular evaluation strategy. If the planner chooses to work with an evaluation specialist, it is very important to involve this person in the early program planning phase, in order to obtain the maximum benefit from such a collaboration.
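Although Table 1 itself is a prose matrix, the same characteristics can be treated as a simple structured record, one per strategy, which a planner could screen against a program's timing and resource constraints. The sketch below is only illustrative: the field names, the two abbreviated entries, and the screening helper are our own paraphrase of the tabulated material, not part of the original framework.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EvaluationStrategy:
    """One Table 1 entry reduced to a few machine-readable fields (illustrative)."""
    name: str
    purpose: str
    audience: List[str]
    methodology: List[str]
    evaluative_product: str
    when_implemented: List[str]        # e.g. "planning", "during", "after"
    resources_required: List[str]

# Two entries paraphrased and abbreviated from Table 1 for the example.
STRATEGIES = [
    EvaluationStrategy(
        name="Evaluation by Behavioral Objectives",
        purpose="Compare performance and progress to a program's stated objectives",
        audience=["administrators", "program managers"],
        methodology=["pre-post testing", "test and item score analyses"],
        evaluative_product="test scores",
        when_implemented=["planning", "during"],
        resources_required=["objective formulation", "test development", "measurement", "data analysis"],
    ),
    EvaluationStrategy(
        name="Naturalistic Observation / Transactional Evaluation",
        purpose="Describe the natural distribution of events and the process of the program",
        audience=["clients", "practitioners"],
        methodology=["observation", "interviews", "case studies"],
        evaluative_product="case study",
        when_implemented=["planning", "during", "after"],
        resources_required=["external evaluator"],
    ),
]

def strategies_available_at(stage: str) -> List[EvaluationStrategy]:
    """Screen the encoded entries by the program stage at which evaluation can begin."""
    return [s for s in STRATEGIES if stage in s.when_implemented]

if __name__ == "__main__":
    for s in strategies_available_at("after"):
        print(s.name, "->", s.evaluative_product)
```

Encoding the entries this way simply makes explicit and repeatable the comparison the text describes: matching a strategy's timing, products, and resource demands to a particular program.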
DIFFERENTIAL SENSITIVITY AND SPECIFICITY OF AVAILABLE MODELS

To illustrate the differential sensitivity and specificity available in the range of evaluation approaches categorized within this table, this discussion will consider the general premises of two particular evaluation approaches, referred to respectively as "preordinate" and "naturalistic/qualitative." The consideration of an example of each approach, as it was used to evaluate cancer care education programs, follows the discussion of the sensitivity of these approaches.

Table 1. Selected strategies for evaluating the cancer education program. (Proponents and health-related examples: see the Related Bibliography in the reference list.)

1. Evaluation by Behavioral Objectives
Proponents: Popham (1974), Provus (1971), Tyler (1967), Bloom (1971).
Purpose: To compare performance and progress to a program's stated objectives.
Key elements: Goal statements; test scoring and analyses; pre-post testing; discrepancy between goal and activity.
Underlying values: Radical behaviorism; reductionistic.
Major audience and utilization: Administrators responsible for training programs and managers, who will utilize information to identify strengths and weaknesses of the course and/or learners.
Role of evaluator: Evaluation specialist or team member who evaluates as a part of curriculum development and assessment; usually acts independently from the program unit; needs to be functioning during the development phase of the program.
Methodology: Define and measure performance and progress toward objectives, usually via achievement testing and test and item score analyses; often employs quasi-experimental methods.
Criteria for judging evaluation: Clarity and specificity of objectives; statistical significance of pre- and post-test score changes.
Type of evaluative product: Test scores.
Constraints and limitations: Oversimplifies the program; limited sampling of who determines objectives.
Implementation and resources required: Early planning stages; requires an individual trained in (1) formulation of objectives, (2) test development, (3) measurement, and (4) data analysis.
Health-related examples: Gjerde (1981); Interinstitutional Group for Oncology Education (1976).

2. Goal Free Evaluation
Proponents: Scriven (1973).
Purpose: To assess the actual effects and social impact of a program, irrespective of its stated objectives.
Key elements: Gathering and combining data from many areas and sources; checklists; development of a weighted set of goals.
Underlying values: Holistic; sum of elements; eclectic.
Major audience and utilization: Consumers or clients; utilized in decision-making on the basis of the program's consequences.
Role of evaluator: Judges the merit of a program for its producers and consumers; functions independently of program staff.
Methodology: Holistic; utilizes checklists, usually examining many factors via different performance scales; employs bias control, logical analysis, and weighted composites in determining program consequences.
Criteria for judging evaluation: Assessment of actual effects and a profile of needs against which the importance or salience of these effects can be assessed.
Type of evaluative product: Ordinal ranking of the entity's actual effects and worth.
Constraints and limitations: Does not specify what effects to look for or how to identify them; equates performance on different criteria and assigns relative weights to criteria, which creates methodological problems; no methodology for assessing the validity of judgments.
Implementation and resources required: Implemented following program completion; requires an external evaluator responsible for judging the merit of an educational practice for producers (formative) and consumers (summative).
Health-related examples: Hogan & Sirotkin (1977); Scalzi (1978).

3. Management Analysis
Proponents: Stufflebeam (1977), Alkin (1975), Thompson (1980).
Purpose: To improve or optimize some functions in the decision-making process (e.g., cost-benefit choices).
Key elements: List of options; estimates; feedback loops; PERT charts; costs and efficiencies.
Underlying values: Efficiency; actions = dollars and cents.
Major audience and utilization: Managers, administrators, and economists; utilized in day-to-day decisions.
Role of evaluator: Specialist who provides evaluation information to a variety of decision-makers.
Methodology: Employs quantifiable variables, surveys, questionnaires, interviews, cost-benefit and cost-efficiency analyses, planned and/or natural variations, and linear programming.
Criteria for judging evaluation: Objectivity and utility of information about the program (conceived as an input-process-product system) for decision makers.
Type of evaluative product: Cost-benefit analysis; merit and efficiency of the entire program or of components of the program.
Constraints and limitations: Assumes rationality of decision makers; underestimates the difficulty of identifying and accessing the decision-making process; does not examine values or the adequacy of standards.
Implementation and resources required: Can be implemented during and following program completion; requires an evaluation specialist.
Health-related examples: Whitney et al. (1976); Mahajan (1979).
4. Social Policy Analysis
Proponents: Coleman (1966).
Purpose: To aid in developing and assessing the effectiveness of institutional policies and their implementation.
Key elements: Measures of social conditions; use of social, economic, and political indicators.
Underlying values: Pragmatic.
Major audience and utilization: Sociologists and high-level administrators, who employ evaluations to modify instructional programs and/or policies.
Role of evaluator: Specialist who provides evaluation information helpful in improving and implementing institutional policies.
Methodology: Focus on measures of program implementation and social conditions at the institutional or societal level; can employ quasi-experimental methods.
Criteria for judging evaluation: Devising reliable measures of large-scale social conditions and administrative implementation.
Type of evaluative product: Recommendations for choosing and implementing a particular policy.
Constraints and limitations: Neglects local issues, particularly those of less organized constituents; vulnerable to manipulation during implementation.
Implementation and resources required: Can be implemented following program completion.
Health-related examples: Lalonde (1974); Epstein (1979); Rice & Cooper (1967).

5. Literary Criticism
Proponents: Eisner (1979), Frye (1957).
Purpose: To assess the formation and impact of cultural processes and symbols in relation to disease and illness.
Key elements: Description and/or analysis of metaphorical and symbolic components of interactions; role concepts; imaginative (as against imaginary) structure of encounters.
Underlying values: Intuitive; subjective; humanistic; holistic.
Major audience and utilization: Consumers or clients; personal utilization in decision making on the basis of critical review and comment.
Role of evaluator: Person who evaluates within a context of cultural practices and symbols and sensitizes people to events.
Methodology: Critical review of process and/or outcomes; extensive use of metaphor; qualitative evaluations.
Criteria for judging evaluation: Illumination of metaphorical analysis; evidence of structural corroboration or referential adequacy.
Type of evaluative product: Novel interpretation or insight into the individual experience or collective perception of disease and illness.
Constraints and limitations: Lacks operational guidelines; relies on the intuitive judgment of the individual evaluator-critic.
Implementation and resources required: Can be implemented during and following program completion; draws on poets, novelists, teachers of literature, and others trained in the literary traditions of a society.
Health-related examples: Sontag (1978); Lewis (1979).

6. Accreditation
Proponents: North Central Accrediting Association (1981); Association of American Medical Colleges (1981).
Purpose: To develop an internal set of standards, to compare performance on the internal set to external standards, to identify deficiencies in content and procedures, and to obtain professional acceptance of a program.
Key elements: Personal judgment; committees watch standards set by staff.
Underlying values: Subjective values are idiosyncratic to committees.
Major audience and utilization: Instructors and the public; utilized to improve programs that are deficient; certification.
Role of evaluator: Professional colleagues who make judgments and recommendations for revisions on the basis of assembled data, reports, and site visits.
Methodology: Employs self-study and site visits; review of existing and/or specially prepared data (e.g., annual reports) by a prestigious panel, higher administrative body, or colleagues; criteria may be standardized.
Criteria for judging evaluation: Expertise of the accrediting panel; reflects interests of program administrators; standard criteria (checklists) often used.
Type of evaluative product: Site visits; annual reports; certification.
Constraints and limitations: Objectivity and empirical basis are questionable; does not balance attention to the process of education with attention to consequences; replicability is questionable.
Implementation and resources required: Following program completion; self-study; requires an external expert review group.
Health-related examples: Institutional Self-Study (1981); North Central Accrediting Association (1981).
7. Adversarial Evaluation
Proponents: Levine (1973), Owens (1973), Tymitz and Wolf (1977).
Purpose: To weigh and choose between two options on the basis of arguments for and against those options.
Key elements: Opposing advocates; cross-examination; decision by a jury of peers.
Underlying values: Logical judgment.
Major audience and utilization: Experts, jury, and consumers; utilized to resolve a particular issue.
Role of evaluator: Acts as an advocate of a particular point of view by preparing the best supported argument for or against the program.
Methodology: Quasi-legal procedures including preparation of opposing arguments, use of briefs, cross-examination, rules of evidence, and jury decision.
Criteria for judging evaluation: Convincing presentation of rationale and evidence of claims by opposing advocates.
Type of evaluative product: Presentation of opposing claims and evidence.
Constraints and limitations: How the "jury" should come to a decision is not specified; tends to be personalistic, superficial, and time-bound.
Implementation and resources required: Can be implemented in early program planning stages or following program completion, depending on the options.
Health-related examples: Hill Hearings on Chemotherapy (1981).

8. Naturalistic Observation / Transactional Evaluation
Proponents: Stake (1976, 1983), Barker (1968, 1978), Guba (1978, 1981).
Purpose: To describe the natural distribution of events and to provide understanding of the process of the program.
Key elements: Clinical observation; classroom observation; case studies; ecosystems approach.
Underlying values: Ecological; holistic.
Major audience and utilization: Various audiences, including clients and practitioners.
Role of evaluator: Collects and interprets descriptions of natural distributions of events and processes within the program in order to identify areas of success and difficulty.
Methodology: Systematic approaches, including observational methods, interviews, case studies, photographic and video methods, ethnography, etc., that result in quantifiable descriptions of events and processes in the program; usually explores many factors and their interrelationships.
Criteria for judging evaluation: Unobtrusive study of actually occurring phenomena in their natural setting.
Type of evaluative product: Case studies; observations.
Constraints and limitations: Generalizability; neglect of historical or wider social forces affecting observed behaviors.
Implementation and resources required: Can be implemented before, during, and after the program process; requires an external evaluator responsible for judging the merit of an educational practice for producers (formative) and consumers (summative).
Health-related examples: Rist (1977); Stake et al. (1976); LaDuca (1980); Willems (1976); Waitzkin and Stoeckle (1972); Germain (1979); Buckingham et al. (1976).
9. Experimental Behavioral Analysis
Proponents: Bijou & Baer (1967), Ullman & Krasner (1965), Fordyce (1976).
Purpose: To generate explanations and tactics of instruction and to establish relationships between specific antecedent and consequent conditions.
Key elements: Behavioral assessment; identification of key enabling conditions (discriminative stimuli); controlled conditions.
Underlying values: Radical behaviorism; reductionist.
Major audience and utilization: Researchers, planners of new programs, and instructors.
Role of evaluator: Usually acts independently of the program to evaluate relationships between antecedent conditions and consequences; may be responsible for assuring controlled conditions.
Methodology: Planned variations in the program; measurement of specified antecedent and consequent conditions via testing, questionnaires, observations, etc., under controlled conditions.
Criteria for judging evaluation: Objective and reproducible measures of behavior in controlled settings.
Type of evaluative product: Quantitative effect of an intervention on behavior under specified conditions.
Constraints and limitations: Little attention to complex interrelationships among variables; relevance or applicability to behavior outside the study setting is rarely established.
Implementation and resources required: Can be implemented before, during, and after the program process; requires a specialist who provides evaluation information to decision-makers.
Health-related examples: Davidson & Davidson (1980); Stunkard (1979); Bernstein & Glasgow (1979).

10. Epidemiological/Demographic Analysis
Proponents: Mechanic (1980), McKeown (1978), Susser (1975).
Purpose: To determine effectiveness on the health of the public in general and to evaluate interventions in the public domain.
Key elements: Incidence and prevalence rates; demographic surveys.
Underlying values: Labels phenomena contained in population traits.
Major audience and utilization: Public administrators.
Role of evaluator: Collects and interprets data from several sources in order to identify areas of success or failure.
Methodology: Quasi-experimental methods; survey questionnaires; mortality and morbidity records; use of large sample sizes and of secondary as well as primary sources of data.
Criteria for judging evaluation: Reliable information on the distribution of known exposures and responses.
Type of evaluative product: Incidence and prevalence of a condition within a population.
Constraints and limitations: Difficulty in monitoring mobile human populations and unanticipated environmental effects over time; difficulty in establishing statistically significant or causal inferences.
Implementation and resources required: Can be implemented before and after the program process; requires an individual trained in biostatistics and epidemiology and experienced in working with large databases.
Health-related examples: Higginson (1976).

11. Meta Evaluation
Proponents: Glass (1976, 1981).
Purpose: To evaluate and integrate findings from previous evaluation studies.
Key elements: Population parameters and variance estimates.
Underlying values: Empiricism.
Major audience and utilization: Researchers and practitioners; used to plan or modify evaluations or to improve evaluation practices.
Role of evaluator: Describes and judges the technical adequacy, utility, ethics, and practicality of evaluations; synthesizes findings.
Methodology: Literature searches and critical review of previous evaluations; secondary data analysis; collective professional discussion of evaluation studies.
Criteria for judging evaluation: Appropriate statistical combinations and identification of appropriate common analytic units.
Type of evaluative product: Direction and size of the effect of the intervention experience obtained in many trials.
Constraints and limitations: Inclusion of unknown design flaws and context or treatment differences may affect results.
Implementation and resources required: Conducted after the fact; requires an individual trained and well versed in experimental and quasi-experimental research methodology.
Health-related examples: Posavac (1980); Wortman (1981).
The most pervasive form of existing evaluation approaches cited in the table is described in the evaluation literature as preordinate. The approach is considered preordinate given that a detailed pre-specification of program goals and objectives becomes the primary or sole focus for assessing program effects. The emphasis within preordinate approaches on outcomes for individual program recipients reflects the tradition in which most educational researchers have been trained. Alternately, naturalistic or qualitative approaches to evaluation focus on the observed experience of program participants in the settings in which the behavior of interest occurs. Of the models described in Table 1, those associated with proponents Popham, Provus, Tyler, and Bloom22-25 are all considered preordinate in orientation; the models associated with proponents Stake, Barker, and Guba26-30,17 are naturalistically or qualitatively based. A recent article by Smith and Heshusius31 provides a more detailed explanation of the underlying differences between these two orientations. It is important to recognize that some models are not unequivocally classifiable in either category.

With respect to the defensibility and desirability of combining elements of both perspectives within a single overall evaluation plan, Cronbach33 has suggested that the more an evaluative effort becomes a program of studies, rather than a single study, the more place there is for a mixture of styles. Further, the educational worth of a program is more likely to be revealed with an evaluation plan of some breadth which appropriately incorporates multiple methods. Also, there is general agreement that, at the level of methodology or procedures, it is possible to combine the methods from each orientation in a single evaluation plan.32 For example, an evaluation plan based primarily upon a preordinate model employing standardized instruments for data gathering could be supplemented with open-ended observation in naturalistic settings. Similarly, an evaluation based primarily on a qualitatively oriented evaluation model could augment methods of naturalistic observation with the quantification of events.31 Although it is deemed possible to combine approaches, the two examples discussed below provide clear illustrations of a preordinate approach and a qualitative approach to the evaluation of cancer education programs. The examples present two of the most common strategies employed for such evaluations and illustrate the interpretation and use of the table.
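As an illustration of how such a mixed plan might bring its two strands together, the short sketch below (ours, not drawn from the article; the scores, observation codes, and labels are hypothetical) pairs a pre/post comparison of standardized test scores, in the spirit of the preordinate strand, with a simple tally of coded events from open-ended naturalistic observation.

```python
from statistics import mean, stdev
from collections import Counter
from math import sqrt

# Hypothetical pre/post scores from a standardized cancer-knowledge test
# (the preordinate, instrument-based strand of the plan).
pre = [62, 55, 70, 48, 66, 59, 73, 51]
post = [74, 63, 78, 60, 71, 70, 80, 58]

gains = [b - a for a, b in zip(pre, post)]
t_paired = mean(gains) / (stdev(gains) / sqrt(len(gains)))  # paired t statistic
print(f"mean gain = {mean(gains):.1f} points, paired t = {t_paired:.2f} (df = {len(gains) - 1})")

# Hypothetical coded events from open-ended observation in the clinical setting
# (the naturalistic strand), quantified as simple frequencies.
observations = [
    "student_initiates_psychosocial_discussion",
    "attending_interrupts_student",
    "student_initiates_psychosocial_discussion",
    "patient_question_unanswered",
    "attending_interrupts_student",
    "attending_interrupts_student",
]
print(Counter(observations).most_common())
```

The point of the pairing is simply that the two kinds of evidence answer different questions: the score comparison addresses attainment of pre-specified objectives, while the event tally describes what actually occurred in the setting.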
CANCER EDUCATION EVALUATION CASE STUDIES

The first example considers the Handbook of Objectives in Oncology,34 a cancer education evaluation product formulated by a national committee of oncologists, medical educators, and evaluators. In the Handbook, the authors present their efforts to represent the cognitive knowledge base "considered appropriate goals by the time (medical) students had completed undergraduate medical education."34 The premise of this approach is that the cognitive knowledge base constituting the learner's understanding can be adequately identified by experienced content experts. A further assumption evident in this approach is that the representation and subsequent evaluation of this knowledge base can best be achieved in a collection of objectives (i.e., explicitly stated expectations for students' behavior). The sequence of presentation of the twenty-three categories of objectives contained in the Handbook mirrors the content and sequence of curriculum instruction within the typical medical school curriculum progression: from underlying cell biology through pathology, carcinogenesis, and the bases of clinical diagnosis, treatment modalities, and psychosocial management. This presentation format would make it easy for an instructor to incorporate individual items of instructional material, and for an evaluator to design a test of students' knowledge of cancer or an assessment of students' specific or pre-identified behaviors in a clinical setting. This preordinate strategy provides assurance that the issues addressed in an educational program include those formally identified as germane by established medical educators, and provides the program director with a comparative basis on which to gauge the relative adequacy of the scope of an individual program.

The objectives approach to cancer education evaluation exemplified in the Handbook34 has the advantage of making a concrete statement of what persons established within a component of a practice field perceive to be a necessary and sufficient knowledge base. Furthermore, since objectives are defined in part by persons held to be authoritative within a particular area, the formulation of objectives offers an implied statement of what material the individual and collective faculty would be prepared to represent. Relying solely upon exhaustive collections of objectives, however, may increase the risk that individuals will assume that a factual base of knowledge about a content area is sufficient for achieving overall educational goals, resolving problems, and producing a functioning physician.
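To show the kind of instrument an objectives-based evaluation of this sort could yield, the brief sketch below illustrates the general approach only; the category names, the item-to-category mapping, and the responses are hypothetical and are not taken from the Handbook. It maps test items to objective categories and reports per-category mastery for one learner.

```python
from collections import defaultdict

# Hypothetical mapping of test items to objective categories, loosely in the
# spirit of a handbook that orders objectives from cell biology through
# psychosocial management (category names invented for this example).
ITEM_CATEGORY = {
    "q01": "cell biology", "q02": "cell biology",
    "q03": "carcinogenesis", "q04": "carcinogenesis",
    "q05": "clinical diagnosis", "q06": "clinical diagnosis",
    "q07": "treatment modalities", "q08": "psychosocial management",
}

# One student's item-level results (1 = correct, 0 = incorrect); hypothetical.
responses = {"q01": 1, "q02": 1, "q03": 0, "q04": 1,
             "q05": 1, "q06": 0, "q07": 1, "q08": 0}

def category_mastery(item_category, student_responses):
    """Percent of items answered correctly within each objective category."""
    totals, correct = defaultdict(int), defaultdict(int)
    for item, category in item_category.items():
        totals[category] += 1
        correct[category] += student_responses.get(item, 0)
    return {category: 100.0 * correct[category] / totals[category] for category in totals}

for category, pct in category_mastery(ITEM_CATEGORY, responses).items():
    print(f"{category}: {pct:.0f}% of sampled objectives met")
```

Such a per-category profile is exactly the kind of comparative, objective-by-objective product that the preordinate strategy is designed to deliver, and it also exposes the approach's limitation noted above: it measures the factual base, not the learner's functioning in practice.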
In contrast, qualitative approaches to evaluation focus on the study of patterns of social organization and interaction in defined, naturally occurring settings, rather than on the attainment of specific objectives. For example, Germain's35 qualitative evaluation of a cancer care unit does not consider evidence of a substantive biomedical knowledge base of cancer as evidence of a solved problem of cancer care. Instead, the qualitative evaluation approach, characterized by extended observation of the patterns of actual exchanges among clinicians, staff, and patients within the clinic site, actively confronts the possibility that differential access to the resources of biomedical knowledge, treatment, or embodied disease occurrence can constrain, or be constrained by, patterns of interaction between participants responsible for giving or receiving care in a particular setting. A qualitative evaluation approach would be well suited for identifying issues which participants consider problematic in a given setting, anticipating sources of resistance to proposed or imminent practice change, mapping patterns of patient flow, or assessing the opportunity for students to obtain an experiential knowledge base in a particular practice area. While the qualitative evaluation would be capable of detecting unsuccessful communication outcomes, the specificity of the objectives-based evaluation approach would more likely determine whether the communication reflected, for example, the learner's lack of knowledge of diagnostic procedures or alternative treatment modalities.

In summary, the development of an evaluation plan entails careful explication of the premises, activities, and resources of the educational program. The formulation of evaluation questions and the selection of appropriate methodology should be guided by the realization that there is a wide range of formal evaluation philosophies, models, methods, and strategies available. The choice of methodology itself should be consistent with and appropriate for the kinds of questions or goals that the evaluation plan has framed. While it is possible to incorporate within a given evaluation plan aspects of differing evaluation philosophies, models, and methods, it is important that the aggregation of truly different approaches be undertaken in an informed manner and that the required consistency between any particular type of goal or question and method be maintained. We have chosen in this article to focus on a particular set of issues; however, it is important to recognize that, within the larger framework of evaluation planning, there are many other issues in preparing a good plan, and these are well known and discussed in the literature. This literature can provide guidance, for instance, with the more detailed aspects of measurement and methods within a given model36,37,17 or with the basic questions of what aspects of an educational program can or should be evaluated.38,39 We believe that the time and effort invested in the formulation and clarification of these evaluation issues and plans will increase the probability of obtaining meaningful and useful evaluation results.

Acknowledgement—This document represents a revised version of an unpublished document entitled Selected Strategies for Evaluation of Cancer Education Programs: A Working Document, developed by Richard E. Gallagher, Martin J. Hogan, Carolyn Norris-Baker, Patricia Mullan Scalzi, and Richard Bakemeier, Wayne State University School of Medicine, 1982.

Stake RE: Nine approaches to educational evaluation (unpublished paper). Center for Instructional Research and Curriculum Evaluation, University of Illinois at Urbana-Champaign, 1974.

Gallagher RE, Hogan MJ: The cost of evaluating continuing medical education programs (unpublished paper). Presented at the annual meeting of the American Association for Cancer Education, Charleston, SC, 1976.
REFERENCES

1. Scalzi PM: An ethnoecological inquiry of cancer care education and practice in the American medical culture. Doctoral dissertation, 1984.
2. Engel JD, Filling CM: Research approaches in health professions education: Problems and prospects. Evaluation and the Health Professions 4:13-20, 1981.
3. Ianni FA, Orr MT: Toward a rapprochement of quantitative and qualitative methods. In: Cook TD, Reichardt CS, Eds.: Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills: Sage Publications, 1979.
4. Rucci AJ, Tweney RD: Analysis of variance and the "second discipline" of scientific psychology: An historical account. Psychological Bulletin 87:166-184, 1980.
5. Patton MQ: Alternative Evaluation Research Paradigm. Grand Forks: University of North Dakota Press, 1975.
6. Gebhardt M: Health education evaluation: An alternative research paradigm. Evaluation and the Health Professions 3(2):205-210, 1980.
7. Guttentag M: Models and methods in evaluation research. Journal for the Theory of Social Behavior 4:35-39, 1971.
8. Weiss CH: Evaluating Action Programs. Englewood Cliffs: Prentice Hall, 1972.
9. Rist RC: On the relations among educational research paradigms: From disdain to detente. Anthropology and Education 8(2):42-49, 1977.
10. Cronbach LJ: Beyond the two disciplines of scientific psychology. American Psychologist 30:116-127, 1975.
11. Goetz JP, LeCompte MD: Ethnographic research and the problem of data reduction. Anthropology and Education 12:51-70, 1981.
12. Churchman D, Guyette S: Evaluating American Indian programs: An ethnographic approach. Proc Amer Anthropology Association 5:38, 1981.
13. Sherrill S: Identifying unintended outcomes. Evaluation and Program Planning 7, 1984.
14. Alkin M: Evaluation: Who needs it? Who cares? Studies in Educational Evaluation (3):202-212, 1975.
15. Leviton LC, Boruch RF: Contributions of evaluation to educational programs and policy. Evaluation Review 7(5):563-598, 1985.
16. Weiss CH: Measuring the use of evaluation. In: House ER, et al, Eds.: Evaluation Studies Review Annual. Beverly Hills: Sage Publications, 1982.
17. Guba EG, Lincoln Y: Effective Evaluation: Improving the Usefulness of Evaluation Results Through Responsive and Naturalistic Approaches. San Francisco: Jossey-Bass, 1981.
18. Cronbach LJ, Ambron SR, Dornbusch SM, et al: Toward Reform of Program Evaluation. San Francisco: Jossey-Bass, 1980.
19. Mitchell WD: Reflections on the current state of knowledge of physician career development. J Medical Education 51(8):680-682, 1976.
20. Worthen BR, Sanders JR, Eds.: Educational Evaluation: Theory and Practice. Worthington, Ohio: CA Jones Publishing Company, 1973.
21. House ER: Assumptions underlying evaluation models. Educational Researcher 7:4-12, 1977.
22. Popham WJ, Ed.: Evaluation in Education: Current Applications. Berkeley, California: McCutchan, 1974.
23. Provus M: Discrepancy Evaluation for Educational Program Improvement and Assessment. Berkeley, California: McCutchan, 1971.
24. Tyler RW, Gagne RM, Scriven M: Perspectives on curriculum evaluation. American Educational Research Association Monograph Series on Curriculum Evaluation (Whole No. 1). Chicago: Rand McNally, 1967.
25. Bloom BS, Hastings JT, Madaus GF: Handbook on Formative and Summative Evaluation of Student Learning. New York: McGraw-Hill, 1971.
26. Stake RE, Klintberg IG, Land FC, et al: A responsive evaluation of two programs in medical education. Studies in Educational Evaluation 2:19-36, 1976.
27. Stake RE: Program evaluation, particularly responsive evaluation. In: Madaus GF, Scriven M, Stufflebeam DL, Eds.: Evaluation Models. Boston: Kluwer-Nijhoff, 287-310, 1983.
28. Barker RG: Ecological Psychology. Stanford, California: Stanford University Press, 1968.
29. Barker RG: Streams of individual behavior. In: Barker RG and Associates, Eds.: Habitats, Environments and Human Behavior. San Francisco: Jossey-Bass, 1978.
30. Guba EG: Toward a Methodology of Naturalistic Inquiry in Educational Evaluation. CSE Monograph Series in Evaluation (Whole No. 8). Los Angeles: Center for the Study of Evaluation, University of California, 1978.
31. Smith JK, Heshusius L: Closing down the conversation: The end of the quantitative-qualitative debate among educational inquirers. Educational Researcher 15(7):4-12, 1986.
32. Cook TD, Reichardt CS, Eds.: Qualitative and Quantitative Methods in Evaluation Research. Sage Research Progress Series in Evaluation. Beverly Hills: Sage Publications, 1979.
33. Cronbach LJ: Designing Evaluations of Educational and Social Programs. San Francisco: Jossey-Bass, 1982. (Fiske DW, special advisor. Social and Behavioral Science and Higher Education Series.)
34. Interinstitutional Group for Oncology Education: Handbook of Objectives in Oncology. Richmond, VA: American Cancer Society, 1976.
35. Germain C: The Cancer Unit: An Ethnography. Wakefield, MA: Nursing Resources, 1979.
36. Campbell DT, Stanley JC: Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally, 1963.
37. Cook TD, Campbell DT: Quasi-Experimentation: Design and Analysis Issues for Field Settings. Chicago: Rand McNally, 1979.
38. Rutman L: Planning Useful Evaluations: Evaluability Assessment. Beverly Hills, California: Sage Publications, 1980. (Sage Library of Social Research, vol 96.)
39. Rutman L, Ed.: Evaluation Research Methods: A Basic Guide, 2nd ed. Beverly Hills, California: Sage Publications, 1984.
40. Alkin MC, Fitz-Gibbon CT: Methods and theories of evaluating programs. Journal of Research and Development in Education 8:2-15, 1975.
41. Bernstein A, Glasgow RE: Smoking. In: Pomerleau OF, Brady JP, Eds.: Behavioral Medicine: Theory and Practice. Baltimore: The Williams & Wilkins Company, 1979.
42. Bijou SW, Baer DM: Child Development, Vol. I: A Systematic and Empirical Theory. New York: Appleton-Century-Crofts, 1961.
43. Buckingham RW, Lack SA, Mount BM, et al: Living with the dying: Use of the technique of participant observation. Canadian Medical Association Journal 1211-1215, 1976.
44. Campbell EQ, Hobson CJ, McPartland J, Mood AM, Weinfeld FD, York RL: Equality of Educational Opportunity. Washington, DC: U.S. Government Printing Office, 1966.
45. Criteria for Candidate for Accreditation Status and Criteria for Accreditation. North Central Association Quarterly 55(4):391-395, 1981.
46. Davidson PO, Davidson SM, Eds.: Behavioral Medicine: Changing Health Lifestyles. New York: Brunner/Mazel, 1980.
47. Eisner EW: The Educational Imagination. New York: Macmillan, 1979.
48. Epstein S: The Politics of Cancer (rev. ed.). Garden City, New York: Anchor Books, Doubleday, 1979.
49. Fordyce WE: Behavioral Methods for Chronic Pain and Illness. St. Louis, Missouri: CV Mosby Company, 1976.
50. Frye N: Anatomy of Criticism: Four Essays. Princeton: Princeton University Press, 1957.
51. Gjerde CL: 'Curriculum mapping': Objectives, instruction, and evaluation. Journal of Medical Education 3:316-323, 1981.
52. Glass G, McGaw B, Smith ML: Meta-Analysis in Social Research. Beverly Hills, California: Sage Publications, 1981.
53. Glass GL: Primary, secondary, and meta-analysis of research. Educational Researcher 5:3-8, 1976.
54. Higginson J: A hazardous society? Individual versus community responsibility in cancer prevention. American Journal of Public Health 66:359-366, 1976.
55. Congressional (Hill) Hearings on Chemotherapy. Washington Post, 1981.
56. Hogan MJ, Sirotkin RA, Ticknor MC, et al: Interpersonal problem solving: A theoretical perspective and methodology for the evaluation of residency programs and its relationship to health care processes and outcomes. Proceedings of the Sixteenth Annual Conference on Research in Medical Education. Association of American Medical Colleges, Washington, DC, 3-8, 1977.
57. Institutional Self-Study: Analysis of a College of Medicine. Publication of the Liaison Committee on Medical Education. Adopted March 31, 1976, and revised 1981. Available from the Association of American Medical Colleges, One Dupont Circle, NW, Washington, DC 20036.
58. Krathwohl DR: Social and Behavioral Science Research. San Francisco: Jossey-Bass, 1985. (Fiske DW, Ed.: Methods of Social and Behavioral Research Series.)
59. LaDuca A: The structure of competence in health professions. Evaluation and the Health Professions 3:253-288, 1980.
60. Levine M: Scientific method and the adversary model: Some preliminary suggestions. Evaluation Comment 4(2):1-3, 1973.
61. Lewis T: The Medusa and the Snail. New York: Random House, 1979.
62. Lalonde M: A New Perspective on the Health of Canadians. Ottawa, Ontario: Government of Canada, 1974.
63. Madaus GF, Scriven M, Stufflebeam DL, Eds.: Evaluation Models: Viewpoints on Educational and Human Services Evaluation, Vol. 30. Boston: Kluwer-Nijhoff, 1984.
64. Mahajan V, Pegels CC: Systems Analysis in Health Care. New York: Praeger Publishers, 1979.
65. McKeown T: Determinants of health. Human Nature 1:41-47, 1978.
66. Mechanic D: Patient behavior, the provision of medical care and medical-social policy. Man and Medicine 5(1):13-23, 1980.
67. Owens TR: Educational evaluation by adversary proceedings. In: House ER, Ed.: School Evaluation: The Politics and Process. Berkeley, California: McCutchan, 1973.
68. Posavac EJ: Evaluation of patient education programs. Evaluation and the Health Professions 3:47-62, 1980.
69. Rice DP, Cooper BS: The economic value of human life. American Journal of Public Health 57:23-30, 1967.
70. Rist R: On the relations among educational research paradigms: From disdain to detente. Anthropology and Education 8:42-49, 1977.
71. Scalzi PM, Hogan MJ, Gallagher RE, Vaitkevicius V, Reed M: An evaluation of an undergraduate oncology curriculum by time performance and preference dimensions from the point of view of functions, academic discipline and body organ system. Proceedings of the Seventeenth Annual Conference on Research in Medical Education. Association of American Medical Colleges, 341-345, 1978.
72. Scriven M: Goal-free evaluation. In: House ER, Ed.: School Evaluation: The Politics and Process. Berkeley, California: McCutchan, 1973.
73. Sherrill S: Toward a coherent view of evaluation. Evaluation Review 8, 1984.
74. Sontag S: Illness as Metaphor. New York: Farrar, Straus & Giroux, 1978.
75. Stufflebeam DL, Foley WJ, Gephart WJ, Guba EG, Hammond RL, Merriman HO, Provus M: Educational Evaluation and Decision Making. Itasca, Illinois: F.E. Peacock Publishers, 1971.
76. Stunkard AJ: Behavioral medicine and beyond: The example of obesity. In: Pomerleau OF, Brady JP, Eds.: Behavioral Medicine: Theory and Practice. Baltimore: The Williams & Wilkins Company, 1979.
77. Susser M: Epidemiological models. In: Struening EL, Guttentag M, Eds.: Handbook of Evaluation Research, Vol. 1. Beverly Hills, California: Sage, 1974.
78. Thompson MS, Fortress EE: Cost-effectiveness analyses in health program evaluation. Evaluation Review 4:549-568, 1979.
79. Tymitz B, Wolf RL: An Introduction to Judicial Evaluation and Natural Inquiry. Washington, DC: Nero and Associates, 1977.
80. Ullman LP, Krasner L: Case Studies in Behavior Modification. New York: Holt, Rinehart and Winston, 1965.
81. Waitzkin H, Stoeckle J: Communication of information about illness. In: Lipowski ZJ, Ed.: Advances in Psychosomatic Medicine, Vol. 8. Basel: Karger, 180-215, 1972.
82. Whitney MA, Holloway JD, Jones LD, Caplan RM, Anderson EE: Evaluation in Cancer Education. The University of Iowa, 1976.
83. Willems EP: Behavioral ecology, health status and health care: Applications to the rehabilitation setting. In: Altman I, Wohlwill JF, Eds.: Human Behavior and Environment. New York: Plenum, 1976.
84. Wortman PM, Ed.: Methods for Evaluating Health Services. Beverly Hills, California: Sage Publications, Vol. 8, 1981.