Using the Delphi method to engage stakeholders: A comparison of two studies
Introduction
Stakeholder involvement is crucial when evaluating organizations. Involving stakeholders in all phases of the process, including the framing of the evaluation, increases the attention paid to the findings (Cousins & Earl, 1992); helps ensure that relevant questions are asked (Fine, Thayer, & Coghlan, 2000); increases stakeholders’ understanding of the organization and the evaluation (Brandon, 1998; Cousins & Earl); promotes a participatory and collaborative relationship between the evaluator and stakeholders (Patton, 1997); and increases the validity of the evaluation findings (Brandon). Including stakeholders in the process also allows the researcher to ask evaluation questions in the shared terms and language of the stakeholders (Patton). In its Program Evaluation Standards, the Joint Committee on Standards provides another compelling reason to involve stakeholders.
The evaluation should be planned and conducted with anticipation of the different positions of various interest groups, so that their cooperation may be obtained, and so that possible attempts by any of these groups to curtail evaluation operations or to bias or misapply the results can be averted or counteracted (Sanders, 1994, p. 71).
When conducting a participatory and collaborative evaluation, measures should be taken to involve stakeholders who have distinct perspectives on the program. The Joint Committee on Standards suggested “Interview[ing] representatives of major stakeholders to gain an understanding of their different and perhaps conflicting points of view and of their need for information” (Sanders, 1994, p. 38). Common methods used to communicate with stakeholders include focus groups, personal interviews, meetings, semi-structured interviews, and informal interactions (Brandon, 1998).
While meetings are often convenient for collecting information from stakeholders, attendance can be a limitation, as it is not always possible to have all stakeholders represented (Renger & Bourdeau, 2004). As Rossi, Freeman, and Lipsey (1999) pointed out, important groups may be left out of the process even when an evaluation is structured as explicitly participatory and collaborative.
An important time to include stakeholders is during the framing of an evaluation, when evaluators need to be able to assess evaluability (Smith, 1989; Wholey, 1994), as well as efficiently develop and prioritize evaluation questions (Patton, 2002; Rossi et al., 1999; Weiss, 1998). A promising prioritization method involves using a consensus-building tool called the Delphi method (Dalkey & Helmer, 1963; Linstone & Turoff, 1975), where an evaluator or researcher can investigate stakeholders’ opinions on the current state of an organization long before a site visit. The Delphi method can be thought of as a series of sequential questionnaires interspersed by controlled feedback (Linstone & Turoff). This information-gathering tool is ideal for evaluation situations where it is difficult to get feedback from stakeholders due to geographical barriers and busy schedules.
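To make the "sequential questionnaires interspersed by controlled feedback" concrete, the sketch below simulates the statistical feedback step of one rating-style Delphi round. The item names, ratings, and the median/interquartile-range consensus rule are illustrative assumptions, not details taken from the study described in this article.

```python
import statistics

def round_feedback(ratings):
    """Summarize one Delphi round per item: (median, interquartile range).

    ratings maps each item to a list of panelist ratings (e.g., a 1-5 scale).
    The summary would be fed back to panelists before the next round; here a
    hypothetical consensus rule treats IQR <= 1 as sufficient agreement.
    """
    summary = {}
    for item, scores in ratings.items():
        q = statistics.quantiles(scores, n=4)  # q[0] = Q1, q[2] = Q3
        summary[item] = (statistics.median(scores), q[2] - q[0])
    return summary

# Hypothetical round-1 ratings from five panelists on two candidate
# evaluation questions (1 = low priority, 5 = high priority).
round1 = {
    "clarify program goals": [5, 4, 5, 4, 5],
    "measure participant outcomes": [2, 5, 1, 4, 3],
}
for item, (med, iqr) in round_feedback(round1).items():
    status = "consensus" if iqr <= 1 else "revisit next round"
    print(f"{item}: median={med}, IQR={iqr} -> {status}")
```

In this toy run the first item would be reported back as near-consensus while the second, with widely dispersed ratings, would be carried into the next questionnaire round along with the group statistics.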
The aim of this article is to describe a study where a paper–pencil version of the Delphi method (PP Delphi) was compared to a real-time computerized version of the Delphi method (RT Delphi). Both were tested as viable processes of gathering input from stakeholders to assist in framing an evaluation of a non-profit community-based organization (CBO). Framing an evaluation consists of several components, beginning with an evaluability assessment (Smith, 1989; Wholey, 1994), which is the determination of an organization's evaluation readiness. Goals clarification is an important part of assessing evaluability and helps the evaluator determine whether an organization has current, agreed-upon, well-defined goals or fuzzy, broad, unrealistic, or exaggerated goals (Patton, 1997).
When an organization has many stakeholders, it is not always possible to interview each one or to have everyone gathered in one place for a focus group or evaluability assessment meeting. The Delphi method allows groups of stakeholders to be located all over the world, increasing participation and the range of perspectives taken into consideration. A CBO can save money on an evaluation by conducting a Delphi study ahead of time to assist in framing the evaluation. The rationale for this study was to determine the potential and effectiveness of such a method. As with all methodologies, it is expected that the Delphi has both strengths and weaknesses. As Patton (1997) stated, “The strength of the Delphi approach – lack of face-to-face interaction – is also its weakness” (p. 151). The purpose of this study was to demonstrate the potential of both the paper–pencil and the real-time versions of the Delphi for involving stakeholders in the framing of an evaluation.
Delphi method
“Project Delphi” was the name given to an Air Force-sponsored Rand Corporation study focused on understanding the use of expert opinion (Dalkey & Helmer, 1963). The objective of the Delphi methodology was to “reduce the negative effects of group interactions” (Gupta & Clarke, 1996, p. 185) and to obtain the most reliable consensus of opinion of a group of experts (Dalkey & Helmer). The Delphi method is named after the ancient Greek oracle at Delphi, who offered visions of the future to those who sought counsel.
Participants
The CBO of this study was a two-year self-sufficiency program for parenting teenage mothers. Stakeholders included members of the board of directors, staff members (counselors, case workers, and office staff), and volunteers. The stakeholders ranged from businessmen and businesswomen to clergy members and stay-at-home adult mothers. From this point forward, stakeholders will be called panelists, a term used for participants in Delphi studies (Linstone & Turoff, 1975). Sixty panelists were
Results
The major goal of the study was to determine the potential of a Delphi study to assist an evaluator of a CBO in framing an evaluation. Use of the Delphi method was examined and will be discussed in the context of goals clarification and developing evaluation questions. Response rates, ease of use, merit for stakeholders, and cost-effectiveness issues will also be reported.
Conclusions
Using the Delphi method to solicit input and feedback from stakeholders was found to be a promising technique for involving stakeholders in the framing of evaluations. The Delphi method allowed all stakeholders to be involved, regardless of their geographic and time constraints. The following discussion compares the PP and the RT versions of the Delphi.
Lessons learned
The three most salient lessons learned in this project were related to panelists’ participation, asking stakeholders for useful evaluation information, and using the Delphi method, in general, in the framing of an evaluation.
Monica R. Geist recently earned her Ph.D. in Applied Statistics and Research Methods at the University of Northern Colorado. Her methodological interests include mixed methods research and evaluation, instrument development, and focus groups. Her areas of interests are in education and the non-profit sector.
References
- Stakeholder participation for the purpose of helping ensure evaluation validity: Bridging the gap between collaborative and non-collaborative evaluations. American Journal of Evaluation (1998).
- RT Delphi: An efficient, “round-less” almost real time Delphi method. Technological Forecasting and Social Change (2006).
- Theory and application of the Delphi technique: A bibliography (1975–1994). Technological Forecasting and Social Change (1996).
- Issues in large scale Delphi studies. Technological Forecasting and Social Change (1974).
- Strategies for values inquiry: An exploratory case study. American Journal of Evaluation (2004).
- A three-step approach to teaching logic models. American Journal of Evaluation (2002).
- Delphi: A reevaluation of research and theory. Technological Forecasting and Social Change (1991).
- Delphi panel: A practical ‘Crystal Ball’ for researchers. Marketing News (1984).
- An inquiry of the nominal process. Academy of Management Journal (1971).
- The case for participatory evaluation. Educational Evaluation and Policy Analysis (1992).
- An experimental application of the Delphi method to the use of experts. Journal of the Institute of Management Science (1963).
- Reducing missing data in surveys: An overview of methods. Quality and Quantity.
- The effect of group participation on brainstorming effectiveness for two industrial samples. Journal of Applied Psychology.
- Steps of empowerment evaluation: From California and Cape Town.
- Program evaluation practice in the nonprofit sector.
- Alternate methods for handling attrition: An illustration using data from the fast track evaluation. Evaluation Review.