Using the Delphi method to engage stakeholders: A comparison of two studies

https://doi.org/10.1016/j.evalprogplan.2009.06.006

Abstract

Involving stakeholders can greatly impact evaluation results. The Delphi method, a consensus-building tool, is a promising process for promoting and encouraging involvement from all stakeholders during the evaluation framing process. The Delphi method removes geographic and time barriers, allowing all stakeholders to participate. It uses a series of surveys interspersed with controlled feedback, designed to gather information and build consensus without requiring face-to-face meetings. Two formats of the Delphi method, a paper-and-pencil, postal-mail version and a web-based, real-time computer version, are compared in this study. Both versions of the Delphi were administered to a non-profit community-based organization as part of framing an evaluation. Participation rates were higher with the paper–pencil version. The quantity and quality of data collected were comparable in both versions.

Introduction

Stakeholder involvement is crucial when evaluating organizations. Involving stakeholders in all phases of the process, including the framing of the evaluation, increases the attention paid to the findings (Cousins & Earl, 1992); helps ensure that relevant questions are asked (Fine, Thayer, & Coghlan, 2000); increases stakeholders’ understanding of the organization and the evaluation (Brandon, 1998; Cousins & Earl); promotes a participatory and collaborative relationship between the evaluator and stakeholders (Patton, 1997); and increases the validity of the evaluation findings (Brandon). Including stakeholders in the process also allows the researcher to ask evaluation questions in the shared terms and language of the stakeholders (Patton). In their Program Evaluation Standards, the Joint Committee on Standards provides another compelling reason to involve stakeholders.

The evaluation should be planned and conducted with anticipation of the different positions of various interest groups, so that their cooperation may be obtained, and so that possible attempts by any of these groups to curtail evaluation operations or to bias or misapply the results can be averted or counteracted (Sanders, 1994, p. 71).

When conducting a participatory and collaborative evaluation, measures should be taken to involve stakeholders who have distinct perspectives on the program. The Joint Committee on Standards suggested “Interview[ing] representatives of major stakeholders to gain an understanding of their different and perhaps conflicting points of view and of their need for information” (Sanders, 1994, p. 38). Common methods used to communicate with stakeholders include focus groups, personal interviews, meetings, semi-structured interviews, and informal interactions (Brandon, 1998).

While meetings are often convenient for collecting information from stakeholders, attendance can be a limitation, as it is not always possible to have all stakeholders represented (Renger & Bourdeau, 2004). As Rossi, Freeman, and Lipsey (1999) pointed out, important groups may be left out of the process even when an evaluation is structured as explicitly participatory and collaborative.

An important time to include stakeholders is during the framing of an evaluation, when evaluators need to be able to assess evaluability (Smith, 1989, Wholey, 1994), as well as efficiently develop and prioritize evaluation questions (Patton, 2002, Rossi et al., 1999, Weiss, 1998). A promising prioritization method involves using a consensus-building tool called the Delphi method (Dalkey and Helmer, 1963, Linstone and Turoff, 1975), with which an evaluator or researcher can investigate stakeholders’ opinions on the current state of an organization long before a site visit. The Delphi method can be thought of as a series of sequential questionnaires interspersed with controlled feedback (Linstone and Turoff). This information-gathering tool is ideal for evaluation situations where it is difficult to get feedback from stakeholders due to geographical barriers and busy schedules.
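As an illustration only, the round-by-round logic described above can be sketched in a few lines of Python. The article does not specify a quantitative stopping rule; the median-plus-interquartile-range (IQR) feedback and the IQR-based consensus threshold used here are common conventions in the broader Delphi literature, not details taken from this study:

```python
from statistics import median, quantiles

def summarize_round(ratings):
    """Controlled feedback for one Delphi round on a single item:
    the group median and interquartile range of panelists' ratings."""
    q1, _, q3 = quantiles(ratings, n=4)
    return {"median": median(ratings), "iqr": q3 - q1}

def has_consensus(ratings, iqr_threshold=1.0):
    """An assumed stopping rule: the panel has reached consensus on an
    item when the IQR of its ratings falls at or below a threshold."""
    return summarize_round(ratings)["iqr"] <= iqr_threshold

# Hypothetical ratings of one evaluation question on a 1-5 importance
# scale; after each round, panelists see the feedback and may revise.
rounds = [
    [1, 2, 3, 4, 5, 5, 2],  # round 1: wide disagreement
    [2, 3, 3, 4, 5, 5, 3],  # round 2: spread narrows after feedback
    [3, 3, 4, 4, 4, 4, 3],  # round 3: ratings cluster near the median
]
for i, ratings in enumerate(rounds, start=1):
    fb = summarize_round(ratings)
    print(f"round {i}: median={fb['median']}, iqr={fb['iqr']}, "
          f"consensus={has_consensus(ratings)}")
```

In practice each real round would also carry panelists' open-ended comments back to the group; the sketch captures only the statistical feedback loop that distinguishes the Delphi from an ordinary survey.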

The aim of this article is to describe a study in which a paper–pencil version of the Delphi method (PP Delphi) was compared to a real-time computerized version of the Delphi method (RT Delphi). Both were tested as viable processes for gathering input from stakeholders to assist in framing an evaluation of a non-profit community-based organization (CBO). Framing an evaluation consists of several components, beginning with an evaluability assessment (Smith, 1989, Wholey, 1994), which is the determination of an organization's evaluation readiness. Goals clarification is an important part of assessing evaluability and helps the evaluator determine whether an organization has current, agreed-upon, well-defined goals or fuzzy, broad, unrealistic, or exaggerated goals (Patton, 1997).

When an organization has many stakeholders, it is not always possible to interview each one or to have everyone gathered in one place for a focus group or evaluability assessment meeting. The Delphi method allows groups of stakeholders to be located all over the world, increasing participation and the range of perspectives taken into consideration. A CBO can save money on an evaluation by conducting a Delphi study ahead of time to assist in framing the evaluation. The rationale for this study was to determine the potential and effectiveness of such a method. As with all methodologies, it is expected that the Delphi has both strengths and weaknesses. As Patton (1997) stated, “The strength of the Delphi approach – lack of face-to-face interaction – is also its weakness” (p. 151). The purpose of this study was to demonstrate the potential of both the paper–pencil and the real-time versions of the Delphi for involving stakeholders in the framing of an evaluation.


Delphi method

“Project Delphi” was the name given to an Air Force-sponsored Rand Corporation study focused on understanding the use of expert opinion (Dalkey & Helmer, 1963). The objective of the Delphi methodology was to “reduce the negative effects of group interactions” (Gupta & Clarke, 1996, p. 185) and to obtain the most reliable consensus of opinion of a group of experts (Dalkey and Helmer). The Delphi method is named after the ancient Greek oracle at Delphi, who offered visions of the future to those

Participants

The CBO of this study was a two-year self-sufficiency program for parenting teenage mothers. Stakeholders included members of the board of directors, staff members (counselors, case workers, and office staff) and volunteers. The stakeholders ranged from businessmen and businesswomen to clergy members and stay-at-home adult mothers. From this point forward, stakeholders will be called panelists, a term used for participants in Delphi studies (Linstone & Turoff, 1975). Sixty panelists were

Results

The major goal of the study was to determine the potential of a Delphi study to assist an evaluator of a CBO in framing an evaluation. Use of the Delphi method was examined and will be discussed in the context of goals clarification and developing evaluation questions. Response rates, ease of use, merit for stakeholders, and cost-effectiveness issues will also be reported.

Conclusions

Using the Delphi method to solicit input and feedback from stakeholders was found to be a promising technique for involving stakeholders in the framing of evaluations. The Delphi method allowed all stakeholders to be involved, regardless of their geographic and time constraints. The following discussion compares the PP and the RT versions of the Delphi.

Lessons learned

The three most salient lessons learned in this project were related to panelists’ participation, asking stakeholders for useful evaluation information, and using the Delphi method, in general, in the framing of an evaluation.

Monica R. Geist recently earned her Ph.D. in Applied Statistics and Research Methods at the University of Northern Colorado. Her methodological interests include mixed methods research and evaluation, instrument development, and focus groups. Her areas of interest are in education and the non-profit sector.

References (33)

  • N. Dalkey et al., An experimental application of the Delphi method to the use of experts, Journal of the Institute of Management Science (1963)
  • E.D. De Leeuw, Reducing missing data in surveys: An overview of methods, Quality and Quantity (2001)
  • M. Dunnette et al., The effect of group participation on brainstorming effectiveness for two industrial samples, Journal of Applied Psychology (1963)
  • D.M. Fetterman, Steps of empowerment evaluation: From California and Cape Town
  • A.H. Fine et al., Program evaluation practice in the nonprofit sector (2000)
  • E.M. Foster et al., Alternate methods for handling attrition: An illustration using data from the fast track evaluation, Evaluation Review (2004)