
Report from NCDD 2008: Evaluation Challenge

At the 2008 National Conference on Dialogue & Deliberation, we focused on 5 challenges identified by participants at our past conferences as being vitally important for our field to address. This is one in a series of five posts featuring the final reports from our "challenge leaders."

Evaluation Challenge: Demonstrating that dialogue and deliberation works

How can we demonstrate to power-holders (public officials, funders, CEOs, etc.) that D&D really works? Evaluation and measurement are a perennial focus of human performance and change interventions. What evaluation tools and related research do we need to develop?

Challenge Leaders: John Gastil, Communications Professor at the University of Washington, and Janette Hartz-Karp, Professor at the Curtin University Sustainability Policy (CUSP) Institute

The most telling reflection of where the field of deliberative democracy stands in relation to evaluation is that, despite this being a specific 'challenge' area, only one session at the NCDD Conference was aimed specifically at evaluation: 'Evaluating Dialogue and Deliberation: What are we Learning?' by Miriam Wyman, Jacquie Dale and Natasha Manji. This deficit of evaluation-focused sessions among the conference offerings is all the more surprising since, as learners, practitioners, public and elected officials, and researchers, we all grapple with this issue with monotonous regularity, knowing that it is pivotal to our practice. Suffice it to say, this challenge is so daunting that few choose to face it head-on. Wyman et al. underscored this point by quoting the cautionary words of a 2006 OECD report: "There is a striking imbalance between the time, money and energy that governments in OECD countries invest in engaging citizens and civil society in public decision-making and the amount of attention they pay to evaluating the effectiveness and impact of such efforts." The conversations during the Conference appeared to weave into two main streams: the varied reasons people have for doing evaluations, and the diverse approaches to evaluation.

A. Reasons for Evaluating

The first conversation stream was one of convergence, or more accurately, several streams proceeding quietly in tandem. This conversation eddied around the reasons different practitioners have for conducting evaluations. These included:
"External" reasons, oriented toward outside perceptions:
  1. Legitimacy for dialogue and deliberation accrued by measuring whether it met its purpose and whether it added value
  2. Justification of value for money spent on process
  3. Anchors/validates public involvement as one of the inputs to decision-making
  4. Accountability and transparency, including all parties; this is often a requirement for quality management
  5. Tracking impacts and outcomes
"Internal" reasons, focused more on making the process work or on the practitioner's drive for self-critique:
  1. Helps the process itself by focusing attention on the objectives and clarifying expectations and purpose of activities
  2. Captures lessons learned that can improve practice
  3. Seeking process improvement through feedback and reflection

B. How to Evaluate

The second conversation stream at the Conference - how we should evaluate - was more divergent, reflecting some of the divides in values and practices between participants. On one hand, there was a loud and clear stream stating that if we want to link citizens' voices with governance and decision-making, we need to use measures that have credibility with policy- and decision-makers. Such measures would include instruments like surveys, interviews, and cost-benefit analyses that apply quantitative, statistical methods and, to a lesser extent, qualitative analyses, and that can claim independence and research rigor. On the other hand, another stream questioned the assumptions underlying these status quo instruments and their basis in linear thinking. This stream inquired: are we measuring what matters when we use more conventional tools? For example, did the dialogue and deliberation result in:
  1. Transforming effects - individually, relationally and collectively?
  2. Enabling citizens to be the authors of their own lives?
  3. Citizens adding value to the decisions being made, resulting in better decisions that serve the highest public good and are sustainable in the long term?
  4. A virtuous cycle (in terms of systems thinking) that reinforces and increases mutual learning, understanding, trust, capacity building and increased democratization?
From these questions, at least three perspectives emerged:
  1. We need to use the quantitative, statistical methods accepted by policymakers to ‘come of age’ and ‘show our worth as a discipline’.
  2. Using these traditional methods of evaluation uncritically could distort the purpose and intent of what we are doing. Instead, we need to use more dialogic evaluation methods.
  3. There is no single ideal approach to evaluation. We can and should use a diverse toolkit of qualitative and quantitative, short-term and long-term measures, because if we don't, we are in danger of remaining at the edges rather than at the center of any democratic renewal.
An ecumenical approach to evaluation may keep peace in the NCDD community, but one of the challenges raised in the Wyman et al. session was the lack of standard indicators for comparability. What good are our evaluation tools if they differ so much from one context to another? How then could we compare the efficacy of different approaches to public involvement?

Final Reflections

Along with the lack of standard indicators, other barriers to evaluation also persist, as identified in the Wyman et al. session:
  1. Lack of time, resources, expertise
  2. Lack of commitment to evaluation from senior management
  3. Lack of capacity within organizations
Wyman et al. commenced their session with the seemingly obvious but often neglected proposition that evaluation plans need to be built into the design of processes. This was demonstrated in their Canadian preventive health care example on potential pandemic influenza, where there was a conscious decision to integrate evaluation from the outset.

The process they outlined was as follows: any evaluation should start with early agreement on the areas of inquiry. This should be followed by deciding the kinds of information that would support those areas of inquiry (the performance indicators), then the tools most suited, and finally the questions to be asked given the context. A key learning from the pandemic initiative they examined was: "In a nutshell, start at the beginning and hang in until well after the end, if there even is an end (because the learning is ideally never ending)."

For NCDD, we clearly need to find opportunities to share more D&D evaluation stories to increase our learning, and in so doing, increase the strength and resilience of our dialogue and deliberation community.
