Report from NCDD 2008: Evaluation Challenge
Evaluation Challenge: Demonstrating that Dialogue and Deliberation Works

How can we demonstrate to power-holders (public officials, funders, CEOs, etc.) that D&D really works? Evaluation and measurement are a perennial focus of human performance/change interventions. What evaluation tools and related research do we need to develop?

Challenge Leaders: John Gastil, Communications Professor at the University of Washington; Janette Hartz-Karp, Professor at the Curtin University Sustainability Policy (CUSP) Institute

The most poignant reflection of where the field of deliberative democracy stands in relation to evaluation is that, despite this being a specific 'challenge' area, there was only one session at the NCDD Conference aimed specifically at evaluation: 'Evaluating Dialogue and Deliberation: What are we Learning?' by Miriam Wyman, Jacquie Dale and Natasha Manji. This deficit of evaluation-specific sessions among the NCDD Conference offerings is all the more surprising since, as learners, practitioners, public and elected officials and researchers, we all grapple with this issue with monotonous regularity, knowing that it is pivotal to our practice. Suffice it to say, this challenge is so daunting that few choose to face it head-on. Wyman et al. made this observation when they quoted the cautionary words of the OECD (from a 2006 report): "There is a striking imbalance between time, money and energy that governments in OECD countries invest in engaging citizens and civil society in public decision-making and the amount of attention they pay to evaluating the effectiveness and impact of such efforts." The conversations during the Conference appeared to weave into two main streams: the varied reasons people have for doing evaluations, and the diverse approaches to evaluation.
A. Reasons for Evaluating

The first conversation stream was one of convergence, or more accurately, several streams proceeding quietly in tandem. This conversation eddied around the reasons different practitioners have for conducting evaluations. These included:

"External" reasons oriented toward outside perceptions:
- Legitimacy for dialogue and deliberation accrued by measuring whether it met its purpose and whether it added value
- Justification of value for money spent on process
- Anchors/validates public involvement as one of the inputs to decision-making
- Accountability and transparency for all parties; this is often a requirement for quality management
- Tracking impacts and outcomes
"Internal" reasons oriented toward improving practice:
- Helps the process itself by focusing attention on the objectives and clarifying the expectations and purpose of activities
- Captures lessons learned that can improve practice
- Seeking process improvement through feedback and reflection
B. How to Evaluate

The second conversation stream at the Conference - how we should evaluate - was more divergent, reflecting some of the divides in values and practices among participants. On the one hand, there was a loud and clear stream stating that if we want to link citizens' voices with governance and decision-making, we need to use measures that have credibility with policy- and decision-makers. Such measures would include instruments such as surveys, interviews and cost-benefit analyses that apply quantitative, statistical methods and, to a lesser extent, qualitative analyses that can claim independence and research rigor. On the other hand, there was a stream that questioned the assumptions underlying these more status quo instruments and their basis in linear thinking. This stream inquired: are we measuring what matters when we use more conventional tools? For example, did the dialogue and deliberation result in:
- Transformative effects - individually, relationally and collectively?
- Enabling citizens to be the authors of their own lives?
- Citizens adding value to the decisions being made, resulting in better decisions that serve the highest public good and are sustainable in the long term?
- A virtuous cycle (in terms of systems thinking) that reinforces and increases mutual learning, understanding, trust, capacity building and democratization?
The divergent views can be summarized as follows:
- We need to use the quantitative, statistical methods accepted by policymakers to 'come of age' and 'show our worth as a discipline'.
- Using these traditional methods of evaluation uncritically could distort the purpose and intent of what we are doing. Instead, we need to use more dialogic evaluation methods.
- There is no single ideal approach to evaluation. We can and should use a diverse tool kit of qualitative and quantitative, short-term and long-term measures; if we don't, we are in danger of remaining at the edges rather than at the center of any democratic renewal.
Final Reflections

Along with the lack of standard indicators, other barriers to evaluation also persist, as identified in the Wyman et al. session:
- Lack of time, resources, expertise
- Lack of commitment to evaluation from senior management
- Lack of organizational capacity