Policy & Practice - A Development Education Review

 

 

Measuring Up: A review of evaluation practice in the Northern Ireland community and voluntary sector

Issue 11
Monitoring & Evaluation
Autumn 2010

Brendan McDonnell, Nicola McIldoon, Gladys Swanton & Norman Gillespie

In this article, Brendan McDonnell, Nicola McIldoon, Gladys Swanton and Norman Gillespie describe Community Evaluation Northern Ireland’s recent review of current monitoring and evaluation practice in the community and voluntary sector in Northern Ireland. The study, Measuring Up: A review of evaluation practice in the voluntary and community sector, was conducted to examine how individual organisations can better communicate their aims and needs, and demonstrate the value of their work to funders and stakeholders in the current economic climate.

 

Introduction
 

Community Evaluation Northern Ireland (CENI) was established in 1995 to provide evaluation support to the voluntary and community sector in Northern Ireland. As the region’s only dedicated support body on evaluation, CENI has a particular role to play in identifying and assessing the sector’s evaluation needs, influencing policy-making and decision-making, and informing future strategy and practice in this area.

 

            In the current economic climate, government policy makers and funders will need to become more strategic in targeting and allocating resources where they are most needed.  They will have to be more specific about the outcomes and impact expected from their investment, and in turn voluntary and community sector organisations will have to specify the needs they intend to address and provide evidence of the measurable outcomes or changes produced for their communities.  This poses real challenges for the capacity of the sector and its funders to understand and use the tools of monitoring and evaluation to demonstrate the value of their activities and present evidence of their successes.

 

            In this context, CENI decided to carry out a review of current monitoring and evaluation practice in the community and voluntary sector in Northern Ireland in order to assess the benefits of, and challenges in, current practice and support provision, and to identify learning that can inform evaluation policy and practice.  Measuring Up: A review of evaluation practice in the voluntary and community sector was conducted by four CENI staff members and published in 2010.

 

            This research will be of relevance to any public agency funding, investing in or otherwise working with the community and voluntary sector.  Development educators will benefit from the insights provided on the challenges faced by funded organisations in demonstrating the value of their work, and in particular their contribution to government policy objectives, in the light of impending public sector cutbacks. The article is all the more relevant to development education amid ongoing debate on how best to demonstrate the ‘value for money’ and impact of development education work on target audiences. It is increasingly important to conduct and report in-depth, comprehensive evaluations to sustain public support and funding for the sector, and to ensure effective monitoring of individual projects so that practice improves throughout the duration of project delivery.

 

Methodology

 

The research was carried out between June and September 2009, and included:

 

  •   A review of relevant policy documents and other research on monitoring and evaluation issues in the sector;
  •   Interviews with representatives from twenty-three funders, including government departments, statutory agencies and non-governmental/independent funders;
  •   A postal survey to a sample of 400 voluntary and community sector organisations, which generated 158 responses, a return rate of almost 40 per cent;
  •   Interviews with representatives of mainly regional umbrella/support bodies in the sector, covering a range of themes and issues; and
  •   Consultation with representatives of evaluation practitioners and economists within government.

 

The breadth of consultation with a range of stakeholders, and the consistency which emerged across their different perspectives, provide valuable insight into the main evaluation issues and challenges facing both funders and the sector at this time.

 

Findings and conclusions

 

The research shows that, while there are variations between funders, monitoring and evaluation focuses primarily on scrutiny and accountability as funders respond to the external demands of audit. In turn, the approaches adopted by funders, particularly government funders, are focused on meeting this demand. Accordingly, data collection systems and processes have been designed around measuring project performance against targets, and testing for compliance with financial or other governance controls. This has resulted in:

 

  • Increased demands for information from funded organisations, often multiple requests from different funders;
  • Collection of data about activities and outputs, with less emphasis on other information about, for example, innovation or practice development;
  • Increased focus on good governance and quality standards, and on risk assessment;
  • Value for money and sustainability issues coming to the fore;
  • The conduct of external evaluations for accountability purposes, rather than learning; and
  • A focus on individual project evaluation/inspection, with less concern for aggregating project-level data or programmatic/strategic evaluation.

 

At the same time, a shift to an outcomes-focused approach to funding (where the function of funding is not to sustain organisations or posts but to deliver outcomes against government programme objectives) has placed further demands on both funders and organisations within the sector.  While there are some examples of good practice, the methods and skills needed to understand, develop and implement outcomes approaches remain largely underdeveloped.  The focus continues to be on outputs, generating quantitative monitoring data, rather than on outcome measurement.  Traditionally the ‘drivers’ of evaluation within government have been finance and audit departments; internal systems are therefore geared up to assess outputs, i.e. risk assessment, financial compliance, monitoring outputs against targets, etc.  For this reason, evaluation has in practice been viewed within government as serving primarily an administrative function rather than a broader strategic or planning agenda.  Existing systems, whilst necessary for administrative and audit purposes, do not provide the data required to measure outcomes, because they were never designed to do so.  Generating and using data on outcomes requires a fundamentally different understanding of, and approach to, evaluation.

 

             The research suggests that there is a growing awareness that the scope of evaluation needs to be widened beyond a focus on scrutiny and accountability to encompass improved programme outcome/impact measurement and the capture of learning.  However, achieving this will require overcoming barriers which are not just technical but also institutional, i.e. the all-pervading audit culture within government and the underdevelopment of strategic relationships between funder and funded.

 

            It is clear that scrutiny and accountability will remain key priorities for monitoring and evaluation, especially for government funders.  As resources become tighter, every pound of public money invested in the community and voluntary sector has to be accounted for.  In this sense, a focus on individual projects is important; they need to demonstrate that they are efficient, well-run organisations, delivering on funding objectives and meeting agreed targets.  The current government Green Book (HM Treasury, 2003) standards provide a good framework for examining these issues.

 

            However, beyond this there are wider questions that evaluation needs to address: primarily, is this investment of scarce public resources achieving the maximum return it can?  This needs to be considered against the following criteria:

 

  • Is investment being directed to where it is most needed and can add most value?
  • Is there a clear understanding of the change that investment is expected to achieve?
  • Is the investment producing identifiable and measurable outcomes that make a real difference?
  • Is learning being captured to inform improvements in service delivery or programme development?

 

As the research has shown, addressing these wider evaluation questions is hugely challenging for both funders and funded projects.  

           

            The challenge is particularly focused on funders.  The need to maximise return from a contracting funding base means that they will continue to take a more strategic approach to funding the community and voluntary sector.  This will have implications for relationships, particularly between government funders and the sector.  The shift from grant-making to the contracting of services will continue following the Review of Public Administration in Northern Ireland, with a greater focus on a purchaser/provider split.  However, it is important that voluntary and community organisations are not viewed simply as sub-contracted service deliverers, but rather as partners in social improvement.  In this context the onus is on funders to define their priorities for funding and negotiate the delivery of agreed outcomes with voluntary and community organisations.

 

Recommendations

 

The research has shown that the demands for, and expectations of, monitoring and evaluation are growing. Evaluation now has to address multiple needs and has become an increasingly complex and multifaceted process.  In an effort to distil some of this complexity and produce a more unified and integrated approach to evaluation, CENI proposes a possible framework.  This is informed by current literature on a ‘systems thinking’ approach. Seddon (2008) described ‘systems thinking’ as a systematic relationship between purpose, measure and method: measures need to be derived from purpose, and those measures then inform the methods used to collect the information required.

 

            Translating this into a proposed framework, we start with an emphasis on the broader questions for evaluation: what is the need that the investment/programme is addressing; what change is the investment expected to achieve; and how is this to be measured?  In considering these questions we refer to the headings of Intelligence, Systems, Support and Relationships.  Each element is interdependent and an essential part of the whole picture.  Generating and using data on outcomes requires a different understanding of, and approach to, evaluation, one which needs to be led by government funders and negotiated with funded organisations in a planned and integrated way.

 

            The key elements of this approach would include:

 

 

 

Intelligence

  •   Strategic investment
  •   Holistic evaluation

Systems

  •   Measurement
  •   Data collection
  •   Analysis

Support

  •   Skills/capacity
  •   Resources

Relationships

  •   Partnership arrangements

The framework promotes an integrated approach, beginning with a clear rationale for investment and the adoption of a holistic approach to evaluation to capture change.  This then informs the design of the systems required to measure, collect and analyse monitoring and evaluation data.  In turn, the implementation of those systems needs to be underpinned by appropriate resources and support, to develop capacity among both funders and organisations. Finally, the whole process is predicated on the notion of a partnership approach between funder and funded which seeks to ensure mutual benefits from the process.

 

The key components of the framework are detailed below.

 

Intelligence

Strategic Funding

  •   Evidence of need – targeted investment;
  •   Rationale for funding – theory of change;
  •   Engagement with projects – negotiated transaction.

Holistic Evaluation

  •   Scrutiny – accountability, inspection;
  •   Outcomes – project and programme achievements;
  •   Learning – practice improvement, policy development.

Systems

Measurement

  •   Define – develop programme-level outcome indicators;
  •   Inform – negotiate project-level outcome indicators.

Data collection

  •   Monitoring – appropriate, proportionate and coordinated;
  •   External evaluation – terms of reference, timing, involvement;
  •   Self-evaluation – connected to needs of project and funder.

Analysis & Use

Project level

  •   Scrutiny – project inspection;
  •   Outcomes – project achievements;
  •   Learning – practice/service improvements.

Programme level

  •   Scrutiny – programme management;
  •   Outcomes – aggregate project achievements;
  •   Learning – review practice, feedback learning, inform policy.

Support

Skills/capacity and resources

  •   Understanding the role and purpose of evaluation;
  •   Outcomes – define, develop;
  •   Data collection – design, management;
  •   Data analysis – understand, inform.

Relationships

Partnership

  •   Partnership approach between funder and funded which ensures mutual benefits from the process.

Coordination

  •   Coordination between funders to share learning.

 

Intelligence

 

Developing a strategic funding approach to investing in the community and voluntary sector should be informed by evidence of need, clarity of purpose and negotiated agreement.

 

            Evidence-based policy has long been the mantra of government investors.  There is an increasingly rich supply of datasets being developed and made available on the needs and assets of communities.  These include local area data from sources such as the Northern Ireland Neighbourhood Information Service (NINIS), which now includes a ‘Social Assets’ database recently developed by CENI and the Community Foundation for Northern Ireland, as well as previous evaluations and research studies.  Using these sources to update understanding of need and to channel resources effectively is important, especially in spatial development programmes such as Neighbourhood Renewal, in order to better baseline community needs and measure change.

 

            The rationale for funding needs to be clear and, where possible, informed by a theory of change, i.e. what change the investment is trying to bring about.

 

“Where policy does not have a stated theory of change it will become difficult to link activities to outputs and outcomes during delivery.  How can change be targeted and measured if how it happens is not understood?” (Lawlor & Nicholls, 2006).

 

            Engagement with the community and voluntary sector as delivery agents for change is also an important part of the process.  As pointed out previously, the nature of the funding interaction, particularly between government funders and the voluntary and community sector, needs to be clarified.  In previous research, CENI referred to this as a negotiated transaction:

 

“Transactions involve a specification of mutual responsibilities, of what should be done at what costs and, as far as possible, of the benefits to both parties.  This requires a sharing of the different kinds of knowledge held by each side, agreement about the outputs required and negotiation about their anticipated outcomes” (Morrissey, McDonnell & McGinn, 2003).

 

The role and purpose of monitoring and evaluation in this context is widened beyond accountability to include the specification and measurement of programme and project level outcomes and the capturing of learning.  Scrutiny remains a core function, but within a more holistic evaluation approach, which places more responsibility on both funders and funded organisations to embrace and operate it.  Evaluation becomes a strategic part of the feedback loop, providing the evidence base to inform decision-making.  As the research has indicated, too often evaluation stops at project inspection, with no feedback loop into programme or policy level.

 

Systems

 

Clarity about the purpose of funding and the adoption of a more holistic approach to evaluation inform the development of appropriate systems to measure, collect and analyse the information required.

 

            Measurement systems should be primarily focused on outcomes.  The research shows that many funders and organisations have not sufficiently engaged with outcomes and that evaluation is often focused more on outputs than on the link between outputs and outcomes.

 

            Outcomes need to be derived from the objectives of the funding programme and the changes it wants to achieve.  Policy or programme-level outcomes then need to be translated to, and negotiated at, project level.  Stakeholders at all levels need to be involved in the development of the desired outcomes to ensure they are meaningful, specific and useable.  This can be achieved as part of the negotiated transaction.

 

“…the indication of anticipated outcomes requires a synthesis of the different kinds of knowledge held by funder and funded organisation. Accordingly they cannot be dictated by either side, but should be the result of negotiation” (Morrissey, McDonnell & McGinn, 2003).

 

            A number of useful outcome frameworks have been developed, including those used by the Supporting People Programme; there are also models such as Social Return on Investment and the CENI Social Assets model (Morrissey, Healy & McDonnell, 2008) which can inform an outcomes approach.  These provide potential reference points for taking an outcomes approach forward.

 

            Data collection systems are then developed and informed by the specific measurement requirements of the funding programme.  The research showed that too often monitoring systems are imposed externally with a one-size-fits-all approach.  Ideally data collection systems should be appropriate to the specific needs and circumstances of both the programme and project, proportionate to the level of investment and coordinated across programmes.

 

            Similarly the external evaluation of funded projects should be informed by specific measurement needs.  Terms of reference should reflect this; they should be negotiated up front and incorporated as part of the funding contract. Furthermore, funded organisations should be briefed on what information is required for evaluation purposes so that they can prepare this for when it is needed.  This would facilitate the development of internal or self-evaluation systems which are better connected to the needs of both funders and organisations themselves. 

 

            This also re-focuses the role of the external evaluator and would make possible a more participative approach to the external evaluation process, whereby organisations would be better able to interact with the evaluator as a ‘critical friend’. This in turn would inform the experience and skill sets required of evaluators.  In light of this, a set of principles and guiding standards for the conduct of external evaluations would be useful.  This could involve an update of the guidelines produced by the then Voluntary Activity Unit in 1996, Guidance on the Commissioning and Conduct of Evaluations (Voluntary Activity Unit, 1996).

 

            Analysis and use of monitoring and evaluation data collected should be made explicit at the outset.  Again, if the information required is correctly specified at project and programme level, the analysis of that data will make it possible to:

 

  • Scrutinise performance, i.e. project inspection and programme management;
  • Link outputs to outcomes for projects and then aggregate from project to programme level (illustrated in the sketch below); and
  • Consider the implications for learning and improvement for individual projects and future programmes/policy.
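
To make the aggregation step concrete, the sketch below shows one way in which project-level monitoring returns could be rolled up to programme level once outcome indicators have been consistently specified. It is purely illustrative: the report prescribes no particular tooling, and the indicator names and data structures used here are hypothetical.

# Illustrative sketch only: rolling up project-level outcome returns to
# programme level. Assumes outcome indicators have been negotiated and
# recorded consistently across projects; all names are hypothetical.
from collections import defaultdict

# Each return records one project's result against a shared outcome indicator.
project_returns = [
    {"project": "A", "indicator": "participants_gaining_qualification", "value": 40},
    {"project": "B", "indicator": "participants_gaining_qualification", "value": 25},
    {"project": "A", "indicator": "volunteer_hours", "value": 1200},
    {"project": "B", "indicator": "volunteer_hours", "value": 800},
]

def aggregate_to_programme(returns):
    """Sum each shared indicator across projects to give programme-level totals."""
    totals = defaultdict(int)
    for record in returns:
        totals[record["indicator"]] += record["value"]
    return dict(totals)

print(aggregate_to_programme(project_returns))
# -> {'participants_gaining_qualification': 65, 'volunteer_hours': 2000}

The point of the sketch is simply that aggregation is only possible where the same indicators have been defined and collected in the same way across projects, which is why the framework places measurement and negotiated indicators ahead of analysis.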

 

Support

 

The development and implementation of such an integrated approach will take considerable investment, not least in the training and support of programme managers and administrators as well as funded organisations. 

 

            The research found that the primary skills/support needs identified amongst both funders and organisations within the sector relate to measuring and reporting on outcomes, and that, for funders, this is matched by the need to be able to use monitoring and evaluation data to inform programme development. While these are clearly essential areas for development, the framework suggests a need to consider capacity building across a much broader range of inter-related areas, including:

 

  • Understanding the role and purpose of evaluation in the context of the community and voluntary sector;
  • Defining outcomes;
  • Design and management of data collection systems;
  • Analysing and using data to inform decision making; and
  • Sharing of information to inform learning.

 

            While the research indicates that there has been some investment in supporting monitoring and evaluation practice in the sector, there is a need to continually build on this, and to consider a more strategic approach to developing capacity, with the type and format of support tailored to the particular needs of both funders and funded organisations.

 

            Both generic and specialist training provision is required.  Generic training, particularly in understanding the role and purpose of evaluation in a changing funding environment, is required at all levels.  More specialist training in outcomes measurement, data collection and analysis is required for funding programme managers and project staff. It may also be useful to build the capacity of staff in support organisations so that they can deal effectively with support needs on the ground.  Moreover, as well as providing technical skills training for both parties, developing a culture of learning is essential to fostering a better understanding and use of evaluation. The research found that skills already exist within the sector, and these should be shared, both across the sector and between funders and funded organisations.

 

            There may also be a need to consider developments in other areas related to monitoring and evaluation. Better use of information and communication technologies (ICT) could help streamline data collection and, while there are issues associated with this, ways of using ICT effectively to support monitoring and evaluation processes merit consideration. Similarly, there may be a need to develop greater understanding of how quality approaches and standards complement other approaches to measurement, monitoring and evaluation.  There are clearly resource implications in all of these aspects, both for the sector and its funders. However, if monitoring and evaluation is to become an integral part of the strategic planning and funding cycle, then these sorts of investments are essential.

 

Relationships

 

Finally, and importantly, the operation of this framework is contingent upon the development of relationships at a number of levels. The engagement and participation of the community and voluntary sector at all stages is essential.  If evaluation is seen as serving only funders’ needs, then organisations will not be motivated or encouraged to understand and use information for their own development.  Ownership and sharing of information is crucial to the building of partnership relationships and to developing a more mature and strategic use of evaluation. The current development of the Concordat, a framework for co-operation between the main government departments and the local organisations responsible for the delivery of the work, will be an important step in helping to ensure that this is realisable.

 

            At the same time, there is also a need for the development of relationships across different funders, to ensure better co-ordination, not just in relation to the development and implementation of approaches to monitoring and evaluation, but also to facilitate shared knowledge and learning.

 

            Here, it is worth pointing to developments elsewhere. In 2006, the Scotland Funders’ Forum, in conjunction with Evaluation Support Scotland, produced an ‘Evaluation Declaration’.  This sets out the principles for, and approach to, monitoring, evaluation and reporting within the voluntary and community sector in Scotland. While the declaration does not have official status, it is important:

 

“The declaration is evidence of shared thinking between funders and a shared agenda with the organisations they fund.  For the first time in Scotland funders have set out their view and vision of monitoring and evaluation.  And they have done it together...The declaration should help the voluntary sector and others understand what is important to funders in monitoring and evaluation and so improve relationships between funders and funded organisations” (Scotland Funders’ Forum, 2006).

 

Evaluation Support Scotland is currently reviewing the operation of the declaration, and will shortly be reporting on progress towards the development of a more coordinated approach to reporting amongst funders. It will be important to learn from this initiative and incorporate the ideas and approaches into any future framework for evaluation.

 

Conclusion

 

The Measuring Up report has attempted to review the current state of monitoring and evaluation practice from the perspective of both funders and voluntary and community sector organisations. While the research indicates that there are differing views on the purpose and usefulness of evaluation as currently practised, it is worth noting that there have been many positive developments, and practice has advanced considerably over the last decade. These include initiatives from independent funders such as the Big Lottery Fund and Children in Need, as well as pioneering approaches to measurement and support developed by CENI in conjunction with the Voluntary and Community Unit and other funders.

 

            However it is clear that in the tighter fiscal environment now looming, public investors in particular are faced with a stark choice.  On the one hand they can continue to ‘sweat’ the existing assets in order to enhance efficiency and maximise the outputs delivered, which means an even greater focus on scrutiny and accountability and a corresponding top-down, command and control relationship with voluntary and community sector deliverers.  The other option is to try to discover more effective ways of investing public resources to address need and deliver better services.  This would widen the scope for evaluation to focus on evidencing need, measuring real changes and capturing learning to inform new ways of working.  This would also involve a more proactive partnership engagement with voluntary and community sector deliverers. 

 

            The research shows that both funders and voluntary and community organisations see the need and recognise the potential for the latter approach, but it will require a shift in priorities to widen the scope for monitoring and evaluation and a corresponding commitment of time and resources to achieve this.

 

            The framework outlined in the conclusions of the report attempts to draw together all of the key issues identified through the research and provide a means of systematically considering these through a more unified and integrated approach.  It is intended that this should provide a basis for further discussion and development in order to move monitoring and evaluation forward in the new environment.

 

References

 

McDonnell, B, McIldoon, N, Swanton, G and Gillespie, N (2010) Measuring Up: A review of evaluation practice in the voluntary and community sector, Belfast: Community Evaluation Northern Ireland (CENI), available: http://www.ceni.org/publications/measuringup.pdf.

 

HM Treasury (2003) The Green Book: Appraisal and Evaluation in Central Government, Treasury Guidance, London: TSO.

 

Lawlor, E and Nicholls, J (2006) Hitting the Target, Missing the Point: How Government Regeneration Targets Fail Deprived Areas, Measuring What Matters, London: The New Economics Foundation.

 

Morrissey, M, McDonnell, B and McGinn, P (2003) Evaluating Community and Voluntary Activity, Voluntary and Community Unit, Belfast: Department of Social Development.

 

Morrissey, M, Healy, K and McDonnell, B (2008) Social Assets: A New Approach to Understanding and Working with Communities, Belfast: Community Foundation for Northern Ireland / Community Evaluation Northern Ireland.

 

Northern Ireland Neighbourhood Information Service (NINIS) (2010) Website, available: http://www.ninis.nisra.gov.uk.

 

Scotland Funders’ Forum (2006) The Evaluation Declaration, Glasgow: Scotland Funders’ Forum/Evaluation Support Scotland.

 

Seddon, J (2008) Systems Thinking in the Public Sector, Axminster: Triarchy Press.

 

Voluntary Activity Unit (1996) Guidance on the Commissioning and Conduct of Evaluations of Voluntary Organisations by NI Government Departments, Belfast: Department of Health and Social Services.

 

 

Brendan McDonnell has been Director of Community Evaluation Northern Ireland since its inception in 1995. He has an MPhil in Social Policy and has over 25 years' experience of working in the third sector as a manager, trainer and researcher. He has a particular interest in developing research and evaluation tools which support the voluntary and community sector to capture and measure change and contribute to evidence-based policy for the sector. 

 

Nicola McIldoon is Deputy Director of Community Evaluation Northern Ireland, and has been with the organisation since its inception in 1995. She has an MA in Social Policy (Economics) and has over twenty years' experience in the field of evaluation consultancy, research and training.

 

Gladys Swanton is Training Manager with Community Evaluation Northern Ireland, and holds an MSSc in the Management of Lifelong Learning. Over the last eight years, she has been responsible for the design and delivery of a range of innovative training courses and programmes around the themes of self-evaluation, outcomes and quality, drawing on her knowledge and experience in the use of appropriate models and tools developed both nationally and internationally. 

 

Norman Gillespie was previously CENI’s Evaluation Manager / Senior Evaluator, and has a PhD in Social and Community Sciences. A former community worker, he has over twenty years’ experience in the fields of social policy, evaluation and social research. 

Citation: 
McDonnell, B, McIldoon, N, Swanton, G and Gillespie, N (2010) 'Measuring Up: A review of evaluation practice in the Northern Ireland community and voluntary sector', Policy and Practice: A Development Education Review, Vol. 11, Autumn, pp. 42-57.