Policy & Practice - A Development Education Review

Editorial

Issue 11
Monitoring & Evaluation
Autumn 2010

Mags Liddy

Introduction

Research is an everyday practice in our lives.  We explore the multitude of options when buying car insurance; we read about our holiday destination and the history of the tourist attractions there; we assess the value and usefulness of a product to our lifestyle.  The most commonly cited example of research in development education is evaluation work.  Formative evaluation can greatly add to the impact of development education programmes, as it is carried out while they are running, while summative evaluation provides a written account of the work completed.  In addition, some funding programmes make evaluation an obligatory requirement.

While evaluation shares some commonality with monitoring, there is a key difference in their overall purpose.  Both address programme performance, centring on the achievement of goals and objectives; however, monitoring concerns itself with operational and administrative issues, whereas evaluation is a strategic analysis undertaken to inform practice and assess impact.  Evaluation work can be viewed as applied and strategic research, utilising social science methods to rigorously examine the added value and acknowledge the impact of educational or training programmes.  Some criticise evaluation for being technical and functional, and view it as a mere measurement tool.  I believe this critique confuses monitoring with evaluation.  It also negates the contribution evaluation can make to programme fulfilment and its intended benefits to participants.

My argument here is that evaluation utilising social science research methods needs to be re-envisioned as a valuable research process.  C. Wright Mills commends the sociological imagination as enabling us 'to grasp history and biography and the relations between the two within society.  That is its task and its promise' (1959:6).  Evaluation as a research process needs to remind us of its task and its promise, and help us to locate development education work within its specific historical and social milieu.

Evaluation and monitoring in practice

In essence, evaluation is the strategic analysis of an educational or training programme.  Monitoring practices add to evaluation work by informing the written narrative of the programme; however, monitoring has a separate supervising function.  Monitoring in practice asks questions centred on efficiency and budget analysis, and can address programme effectiveness to a limited extent.  It can track continuity in programme performance and examine advancement towards programme objectives.  Some argue that utility is the prime function of monitoring, as it focuses on identifying and addressing operational difficulties.  This functional characteristic of monitoring is often applied to evaluation also; however, evaluation is a deeper level of analysis, appraising results in relation to programme goals, exploring the added value of programmes to inform future work, and establishing a written record of practice.  Evaluation asks questions based on relevance and assessment of impact, especially the long-term impact of programmes.  Essentially, evaluation is a judgement on a programme: value is at its core, inherent in the word itself.

This judgement and valuation dimension of evaluation work can cause conflict for participants and within the process itself.  Evaluation is often a requirement of publicly-funded programmes.  The European Union explicitly defines evaluation at project level as a crucial phase, particularly with regard to grant money awarded in relation to the attainment of results and goals within agreed budgets (European Union, 2008).  This approach to evaluation focuses on cost efficiency, reflecting the functional measurement dimension rather than long-term impact and social change, which is one of the goals of development education.  Measurement of outputs does not take into account the specific context of this work.  In the United States (US), the obligatory evaluation requirement receives considerable criticism as it is used as a justification tool for the continuance of public funding.  This focus raises concerns about the authenticity of responses from participants whose employment or other benefits are dependent on continued funding.  The appraisal of both the merits and demerits of a programme is necessary to guide future practice and enable change; however, this can be both personally and professionally challenging.  Professionally, it can be challenging if your financial security is dependent on a favourable report.  It is also challenging on a personal level, as the evaluation report is an assessment of your work and your contribution to the programme goals, which can impact on job satisfaction and future performance.

Mark Smith defines evaluation as 'the systematic exploration and judgement of working processes, experiences and outcomes...[which] pays special attention to aims, values, perceptions, needs and resources' (2006).  This definition identifies a subjective dimension to evaluation work through the naming of values, perceptions and processes.  A subjective dimension allows for the inclusion of participants' experiences and biography, thus placing the evaluation research within its historical and social context.  Recording the personal therefore becomes important, as evaluation could affect the participants' life-world.  However, I believe the inclusion of the subjective is also necessary because development education research and evaluation cannot ignore the historical and social context of its work.  Development education specifically places itself within the context of globalisation, climate change and deepening inequalities, to name just some of the issues it addresses.  It raises questions about our personal understanding, and allows learners to build on their prior knowledge of the world rather than having an outlook foisted onto them.  It deliberately asks learners to explore ethical beliefs and critical decision-making, and encourages action for social justice.  This subjective focus precludes the objective stance associated with functional measurement approaches and with many research methods.

Choice of evaluation methods

Much evaluation work centres on pre-determined sets of indicators and objectives, derived from fixed learning outcomes and goals.  However, if the subjective is the appropriate focus for development education, as argued above, then this needs to be reflected in the choice of research methods employed.  A subjective reading allows for multiple understandings of the world, enabling individual perceptions to emerge, and is mostly associated with qualitative research methods.

Choosing research methods and evaluation tools that reflect the ethos of development education is necessary to address the technical and measurement critiques discussed earlier.  If development education claims Freire as its own theorist, then as development educators we should use Freirean approaches in all of our work.  Smith (2006) applies Freire's model of banking education to evaluation work, adapting Joanne Rowlands' earlier work entitled How do we know it is working?  In that work, she defined four characteristics of dialogical evaluative work: evaluation is inherent in the reflection-action model of change; it is empowering for participants, with conclusions and recommendations based on consensus; dialogue and enquiry, rather than measurement, are central; and the evaluator is a facilitator rather than an objective and neutral outsider.

These characteristics are strongly reminiscent of a development education ethos, especially empowerment, consensus and change.  By development education ethos, I mean inclusive and participatory teaching approaches, democratic decision-making, and an ethical commitment to global social justice.  Participatory approaches to the evaluation of development education are important as they place learners and teachers within the research and evaluation process, rather than having evaluation done to or on them.  This makes them full participants in the work, rather than bystanders, suppliers of information or objects of study.  Enquiry-based approaches allow for dialogue and discussion to elaborate on the issues raised and to develop capacity in the research process itself, while consensus decision-making allows all participants and stakeholders to be informed of, and to decide, what is written about them and their work.

Innovative approaches to evaluation and research are constantly being developed and utilised.  One exciting area is the use and analysis of visual research methodologies, which can reflect the creativity and innovation shown in development education work.  Evaluation of development education events and conferences can be creative and fun, as well as providing insight into participants' learning and reflections on the event.  Media, including film and documentary, are often used in development education to strengthen awareness and understanding, as is the creation of new media through accessible social media technologies; these also provide possible venues for evaluation.  Rigorous ways of reading such outputs and interpreting their results need to be developed.  The Centre for Visual Methodologies at McGill University has developed a guide for reading cultural texts, based on semiotic analysis (Mitchell & Reid-Walsh, 2002).  In development education work, the Reading International Solidarity Centre (RISC) (2008:27) uses X and Y axes to read learners' comments on sustainability and assess their understanding: one axis runs from local to global, the other from environmental to social justice.

Innovative approaches to evaluation and dialogical research methods can more accurately reflect the ethos of development education; furthermore, development education research and evaluation work needs to take a strong ethical stance in its methodology.  All social research has a social responsibility to its participants.  University or institute-based research is assessed by a research review committee, and some professional organisations have binding codes of ethical practice.  However, independent researchers (myself included) are not bound by any guidelines or assessed by peer review.  As part of the Irish Development Education Association (IDEA) Research Community, I am looking at developing ethical guidelines for development education research practitioners.  These are not envisaged as an enforceable code; rather, they will be a guide to good practice, reflecting capacity building and the empowerment of participants during the research process.

Conclusion

C. Wright Mills challenges social scientists and researchers to develop their sociological imagination and locate themselves within historical and social systems.  He says:

“By its [sociological imagination] use people whose mentalities have swept only a series of limited orbits often come to feel as if suddenly awakened in a house with which they had only supposed themselves to be familiar...Older decisions that once appeared sound now seem to them products of a mind unaccountably dense.  Their capacity for astonishment is made lively again.  They acquire a new way of thinking, they experience a transvaluation of values” (Mills, 1959:8).

Evaluation needs to be reclaimed from its image as a managerial tool and from the language of objectivity, so that it directly reflects the ethos of development education work.  At its very least and most functional level, evaluation can inform practice and guide programme development.  However, evaluation has the potential to go further: it can name the hidden and taken-for-granted practices that add merit to educational programmes, awakening the familiar within their house.  It has the potential to be transformative and to enable new ways of thinking through inclusion and participation, if designed and implemented in a dialogical and empowering manner.  Evaluation can create knowledge with participants based on their lived experiences of development education, awaken astonishment, and make us alive to the merits of research.

References

European Union (2008) The Lifelong Learning Programme 2007-2013: Glossary, available: http://ec.europa.eu/education/programmes/llp/guide/glossary_en.html.

Freire, P (1972) Pedagogy of the Oppressed, London: Penguin.

Mills, C W (1959) The Sociological Imagination, London: Oxford University Press.

Mitchell, C and Reid-Walsh, J (2002) ‘Physical Spaces: Children's bedrooms as cultural texts’ in C Mitchell and J Reid-Walsh (eds.) Researching children's popular culture: the cultural space of childhood, New York: Routledge, pp. 118-130.

Rowlands, J (1991) How do we know it is working? The evaluation of social development projects, cited in M K Smith (2001, 2006) 'Evaluation' in The Encyclopaedia of Informal Education, and in F Rubin (1995) A Basic Guide to Evaluation for Development Workers, Oxford: Oxfam.

Reading International Solidarity Centre (RISC) (2008) How Do We Know It’s Working? A toolkit for measuring attitudinal change in global citizenship, Reading: RISC.

Smith, M K (2001, 2006) 'Evaluation' in The Encyclopaedia of Informal Education, available: www.infed.org/biblio/b-eval.htm.

Citation: 
Liddy, M (2010) 'Editorial', Policy and Practice: A Development Education Review, Vol. 11, Autumn, pp. 1-6.