The preceding chapter established that small development non-profits conduct external and internal ‘formal’ evaluations that are underutilised. Meanwhile, they conduct informal everyday evaluative activities that have high instrumental use, particularly for program improvement. Despite these findings, underutilised formal evaluations dominated initial conversations about evaluation throughout the research conducted for this book. Information regarding informal evaluation surfaced through my observations of the case study organisations in action and via interview questions about how they implement change. The initial mental jump to formal evaluation demonstrates the hegemonic grip of the evaluation orthodoxy, a top-down conceptualisation of evaluation that this book has shown to be largely ineffectual in small development non-profit settings.
This penultimate chapter explores practitioners’ perceptions regarding evaluation and evaluators, bearing in mind community development aspirations and the advantages and constraints facing these small organisations. Cognisant of the first community development standard that expects the infusion of community development values throughout all aspects of practice, this chapter analyses perceptions of evaluation against the evaluation literature from Chapters 2 and 3. This particularly focuses on the applicability and relevance of traditional ideas about evaluation to small development non-profit contexts.
Practitioners’ perceptions help establish an understanding of context-sensitive evaluation that provides a framework to explore how evaluation could be more meaningful, purposeful, and useful in small non-profits. These ideas seek to ensure that conducting evaluation is not simply a mandatory obligation of donor appeasement, but an integral and valuable part of operation and improvement.
Evaluators and Small Development Non-Profits
When asked who evaluators are, the director of an Australian arts non-profit pauses before answering: ‘Look’ she says, ‘in the current world, in the current climate, the evaluator is a person who comes with that title, and comes with that bag of tricks’. She is not the only one who finds the role of evaluators rather ‘circumspect’. A board member of a European non-profit elaborates on his substantive role as an evaluator: ‘I’ve stopped saying I’m an evaluator. I’m a storyteller. And I am skilled at capturing peoples’ stories and being a really good custodian of peoples’ stories – and that is so important. And then translating those stories for the reader – whoever that may be’.
Other non-profit practitioners define evaluators as ‘data gatherers, information gatherers’, and ‘someone whose role is to measure something’. ‘It’s a person who can come and ask you questions’ says a practitioner from an urban East African non-profit. She thinks of evaluators as mentors who need to understand your aims and values before they can offer constructive feedback. A colleague from the same organisation explains that an evaluator is not an assessor; an assessor gives you a score whereas an evaluator identifies strengths and weaknesses for improvement. This identification links evaluation to the purpose of improvement.
The evaluation literature outlines key evaluator competencies, overviewed in Chapter 3. The competencies identified by non-profit practitioners bear strong similarities to those in the extant literature. However, practitioners extend the literature by prioritising the competencies of highest importance to small development non-profit contexts. Contextual understanding and matched values are perceived to be the most important competencies for evaluators working in small development non-profits, a point raised independently and without prompting by twenty-eight per cent of practitioners. The director of a pan-African non-profit explains that, ‘an evaluator is a person who understands absolutely the program. What is it you are going to evaluate? They need to understand the history, the hypothesis, the vision, and mission’.
To understand and implement the values that are important to non-profits, the director of a South-East Asian non-profit suggests that: ‘an evaluator [needs to be] someone who has experience in the field. In the development field…Has worked there, but worked on the ground, as in hands on’. He comments that someone with development experience would be better able to be ‘incisive and insightful’. Similarly, another director from a suburban Australian non-profit says that she would ‘be asking them: “Have you ever, sort of, really worked yourself in a field like this? In a not-for-profit.” Because that would make a lot of difference to me. Because then I would have more confidence that they would get it’.
After voicing the importance of evaluators understanding context and values underpinning community development, a sub-contractor in a South-East Asian non-profit identifies that it is ‘interesting to think about, seeing as many evaluators work in for-profit corporations like KPMG and might have no knowledge of development at all’. While the evaluation literature identifies contextual knowledge as a vital competency, it is not considered the highest priority (e.g. AES, 2013). This highlights a possible nuance for evaluators working in small development non-profits. In line with practitioners’ prioritisation of contextual knowledge, recognition that context-sensitivity in evaluation practice needs to be an explicit focus is receiving increasing attention (Conner, Fitzpatrick, & Rog, 2012; Fitzpatrick, 2012; LaFrance, Nichols, & Kirkhart, 2012; Rog, 2012; Vo & Christie, 2015).
Technical and analytical skills are the second most commonly discussed competency necessary for conducting evaluation in small development non-profits, identified by a fifth of practitioners. This marks a departure from professional evaluator competency frameworks, which prize knowledge and skill in evaluation theories, methods, and activities most highly (e.g. three of the AES’ seven competency domains relate to technical and analytical skills while only one relates to understanding of stakeholders and context). The relegation of technical skills to second priority contradicts Scriven’s (1996) strict focus on this competency, although it aligns with the views of evaluators such as Patton (2012) who emphasise sensitivity to user needs and values over technical skill.
Evaluators’ communication and people skills are likewise regarded as vital for effective evaluation (Harman, 2019; Patton, 2012; Stevahn, King, Ghere, & Minnema, 2005). A sub-contractor in a suburban Australian non-profit explains the importance of evaluators having people skills and demonstrating respect for practitioners and the work they are doing: ‘If you’ve gone in there, hard-nosed and completely shut off and don’t look at the humanity of it and how busy the staff are with their own jobs, their own workload - If you ignore all that and go in arrogantly, “I’m important, evaluation’s important, you’ve just got to drop everything and do what I need now.” Yeah, good luck’.
Part of this is linked to a personality and values-match between the evaluator and non-profit. Practitioners from an urban Australian non-profit mention feeling uncomfortable with some of the external evaluators they approached, while a sub-contractor from a South-East Asian non-profit highlights that there needs to be a ‘little bit of chemistry, a bit of match-up’. A practitioner from a pan-African non-profit suggests that ‘soft skills are actually more important [than technical skills] if you want to have an effective evaluation’. While evaluators such as Patton (2012) concur, this is, once again, a finding that Scriven (1996) disputes. Despite many of the case study non-profits working in non-English speaking countries, or with mainly non-English speaking recipients, none of the practitioners mentioned language barriers or affinity with community languages as a key competency (except insofar as it may be implied in discussions of communication in general).
While the evaluator competencies identified by practitioners generally align with the evaluation literature (e.g. AES, 2013; Davies & Brümmer, 2015; IDEAS, 2012; Russ-Eft, Bober, de la Teja, Foxon, & Koszalka, 2008; Stevahn et al., 2005), the high prioritisation of context-sensitivity signals a shift in focus for evaluators in small development non-profits. After highlighting the key competencies, practitioners discuss tensions between ‘evaluators’ as they are traditionally conceived and concepts of evaluators who align with their community development aspirations.
‘Everybody needs to be an evaluator’ says the director of an Australian arts non-profit in a view voiced in a majority of the case study organisations. She follows up her comment with: ‘But I don’t think the powers that be see that’. She highlights that internal evaluation, let alone informal evaluation, is ‘not seen as very efficacious’ within the evaluation discourse. The director of a South-East Asian non-profit confirms this saying that ‘some donors, major donors, won’t allow internal evaluation’. He suggests that this is because outsiders surmise that, ‘internally, we can be subjective and want it to look good for ourselves. Not that that’s happened’. Many of their donors and supporters expect an external evaluation, or at least an internal evaluation conducted by someone who is not directly involved with the evaluand: ‘It’s what funding bodies expect, it’s external evaluation’ says the director of a rural Australian non-profit. Small non-profits feel pressured to be accountable upward in ways that conform to notions of evaluator objectivity and independence, despite practitioners highlighting the need for everyday evaluators.
While the need for everyday evaluators dominates practitioner comments regarding who evaluators are, a small minority briefly mention the potential positives of external evaluators. External evaluators can offer useful peer-review and a fresh set of eyes that ‘can see something you can’t see because you’re so entwined’ says a practitioner from an urban Australian non-profit. External evaluators can offer ‘independent verification’ comments a board member from a South-East Asian non-profit. A practitioner from an Australian women’s refuge adds that external evaluators can take evaluation to the next level if internal staff do not have the necessary skills. These comments encapsulate the advantages of external evaluation highlighted in the evaluation literature: fresh perspective, perceived independent objectivity, and technical skills (Conley-Tyler, 2005; Fine, Thayer, & Coghlan, 2000; Springett & Wallerstein, 2008). Whether these advantages outweigh the disadvantages in small development non-profits is less clear.
Negative comments quickly overrun the brief positive comments regarding external evaluators. A board member of a South-East Asian non-profit clarifies that evaluators are ‘Quite a mixed bag…They can be absolutely fantastic or they can just come in for their week, two weeks, three weeks, go out have a look, bang out a report and move onto the next job’. A sub-contractor from a suburban Australian non-profit concurs that external evaluators can be great but she follows up that sometimes: ‘I think there’s a falseness about it. It’s disingenuous and I think a lot of the time organisations see through that, but some don’t sadly, and they pay an enormous amount of money to someone because they [the external evaluators] are very officious. They [the non-profit] think: “This is great. This person is going to get the job done.” My experience has been that the jobs are really poor’.
The director of an East African microfinance non-profit echoes this sentiment saying: ‘I have questions about pouring so much resources into these people, who are specialists, who aren’t practitioners and who aren’t even in contact with practitioners. They come in and use their set evaluation measures with an outsider view. But I don’t know, is the jury out? Are the figures in? Are those evaluations actually achieving anything? Do the [donors] actually like those evaluation reports? Do they read them?’
A minority of practitioners raise implications articulated by the director of an Australian arts non-profit as, ‘the industrialisation of every life skill that’s happened socially in the last ten years, twenty years. So that now only people who, you know, are called evaluators can be evaluators’. A director of an urban Australian non-profit worries that the growing professionalisation of evaluation ‘sidelines people who have really experienced programs’. A board member of a European non-profit, who is also a highly experienced evaluator, is concerned about professionalisation ‘separating evaluators further away’ from the people they are working with, leading him to conclude that ‘evaluation is a load of shit’. The current drive towards credentialing evaluators presents a tangle of benefits, constraints, and concerns (Altschuld & Engle, 2015; Davies & Brümmer, 2015). However, practitioners who discuss the downsides of external evaluation in this study highlight that evaluators need to be in touch with the reality of the people and situations they are evaluating. This includes not being consumed with ego or constricted by rigid ideas of what is rigorous and credible.
In fact, professionalisation can raise distinct problems in community development settings and exists in tension with community development notions of shared, non-directive leadership and practitioner humility (Choudry & Shragge, 2011; Markowitz & Tice, 2002). The director of an urban Australian non-profit tells a story about a group of external evaluators they commissioned to evaluate a program for highly vulnerable and traumatised women. The evaluators arrived, expensively dressed in inappropriately corporate attire, and then said they could not meet with the community recipients directly because ‘it might be dangerous for the researchers’. The director emphasises that the evaluators were not in danger and feels that the external evaluators were ‘so far removed from reality’.
As well as demonstrating how professionalisation can distance evaluators from community recipients, the story above exemplifies practitioners’ doubts regarding the accuracy of evaluations conducted by external evaluators, a point also raised in Chapter 5 in reference to conducting evaluation with children and with culturally and linguistically diverse populations. Although this book has previously drawn attention to aspects of internal practice that could affect accuracy, such as poorly maintained records, practitioners claim that internal processes can produce more accurate information than external evaluations. A minority of practitioners identify attention to accuracy as a key evaluator competency but then note that external evaluators are likely to miss subtle narratives of change and that community recipients are unlikely to tell strangers the full story. The director of an Australian arts non-profit mentions that the limited sample sizes and variations between different groups of community recipients mean that external evaluation ‘just feels like it might be inaccurate’. A director of an urban Australian non-profit wonders at the depth of data a ‘complete stranger’ is going to be able to elicit from community recipients.
Nevertheless, small non-profits continue to commission external evaluators, partly due to notions of internal bias promulgated by the evaluation orthodoxy. However, practitioners question the often-unthinking deference to external evaluators: ‘Is it a good thing if someone comes in externally? They’re still biased. They come in with a certain construct as well. People on the ground know better. That is my view’, comments the director of an East African microfinance non-profit. The director of an Australian community spaces revitalisation non-profit corroborates this sentiment saying that ‘External evaluators might come with their own bias’. A frontline practitioner working in a conflict-affected region in Asia suggests that bias could be more of an issue for external evaluators than internal ones, as they may have formed ideas about the ethnicities and the situation in their area of operation based on media reports. The need to question this unthinking deference is demonstrated in the evaluation literature, which shows that external evaluators, particularly those conducting experimental methods, are unlikely to examine their biases and axiological assumptions, considering this form of inquiry as somehow value-free (Camfield, Duvendack, & Palmer-Jones, 2014) and ‘unbiased’ (Torgerson, Torgerson, & Taylor, 2015, p. 158). Further, external evaluators can face similar pressures to internal evaluators if they desire repeat business (Patton, 2012).
Practitioners in small development non-profits see merit in both external and internal evaluators. However, they are only able to identify a small number of potential benefits to commissioning an external evaluator, such as peer-review, independent verification, and capacity building. Conversely, they identify significant pitfalls in hiring an external evaluator that they fear could result in inaccurate, inappropriate, officious, and unused evaluations that undermine their community development approach. If small non-profits deem the positives worthwhile, this suggests that a considered balance of informal, internal, and external evaluation could build evaluation capacity and be more cost-effective and robust than exclusive models (Davidson, 2005; Patton, 2012). The positives practitioners identify in relation to external evaluators align with discussions in the literature on potential external evaluator roles in evaluation capacity building activities (Volkov, 2008), and with notions of external evaluators as coaches (Ensminger, Kallemeyn, Rempert, Wade, & Polanin, 2015). Overall, these findings highlight small non-profits’ desire for the role of evaluator to be reconceptualised to include everyday forms of evaluation and everyday evaluators with strong contextual knowledge.
Evaluation and Small Development Non-Profits
Examining the ‘point’ of evaluation is central to this book, acknowledging that the ‘point’ is multifaceted and encompasses usefulness, purpose, and meaningfulness. While Chapter 6 examined usefulness, this section first discusses the purpose, and then the meaningfulness, of evaluation from the perspectives of non-profit practitioners. By investigating the purpose and meaningfulness of evaluation to these organisations, this section contributes another piece to the evaluation puzzle by determining the key aspects of importance, as well as identifying those considered unhelpful or irrelevant.
When asked to define evaluation, practitioners focus on formalised evaluation and discuss its ability to assess and reflect on program worth, value, impact, goal attainment, success, effectiveness, and importance, notions that align with the definitions of evaluation introduced in Chapter 2. The director of a pan-African non-profit mentions the formative and summative binary. A practitioner in an Australian women’s refuge sees evaluation as an audit. A board member from a South-East Asian non-profit highlights that ‘evaluation is better described as mid-point and definitely end-point…of the project…Monitoring is done continuously’. The director of an East African microfinance non-profit raises the lack of clear and consistent evaluation terminology: ‘I’m still trying to work out what exactly it [evaluation] is because you talk to different people, you get different ideas’. Echoing this confusion, the South-East Asian non-profit board member mentions the regular introduction of ‘the latest buzzword’, highlighting the need for a clear and consistent lexicon of evaluation terms.
Practitioners identify that the overarching purpose of evaluation is predominantly upward accountability to donors, implying the need to extend this research to explore donor requirements and flexibility. They identify the other two key purposes of evaluation as ensuring programs are on track and effective, and providing information and recommendations to help programs improve. Smaller themes identify further purposes: providing practitioners and donors with a ‘pat on the back’ and a warm feeling, being accountable to practitioners and community recipients, sharing good practice with similar organisations, and ensuring the program is viable. Practitioners elaborate that, although improvement should be the primary purpose of evaluation, this purpose is often unrealised. Further, practitioners emphasise that they should prioritise accountability to community recipients, a point corroborated by the development literature (Chu & Luke, 2018; Ife, 2016; Jacobs & Wilford, 2010; Kilby, 2006), although they comment that this is not always done well.
A board member from a European non-profit upholds the importance of conducting evaluation with clear purpose: ‘We’re not going to do an evaluation for no purpose. We’re going to do it to change the lives of the people we’re trying to benefit’. However, the board member of an African water and sanitation non-profit laments that, ‘I’ve seen it’s too easy for funders or for stakeholders to say, “evaluate that, evaluate that, evaluate that.” And I don’t think that they almost care what happens with it [the evaluation]’.
Utilisation-focused evaluation is built on the cornerstone of ‘intended uses by specific intended users’ (Patton, 2012, p. 82). Practitioners’ comments suggest that evaluations in these small non-profits have underdeveloped purpose agreements, resulting in nebulous notions of intended use and unclear pathways for practitioners, as primary intended users, to take up recommendations in practice. This fuels non-profit scepticism surrounding the point and worthiness of evaluation (Ebrahim, 2005).
Having established that there is a disconnect between intended purposes of evaluation, actual purposes, and potential purposes, practitioners offer their opinions of evaluation. Nearly two-thirds of practitioners from across the case study organisations make at least one brief positive comment about evaluation saying: ‘I find evaluation quite exciting’, ‘I love evaluation’, ‘it’s a great thing’, it is ‘essential’, ‘it’s very necessary’, ‘it’s good’, or, less enthusiastically, ‘fine’. Evaluation can give them direction, keep them on track with their mission, and help them improve. Despite briefly mentioning the positives of evaluation, practitioners do not discuss positives in detail. Further, nearly half of the positive comments about evaluation come from practitioners who have not been involved with an external evaluation, suggesting these perceptions relate either to internal or informal evaluation, or to their untested assumptions of external evaluation.
Most practitioners, including those who find value in formalised evaluation, identify significant weaknesses in the applicability of evaluation in its current forms to usefully, feasibly, ethically, and accurately support the work of small development non-profits. Building on this, practitioners characterise formal evaluation in its dominant form as unnecessary and wasteful: ‘All of that [non-use] has fused into my feeling about the current model of evaluation. And that maybe it’s not that effective at creating change. I’m talking about external evaluation’ says the director of an East African microfinance non-profit. ‘I think most evaluations are shit. And useless…’ states a board member from a European non-profit despondently. ‘It’s terrible’ responds a practitioner from an Australian women’s refuge. Evaluation is ‘a pain in the arse’ laughs one of her colleagues.
Examining only the responses of the twenty-three practitioners, within the eight organisations, who have commissioned and finalised external evaluations reveals mixed views of external evaluation. The most common theme, stated by all but two of these practitioners, is that the evaluation did not tell them anything new. While the vast majority feel that this lack of new information means the evaluation was of limited utility, three practitioners say that the external recommendations helped catalyse action, acting as an additional voice to motivate improvements that the non-profit had been delaying.
Despite her or her staff having said most of the comments reported in the external evaluation, a director of a rural Australian non-profit says it was helpful to have their thoughts restated clearly to consolidate their approach: ‘I mean part of the thing with the evaluation when [the university] did it was that they talked to people individually like you asking me “what are the barriers” and then brought that back to us. Because sometimes you’ll say things and then it goes out [of your head], you need it to be reported back to you’. She explains that, despite the final evaluation report restating their own comments and recommendations, it has stimulated improvement: ‘Originally, I read a hundred page evaluation and I’d said most of the stuff, or the staff had, but it’s really quite important now when they report those things we’ve said, to say “alright we’ve said this is a barrier what are we going to do about it?”’
Conversely, as mentioned above, the vast majority of practitioners in this group feel that evaluation is, at least to some extent, a waste of time as a result of this lack of new knowledge. The director of an Australian arts non-profit states that: ‘We’re basically following up on the [external] recommendations, which is what we were going to do anyway with the [program] moving forward…So they identified when our programs make the greatest impact. And we already knew that. Like, you know, they didn’t find out anything we don’t know’.
A practitioner in an urban Australian non-profit supports this sentiment: ‘There was nothing surprising or interesting in the results…Sometimes, with evaluations, the findings corroborated what we already knew, which was nice, but I don’t remember them ever jolting our thinking or catalysing change’. The arts non-profit director reiterates that: ‘Frankly, in our experience, evaluation only finds out what we already know. It’s not actually for us. It’s for external bodies who don’t trust us’. The point that evaluation is for external bodies relates back to the beginning of this section, where practitioners highlighted the overarching purpose of external evaluation as appeasing and impressing donors.
In addition to providing practitioners with information of which they were already aware, external evaluators did not understand organisational values or context and, therefore, were unable to capture the essence of the evaluand. Practitioners identify that their commitment to community development values necessitates an approach to evaluative inquiry focused on power-aware relationship building and bottom-up notions of indicators, successes, and evidence. This commitment aligns with Indigenous research methods and the importance of questioning the nature of evidence and whose evidence counts (Chambers, 2008, 2013; Chilisa, 2012; Smith, 2012). Additionally, practitioners emphasise the need for sensitive capture of change through narrative co-creation and other methods appropriate to context.
Upholding that methodological appropriateness is the ‘platinum standard’ (Patton, 2015, p. 95), a minority of practitioners warn against blindly using methods determined by donors or chosen through indoctrinated ideas of what is ‘best practice’ or ‘evidence-based’. A director at an urban Australian non-profit corroborates this point, commenting on the folly of contextually insensitive evaluation: ‘the government funded some external evaluators to come in and do evaluation of the project. I don’t know how much they paid them. In my view it was a waste of time’. As a small non-profit with very limited funding, this organisation felt disgruntled at the excessive expenditure on an evaluation that was worthless in their eyes: money that could have been better spent on program delivery.
Another leader at the same organisation supports this view: ‘Sometimes we would have resources for external evaluators, not often…And to be honest, all that happened, essentially, is that you pay them a chunk of money, they interview you, they write it up, and because it’s been done externally it’s seen to be more objective and it’s not really. I mean the reality is that those contractual arrangements, they’re by and large going to write what the organisation wants. They’re by and large going to be sympathetic to the organisation. You know sometimes they’re going to see things that aren’t evident to the people inside. But I don’t think that is often the case. By and large the problems are well known to the people inside the organisation, maybe they just don’t have the resources to fix them. So it’s a useful tool for funders and governments to say that it’s been evaluated externally or objectively but I don’t actually think that intrinsically it gives you a lot that you didn’t have before’.
A practitioner in an urban Australian drop-in centre recalls his experience with an external evaluator as positive but then highlights that it was not actually useful in practice. He suggests that: ‘I think that would be useful to do that again, to get an outsider’s perspective on what was helpful or not helpful. But it didn’t really do anything. It was just information for the board. There weren’t any changes made out of it. It was just highlighting weaknesses to them. Pretty superficial’. It is curious that he mentions it would be useful to engage an external evaluator again despite finding the previous evaluation useless.
The director of a West African non-profit draws attention to these pre-existing axiomatic beliefs, identifying that normative ideas surrounding evaluators, evaluation, and the nature of evidence have been internalised, resulting in ‘the hegemony that never gets questioned’. This was evident throughout the fieldwork for this book, as in the comment from the drop-in centre practitioner above, where aspects of the evaluation orthodoxy are accepted as ‘good’ in one sentence, and then dismantled as irrelevant and unhelpful in the next. This links to previously discussed new public management notions that have enshrined evaluation as good and common sense, like ‘motherhood and apple pie’ (Garbutt, 2013, p. 1). This ideology bears resemblance to the theory of cultural hegemony, whereby dominant norms and values are propagated to such an extent that they are unquestioningly accepted as the status quo by the general populace (Fonseca, 2016; Gramsci, 1930/1992).
Not content to accept evaluation’s exalted position, the director of a rural East African non-profit claims that evaluation ‘limits things’. She describes evaluation as limited in its ability to clearly see and elucidate what is occurring. She sees it as a shallow, surface-skimming exercise that only succeeds in disrupting their programs and usurping precious resources, a sentiment echoed by practitioners in other case study organisations. The director of the West African non-profit points out that evaluation focuses on the tiny restricted space occupied by the program or project and fails to address the impact of external context and other factors. She highlights the importance of thinking about histories of colonialism, warfare, gender inequalities, custom, culture, ethnic tension, and other elements that impact the effectiveness and sustainability of development interventions. Expanding on this, she says that evaluation ‘only looks at, say, five per cent of the whole complexity which is only based on the metrics of the project set out by what the projects define as real or not real. And then the ninety-five per cent is all the politics, all the social and economic sort of complexities, and I can’t see organisations going deep into that’.
This insightful comment links to multiple situations, observed during my time with the case study non-profits, that could benefit from in-depth existential evaluation to examine the intended and unintended impacts of their work much more deeply. As an example, I had a conversation with the director of a South Asian non-profit during a visit to a remote village. We were watching the mannerisms and actions of the young girls in the village, who were so demure and quiet in comparison to the outgoing girls who board at one of the non-profit’s education programs. The girls in the village were small and skinny, sitting with their hands in their laps and looking at the ground. I spoke to them and they gently nodded. The director pointed out this stark difference in the reactions of the girls here compared to the girls at her school. She commented that she sometimes questions the work they are doing to develop outspoken, educated, critically thinking girls who live in a patriarchal society. She had noticed that there has been conflict in the villages when the empowered girls return home for the school holidays; conflict with their parents, village elders, and other community members because ‘the girls no longer know their place’. She wondered what deeper impacts these empowered girls will have on their communities in the long term. Will it be positive? Or will the empowerment of a few lead to conflict that displaces these girls from their communities and culture?
Recognising the need for more depth in evaluation, a board member from a South-East Asian non-profit indicates that evaluation could do more, as, ‘It doesn’t live up to its potential when it’s done primarily as a box-ticking exercise’. Aligning with the complexity of non-profit work identified as challenging the ability to conduct evaluation outlined in Chapter 5, a practitioner from an Australian women’s refuge argues that the evidence-based movement is missing ‘something really valuable from the programs’ as it assesses the programs using the limited techniques acceptable to the evaluation orthodoxy.
Similar to the ideas of an existential evaluation above, the director of an urban Australian non-profit suggests that it could be more useful, ‘if you could get the money that’s meant to be used for evaluation and actually use it for research. That would be a more in-depth way of looking at areas that haven’t been researched. I think that would be more useful. Sometimes an evaluation is an expensive way to restate what we already know. And I guess I also don’t necessarily see a lot of impacts of an evaluation. Even if an evaluation is really good it doesn’t mean that you’re going to get more funding. So I wonder about the impacts’.
While the majority of practitioners clarify the importance of accountability and reflection that evaluation can catalyse, nearly half remark that evaluation as it is currently practised has become, in the words of a board member from a European non-profit, ‘some sort of mantra’ which often fails to meet expectations. ‘I guess I’m in two minds about it’ comments a practitioner from a South-East Asian non-profit. The director of a rural East African non-profit emphatically states that, ‘I don’t think the outcome of doing more of that specific type of [formalised external] evaluation has enough benefit to warrant doing it’.
A board member from a South-East Asian non-profit posits that the premise of evaluation is ‘essential’. However, he comments further that, ‘I’m not convinced it’s always well done or done well, or utilised, but I think it is absolutely essential’. Others corroborate this view, suggesting that evaluations must be ‘done well’ and provide lessons that lead to improvements. Practitioners do not specify what a ‘well done’ or ‘done well’ evaluation would entail, although the evaluation literature clarifies that methodological and technical precision is no panacea, highlighting the importance of factors such as user engagement and context-sensitivity (Patton, 2012; Stufflebeam & Coryn, 2014; Yarbrough, Shulha, Hopson, & Caruthers, 2011). Like the drop-in centre practitioner above who stated that it would be good to have another external evaluation despite the uselessness of the previous one, the board member’s comment that evaluation is essential despite sometimes poor execution and utilisation warrants further investigation. What aspects of evaluation are essential to small development non-profits and how can they enact these essential aspects? Chapter 8 deliberates these questions further.
Highlighting the need to critically appraise and evaluate evaluation in small community development settings, the board member of a European non-profit declares that he is ‘pissed off with evaluation’. He explains that he is ‘pissed off with all the talk about evaluation’ and the fact that ‘We’ve accepted unqualified assumptions about evaluation’. This comment is part of a golden thread that weaves throughout the data collected for this book: that a pervasive evaluation orthodoxy has infiltrated the minds and practices of many donors and practitioners in the non-profit sector, who continue to conduct evaluation in standard ways despite questions of relevancy and utilisation (Eyben & Guijt, 2015; Lane, 2013; Schwandt, 2005). The board member highlights that he is ‘less interested in knowing if something is working - but knowing why, in what circumstances, and for who. That’s what’s important’. These recurring concerns about the value and usefulness of evaluation underline the importance of thinking more deeply and critically about the evaluation orthodoxy’s applicability to small development non-profit settings.
In summation, practitioners offer conflicted views on evaluation. They tend to agree that evaluation is essential but, paradoxically, that it is not generally useful (an opinion supported by the findings of Chapter 6). This book asks: if it is not useful, why is it essential? Practitioners recognise the importance of accountability and the value of information that can help them improve their programs and evidence their approach, yet they are unconvinced that formalised evaluation is fulfilling these vital objectives. They suggest that evaluation needs to examine their impact more deeply, employing research or deeply existential evaluation techniques instead of surface-skimming inquiries. Further, practitioners highlight the importance of embedding community development values throughout evaluative practice, something that is often bizarrely absent despite being listed first in the community development standards.
Non-Profit Led Innovations for Evaluation
Practitioners have thoughtful ideas on how they could improve the relevancy and utility of evaluative processes. These ideas divide into three overarching themes that rework traditional notions of evaluation to more appropriately support small non-profits’ challenges and strengths, and enhance their community development aspirations. The themes centre on capacity building and collaboration, strengthening organisational processes and critical thinking, and simplifying upward accountability.
Surprisingly, when asked an open-ended question about ideas to improve evaluation in small development non-profits, only one practitioner, a director in a rural Australian non-profit, suggests hiring an external evaluator to do it for them. Instead of this traditional role for external evaluators, practitioners mention that external evaluators could provide non-profits with peer-review and evaluation capacity building. Additionally, practitioners highlight the need for evaluation training, another task that could potentially fall to external evaluators. Practitioners suggest evaluators’ roles shift from directive ‘doers’ to non-directive capacity builders, in line with community development practitioner roles discussed in Chapter 3. The board member of a European non-profit, who also works as an external evaluator, supports this saying, ‘We have the expertise but maybe that expertise should be in the translation of stories and helping people to learn’.
Practitioners centre on the premise that external evaluators would be of greatest value as mentors and coaches, a concept gaining increasing momentum in the evaluation capacity building literature (Clinton, 2014; Ensminger et al., 2015; Labin, Duffy, Meyers, Wandersman, & Lesesne, 2012; McCoy, Rose, & Connolly, 2013; Naccarella et al., 2007). Festen and Philbin (2007) concur, recommending that small non-profits commission external evaluators to help them identify appropriate evaluation approaches and advise them on how to collect and use their data. This aligns with the community development idea that technical advice should be sought by those in need of the assistance and delivered in a short-term way that upskills them to do it themselves (Kenny & Connors, 2017).
As identified above, there is a need for more training in evaluation-related topics. While utilising an external evaluator to provide training is one option, small non-profits’ limited financial resources may require more innovative solutions. Practitioners from across the case study organisations suggest solutions such as ‘piggybacking’ on training being delivered at larger non-profits, sourcing free online or face-to-face training, or sending one employee to training who can train the others on their return. A practitioner in an Australian drop-in centre suggests that small non-profits could tap into extant knowledge in their own teams and conduct intra-organisational training where practitioners share their skills. While some identify online information as a valuable resource for evaluation in small non-profits, others remark that the amount of information is overwhelming. This suggests another area where an external evaluator could offer short-term support to help navigate relevant online resources.
While not always the case, there are existing networks between small organisations that could provide a starting place for the increased collaboration practitioners identify as useful. Practitioners propose that collaboration with other non-profits could help improve evaluation, an action shown to hold promise for small non-profits (Charity Commission, 2010). Non-profits with similar focus areas could provide peer-review of programs and evaluative processes, offer secondary consultation and mentorship, share resources, or work in partnership on an evaluation. This last idea aligns with approaches to shared measurement, which have been heralded as an effective way of jointly evaluating community development programs (Grieve, 2014). Practitioners identify sectoral networks and peak body events as good places to cultivate these relationships and circulate information about their work. Highlighting the benefit of these cross-organisational relationships, practitioners identify that collaboration could reduce time-wastage by sharing experiences and helping each other navigate the morass of the non-profit sector.
In addition to collaboration with non-profits, practitioners suggest peer-review and mentorship could come from other sources such as donors, government departments, peak bodies, or universities. Partnerships with universities could result in symbiotic relationships where small non-profits receive evaluation assistance from professional researchers while university staff have access to new data and research sites. While identifying students as possibly helpful for improving evaluative processes, non-profit practitioners are unsure about their commitment and work quality.
Recognising small non-profits’ restrictions around resourcing, practitioners mention innovative uses for funding through collaboration with these other institutions. They suggest it would be helpful if a peak body, government department, or group of non-profits set aside a pool of money for evaluation. These funds could cover evaluation training or capacity building activities, or they could fund a consortium of non-profits that share a long-term evaluator.
Ideas around capacity building and collaboration outlined above offer pragmatic suggestions for how small non-profits could utilise outsiders to enhance evaluation. The next overarching theme focuses on ways that they could rethink internal organisational practices. Practitioners highlight the importance of cultivating critical and evaluative mindsets, identifying this as an area for continual improvement. While this links with the evaluation capacity building discussed above, practitioners speak of evaluation capacity building as support to understand and use evaluation tools, develop logical frameworks and theories of change, and learn about methods of inquiry. When addressing the need for critical and evaluative mindsets, by contrast, they describe scheduled time to unpack and critically analyse their programs and approach, with a board member from a South-East Asian non-profit proposing that, ‘changes that are talked through are more likely to be implemented, I suggest, than most written evaluations’.
This evaluative space could take many forms in each organisation. As well as ongoing informal discussions, practitioners highlight the need for dedicated times for evaluative discourse at events such as strategic planning days, board meetings, network meetings, and team meetings. Further, practitioners identify the value of focus groups with community recipients, and of topical interest groups between non-profits, where people can critically analyse particular topics and reflect on program impacts (good or bad) hidden beyond the scope of standard evaluations. Linking to the reimagined role of external evaluators as mentors, practitioners suggest that evaluators could help them develop tools to capture the outcomes of these critical thinking sessions in rigorous ways. These could include organisation-specific report templates, surveys, interview question prompts, and fieldwork journal headings to guide notetaking.
Recognising that sustaining critical and evaluative mindsets is crucial, practitioners suggest that these could be encouraged by giving interested colleagues responsibility for specific evaluative tasks, an approach supported in the evaluation literature on small non-profits (Festen & Philbin, 2007). Further, practitioners highlight the importance of ensuring non-profit executives are on board, as they hold power over organisational diffusion of evaluative thinking and action. This aligns with Patton’s (2012, p. 103) identification that internal evaluators require ‘high status in the organization and real power’ for their evaluations to be ‘useful and credible’. While it may be easier when driven from the top, people championing evaluation can be influential wherever they sit in an organisation (Rogers & Gullickson, 2018).
In addition to building organisational evaluation capacity, a fifth of practitioners interviewed independently raise the importance of including community recipients in evaluation, identifying it as an area where many non-profits fall woefully short. They suggest that recipient inclusion is often tokenistic and, in the words of one board member, ‘fluffy’. A director of an urban Australian non-profit advocates that non-profits need to include ‘people in a deep way’: ‘You know, usually people are asked what they think but it’s often not very rigorous or very real’. This highlights the irrelevance of the scientific approaches commonly advocated by the evaluation orthodoxy, which rarely focus on co-inquiry or genuine inclusion of community recipients (Goodkind et al., 2017; Trickett, 2011).
Additionally, excluding, or superficially including, recipients in evaluative processes disconnects community development non-profits from the principles and theoretical foundations that guide them (Lennie & Tacchi, 2014). In such cases, the evaluation approach contradicts and undermines the program objectives, an inconsistency that breaches the first community development standard of imbuing organisational values throughout all processes (Ross et al., 2018). To address this disconnect, practitioners suggest that community recipients should be actively involved in evaluation design, implementation, and analysis. This promotes recipient ownership of evaluative processes, which is shown to enhance evaluation utilisation (Johnson et al., 2009; Patton, 2012). Further, it reinforces organisational objectives, such as enhancing empowerment through recipient inclusion (Fetterman, Kaftarian, & Wandersman, 2015). This approach also makes logistical sense, as recipients are well placed to collect baseline data, conduct needs assessments, and gather and interpret research and evaluation data.
Technology and multimedia offer a new world of possibilities for recipient inclusion and enhanced evaluation quality and usefulness in community development settings (Roberts & Muniz, 2018). A practitioner at a South Asian utilities non-profit comments that they use social media apps like WhatsApp as evaluative forums to facilitate discussion and that this could be a useful platform for other small non-profits, particularly those operating from diverse and geographically distant sites. Others discuss the value of methods that make use of smartphones such as participatory video journaling and photovoice reporting. Practitioners from across the case study organisations suggest that the evaluative potential of these methods is underutilised and could provide a means for greater inclusion of recipient voice, a finding supported in the development literature (Bau, 2015; Roberts & Muniz, 2018; Sutton-Brown, 2014).
Clearly defined purposes and intended uses are vital for evaluation utilisation, highlighting the need for a strong strategic plan that is well understood and accepted by organisational staff (Patton, 2012; Yarbrough et al., 2011). As well as a clear strategic plan document, this includes complete and current position descriptions for staff roles, an accurate mission statement, and an agreed understanding of operational philosophy and guiding principles. Practitioners comment that, in the words of the director of an Australian homelessness non-profit, they are ‘pretty much stumbling around in the dark’, as some do not have clearly defined job roles or an articulated organisational strategic direction. Non-profits could improve strategic clarity through discursive sessions between practitioners and recipients that unpack their theory of change in a deeply critical and iterative way, an extension of practice that can help capture the complexity within which these non-profits operate (Ebrahim & Rangan, 2010; Funnell & Rogers, 2011; Green, 2016). Then, remarks a board member from a European non-profit, evaluation becomes ‘a story of the program and whether it is impactful’. This represents a change in mindset for those non-profits who operate according to donor- or executive-defined objectives tied to a linear cause-and-effect logic such as a logical framework, as is common practice in evaluation (Fushimi, 2018; Markiewicz & Patrick, 2016).
The need for strategic clarity links to notions of building rigour into informal processes as a way to improve evaluative processes. As discussed in Chapters 5 and 6, the case study organisations widely practise and utilise informal evaluative activities. Some of these practices are largely ad hoc, an issue that practitioners suggest limits their potential. This links to comments above regarding the development of tools to guide and capture the outcomes of reflective discussions. Additionally, building rigour into informal practices includes ensuring small non-profits have structured monitoring processes that enable routine data gathering, and manageable and accessible data storage that promotes use and facilitates evaluation. Part of this includes having a technologically current and tailored database to improve the rigour of monitoring, which provides a vital base for evaluation.
While ideas regarding collaboration and evaluation training gently rethink dominant evaluation approaches to enhance their applicability to small non-profits, nearly a fifth of practitioners involved in the research for this book offer more radical alternatives to evaluation. Despite identifying monitoring as ‘essential’, when asked how evaluation could be improved, one director of a West African non-profit retorts that the question ‘assumes that evaluation is, like, good’. A board member from a European non-profit, who also works as an external evaluator, comments that there needs to be a ‘shift’ in evaluation that would result in evaluation being considered from a different angle. He says of evaluators: ‘We’ve worked so hard to become accredited – but I think we’ve worried more about the science and not enough about the community context. Maybe we’ve taken a top-down approach when it should be a bottom-up approach’.
Identifying standard evaluation as shallow and oppositional to the work conducted by non-profits, the director of the West African non-profit concludes that: ‘Evaluation generally is a mechanism that feeds the development industry machine but it falls dramatically short of being able to really reform problems with the development industry and how it currently produces or upholds western hegemony’.
As discussed in the previous section, research and deeply existential evaluation could facilitate this shift in the evaluation orthodoxy through critical examination of non-profits’ positionality and aspects of program delivery. Conducting evaluative research that rethinks or deeply contemplates non-profits’ premise, mission, approach, and context could usefully ignite a critical consciousness, helping them consider aspects of their work that they may have failed to recognise or not had time to examine. This post-development inspired approach to evaluation takes the idea of nurturing critical and evaluative mindsets to the next level, unpacking the ‘unqualified assumptions’ that surround evaluation of community development evaluands.
The final theme addresses improving and streamlining evaluation for donors. Interestingly, the vast majority of suggestions for improving evaluation in small non-profits focus on the organisations and community recipients as the audience for evaluation, despite findings, particularly in Chapter 6, showing that the bulk of formal evaluation is intended for donors. This suggests that practitioners feel evaluation is overly focused on upward accountability at the cost of evaluation’s other purposes: to improve, check effectiveness, and be accountable downward. When specifically discussing upward accountability, nearly a fifth of practitioners interviewed consider ways to make it quicker and more efficient by utilising templates to provide donors with short, acquittal-like précis of funding allocations and basic outputs and outcomes.
The non-profit led ideas presented in this section capture what practitioners consider important in evaluation. Rather than necessarily conducting formal internal or external interval-based evaluations, practitioners highlight the value of developing skills in evaluation that can be adapted for context through collaborating with others, engaging in deep critical reflection, including community recipients, and building rigour into their informally evaluative processes. Those who consider upward accountability discuss simplification to ease administrative burdens. This suggests that a potentially appropriate way to be accountable, ensure effectiveness, and improve, is through simplified acquittal-like evaluations for upward accountability while the real evaluative work is built into an investigative research plan and discursive informal evaluative activities.
A Need for a New Way
This chapter, and the preceding results chapters, have built on the paradox identified in Chapter 1 whereby evaluation is simultaneously exalted and underutilised. While Chapters 5 and 6 examined the practicalities of how evaluation is conducted and used in small development non-profits, the current chapter presented practitioner perceptions surrounding evaluators, evaluation, and ideas for cultivating more relevant and useful evaluative processes in these settings.
As predicted by theories of hegemony (e.g. Bourdieu, 1977; Gramsci, 1992), practitioners continue to espouse that evaluation is ‘essential’ despite evidencing that formalised evaluation provides nothing new and is largely unused by its primary intended users: the practitioners in small non-profits. However, when probed more deeply, practitioners explain that it is not formal interval-based evaluation reports that are essential, it is critically reflective processes of evaluative monitoring that enable program improvement, effectiveness checks, and accountability (both upward and downward). Despite the essential nature of this concept, practitioners lament that traditional forms of evaluation rarely fulfil their purpose, warranting an alternative approach.
This chapter identified that standard evaluation regularly disconnects from the case study organisations’ values. Practitioners suggest that this necessitates a turn to community development, whereby evaluative practice incorporates these values in alignment with the first community development standard. This includes prioritising a focus on contextual sensitivity and reworking evaluators’ roles as experts into roles that complement non-directive and shared leadership approaches. Further, practitioners highlight that everyone should be an evaluator, demonstrating a desire to invert the hierarchy of evidence and instate bottom-up approaches that hear and actively include community recipients who are typically silenced and unheard. As demonstrated in Chapters 5 and 6, much of this inclusion of practitioner and community recipient perspectives in evaluative processes is occurring at an informal level. Practitioners suggest developing these informal mechanisms to build rigour and collect data currently being lost. The community development and evaluation standards, presented in Chapter 3, provide clear guidelines to help small non-profits navigate this process in alignment with organisational values and the accelerating expectations for them to conduct rigorous evaluations.
Formalised evaluation as it is currently practised in small non-profits is seen as surface-skimming and essentially repeating practitioners’ views back to them in a manner that offers nothing new. Practitioners express a wish to investigate their impact with greater depth through existential evaluative research that seeks to examine non-profits’ positionality in the cultural, political, historical, and environmental context in which they operate. Again, the community development and evaluation standards could help structure this deeply thoughtful approach.
Rather than fulfilling the multiple purposes of accountability, effectiveness, and improvement, standard evaluation typically serves to satisfy donor accountability demands. However, when questioned about improving the relevancy and utility of evaluation, upward accountability was almost an afterthought for practitioners. Future-thinking evaluative processes transform evaluation’s audience from predominantly donors to predominantly non-profit practitioners and community recipients. This converts evaluation from a static document to a living process, from a product to a process, owned and driven by people with everyday experience of the context and with an emic understanding of the changes they seek to realise.
While practitioners suggest that standard evaluation is, ultimately, a tool for upward accountability, in Chapter 6 they voiced suspicions that evaluation as it is currently practised is not utilised by donors, suggesting a need to explore what donors of small development non-profits actually expect and require, and for what purposes. Examining the ‘point’ of evaluation in this chapter has uncovered a divide between upward-focused, interval-based standard evaluation and bottom-up informal evaluative processes that incorporate monitoring and research. Standard top-down evaluation has shown itself to have little meaning or utility for non-profit practitioners except to appease donors. Practitioners were unsure of any further utility, citing programs that remained unfunded despite technically sound evaluations demonstrating positive results. On the other hand, the informal processes regarded as useful and meaningful to non-profit practitioners grew from the bottom up, with ownership at the organisational or community recipient level rather than with donors.
Despite fifty years of research on evaluation utilisation, the evaluation orthodoxy continues to drive approaches that engender limited utilisation. It is time these findings triggered a radical conversation about assumptions surrounding evaluation’s worth, instead of using proof of low use to promote tweaked amendments to ineffective designs. This conversation can be continued and reworked in each organisational context to enable relevant and utilised processes for upward and downward accountability, effectiveness checking, and improvement, ensuring that these evaluative tasks do have a point. The final chapter of this book unpacks these findings against the literature to argue that current approaches to evaluation in small development non-profits are resulting in parallel tracks of evaluation. One track is playing the game to appease donors while the other seeks alternative routes to improvement, effectiveness, and downward accountability.