The field of positive psychology (PP) has fostered a genuine belief that well-being can be enhanced through deliberate training. This is reflected in conceptual frameworks that identify areas of focus for promoting well-being. For example, the Architecture of Sustainable Change model asserts that there is scope to enhance happiness levels through volitional activities which focus on lifestyle behaviours (Lyubomirsky, Sheldon, & Schkade, 2005). Moreover, leading scholars within PP have seized the opportunity to develop and empirically investigate the effects of a number of positive psychology interventions (PPIs) designed to enhance well-being. A key study (Seligman, Steen, Park, & Peterson, 2005) examined the effects of five positive interventions on well-being and depression in comparison to a placebo control condition. The findings indicated that three of these interventions not only enhanced well-being post-intervention, but also decreased symptoms of depression. The authors concluded that “Positive interventions can supplement traditional interventions that relieve suffering and may someday be the practical legacy of positive psychology” (p. 410). From this point forward, the development and dissemination of well-being programs incorporating positive interventions proliferated across a number of contexts.
One growth area that has been particularly striking is within a subfield of PP referred to as positive education. Positive education involves applying PP and well-being strategies with young people in schools (Seligman, Ernst, Gillham, Reivich, & Linkins, 2009) and draws on best practices in education (e.g., pedagogy, curriculum development). It is anticipated that positive education can help to address many growing concerns about the well-being of young people, such as the high incidence of mental illness in developed countries (Patel, Flisher, Hetrick, & McGorry, 2007) and reports that mental disorders are the single greatest health burden for young people (World Health Organization, 2014). If early intervention does not occur, mental disorders can adversely impact the day-to-day functioning of these young people into and throughout their adult years (Sawyer et al., 2012), creating poor life quality, a prolonged burden on health care systems, and a draining of finite community health resources.
Adopting an early intervention approach to health promotion is an important feature of positive education, as it is typically a universal program suited to students irrespective of their mental health status, and can be delivered at school during the critical formative years of pre-adolescence and adolescence. Positive education, due to its universal nature, is also more widely accepted than traditional clinical mental health services, which tend to be socially stigmatised. It is not surprising, therefore, that programs focusing on social and emotional learning, resilience, and more explicit PP frameworks have been developed and are burgeoning. However, this significant and immediate need has resulted in the widespread and, arguably, premature dissemination of programs. Although there are reports that school-based well-being and positive education programs are effective (e.g., Waters, 2011), these programs often lack the level of empirical evidence, and the fine-grained detail about the conditions under which they best achieve their goals, needed to warrant their implementation. With finite resources available to schools, it is important that they select evidence-based interventions if they wish to achieve maximal benefits with the least expense. This objective can be compromised when the specific effects of a program are not known and the factors and processes associated with the program content and delivery that facilitate optimal outcomes have not been fully identified.

To date, research on well-being programs has been concerned primarily with intervention outcomes, namely whether or not the interventions work in achieving the target health outcomes, such as increasing well-being and reducing mental illness. While it is often necessary to measure the rate of change resulting from an intervention, this in itself is not sufficient. There are a number of enabling and disabling factors that can influence the effectiveness of a program, and these warrant further consideration for a more nuanced understanding of the program’s utility. Yet these factors are rarely measured or considered in positive education evaluation research, or at least they are seldom reported. Hone, Jarden, and Schofield (2015) supported this claim when they systematically applied the RE-AIM framework to studies evaluating PPIs. This framework evaluates the reach, efficacy, adoption, implementation, and maintenance of a program or intervention. In their evaluation of 40 PPI effectiveness trials, these researchers found that although details around efficacy and adoption were reasonable, information on implementation and maintenance was sparse. For instance, the costs involved in developing and maintaining positive interventions are rarely presented. This poses a problem in determining whether the interventions and programs are ready or feasible for large-scale dissemination in the real world.
The aim of this chapter is to present the methods and factors that need to be considered in future evaluations of school-based well-being and positive education programs if the field is to advance and if recommendations for the widespread implementation of positive education are to be justified. The hope is that these guidelines will encourage a greater number of well-being scholars and practitioners to strive for and employ gold-standard practices in their work so that successful and lasting impacts can be achieved. First, based on the literature reviewed, a summary of limitations and corresponding recommendations for best practice in developing, disseminating, and evaluating well-being programs targeting young people will be presented. Then a research case study conducted by the authors with young people will illustrate both the benefits and challenges associated with developing, delivering, and understanding the effects of positive education.
One key consideration is the quality of the work undertaken in developing positive education interventions or programs. Seldom is the development process adequately described in publications examining the effectiveness of positive education programs. For example, information about the program goals and anticipated outcomes, the pedagogical approach employed, and the level of input or feedback from facilitators or recipients about the program content and structure is difficult to find in published papers. Although one reason may be that limited journal space does not permit this level of detail to be presented, this needs to be reconsidered by editors, particularly in the formative stages of a new field or approach where the foundation interventions are still being developed and refined. Another reason may be that the program developers and the program evaluators are often separate groups. Hence, when a paper is written up by the researchers, there is often insufficient information about the program content and dissemination. This shortcoming underscores the importance of partnership and knowledge exchange between these two groups.
There are some key components of the development phase that will be singled out specifically in relation to positive education. The first concerns the program content and whether the intervention is focused on a single component (e.g., gratitude) or multiple components (e.g., gratitude, strengths, hope, meaning, and mindsets). It is important to understand what exactly is being evaluated so that the active ingredients can be pinpointed through the right methods, such as the use of recipient ratings and interviews or focus groups. The second concerns the intent or goals of the program. What is the program expected to change, and through which mechanisms is this likely? For example, a strengths program may be expected to enhance strengths knowledge and use, which in turn may lead to greater engagement and flow. The third factor relates to the extent to which the program has a theoretical or empirical basis. The largely atheoretical nature of positive frameworks such as positive psychology has been noted as a limitation (Brink & Wissing, 2012; Parks & Biswas-Diener, 2013) and has implications for the premise on which interventions are developed. It is useful to note the basis from which the program has been formed and whether this includes theory and/or empirical data. Where this is possible, the integration of theory and practice is recommended (Brink & Wissing, 2012). Moreover, if the program includes multiple components, why was each component included, and what is the rationale for the sequencing of the components?
Furthermore, the impact of systems-level (or “person-in-environment”) factors also warrants attention, as noted by those in the field of Positive Youth Development (PYD; Catalano, Hawkins, Berglund, Pollard, & Arthur, 2002) and Waters (2011) in her review of positive education programs. Waters underscored the importance of a school-wide approach to positive education and the need to consider factors such as the curriculum, the broader teaching and learning context, and organisational factors related to structure, policies, and processes. This information is critical as the sustainability of the program is often dependent on a supportive infrastructure which includes champions, enforced policy, and adequate resources. Some detail about who specifically is developing the program and whether there is “buy in” from program facilitators and/or recipients seems relevant. A process of consultation is often warranted and sometimes program co-development is also advantageous – as promoted by proponents of PYD – whereby the focus shifts from thinking of young people as passive program recipients to seeing them as valuable resources who can be active advisors and partners. Given the importance of program uptake, reporting information about stakeholder support and input, particularly from the targeted program recipients, seems highly relevant. Table 32.1 provides an overview of some key limitations associated with the development phase of school-based positive interventions.
How a program is delivered and by whom can influence its success. It is therefore important to report these details in any published program evaluations. It is also critical to assess and report on the extent to which the program was delivered in accordance with the “script” or set of instructions. This is referred to as program fidelity. Program fidelity is often identified as a key factor and concerns specific questions about whether or not the program was delivered as planned with respect to content, staffing, and timing; whether recipients actually received the planned intervention; and their level of engagement with the program.
Table 32.1 Key limitations associated with the development phase of school-based positive interventions

- Incomplete analysis of program goals from the perspective of all relevant parties (e.g., management, content developers, facilitators)
- Lack of consultation with facilitators to gauge program feasibility, level of training needed and provided, understanding of content and desired pedagogy, use of resources such as program manuals, and receptiveness to program
- Lack of consultation or co-creation with program recipients to ensure real world relevance and anticipated level of engagement
- Insufficient data about school/organisation readiness to receive the program (e.g., support systems, previous experience, commitment, champions)
- Insufficient detail about the teaching philosophy adopted

Table 32.2 Key limitations associated with the implementation phase of school-based positive interventions

- Incomplete reporting of study methods in published works
- Who were the facilitators and what level of training did they have?
- Lack of detail about the monitoring of program frequency and duration
- Insufficient information about level of program uptake by schools/organisations, facilitators, and recipients in terms of their respective roles during the program delivery
- Lack of feedback from recipients about program experiences with respect to the full range of program components
The dissemination phase is highly dependent on facilitator characteristics, such as who is delivering the program (e.g., background and training), and program qualities, such as the dosage (frequency and duration) and delivery format (face to face, on-line, individual, group). All these factors can influence the program’s success. Meta-analyses on PPIs have found that individual as opposed to group formats and interventions of a longer duration are more effective, as there is greater opportunity to practice over time and form good habits (Bolier et al., 2013; Sin & Lyubomirsky, 2009). Sin and Lyubomirsky also found that programs with multiple activities (components) are more effective than those which involve engaging in a single activity.
Another important factor associated with the program dissemination phase concerns the choice of facilitator: Should this be the teacher, mental health professional, young person, or (near) peer? For the Penn Resiliency Program, it was found that the strongest effects are obtained when the facilitators are the program developers, members of the research team, or graduate students who have been trained and are closely supervised by the program developers, rather than individuals external to the research team such as teachers, health professionals, or community providers (Gillham et al., 2007). This has prompted a need to carefully develop, trial, monitor, and assess different dissemination strategies so that more informed decisions can be made about using the most appropriate and efficacious facilitators. Table 32.2 presents an overview of some key limitations associated with the implementation phase of school-based positive interventions.
Program outcomes are the focus of most evaluation studies; however, the methods used to assess program success via outcomes also have their shortcomings. For example, the limited way in which key outcome variables have been operationalised and measured is noteworthy. Well-being is often measured using positive affect and life satisfaction scales because these are most familiar and practical to researchers, rather than via methods that would provide the most accurate and comprehensive account. Consequently, this has led to a reliance on self-report questionnaires which assess cognitive and affective aspects of well-being. However, well-being is multi-faceted and comprises other dimensions beyond cognition and affect. For example, the biopsychosocial model developed by Engel (1977) asserts that biological, psychological (e.g., thoughts, emotions, behaviours), and social (socio-economic, socio-environmental, cultural) elements all contribute to healthy human functioning. Despite recognition of the importance of multiple factors for good health and well-being, seldom are behavioural, environmental, or biological well-being indices used to supplement or corroborate self-report questionnaire data.
Although subjective measures are ideal for capturing the phenomenology of well-being, our emotional states can also be influenced by events without our conscious awareness, and these changes in emotional state can in turn influence our judgments, memories, and perceptions of others. Implicit measures (which can be both physiological and cognitive task-related) can often capture subtle changes in affective states of which individuals are unaware or which, for reasons such as social desirability, they are unwilling to voluntarily report. Hence, these additional measures can provide valuable supplementary information about well-being that cannot be detected via self-report measures, and they support the use of biological indices of well-being.
Another often overlooked point relates to the sustainability of program effects. It is important to distinguish interventions that temporarily enhance well-being from those that have longer-lasting benefits for recipients. As with all subjective states, well-being rests on biological foundations. Attempts to modify well-being must therefore consider the underlying mechanisms supporting this subjective state. The success or otherwise of well-being interventions will ultimately depend on the plasticity of these systems. Interventions that permeate deeply to the biological foundations of well-being are likely to have more sustained effects (see Rickard & Vella-Brodrick, 2014). The inclusion of biomarkers of well-being enables the identification of interventions which have an impact on these substrates and, subsequently, those which may have greater longevity. Some leading examples of multi-level assessments of well-being involving psychophysiological or biological indices include work by Ryff and Singer (1998), Davidson et al. (2003), and Kok et al. (2013). In addition, evaluation studies need to employ longitudinal designs with repeated measures over time. This will enable the detection of delayed (“sleeper”) effects as well as providing an indication of the point in time when any beneficial effects are no longer present.
Another issue concerns the overdependence on quantitative methods and under-use of qualitative methods in well-being research, particularly within PP (e.g., Ong & van Dulmen, 2007). Although there have been recent initiatives to include qualitative methods through the use of open-ended survey questions (e.g., Delle Fave, Brdar, Freire, Vella-Brodrick, & Wissing, 2011) or interviews and photographs (e.g., Steger et al., 2013), in the main there is a strong focus on quantitative methods which rely on self-report questionnaires.
More thought is needed about whether the use of mixed methods would be advantageous to progress practical knowledge in the field. The use of mixed methods introduces many benefits to understanding the well-being of young people. Qualitative data can extend quantitative survey results by providing details about the human experience and can add clarification and depth to the statistics obtained through quantitative methods. Open-ended questions and focus groups enable the perspective of the respondent to be captured and better understood. They also empower participants to voice their inner thoughts. Multi-level assessments of well-being provide information on program effects relating to a wide range of well-being dimensions, including psychological, social, behavioural, and biological. In addition, using multiple methods helps to minimise the deficiencies inherent in any one method.
Table 32.3 Key limitations associated with the outcome phase of school-based positive interventions

- An over-reliance on quantitative methods for establishing outcome success
- The exclusive use of self-report questionnaires in the measurement of well-being
- A focus on short term indicators of well-being rather than more long term effects
- Insufficient integration of process data when analysing and interpreting the study findings
Mixed methods can be used to corroborate data about the same construct using different methods; however, there are numerous other functions for mixed methods. The use of mixed methods enables researchers to go beyond “cause and effect” to also address questions about the “why and how” of some of the observed outcomes. In other words, mixed methods can provide complementary information through the process of strategic probing and elaborating on key findings (e.g., to explore some of the quantitative survey results via an interview). Basically, the use of both qualitative and quantitative methods can provide a more complete knowledge base from which to inform policy and practice (Burch & Heinrich, 2015).
Limitations of the mixed methods approach are that it can be highly labour intensive for the researchers, can be onerous and demanding for the participants, may require additional specialist skills to develop and analyse, and can add considerable expense. Moreover, mixed methods often require numerous phases of data collection. It is important, therefore, that funding bodies and research teams take this into account and develop strategies for promoting these large scale studies. Table 32.3 provides an overview of some key limitations associated with the outcome phase of school-based positive interventions.
Using the following case study of a research project evaluating well-being programs run by the Reach Foundation, we will highlight ways in which the three phases of program development, implementation, and outcomes can be addressed. Both the advantages and challenges will be discussed, but ultimately we wish to demonstrate that even a modest research study can adopt some of these varied processes to collect rich and useful data that can help well-being practitioners at the ground level. The case study will focus on the methods and their implications rather than on the actual study findings. The three categories of program development, implementation, and outcomes are not intended to be exclusive or exact. It is acknowledged that there is an interdependence and blurring across these three phases, and some of the issues raised may in reality fall under multiple categories despite being classified under only one category in this chapter.
The Reach Foundation (hereafter “Reach”) is a national youth organisation in Australia providing community and school-based programs for young people. Their broad aim is to promote the mental health and well-being of young people. An independent team of researchers from two reputable universities conducted a study to assess the short-term well-being and mental health outcomes of three Reach programs selected for inclusion in this project.
The study was structured into several separate data collection events. The specific aims of the research project were to assess the short-term well-being and mental health outcomes of the three selected Reach programs and to provide Reach with information to guide future program refinements.
In sum, Reach sought to gain an in-depth understanding of how program participants were responding to their programs and to acquire information to guide future program refinements. What now follows are details about the methodology employed for this research project. These are presented within three broad sections: program development, program dissemination, and program outcomes.
While the research team was not involved in the program development, background knowledge about the organisation and their vision, target group(s), staff structure, and general operations was deemed to be important for the evaluation task.
In the initial stages of the project, several meetings were held with the Reach management team. Focus groups were also conducted with program facilitators known as “Reach Crew.” Questions were developed to gain information about theoretical underpinnings, goals, pedagogical framework, support systems, and general organisational readiness for the implementation and evaluation of well-being programs.
From these meetings and focus groups, as well as from examining internal and public documents (e.g., program plans for facilitators and the Reach website), broad program goals for each of the three Reach programs being evaluated in the research study were identified. The main theoretical frameworks underpinning the programs were Positive Youth Development and the narrative of the “Heroes Journey,” which draws on stories from one’s own life and from fictional characters to better understand patterns and processes associated with approaching challenges with greater success. A strengths-based approach consistent with PP also guided some of the Reach program content and approaches. The pedagogical style that was promoted was based on youth-led experiential learning and hence Reach Crew, who are trained youth leaders, were primarily involved in the program facilitation. Reach Crew members ranged in age from approximately 16 to 25 years and could relate well to program recipients, who were only marginally younger. Workshop session plans, including objectives, were available to guide facilitators in their workshop preparation. Programs, particularly the Fused and Secondary School Workshops, required a high degree of participation and interaction from program recipients.
Reach Crew were provided with on-going training in program development and facilitation. Professional support was also available to Reach Crew and program participants, in the form of access to fully qualified and experienced mental health professionals and peer support groups. The organisation had a main geographical base as well as a number of strategically placed hubs around Victoria to enable greater access to services by community groups and schools. There were also some high profile “champions” of the organisation, including football players and TV presenters. These partners provided excellent endorsements and heightened the organisation’s public profile and engagement.
In brief, based on the information collected, Reach seemed adequately prepared to provide quality programs to young people within the community and schools. In other words, they demonstrated a good level of “readiness.”
Although the research team were able to gain some helpful insight about the general philosophy and practices of Reach, there was the possibility that these were idealised by staff. Hence, careful monitoring of how the programs are actually disseminated is important. This is commonly referred to as program fidelity.
Information about program content, delivery style, delivery frequency, delivery duration, and facilitator qualities (skill, level of training, enthusiasm, commitment) was monitored as part of this research study. Members of the research team were granted permission by Reach to attend some of the programs. This provided the research team with the opportunity to assess how well the facilitators adhered to program delivery instructions. At least two members of the research team attended Fused and Heroes Days programs. Although a formal checklist with quantitative ratings was not used, semi-structured questions guided discussion among research team members and enabled the extent of convergence about program fidelity aspects, across researchers, to be ascertained. In all instances, there was a high degree of agreement endorsing program fidelity for the two programs. A striking finding by the researchers concerned the level of skill, enthusiasm, and connection with participants that the Reach Crew demonstrated in delivering the program. The training and peer support among crew appeared to translate effectively into practice. This was confirmed at post-program focus groups with program participants, who identified the Reach Crew as a real asset to the program.
A challenge faced by the research team when monitoring the Reach programs related to the request by Reach to assimilate as much as possible with the participant group so as not to stand out as an “outsider.” The research team was instructed to engage fully in the program, as if they were a “genuine” participant. This was encouraged so as not to disrupt the natural group dynamics in any significant way, permitting the program to be delivered as authentically as possible. Assimilation was possible with the Heroes Days (where there were 500 participants, including a small number of staff from two schools as well as a restricted number of Reach staff) and Fused workshops, which also included a restricted number of Reach staff and students from the community. However, assimilation was not possible with the classroom-based Secondary School Workshops, and hence monitoring did not occur for this program. In hindsight, teachers and/or students could have been invited to assess program fidelity by rating the program content and dissemination aspects. Making audio recordings of the sessions is another option for monitoring program delivery. Using multiple sources to assess program fidelity, while labour intensive, can provide valuable information.
From the outset the research team provided opportunities for the Reach Crew to be involved in the selection of measures to be used in the evaluation, particularly for the on-line surveys. They were asked to trial the on-line survey and to indicate how relevant they felt the questions were to the program objectives and to the target sample of young people. Feedback from the Reach Crew was deemed to be appropriate considering their similarity to the target group and their knowledge of the program objectives. In fact, many of the Reach Crew had been program participants in the recent past and could identify with both the recipient and facilitator experiences.
Reach Crew were also asked to identify and rate the degree to which the various Reach programs taught specific coping strategies to participants. This information contributed to the formulation of a list of Reach coping strategies for use with the experience sampling task (detailed shortly). As the Reach Crew were at the heart of both the development and delivery of the programs, they were an invaluable source of information. It was also important for the Reach Crew to understand the research project aims, to recognise the importance of the research, and to feel like their experiences mattered. This connection with the project and research team fostered a good level of co-operation from the Reach Crew in terms of completing specific research tasks such as trialling the on-line survey.
An on-line well-being survey was developed for completion by all consenting participants. In addition, a mobile app called Wuzzup was tailor-made for the study to assess real-world, real-time experiences of young people. The mobile technology and on-line platform were familiar to the study participants, thus enhancing usability and compliance. This led to a good level of uptake and sustained involvement in the study (see Chin, Rickard, & Vella-Brodrick, 2016).
A range of methods was used to collect both qualitative and quantitative data on mental health and well-being. These included on-line surveys, experience sampling methodology (ESM), salivary cortisol sampling, and focus groups.
Measures which assess the full spectrum of mental health were completed by all research participants (see the variables included below in Table 32.4). These questionnaires were made available on-line via Qualtrics (www.qualtrics.com) and were completed in class, or in the case of the community-based Fused program, just prior to the program commencement, in a group setting.
A sub-group of participants was randomly selected to be part of this more intensive study. These participants were prompted via an iPod Touch device twice a day over one week. At each prompt, participants were asked to complete questions using Wuzzup, a tailored software app which presented them with highly intuitive graphic scales which they could complete quickly (around 2 minutes) and with minimal interference to their current activities. Random prompts were used to minimise expectancy effects (Alliger & Williams, 1993). Via Wuzzup, participants were asked to report on a range of experiences, including their current mood and the coping strategies they used in response to recent pleasant and unpleasant events.
Respondents also provided contextual information regarding where they were at the time of the prompt, what they were doing, and who they were with.
More specifically, mood was assessed using a 7-point sliding scale ranging from unpleasant to pleasant. This measure of “current mood” was highly sensitive, as it was an aggregate obtained from 14 individual self-reports – twice a day, over a weeklong period.
Young people were asked to report the types of strategies they used in their everyday functioning, in response to both negative events (“something unpleasant happened” since the last time they were prompted by the iPod Touch device) and positive events (“something pleasant happened” since last time). The strategies identified by Reach Crew as fundamental to the Reach programs, as well as other positive and negative strategies, were included as options. The aim of this list of strategies was to ascertain the extent to which the Reach program was applied to the day-to-day events and experiences of the young person.
Table 32.4 Well-being and mental health variables assessed via the on-line survey

| Well-being variables | Mental health variables |
|---|---|
| Student life satisfaction | Depression |
| Well-being | Anxiety |
| Hope | Stress |
| Positive and negative affect | Lack of emotional awareness |
| Autonomy and autonomy support | Difficulties engaging in goal-directed behaviours when distressed |
| Relatedness | |
| Competence | |
| Strengths knowledge and use | |
The use of ESM was included in this research study to: (a) increase measurement accuracy and minimise memory biases associated with retrospective reporting; (b) enable dynamic processes between young people and their environment to be detected through repeated assessments; and (c) enhance the generalisability of findings due to the real-life context of the assessment (Ebner-Priemer & Trull, 2009; Scollon, Kim-Prieto, & Diener, 2003). This repeated sampling of moments also enabled multi-dimensional assessments to be tracked in parallel with the introduction of the program and any specific program activities, while considering contextual aspects. Contextual information, such as where the participants were, who they were with, life events, and mood states, was collected as these factors can influence the outcome measures and provide helpful information about the extent to which participants could apply program skills and knowledge in their real-world context.
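To illustrate how repeated momentary reports of this kind can be condensed into summary scores (for example, the aggregate “current mood” index derived from the 14 prompts described above), the following minimal sketch uses Python and pandas. It is not the study’s actual processing pipeline, and the column names and values are illustrative assumptions only.

```python
# A minimal sketch (not the study's actual pipeline) of condensing repeated
# Wuzzup mood prompts into one aggregate score per participant using pandas.
# Column names and values are illustrative assumptions only.
import pandas as pd

# Each row represents one ESM prompt response (up to 14 per participant:
# two prompts a day over one week).
prompts = pd.DataFrame({
    "participant_id": [101, 101, 101, 102, 102, 103],
    "mood":           [4.0, 5.5, 6.0, 3.0, 4.5, 5.0],  # 1 = unpleasant, 7 = pleasant
})

# Aggregate the momentary reports into a mean mood score per participant,
# keeping the number of completed prompts as a simple compliance check.
mood_summary = (
    prompts.groupby("participant_id")["mood"]
           .agg(mean_mood="mean", n_prompts="count")
           .reset_index()
)
print(mood_summary)
```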
One important point to consider regarding the ESM is the extent to which participants can access the ESM software and respond to prompts in their day-to-day functioning. In addition, the software needs to be developed by Information Technology programmers and the hardware needs to be updated regularly due to continually changing operating systems. Also, with multiple data collection points and the need to carefully match data for each participant at different time points, preparing the data for analysis can be very time consuming and costly, and the statistical analyses involved are quite specialised. However, there are now a number of statistical packages available to assess multi-level data of this type.
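As one concrete example of the kind of specialised multi-level analysis referred to above, the sketch below fits a simple random-intercept model (prompts nested within participants) using the statsmodels package in Python. The chapter does not specify which software the study used; the simulated data, the pre/post “phase” indicator, and all variable names are assumptions made for illustration only.

```python
# Illustrative sketch only: a two-level random-intercept model (prompts nested
# within participants) fitted with statsmodels. The simulated data and the
# pre/post "phase" indicator are assumptions, not study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate long-format ESM data: 20 participants x 14 prompts each, with a
# small (purely synthetic) post-program shift in mean mood.
rng = np.random.default_rng(0)
rows = []
for pid in range(20):
    baseline = rng.normal(4.5, 0.5)           # participant-level intercept
    for prompt in range(14):
        phase = "post" if prompt >= 7 else "pre"
        mood = baseline + (0.4 if phase == "post" else 0.0) + rng.normal(0, 0.7)
        rows.append({"participant_id": pid, "phase": phase, "mood": mood})
esm = pd.DataFrame(rows)

# A random intercept per participant accounts for the non-independence of
# repeated momentary reports from the same young person.
model = smf.mixedlm("mood ~ phase", data=esm, groups=esm["participant_id"])
result = model.fit()
print(result.summary())
```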
An objective assessment of well-being was also obtained from a randomly selected subset of the sample. Salivary cortisol was used as a biological index of mental health (stress). Cortisol is a stress hormone (part of the hypothalamic-pituitary-adrenal axis), and the pattern of its release has been associated with stress, well-being, and physical health. One of the more widely used biomarkers of well-being is the cortisol awakening response (CAR). Cortisol levels rapidly increase upon waking and typically peak 30 to 60 minutes later. The steepness of the CAR has been found to be inversely associated with both hedonic (Steptoe, Gibson, Hamer, & Wardle, 2007) and eudaimonic (Lindfors & Lundberg, 2002) well-being.
Saliva samples were obtained from participants on waking, then again 30 minutes later, and finally at bedtime, using purpose-designed oral collection swabs (Salimetrics, Inc., Carlsbad, CA). Participants were instructed to avoid certain drinks and foods prior to providing the samples. They were instructed to remove the cotton swab from the tube, place it in their mouth and chew it for a few seconds, and then place the cotton swab back into the tube. Participants were able to do this at home, but were asked to bring their samples to school at a certain time. All samples were stored in a freezer, both at school and at the research lab, until they were analysed. Samples were analysed using a competitive enzyme-linked immunosorbent assay (ELISA; Salimetrics, Inc.) to measure salivary cortisol concentration, which correlates with bodily cortisol levels. As two members of the research team were trained in conducting the assays and had access to a suitable wet lab at their university, all the testing was conducted “in house.” Ordinarily, though, this specialised testing would need to be contracted out to specialists and would incur a service fee on top of the cost of the materials.
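For readers unfamiliar with how the three daily samples translate into a well-being index, the sketch below shows one common way of scoring the cortisol awakening response and the diurnal decline. The exact scoring protocol used in the study is not detailed in this chapter, so the formulas, units, and values here are illustrative assumptions only.

```python
# A minimal sketch (not the study's exact scoring protocol) of deriving the
# cortisol awakening response (CAR) from the three saliva samples described
# above. One common operationalisation is the rise from the waking sample to
# the +30-minute sample; units and values here are illustrative assumptions.
import pandas as pd

samples = pd.DataFrame({
    "participant_id": [201, 202, 203],
    "waking":         [0.30, 0.25, 0.40],  # salivary cortisol (e.g., ug/dL)
    "waking_plus_30": [0.55, 0.38, 0.48],
    "bedtime":        [0.08, 0.10, 0.12],
})

# CAR as the absolute rise over the first 30 minutes after waking.
samples["car_rise"] = samples["waking_plus_30"] - samples["waking"]

# Decline from waking to bedtime, another commonly reported diurnal index.
samples["diurnal_decline"] = samples["waking"] - samples["bedtime"]

print(samples[["participant_id", "car_rise", "diurnal_decline"]])
```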
Generally, the study findings relating to the salivary cortisol showed that this marker was correlated with well-being levels (Rickard, Chin, & Vella-Brodrick, 2016). The measure was consistent with subjective report measures, and Reach reported that the use of an objective indicator (cortisol) provided greater credibility with their client groups and with potential funding partners.
One issue regarding the saliva samples concerned the level of information provided to participants and their parents or guardians around the purpose of collection. A small number of participants wanted reassurance that the samples were being collected purely for the purpose of measuring cortisol levels and not for conducting any other tests such as drug testing.
A subset of study participants also attended focus group sessions post-intervention. Four focus groups were conducted for the Secondary School Workshops, two for the Heroes Days, and one for the Fused participants. Each focus group session included between 6 and 12 participants, with a total of 60 students interviewed. Semi-structured sessions fostered discussion about previous knowledge of Reach, expectations about the Reach program prior to attending, their experience of the Reach program and Crew, whether or not they shared their Reach experience with others, lessons learned, and the ways in which the program was helpful (or not) to the participants in their everyday lives.
Focus group participants were also asked to complete survey questions using a 5-point Likert scale, about their level of interest in the Reach program, how helpful and inspiring they found the program to be, and how well they related to the Reach Crew. They were asked to indicate if they experienced any “a-ha” or lightbulb moments as a result of participating in the program and, if so, to describe these moments. Participants also had the opportunity to provide any general comments if they wished. Qualitative data were examined using NVivo software (QSR International, Doncaster, VIC) to identify key themes from the focus groups. Results were presented via pie charts, and a variety of quotes from participants about aspects of each program were also included to illustrate the differences between young people’s experiences in each program.
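A brief sketch of how coded focus-group themes can be tallied and charted is given below. In the study this coding was carried out in NVivo; the use of Python and matplotlib here, and the placeholder theme labels and counts, are assumptions made for illustration rather than the study’s actual outputs.

```python
# Illustrative only: tallying coded focus-group themes and presenting them as
# a pie chart. Theme labels and counts are placeholders, not study findings.
import matplotlib.pyplot as plt

theme_counts = {
    "Theme A": 18,
    "Theme B": 12,
    "Theme C": 9,
    "Theme D": 7,
}

plt.figure(figsize=(5, 5))
plt.pie(list(theme_counts.values()), labels=list(theme_counts.keys()),
        autopct="%1.0f%%")
plt.title("Distribution of coded focus-group themes (placeholder data)")
plt.tight_layout()
plt.show()
```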
The focus group data provided useful supplementary information to the on-line survey and ESM data. In line with Positive Youth Development frameworks, participants had the opportunity to voice their opinions and describe their experiences. The use of quantifiable and open-ended survey questions at the beginning of the focus groups enabled each individual to express their view without being influenced by the thoughts of others. This was then followed up with some group discussion which was audiotaped and later transcribed and analysed. This enabled the research team to quantify participants’ thoughts and feelings and report these back to Reach staff via reports which included easy-to-interpret graphs. The thematic analysis was also a useful source of information that provided more intricate details about the program’s strengths and weaknesses. Reach was able to use this feedback to make some tangible changes to future programs. This information also helped Reach to identify their main competencies and to streamline their focus, program content, and offerings accordingly. It also influenced the way they marketed their services to potential clients and the broader community.
In sum, we are proposing a best practice standard for evaluating positive interventions which focuses on program development, implementation, and outcomes. We have provided a case study to illustrate how research studies can begin to improve evaluation practices within school-based well-being programs, and also to demonstrate that no single study can provide a perfect evaluation. We need to design a series of studies which, collectively, will provide comprehensive information about the various facets of program development, implementation, and outcomes. In addition, a list of recommendations for researchers evaluating well-being programs is provided below, which can serve as a useful prompt to improve both study designs and reporting quality.
Although there are a number of very helpful models and approaches for evaluating the effectiveness of health and well-being programs (Brannen, 2005; Durlak & Dupre, 2008), a simple framework that moves beyond focusing exclusively on outcomes is needed. Examining processes rather than just outcomes, and asking questions about whether the program was implemented as planned, whether it was well received (and by whom specifically), and what the enablers and disablers of program quality, implementation, and application were, provides important information to assist the delivery of effective, evidence-based well-being programs for the community. The main thread of these recommendations is the use of comprehensive and robust data generating methods. Over time and through multiple studies conducted by various scholars, information pertaining to program development, implementation, and outcomes can cumulatively serve as evidence for program effectiveness.
It is anticipated that these recommendations will guide and prompt those working with well-being interventions to assess program success more fully. A substantial level of information is needed to ensure that only evidence-based well-being programs are being promoted and disseminated into the community under the auspices of science-based disciplines such as PP and its subfield of Positive Education. Practitioners and scholars need to work collaboratively and strategically to gather this level of information over time, through carefully controlled trials and real-world studies. This will enhance best practice whereby the dissemination of evidence-informed school-based well-being programs becomes the norm, and the beneficial impact, correspondingly, more profound and long-term.
A special note of thanks to the Reach Foundation for their financial support, collaboration, and genuine interest in supporting young people to feel and be well.
Alliger, G. M., & Williams, K. J. (1993). Using signal-contingent experience sampling methodology to study work in the field: A discussion and illustration examining task perceptions and mood. Personnel Psychology, 46, 525–549. http://dx.doi.org/10.1111/j.1744-6570.1993.tb00883.x
Bolier, L., Haverman, M., Westerhof, G. J., Riper, H., Smit, F., & Bohlmeijer, E. (2013). Positive psychology interventions: A meta-analysis of randomized controlled studies. BMC Public Health, 13, 119. http://dx.doi.org/10.1186/1471-2458-13-119
Brannen, J. (2005). Mixing methods: The entry of qualitative and quantitative approaches into the research process. International Journal of Social Research Methodology, 8, 173–184. http://dx.doi.org/10.1080/13645570500154642
Brink, A. J. W., & Wissing, M. P. (2012). A model for a positive youth development intervention. Journal of Child and Adolescent Mental Health, 24, 1–13. http://dx.doi.org/10.2989/17280583.2012.673491
Burch, P., & Heinrich, C. J. (2015). Mixed methods for policy research and program evaluation. Thousand Oaks, CA: SAGE.
Catalano, R. F., Hawkins, J. D., Berglund, M. L., Pollard, J. A., & Arthur, M. W. (2002). Prevention science and positive youth development: Competitive or cooperative frameworks? Journal of Adolescent Health, 31(Suppl. 6), 230–239. http://dx.doi.org/10.1016/S1054-139X(02)00496-2
Chin, T., Rickard, N. S., & Vella-Brodrick, D. A. (2016). Development and feasibility of a mobile experience sampling application for tracking program implementation in youth well-being programs. Psychology of Well-Being, 6(1), 1–12. http://dx.doi.org/10.1186/s13612-016-0038-2
Davidson, R. J., Kabat-Zinn, J., Schumacher, J., Rosenkranz, M., Muller, D., Santorelli, S. F., … Sheridan, J. F. (2003). Alterations in brain and immune function produced by mindfulness meditation. Psychosomatic Medicine, 65, 564–570. http://dx.doi.org/10.1097/00006842-200401000-00022
Delle Fave, A., Brdar, I., Freire, T., Vella-Brodrick, D., & Wissing, M. P. (2011). The eudaimonic and hedonic components of happiness: Qualitative and quantitative findings. Social Indicators Research, 100, 185–207. http://dx.doi.org/10.1007/s11205-010-9632-5
Durlak, J., & Dupre, E. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327–350. http://dx.doi.org/10.1007/s10464-008-9165-0
Ebner-Priemer, U. W., & Trull, T. J. (2009). Ecological momentary assessment of mood disorders and mood dysregulation. Psychological Assessment, 21, 463–475. http://dx.doi.org/10.1037/a0017075
Engel, G. L. (1977). The need for a new medical model: A challenge for biomedicine. Science, 196, 129–136. http://dx.doi.org/10.1126/science.847460
Gillham, J. E., Reivich, K. J., Freres, D. R., Chaplin, T. M., Shatté, A. J., Samuels, B., … Seligman, M. E. P. (2007). School-based prevention of depressive symptoms: A randomized controlled study of the effectiveness and specificity of the Penn Resiliency Program. Journal of Consulting and Clinical Psychology, 75, 9–19. http://dx.doi.org/10.1037/0022-006X.75.1.9
Hone, L. C., Jarden, A., & Schofield, G. M. (2015). An evaluation of positive psychology intervention effectiveness trials using the RE-AIM framework: A practice-friendly review. Journal of Positive Psychology, 10, 303–322. http://dx.doi.org/10.1080/17439760.2014.965267
Kok, B. E., Coffey, K. A., Cohn, M. A., Catalino, L. I., Vacharkulksemsuk, T., Algoe, S. B., … Fredrickson, B. L. (2013). How positive emotions build physical health: Perceived positive social connections account for the upward spiral between positive emotions and vagal tone. Psychological Science, 24, 1123–1132. http://dx.doi.org/10.1177/0956797612470827
Lindfors, P., & Lundberg, U. (2002). Is low cortisol release an indicator of positive health? Stress and Health, 18, 153–160. http://dx.doi.org/10.1002/smi.942
Lyubomirsky, S., Sheldon, K. M., & Schkade, D. (2005). Pursuing happiness: The architecture of sustainable change. Review of General Psychology, 9, 111–131. http://dx.doi.org/10.1037/1089-2680.9.2.111
Ong, A., & van Dulmen, M. H. M. (Eds.). (2007). Oxford handbook of methods in positive psychology. New York, NY: Oxford University Press.
Parks, A. C., & Biswas-Diener, R. (2013). Positive interventions: Past, present, and future. In T. B. Kashdan & J. Ciarrochi (Eds.), Mindfulness, acceptance, and positive psychology: The seven foundations of well-being (pp. 140–165). Oakland, CA: Context Press.
Patel, V., Flisher, A. J., Hetrick, S., & McGorry, P. (2007). Mental health of young people: A global public-health challenge. The Lancet, 369, 1302–1313. http://dx.doi.org/10.1016/S0140-6736(07)60368-7
Rickard, N. S., Chin, T.-C., & Vella-Brodrick, D. A. (2016). Cortisol awakening response as an index of mental health and well-being in adolescents. Journal of Happiness Studies, 17, 2555–2568. http://dx.doi.org/10.1007/s10902-015-9706-9
Rickard, N. S., & Vella-Brodrick, D. A. (2014). Changes in well-being: Complementing a psychosocial approach with neurobiological insights. Social Indicators Research, 117, 437–457. http://dx.doi.org/10.1007/s11205-013-0353-4
Ryff, C. D., & Singer, B. (1998). The contours of positive human health. Psychological Inquiry, 9, 1–28. http://dx.doi.org/10.1207/s15327965pli0901_1
Sawyer, S. M., Afifi, R. A., Bearinger, L. H., Blakemore, S. J., Dick, B., Ezeh, A. C., & Patton, G. C. (2012). Adolescence: A foundation for future health. The Lancet, 379, 1630–1640.
Scollon, C. N., Kim-Prieto, C., & Diener, E. (2003). Experience sampling: Promises and pitfalls, strengths and weaknesses. Journal of Happiness Studies, 4, 5–34. http://dx.doi.org/10.1007/978-90-481-2354-4_8
Seligman, M. E. P., Ernst, R. M., Gillham, J., Reivich, K., & Linkins, M. (2009). Positive education: Positive psychology and classroom interventions. Oxford Review of Education, 35, 293–311. http://dx.doi.org/10.1080/03054980902934563
Seligman, M. E. P., Steen, T. A., Park, N., & Peterson, C. (2005). Positive psychology progress: Empirical validation of interventions. American Psychologist, 60, 410–421. http://dx.doi.org/10.1037/0003-066X.60.5.410
Sin, N. L., & Lyubomirsky, S. (2009). Enhancing well-being and alleviating depressive symptoms with positive psychology interventions: A practice-friendly meta-analysis. Journal of Clinical Psychology, 65, 467–487. http://dx.doi.org/10.1002/jclp.20593
Steger, M. F., Shim, Y., Rush, B. R., Brueske, L. A., Shin, J. Y., & Merriman, L. A. (2013). The mind’s eye: A photographic method for understanding meaning in people’s lives. The Journal of Positive Psychology, 8, 530–542. http://dx.doi.org/10.1080/17439760.2013.830760
Steptoe, A., Gibson, E. L., Hamer, M., & Wardle, J. (2007). Neuroendocrine and cardiovascular correlates of positive affect measured by ecological momentary assessment and by questionnaire. Psychoneuroendocrinology, 32, 56–64. http://dx.doi.org/10.1016/j.psyneuen.2006.10.001
Vella-Brodrick, D. A., Rickard, N. S., & Chin, T.-C. (2013). Evaluation of youth-led programs run by the Reach Foundation. Clayton, VIC: Monash University. Retrieved from http://wuzzupapp.weebly.com/uploads/1/1/8/8/11889274/reach_foundation_final_report.pdf
Waters, L. (2011). A review of school-based positive psychology interventions. The Australian Educational and Developmental Psychologist, 28, 75–90. http://dx.doi.org/10.1375/aedp.28.2.75
World Health Organization. (2014). Health for the world’s adolescents – A second chance in the second decade. Geneva, Switzerland: Author. Retrieved from http://apps.who.int/adolescent/second-decade/