CHAPTER 7

Using assessment data to support student learning

NERIDA SPINA

Research on the use of assessment data in schools and in education systems has grown enormously over the past decade. This research has produced several distinct bodies of literature. One body extols the benefits of using assessment data to inform practice, and provides guidance on how to make evidence-based decisions that drive equity and student improvement (Boudett & Steele 2007; Schnellert et al. 2008). Evidence-informed practice typically refers to the use of quantitative assessment results—either formative or summative—to inform teaching and learning decisions (Boudett & Steele 2007; Hattie 2012). Thus, when teachers or principals refer to ‘data-informed practice’, they are frequently referring to the use of numeric, standardised assessment data. These practices increasingly appear in education policies that use data and evidence to monitor and drive systemic improvement by building new forms of accountability (O’Brien 2018). Education departments and systems frequently use large-scale mandated assessments, such as Australia’s National Assessment Program—Literacy and Numeracy (NAPLAN) tests, to hold schools and educators accountable for student achievement. For example, NAPLAN data has been used by Australian governments to monitor and manage the performance of school principals (Bloxham et al. 2015; Heffernan 2016). As Earl and Katz (2006: 2) point out, ‘accountability and data are at the heart of contemporary reform efforts worldwide. Accountability has become the watchword of education, with data holding a central place in the current wave of large-scale reform.’

Another body of literature examines the adverse effects that can occur when data is used to orchestrate teachers’ everyday work in this new era of accountability (Roberts-Holmes 2015). Harris and colleagues (2018) examine some of these adverse effects and consider how data use can create ethical dilemmas for school leaders as they work to create and sustain inclusive schools and classrooms (see also Chapter 10). Ethical concerns include allocating greater resourcing to those students more likely to succeed in order to improve overall school data (Nichols & Berliner 2007). Researchers have raised concerns about how data systems can harm student identity, causing some students to worry about ‘being a nothing’ (Stobart 2008: 2) or that their performance might mean they may ‘never ever get a job and get money and maybe couldn’t even get a house!’ (Howell 2017: 580). Research has documented how these practices marginalise students from their home cultures (Elwood & Lundy 2010; Klenowski 2014), limit students’ subject choices and preclude certain career choices (Smyth 2011).

If inclusive education is about accepting and responding to difference to ensure that ‘all people are valued and treated with respect’ (Carrington & Elkins 2002: 51), then we must ask how assessment data can be used to promote learning and fairer outcomes for all. However, as the research literature indicates, using assessment data to support inclusive practice and student learning is a double-edged sword. Data can provide an opportunity to promote fairness and inclusion, and to keep the focus squarely on student learning. Yet, at the same time, educators are vulnerable to the performative pressures and inevitable conflicts of interest that arise when data is the fulcrum of teacher accountability and performance management. This chapter puts forward a series of considerations for teachers and school leaders who aim to use data to support inclusive practice and student learning in the current era of accountability.

What is Assessment Data and What Can It Tell Us?

In recent decades, the explosion of large-scale standardised tests and assessment programs has increased the accessibility and visibility of assessment data that is linked to student learning. It is now common for student learning to be represented by, and interpreted through, quantitative assessment data: from global data sets, such as those generated through the Organisation for Economic Co-operation and Development’s (OECD) Programme for International Student Assessment (PISA), to national assessment systems such as NAPLAN, through to classroom data collected by individual schools using standardised instruments such as the Progressive Achievement Tests (PAT) published by the Australian Council for Educational Research (ACER). Yet there is increasing recognition that data should include more than numeric assessment data (Schildkamp et al. 2012).

For the purposes of this chapter, assessment data includes the wide range of evidence that teachers generate and collect to document student learning. Combining and analysing multiple assessment sources—from the diagnostic data collected before teaching commences, through to formative data that helps monitor student learning during teaching, and finally a range of summative data—can help teachers to establish a picture of what students know and can do. As will be discussed below, summative data must be collected in ways that provide all students—including students with disability—with the opportunity to demonstrate what they know and can do. However, focusing only on high-stakes summative data can increase the likelihood of practices that increase inequity—an issue that will be discussed later in the chapter. Similarly, discounting summative data can mean ignoring a valuable source of information that could be used to interrupt teachers’ assumptions and practices in useful ways (Harris et al. 2018).

Timperley and Phillips’ (2003) research suggests that teachers are more likely to have low expectations of student achievement when they do not engage with high-stakes assessment data, meaning that teaching is unlikely to be sufficiently challenging. The past decade has seen ongoing discussion in Australia and internationally about the participation of students with disability in high-stakes assessment. Some countries, such as the United States, have developed alternative assessments and have required reasonable adjustments to be made to assessments and test conditions so that all students can participate in large-scale testing. The United States also requires the achievement of students with disability to be monitored through state-based assessment and reporting (Danforth 2016). Australia, however, has a complex system in which students with disability may be exempt from participation in NAPLAN. Under this system, exempt students are included in school data (being counted as not having achieved the National Minimum Standard), despite not having taken the test. Students who are absent or whose parents choose to withdraw their child (rather than seeking an exemption) are counted as not having participated in the test. This means that there is a perceived advantage for schools when students are withdrawn or absent from NAPLAN, rather than being exempt from it. Dempsey and Davies (2013) estimate that no more than one-third of students with disability participate in NAPLAN. Worryingly, while Australian policy and legislation expect inclusive education practices, national programs such as NAPLAN are yet to develop and promote equitable access for all students. As Graham (2016) has argued, participation in NAPLAN may be one way of bringing about inclusion. This chapter, therefore, includes the broad range of assessment data—both mandated and teacher-instigated—that can help teachers establish a picture of student learning and respond in inclusive ways.
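To make the incentive created by these counting rules concrete, the sketch below encodes them for a hypothetical school cohort. This is a minimal illustration under stated assumptions: the function name and all figures are invented for demonstration, and it is not an official ACARA calculation.

```python
# Illustrative only: a simplified model of the counting rules described above,
# using invented numbers. Not an official ACARA calculation.

def reported_figures(achieved: int, below: int, exempt: int,
                     withdrawn_or_absent: int) -> dict:
    """Apply the rules described in the text: exempt students are included
    in school data as not having achieved the National Minimum Standard,
    while withdrawn or absent students are treated as non-participants
    and drop out of the calculation altogether."""
    enrolled = achieved + below + exempt + withdrawn_or_absent
    assessed = achieved + below + exempt   # exempt students are included...
    below_standard = below + exempt        # ...and counted as below standard
    return {
        'participation_rate': round(100 * assessed / enrolled, 1),
        'percent_below_standard': round(100 * below_standard / assessed, 1),
    }

# A hypothetical cohort of 100 students, ten of whom have a disability.
# If the ten are exempted, the school reports 20.0% below the standard;
# if they are instead withdrawn or absent, it reports only 11.1%.
print(reported_figures(achieved=80, below=10, exempt=10, withdrawn_or_absent=0))
print(reported_figures(achieved=80, below=10, exempt=0, withdrawn_or_absent=10))
```

Under these assumptions, withdrawal or absence improves the school’s headline achievement figure even though no student’s learning has changed, which is precisely the perceived advantage described above.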

Slowing down and making conscious decisions

It is broadly accepted that in the current era of accountability, schools are ‘awash with data’ (Hattie 2005: 11). In this landscape, educators spend significant time collecting and producing assessment data. This focus on data production means that many teachers move from one assessment item to the next without having time to analyse or respond to the valuable evidence they have collected (Spina 2017; Stroud 2018). There is also research (Earl & Katz 2006; Schildkamp & Kuiper 2010) suggesting that many teachers make decisions quickly using intuition, rather than data. In the busyness of classroom life, teachers are required to make a raft of day-to-day decisions that draw on their professional judgement and experience. Balancing this rapid thinking with more careful consideration of assessment data can support teachers to focus on evidence of learning, and to reveal any assumptions they might have about student learning.

Nobel laureate Daniel Kahneman (2011) proposed that there are two ways that people make decisions, which he described as ‘System 1’ and ‘System 2’ thinking. System 1 is fast thinking that relies on intuition and responds to established patterns in established ways. System 2 is slower, more effortful thinking that requires controlled and deliberate mental activity. System 1 thinking forms an important part of teachers’ assessment work. This system requires teachers to actively monitor data on an ongoing basis, and to look for established patterns of student responses. For example, in marking student spelling, teachers might look for known spelling error patterns and process the information quickly, reducing the cognitive load required for marking and decision-making. However, fast judgements are known to increase cognitive bias (Kahneman 2011). Boudett and Steele (2007) have argued that when we analyse education data we often jump to conclusions, rather than taking the time to carefully examine the evidence and consider our own biases first. Rather than rushing to judgement based on known patterns in assessment data, System 2 thinking requires teachers to slow down, consider possible biases and examine multiple analytic options. If a student continues to perform poorly on written tasks, a teacher might consider whether there are less obvious explanations for what might at first seem to be a self-evident conclusion. For example, data may reveal that cognitive load is interfering with the student’s ability to demonstrate their knowledge under the conditions in which the assessment was designed and delivered (Gillmor et al. 2015; Graham et al. 2018). In the busyness of school life, it is vital that teachers are provided with time to engage with assessment data in meaningful ways so they can both observe and analyse student learning, before using their professional knowledge to develop teaching that allows all students to engage in meaningful learning. It is only when we move ‘from data to knowledge’ (Earl & Seashore-Louis 2013: 199) that assessment data can contribute to positive change.

Developing an inquiry approach to data

Looking for patterns in assessment data can provide teachers with useful insights about overall trends in student learning. One of the criticisms of large-scale data is that while it reveals trends—at the school, system and national level—it can be less useful as a means of providing sufficient (or timely) evidence of individual student learning. For example, while national assessment data in Australia indicates that there are significant gaps between the achievement of remote Indigenous students and that of their metropolitan and regional counterparts, this evidence has not been able to deliver sustained, positive outcomes (Guenther et al. 2013). Patterns of achievement at the whole-class or small-group level are also unlikely to provide information that is specific enough to understand individual student learning, unless such analysis forms part of an ongoing inquiry approach.

Using a student-centred question as the basis for an inquiry can help to narrow the analytic focus, and to minimise the likelihood of being overwhelmed by the amount of data available (Boudett & Steele 2007). While teachers can and should engage with a range of data, it is also useful to make time for detailed analysis of assessment evidence. For example, examining a student’s written work can highlight several areas where the student could benefit from targeted teaching—from the development of adequate topic-specific vocabulary to knowledge of spelling conventions. When teachers develop their own research questions as part of critical inquiry, they can focus their attention on one area of learning and student engagement at a time (Comber et al. 2018), thus ensuring that both the analysis and the use of evidence are manageable. In taking such an approach, teachers must also be mindful of the breadth of knowledge and understanding that students need to be successful.

Investigating multiple data sources such as students’ performance on a range of summative assessment items, along with formative data such as students’ work samples and teacher observations, makes it possible to triangulate data and form a better picture of student learning. As discussed in Chapter 5, Australian educators are legally obliged (under the Disability Standards for Education 2005 [DSE; Cth]) to consult with parents/guardians and students in the design and implementation of reasonable adjustments. Aside from this important legal imperative, inviting students into conversations about their learning is a useful way to broaden understandings of student knowledge and misconceptions. For example, Comber and colleagues (2018) worked with teachers from an Australian secondary school and used large-scale assessment data to spark teacher inquiry into student writing. The teachers in their study subsequently collected a range of further data—including student classroom assessment—to inform practice. Students participated in focus groups to talk about their perceptions of teaching and classroom practice, and shared their views about their learning. Students were also given a further opportunity to reflect on their learning using a survey entitled ‘Me as a Writer’ that was developed by the teachers as part of their inquiry. This range of evidence was assembled to help teachers develop a deep understanding of individual student learning, needs and preferences. As Harris and colleagues (2018) point out, equitable practices can be enhanced by gathering diverse data sources—including assessment data—and listening to a range of stakeholders. Doing so can interrupt existing patterns of thinking and encourage educators to explore new ways of working with individual students. As one of the teachers in the research noted, this process helped her to think about the challenges that one of her students faced as ‘a puzzle to be solved [rather than thinking] in deficit terms’ (Harris et al. 2018: 56). This process also typically includes discussing data with colleagues, who may see something different and be able to offer novel suggestions about student learning and pedagogical responses.

Using Data to Inform Teaching

Assessment data can inform teaching because it can provide important insights into a child’s ‘zone of proximal development’ (ZPD) (Vygotsky 1978, 1986). For Vygotsky (1978: 87), ‘what is in the zone of proximal development today will be the actual developmental level tomorrow—that is, what a child can do with assistance today she will be able to do by herself tomorrow’. When a student is functioning within the ZPD, they have sufficient mastery to perform the task with assistance but cannot yet complete the task successfully on their own. If a task is too easy, no new learning will occur. Similarly, a task that is too difficult or incomprehensible will not result in new learning. The teacher’s job is to ascertain the ZPD—the point at which a task is slightly beyond what the student can do on their own—and provide enough guidance to support learning until the student can complete the task independently. However, a key challenge for teachers is assessing students’ ZPD across the wide variety of curriculum skills and knowledge required during a school year. As Mehlinger (1995: 154) writes, being able to ‘customise schooling for individual learners, rather than mass produce students who have essentially been taught the same thing in the same way in the same amount of time . . . is not a superficial change’. Detailed, ongoing analysis of assessment data must therefore be embedded in teachers’ work.

More recent pedagogical models that have become popular in schools over the past decade have expanded on Vygotsky’s work. The Gradual Release of Responsibility model (Fisher & Frey 2008, 2013; Pearson & Gallagher 1983) provides multiple stages of instruction that aim to extend students’ progress within their ZPD. Using assessment data to plan for each of these phases of instruction is an important way of appropriately differentiating whole-class (‘I do’ and ‘we do’), small-group (‘you do together’) and individual (‘you do’) instruction. According to Fisher and Frey (2013), this model is not a lock-step sequence of teaching, but rather a fluid approach that uses a combination of data to plan for and guide teaching. Teachers might therefore combine and analyse diagnostic data along with summative assessment data from a previous unit of work as a means of ascertaining specific gaps in student knowledge.

The work of Sharratt and Fullan (2012) similarly advocates for deep analysis of data as a means of ensuring that instruction is appropriately differentiated for individual students. They provide numerous examples, including that of Year 11 student Luis (Sharratt & Fullan 2012: 4–5), who was largely known as an aggressive and belligerent student, and who was frequently suspended or excluded from school. Careful analysis of his assessment data revealed that Luis was reading at a Year 2 level; his reading challenges had been masked by his behaviour. While many teachers (and even parents) were disbelieving, this data analysis ultimately paved the way for intensive reading support and differentiated classroom instruction. Ultimately, focused teaching based on analysis of assessment data allowed Luis to become a less frustrated learner and attain grades that were expected for his year level.

Assessment data can also be used to plan teaching within a framework based on Universal Design for Learning (UDL) principles (Meyer et al. 2016). For example, in targeting specific groups of students, teachers may find that teaching with virtual manipulative mathematics touch-screen apps (most commonly accessed via iPads or tablets) offers a range of affordances to all students in the class, even though the effects vary for individual students (Moyer-Packenham et al. 2016). Digital technologies offer an ever-increasing toolkit for reducing barriers through options such as incorporating social-media use into teaching, learning and assessment (Barden 2012; Gillmor et al. 2015). In addition to adopting a universal design approach based on the principles described in Chapter 8, teachers can draw on assessment data to tailor instruction for individual students and reduce barriers to learning.

Using assessment data to identify barriers and reflect on practice

Assessment data is often used to diagnose a ‘problem’ with student performance. However, employing the social model of disability challenges teachers to consider the conditions that may prevent students from accessing learning opportunities, rather than conceptualising problems as being inherently situated within individuals. As teachers reflect on lessons that were planned using assessment data, they can begin to identify and understand the barriers to learning that may exist within their classrooms. In identifying possible barriers to student learning, it is useful to return to the earlier discussion about slowing down and making time for deep analysis of data. Rather than rushing to conclude that a student lacks understanding, a teacher can use assessment data to identify areas where they might employ universal design principles to address barriers to access.

As an example, imagine two primary-school students, Amy and Jake, who both arrive at the same incorrect answer to a problem-solving question. While this data highlights that both Amy and Jake are experiencing difficulties with problem-solving, it does not explicate the different barriers to learning that exist for each student. In this example, Amy had not yet acquired automaticity or procedural fluency in basic number facts, and experienced both the assessment task and the preceding instructions as inaccessible. Amy did not have any misconceptions about number, but instead ran out of time and cognitive capacity to answer the more complex maths problem being asked. The other student, Jake, memorised a range of mathematics procedures but applied the incorrect procedure to the question, and thus also produced an incorrect final answer. As Lewis (2010: 29) points out, students with a mathematical learning disability will tend to make ‘qualitatively different errors on math fact problems’ that require detailed analysis. Lewis demonstrates that extensive additional individual tutoring did little to improve the mathematical progress of participants in her study, yet ‘once the origin of the errors was understood’ (Lewis 2010: 29), it was possible to help students more effectively employ atypical strategies.

For many students, challenges with mathematical problem-solving may be linked to language difficulties or Developmental Language Disorder (DLD). In these cases, the visual and procedural complexity of tasks might not provide appropriate conditions of access (Gillmor et al. 2015; Graham et al. 2018). Understanding barriers to learning and developing suitable responses, therefore, require teachers to make links between student assessment data and their own practice as they work to understand which teaching practices to employ. As Lai and McNaughton (2016: 436) describe, ‘putting the evidence from classroom practices together with achievement data provides a basis for [identifying effective practice]’. There is a growing body of evidence (Ainscow 2012; Boudett & Steele 2007; Harris et al. 2018; Lai & McNaughton 2016) that this type of analysis is most effective when carried out collaboratively.

Using Data to Inform Teacher Inquiry and Professional Development

For decades, educators have used assessment data—alongside other forms of qualitative and quantitative data—when seeking to adopt an evidence-informed approach to their practice. Critical scholars (e.g. Cochran-Smith 2015) argue that inquiry approaches must begin from a stance that focuses on equity and social justice, rather than accountability. The challenge is that the same or similar language can conceal vast differences in motivations, practices and consequences. For example, if a teacher explains that reading assessment data is being analysed and used to inform reading instruction, does this mean that data is being used to form static reading groups based on ‘ability’, or does it mean that more inclusive practices are being implemented to differentiate instruction with fairness in mind? Researchers such as Boomer (1985), Carr and Kemmis (2003), and Cochran-Smith and Lytle (2009) have advocated for teachers to adopt an inquiring approach not only to understand what their students know and can do, but also to critique their own work.

Analysing assessment data can lead teachers to identify areas where they can target subsequent teaching, but it may also highlight areas where teachers need to do further learning or investigation themselves. This is important, as research demonstrates that while teachers are often able to analyse assessment data to determine student learning, they are often unable to develop suitable follow-up teaching and learning activities (Callingham 2010; Watson et al. 2008). Detailed data analysis can help teachers to not only identify possible barriers to student learning, but also to develop pedagogical approaches that maximise student access to learning. Ongoing cycles of data analysis can help teachers to reflect on the effectiveness of their practice, and to understand whether their teaching is indeed reducing barriers to learning and meeting student needs. If data analysis exposes an area where students need new or different pedagogies that are beyond teachers’ current expertise, teachers must consider looking towards other sources of knowledge. As Harris and colleagues (2018) have demonstrated, schools traditionally hold significant knowledge that is not shared, because teachers’ work has tended to be isolated. In arguing for a collaborative approach, they describe how teachers can see new patterns in data, as well as share ideas and knowledge about pedagogy, curriculum and assessment.

Another source of knowledge is academic research, which continues to explore and document new approaches towards working with diverse learners, and to minimise barriers to learning. The rise of social media means that a great deal of academic research is now promoted by individual academics, as well as by research associations. Teachers can increasingly access academic research via internet sites (such as The Conversation) and research-organisation blogs (such as the Australian Association for Research in Education’s [AARE] EduResearch Matters blog). Following the social-media accounts of key organisations and Twitter hashtags (such as #InclusiveEducation) is another useful way of accessing up-to-date research. Teachers are also working directly with academics to undertake collaborative research, the results of which are often shared at conferences such as AARE’s national conference and the Australian Council for Educational Research’s (ACER) annual research conference.

A Caution: (Un)Ethical Data-Based Decision-Making

While assessment data has enormous potential to inform teacher practice in ways that are just, the rise of high-stakes data has been linked to practices that are known to increase inequity. Greater awareness of these tensions will assist teachers who use assessment data for inclusive learning. Testing and assessment have come to be seen as essential characteristics of quality education systems (Smith 2016). The practice of counting everything from student growth to attendance has become a ubiquitous form of management that affords legitimacy to decision-making. Instead of relying on decisions made by people such as classroom teachers—which inevitably appear subjective—decisions based on statistics provide a ‘veneer of objectivity’ (Hacking 1990: 4). Underpinning these seemingly objective calculations, however, are inherently political decisions, such as who gets access to what. In analysing and making use of class data, teachers and school leaders must decide whether to adopt an ethic of care (Starratt 1996) towards individual students, or whether to adopt a more utilitarian approach that seeks to allocate resources and time in ways that serve the best interests of the cohort (Harris 2016).

One worrying example is the practice known as ‘rationing’ education (Gillborn & Youdell 2000: 134). Gillborn and Youdell describe a situation in which data is used to classify students into three groups: those able to achieve with no intervention, those perceived as ‘hopeless cases’, and those whose results are most likely to improve with targeted intervention. This grouping is then used as a basis for rationing resources towards those students most likely to improve school data. In a similar way to medical triage, attention is focused on the students who will achieve the greatest gains with targeted intervention. Students who require extensive support but do not deliver improvements on standardised tests might be seen as detracting from the school’s overall ability to meet key performance indicators or to position itself favourably in the eyes of potential families as they make school-choice decisions. Using data in this way clearly has serious implications for equity. Similar practices have been documented in both the United States (Booher-Jennings 2006) and Australia (O’Mara 2014).

An additional risk, according to Sherman (2009), is the growth in standardised ‘recipe’ or quick-fix approaches to inclusion, such as grouping students by ability. In the busyness of classroom life, the practice of directing pedagogy towards small groups of students based on data might (at first glance) seem an effective and pragmatic way of targeting instruction at students’ ZPDs. Yet ability grouping is known to increase inequity (McGillicuddy & Devine 2018; Spina 2018). The Brown Center Report (Loveless 2004) reveals that the rise of ability grouping (and associated practices such as streaming, setting and tracking) is linked to the use of data as the basis of evidence-informed practice; the report dedicates an entire section to what it calls the ‘resurgence’ of ability grouping. Sweden provides another example: there is increasing evidence of Swedish teachers using ability grouping as a form of differentiation (Ramberg 2016). As Hart and colleagues (2004)—and others—have demonstrated, grouping by ability can lower teacher expectations of students in ‘low’ groups, undermine students’ self-confidence and dignity, and narrow curriculum offerings. These practices can work to reproduce and exacerbate inequity, as students in the so-called ‘higher’ ability groups receive higher-order thinking and extension pedagogies, while students in the ‘low’ groups are more likely to receive didactic and basic-skills pedagogies (Luke et al. 2013). Grouping students by ability has been shown to have significant negative effects on both academic and non-academic outcomes, particularly for students in the lowest streamed groups (Steenbergen-Hu et al. 2016). Using assessment data to group students by ability for extended periods places too much trust in the seeming objectivity of data.

A further caution concerns the entrenched belief that assessment data assembled in numeric form is a fair and objective basis for decision-making. However, there is ongoing evidence (Paugh & Dudley-Marling 2011; Spina 2018; Waterhouse 2004) that the discourses associated with contemporary data-driven practices do little to challenge harmful deficit views of students. Careful and ethical data analysis must instead be conducted with a view to challenging assumptions about students, about what is fair, and about teacher practice (Ainscow 2012). The analysis of assessment data cannot be separated from philosophical beliefs about how data should be used to draw conclusions and to develop practices that increase equity and student access to learning.

Conclusion

While there is a proliferation of policies insisting that teachers use assessment data to inform their teaching practices, analysing and making use of data are fraught with problems. The strong global interest in national and high-stakes assessment has often assumed that an increase in assessment is linked to an increase in education quality (Smith 2016). However, the rise in this form of testing has not always led to significant and/or sustained improvements in student learning (Savage 2017). Data use can unintentionally create deficit discourses about students, and lead to a resurgence in practices such as ability grouping, which have been shown to have negative effects. Alternatively, data can help teachers to understand barriers that might prevent students accessing learning opportunities. To ensure assessment data is used to promote equity and inclusion, teachers must slow down and think deeply about what the data might mean, and about how the resulting knowledge can inform teacher practice in ways that are just. Earl and Katz’s (2006) point that data-driven decision-making is often too simplistic is highly relevant when data is being used as the basis for building inclusive classrooms. As Earl and Seashore-Louis (2013) argue, gathering assessment data should not be an end in itself. Instead, data should provide a platform for generating ‘quality knowledge’ that emerges from ‘asking good questions, having good data, and engaging in good thinking’ (Earl & Seashore-Louis 2013: 193). Data-informed practice should focus on the knowledge that assessment data generates, and the way this knowledge is used (rather than focusing on the data itself).

However, assessment data can work as a double-edged sword. While there is a great deal of potential for assessment data to be used as a basis for quality differentiation of practice, there is also significant potential for assessment data to reproduce deficit discourses and exacerbate inequitable practices and outcomes. It is crucial that teachers begin with a philosophical understanding and adopt a genuine critical-inquiry approach to interrogate their assessment data and to understand what individual students know and can do. It is also critical that teachers augment assessment data with knowledge of their students’ strengths, interests and areas of difficulty in order to build a full picture through which to make sense of students’ progress. This knowledge, collected throughout cycles of teaching, provides an invaluable means of planning for teaching that is pitched appropriately in students’ zone of proximal development (ZPD).

For teachers, questions such as ‘what do I do with Amy, who still doesn’t answer problem-solving questions correctly?’ or ‘how can I change the overall achievement of my class?’ require a combination of thinking strategies. The generation of meaningful knowledge about not only student learning but also barriers to achievement, the effectiveness of teaching, and students’ ability to access assessment tasks requires deep and considered thinking. Looking at graphs and other visual displays of quantitative data is unlikely, on its own, to provide important insights. Rather, assessment data is only useful at the classroom level if it provides teachers with new knowledge and assists them to build on their own understandings so they might continue to extend students’ learning in ways that ensure equity and fairness for every student in their class.

References

Ainscow, M., 2012, ‘Moving knowledge around: Strategies for fostering equity within educational systems’, Journal of Educational Change, vol. 13, no. 3, pp. 289–310

Barden, O., 2012, ‘“. . . If we were cavemen we’d be fine”: Facebook as a catalyst for critical literacy learning by dyslexic sixth-form students’, Literacy, vol. 46, no. 3, pp. 123–32

Bloxham, R., Ehrich, L.C. & Iyer, R., 2015, ‘Leading or managing? Assistant regional directors, school performance, in Queensland’, Journal of Educational Administration, vol. 53, no. 3, pp. 354–73

Booher-Jennings, J., 2006, ‘Rationing education in an era of accountability’, Phi Delta Kappan, vol. 87, no. 10, pp. 756–61

Boomer, G., 1985, Fair Dinkum Teaching and Learning: Reflections on literacy and power, Upper Montclair, NJ: Boynton/Cook Publishers Inc

Boudett, K.P. & Steele, J.L. (eds), 2007, Data Wise in Action: Stories of schools using data to improve teaching and learning, Cambridge, MA: Harvard University Press

Callingham, R., 2010, ‘Mathematics assessment in primary classrooms: Making it count’, Research Conference 2010 Proceedings, <https://research.acer.edu.au/cgi/viewcontent.cgi?article=1094&context=research_conference>

Carr, W. & Kemmis, S., 2003, Becoming Critical: Education, knowledge and action research, London: Routledge

Carrington, S. & Elkins, J., 2002, ‘Bridging the gap between inclusive policy and inclusive culture in secondary schools’, Support for Learning, vol. 17, no. 2, pp. 51–7

Cochran-Smith, M., 2015, ‘Teacher communities for equity’, Kappa Delta Pi Record, vol. 51, no. 3, pp. 109–13

Cochran-Smith, M. & Lytle, S.L., 2009, Inquiry as Stance: Practitioner research for the next generation, New York, NY: Teachers College Press

Comber, B., Klenowski, V. & Harris, J., 2018, ‘Listening to the voices of teachers’, in J. Harris, S. Carrington & M. Ainscow (eds), Promoting Equity in Schools: Collaboration, inquiry and ethical leadership, New York, NY: Routledge, pp. 45–68

Danforth, S., 2016, ‘Social justice and technocracy: Tracing the narratives of inclusive education in the USA’, Discourse: Studies in the Cultural Politics of Education, vol. 37, no. 4, pp. 582–99

Dempsey, I. & Davies, M., 2013, ‘National test performance of young Australian children with additional educational needs’, Australian Journal of Education, vol. 57, no. 1, pp. 5–18

Disability Standards for Education 2005 (DSE), Cth, <www.comlaw.gov.au/Details/F2005L00767>

Earl, L.M. & Katz, S., 2006, Leading Schools in a Data-Rich World: Harnessing data for school improvement, Thousand Oaks, CA: Corwin Press

Earl, L. & Seashore-Louis, K., 2013, ‘Data use: Where to from here?’, in K. Schildkamp, M.K. Lai & L. Earl (eds), Data-based Decision Making in Education: Challenges and opportunities, New York, NY: Springer, pp. 193–204

Elwood, J. & Lundy, L., 2010, ‘Revisioning assessment through a children’s rights approach: Implications for policy, process and practice’, Research Papers in Education, vol. 25, no. 3, pp. 335–53

Fisher, D. & Frey, N., 2008, Better Learning Through Structured Teaching: A framework for the gradual release of responsibility, Alexandria, VA: Association for Supervision and Curriculum Development

—— 2013, Gradual Release of Responsibility Instructional Framework: Engaging the adolescent learner, Newark, DE: International Reading Association

Gillborn, D. & Youdell, D., 2000, Rationing Education: Policy, practice, reform and equity, Buckingham: Open University Press

Gillmor, S.C., Poggio, J. & Embretson, S., 2015, ‘Effects of reducing the cognitive load of mathematics test items on student performance’, Numeracy, vol. 8, no. 1, art. 4

Graham, L.J., 2016, ‘Reconceptualising inclusion as participation: Neoliberal buck-passing or strategic by-passing?’, Discourse: Studies in the Cultural Politics of Education, vol. 37, no. 4, pp. 563–81

Graham, L.J., Tancredi, H., Willis, J. & McGraw, K., 2018, ‘Designing out barriers to student access and participation in secondary school assessment’, The Australian Educational Researcher, vol. 45, no. 1, pp. 103–24

Guenther, J., Bat, M. & Osborne, S., 2013, ‘Red dirt thinking on educational disadvantage’, The Australian Journal of Indigenous Education, vol. 42, no. 2, pp. 100–10

Hacking, I., 1990, The Taming of Chance, Cambridge: Cambridge University Press

Harris, J., 2016, ‘Speaking the culture: Understanding the micro-level production of school culture through leaders’ talk’, Discourse: Studies in the Cultural Politics of Education, vol. 39, no. 3, pp. 323–34

Harris, J., Carrington, S. & Ainscow, M. (eds), 2018, Promoting Equity in Schools: Collaboration, inquiry and ethical leadership, New York, NY: Routledge

Hart, S., Dixon, A., Drummond, M.J. & McIntyre, D., 2004, Learning Without Limits, Maidenhead: Open University Press

Hattie, J., 2005, ‘What is the nature of evidence that makes a difference to learning?’, Research Conference 2005—Using Data to Improve Learning, ACEReSearch, <http://research.acer.edu.au/research_conference_2005/7/>

—— 2012, Visible Learning for Teachers: Maximizing impact on learning, London: Routledge

Heffernan, A., 2016, ‘The Emperor’s perfect map: Leadership by numbers’, The Australian Educational Researcher, vol. 43, no. 3, pp. 377–91

Howell, A., 2017, ‘“Because then you could never ever get a job!”: Children’s constructions of NAPLAN as high-stakes’, Journal of Education Policy, vol. 32, no. 5, pp. 564–87

Kahneman, D., 2011, Thinking, Fast and Slow, New York, NY: Penguin

Klenowski, V., 2014, ‘Towards fairer assessment’, The Australian Educational Researcher, vol. 41, no. 4, pp. 445–70

Lai, M.K. & McNaughton, S., 2016, ‘The impact of data use professional development on student achievement’, Teaching and Teacher Education, vol. 60, pp. 434–43

Lewis, K.E., 2010, ‘Understanding mathematical learning disabilities: A case study of errors and explanations’, Learning Disabilities: A Contemporary Journal, vol. 8, no. 2, pp. 9–18

Loveless, T., 2004, The 2004 Brown Center Report on American Education: How Well Are American Students Learning?, vol. 1, no. 5, Washington, DC: Brookings Institution Press

Luke, A., Cazden, C., Coopes, R., Klenowski, V., Ladwig, J., Lester, J., MacDonald, S., Phillips, J., Shield, P.G., Spina, N., Theroux, P., Tones, M.J., Villegas, M. & Woods, A.F., 2013, A Summative Evaluation of the Stronger Smarter Learning Communities Project, Brisbane: Queensland University of Technology, <http://eprints.qut.edu.au/59535>

McGillicuddy, D. & Devine, D., 2018, ‘“Turned off” or “ready to fly”—Ability grouping as an act of symbolic violence in primary school’, Teaching and Teacher Education, vol. 70, pp. 88–99

Mehlinger, H., 1995, School Reform in the Information Age, Bloomington, IN: Indiana University Center for Excellence in Education

Meyer, A., Rose, D.H. & Gordon, D., 2016, Universal Design for Learning: Theory and practice, Wakefield, MA: CAST Professional Publishing

Moyer-Packenham, P.S., Bullock, E.K., Shumway, J.F., Tucker, S.I., Watts, C.M., Westenskow, A., Anderson-Pence, K.L., Maahs-Fladung, C., Boyer-Thurgood, J., Gulkilik, H. & Jordan, K., 2016, ‘The role of affordances in children’s learning performance and efficiency when using virtual manipulative mathematics touch-screen apps’, Mathematics Education Research Journal, vol. 28, no. 1, pp. 79–105

Nichols, S.L. & Berliner, D.C., 2007, Collateral Damage: How high-stakes testing corrupts America’s schools, Cambridge, MA: Harvard Education Press

O’Brien, P.C., 2018, ‘The adaptive professional: Teachers, school leaders and ethical-governmental practices of (self-) formation’, Educational Philosophy and Theory, vol. 50, no. 3, pp. 229–43

O’Mara, J.A., 2014, ‘Closing the emergency facility: Moving schools from literacy triage to better literacy outcomes’, English Teaching: Practice and Critique, vol. 13, no. 1, pp. 8–23

Paugh, P.C. & Dudley-Marling, C., 2011, ‘“Speaking” deficit into (or out of) existence: How language constrains classroom teachers’ knowledge about instructing diverse learners’, International Journal of Inclusive Education, vol. 15, no. 8, pp. 819–33

Pearson, P.D. & Gallagher, M.C., 1983, ‘The instruction of reading comprehension’, Contemporary Educational Psychology, vol. 8, no. 3, pp. 317–44

Ramberg, J., 2016, ‘The extent of ability grouping in Swedish upper secondary schools: A national survey’, International Journal of Inclusive Education, vol. 20, no. 7, pp. 685–710

Roberts-Holmes, G., 2015, ‘The “datafication” of early years pedagogy: “If the teaching is good, the data should be good and if there’s bad teaching, there is bad data”’, Journal of Education Policy, vol. 30, no. 3, pp. 302–15

Savage, G., 2017, ‘NAPLAN 2017: Results have largely flat-lined, and patterns of inequality continue’, The Conversation, 13 December 2017, <https://theconversation.com/naplan-2017-results-have-largely-flat-lined-and-patterns-of-inequality-continue-88132>

Schildkamp, K., Ehren, M. & Lai, M.K., 2012, ‘Editorial article for the special issue on data-based decision making around the world: From policy to practice to results’, School Effectiveness and School Improvement, vol. 23, no. 2, pp. 123–31

Schildkamp, K. & Kuiper, W., 2010, ‘Data-informed curriculum reform: Which data, what purposes, and promoting and hindering factors’, Teaching and Teacher Education, vol. 26, no. 3, pp. 482–96

Schnellert, L.M., Butler, D.L. & Higginson, S.K., 2008, ‘Co-constructors of data, co-constructors of meaning: Teacher professional development in an age of accountability’, Teaching and Teacher Education, vol. 24, no. 3, pp. 725–50

Sharratt, L. & Fullan, M., 2012, Putting FACES on the Data: What great leaders do!, Thousand Oaks, CA: Corwin Press

Sherman, S.C., 2009, ‘Haven’t we seen this before? Sustaining a vision in teacher education for progressive teaching practice’, Teacher Education Quarterly, vol. 36, no. 4, pp. 41–60

Smith, W.C., 2016, The Global Testing Culture: Shaping education policy, perceptions, and practice, Oxford: Symposium Books

Smyth, J., 2011, ‘The disaster of the “self-managed school”—genesis, trajectory, undisclosed agenda, and effects’, Journal of Educational Administration and History, vol. 43, no. 2, pp. 95–117

Spina, N., 2017, ‘The quantification of education and the reorganisation of teachers’ work: An institutional ethnography’, PhD thesis, Brisbane, Queensland University of Technology, <https://eprints.qut.edu.au/104977/>

—— 2018, ‘“Once upon a time”: Examining ability grouping and differentiation practices in cultures of evidence-based decision-making’, Cambridge Journal of Education, vol. 49, no. 3, pp. 329–48

Starratt, R.J., 1996, Transforming Educational Administration: Meaning, community, and excellence, New York, NY: McGraw Hill

Steenbergen-Hu, S., Makel, M.C. & Olszewski-Kubilius, P., 2016, ‘What one hundred years of research says about the effects of ability grouping and acceleration on K–12 students’ academic achievement: Findings of two second-order meta-analyses’, Review of Educational Research, vol. 86, no. 4, pp. 849–99

Stobart, G., 2008, Testing Times: The uses and abuses of assessment, New York, NY: Routledge

Stroud, G., 2018, Teacher, Crows Nest: Allen & Unwin

Timperley, H.S. & Phillips, G., 2003, ‘Changing and sustaining teachers’ expectations through professional development in literacy’, Teaching and Teacher Education, vol. 19, no. 6, pp. 627–41

Vygotsky, L.S., 1978, Mind in Society: The development of higher psychological processes, Cambridge, MA: Harvard University Press

—— 1986, Thought and Language, A. Kozulin trans. & ed., Cambridge, MA: MIT Press

Waterhouse, S., 2004, ‘Deviant and non-deviant identities in the classroom: Patrolling the boundaries of the normal social world’, European Journal of Special Needs Education, vol. 19, no. 1, pp. 69–84

Watson, J.M., Callingham, R. & Donne, J., 2008, ‘Proportional reasoning: Student knowledge and teachers’ pedagogical content knowledge’, in M. Goos, R. Brown & K. Makar (eds), Navigating Currents and Charting Directions. Proceedings of the 31st Annual Conference of the Mathematics Education Research Group of Australasia, vol. 2, Adelaide: MERGA, pp. 555–62