To achieve equity and promote excellence for all learners, internal evaluation must involve both good processes and good evaluative discussion. In schools with effective internal evaluation, there were different points of view about what the data was saying, about issues and successes that affected students' learning, and about what teachers might do next. Leaders, teachers and trustees did not simply go through the evaluation process as a series of discrete steps. They asked good questions; collected, analysed and made sense of good data; and reasoned clearly and robustly about why and how their chosen response would result in the changes necessary for improvement.
Learners are at the heart of these processes, providing a lens through which schools:
Placing learners at the heart of review and decision-making means:
Often the catalyst for internal evaluations, especially those that were emergent rather than planned or strategic, was 'noticing' what was happening for learners. Leaders, teachers or trustees noticed something that caused them to pause and think. Often this was accompanied by questions such as:
In these schools, there were always many eyes scanning for potential issues for students, and a variety of ways in which teachers, leaders and trustees knew further investigation was needed.
The most commonly cited catalyst was student achievement data such as NCEA, National Standards, or other assessment information regularly gathered by teachers and leaders.
Other formally collected data provided catalysts too. Sources included:
Alternatively, the catalyst may have come from a more informal source, such as:
School leaders and teachers sought to obtain a more complete picture of what was happening and why before making any decisions about what and how to improve. The investigation focused on finding out what was currently happening in the school, and examining relevant research evidence and good practice guidelines about what effective practice looks like.
By investigating together, leaders and teachers developed shared understandings and owned both the process and the findings.
Additional data was collected over and above what was routinely gathered. Leaders and teachers were clear about what data would provide sufficient evidence to understand the issue or problem. Schools asked questions such as:
Trustees, leaders, teachers, students and whānau had knowledge, beliefs and attitudes they could apply to understanding the issue.
It was important not to assume what these were ahead of time. Internal evaluation took into account the different ways in which participants could contribute, and tailored data collection methods to suit.
Leaders and teachers used a wide range of data collection approaches. Methods included focus groups, interviews, planning checks, classroom observations and reflecting on samples of student work. The perspectives of students, parents and teachers were often sought through questionnaires or discussion opportunities.
Some of the schools found that video was a useful way to collect data. Having video evidence made it possible to repeat observations and notice things that were not initially obvious or to look for change. The use of video also allowed teachers to share their teaching strategies and approaches with one another in a professional learning context where capability building was a key focus.
Investigating what 'good' looks like was also part of the process. Teachers and leaders pulled together what they already knew about what they were investigating. This enabled them to identify gaps in their knowledge.
Further sources of evidence included research literature, external experts, other schools, Ministry publications like the Best Evidence Syntheses and ERO's School Evaluation Indicators. Sound evaluative reasoning helped to ensure a match between the school's context and the kinds of evidence that they drew on to identify what 'good' looks like. They did this by investigating the kinds of practices that were likely to make the most difference for all the learners in their school. They also investigated whether the improvements achieved were good enough in terms of the school's vision, strategic direction and their priorities for equity and excellence. Leaders and teachers could then make defensible judgements about valued student outcomes.
To make sense of the data gathered, leaders and teachers moved from asking "what is happening here?" or "what is so?" to asking "so what?" Investigating and sense making were not totally separate processes. The process of analysis began when the first data was collected. Sense making could and did inform the direction of further data collection or research. Investigating and sense making were iterative and interwoven.
In these schools leaders, teachers and trustees understood that data often provided an incomplete representation of a more complex underlying reality. They were able to evaluate the quality of the data they had collected, and analyse and scrutinise it well. Some data were quantitative, like test scores; and some were qualitative, like classroom observations or survey responses. Both forms of data were valuable, and leaders and teachers understood the strengths and limitations of each. Many of the schools had a staff member with expertise in data collection and analysis. That person was working on building the capability of others at the school to understand and use data.
Leaders and teachers worked together to interpret the data and often reported what they had found to other staff and trustees, sharing their insights and testing the adequacy of their interpretations. Making sense of the data involved asking questions such as:
After investigating and making sense of the issue or problem, schools were clear about where their strengths were, and where they needed to improve. This understanding usefully informed their response.
Leaders and teachers carefully prioritised actions in order to plan for change in practice. Any response incurs a resource cost of some kind, so at this stage leaders asked:
Leaders were clear about what capability and capacity they had, and what support they would need. They recognised that in an environment of limited resources, not all avenues can be explored at once. They were able to draw on relevant expertise to support the change they wanted to make and rejected any professional learning and development opportunities that would distract them from the agreed changes.
Most improvement actions focused on providing well-targeted, timely professional learning and development opportunities to support improved teaching practice. In many cases, collaborative professional learning groups that had participated in the evaluation process were continued.
Other responses included changes to curriculum design, assessment practices and expectations, or performance management processes.
Planning for how change would be managed was closely linked to the evaluation findings. Leaders and teachers were clear about what success for students would look like and how they would know whether or not the actions taken were working.
Internal evaluation did not end with the implementation of improvement actions. Monitoring the impact of any changes made was crucial. This stage focused on questions such as:
Ongoing noticing, investigation and sense making enabled leaders and teachers to see whether what they were doing was having the desired result. Adjustments or further changes were sometimes needed. Where things were working well, ongoing monitoring provided opportunities to recognise and celebrate success.