Inside Teaching : April 2010
OPINION

Measuring school performance

National literacy and numeracy assessment and now the new My School website raise a key question: how is a school's performance best measured? Geoff Masters offers an answer.

The launch of the My School website and the public reporting of National Assessment Program – Literacy and Numeracy (NAPLAN) results have invited questions about how schools are 'performing.'

There are two broad methods you can use to measure a school's performance – direct and indirect measures. Direct measures of performance are based on observations and judgements of the quality of what is happening in a school. Indirect measures are based on measures of student performance. While both have their uses, I argue that the attempt to draw inferences about a school's performance from student test scores alone is inherently problematic. Indirect measures need to be supplemented by more direct performance measures.

Indirect measures

Because the ultimate purpose of schooling is to improve outcomes for students, it may seem obvious that the best basis for measuring a school's performance would be measures of student performance, but there are several reasons why this may not be so.

First, reliable measures of student performance exist for a very limited set of outcomes. Literacy and numeracy tests measure only part of what students learn in school and so only partially capture the contributions that schools are making.

Second, student performances reflect a range of influences unrelated to a school's performance. Socioeconomic backgrounds are an obvious example. So are pre-existing learning difficulties, low attendance rates and high levels of student mobility. Many influences on student test scores are largely beyond the control of schools.

Third, student performances can reflect the circumstances of the school in ways that are unrelated to the efforts of current staff. Limited school facilities and resources, high rates of staff turnover and low levels of community engagement and support are often more a function of a school's location, history and financial circumstances than of its current performance.

In some parts of the world, attempts have been made to construct indirect measures of school performance from measures of student performance. This is done by first predicting the test performances of students in each school based on their socioeconomic background and other factors. The difference between the predicted and actual scores in a school is then taken as a measure of that school's 'contextualised value-added' performance. The better students do than predicted, the higher the school's measured performance.
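To make the mechanics of this 'contextualised value-added' approach concrete, the sketch below shows one simplified reading of it in Python. It is an illustration only, not the model behind My School or any official reporting: it assumes a single socioeconomic index as the only predictor, fits a straight line by ordinary least squares across all students, and averages each school's residuals (actual score minus predicted score). The school names, index values and scores are invented.

```python
# Illustrative sketch only: a toy 'contextualised value-added' calculation.
# Predict each student's score from one socioeconomic index, then average
# each school's residuals (actual minus predicted). All data are invented.

from collections import defaultdict

# Hypothetical records: (school, socioeconomic index, test score)
students = [
    ("School A", 0.2, 410), ("School A", 0.3, 430), ("School A", 0.1, 405),
    ("School B", 0.8, 490), ("School B", 0.9, 470), ("School B", 0.7, 480),
]

# Fit score = a + b * index by ordinary least squares across all students.
xs = [index for _, index, _ in students]
ys = [score for _, _, score in students]
n = len(students)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
a = mean_y - b * mean_x

# Average each school's residuals: a positive value means students scored
# higher than their background-based prediction.
residuals = defaultdict(list)
for school, index, score in students:
    residuals[school].append(score - (a + b * index))

for school, res in residuals.items():
    avg = sum(res) / len(res)
    print(f"{school}: average residual {avg:+.1f} ('value added' under this approach)")
```

In this simplified picture, a school whose students score above their background-based predictions shows a positive average residual and would be read as 'adding value' – which is precisely the inference the problems listed below call into question.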
There are several well-recognised problems with this approach. First, it can obscure actual student results. Second, it sets lower expectations of some students than others. A school in a low socioeconomic area can be judged to be performing as well as expected, even if students' levels of literacy and numeracy are unacceptable by anybody's standard. Third, this approach assumes that the difference between predicted and actual student results is due only to the influence of the school. As British statistician Harvey Goldstein puts it, parents relying on measures of this kind to select schools for their children are using a tool not fit for purpose.

An alternative, and preferable, approach to measuring the value that a school adds is to measure student growth across the years of school. For example, average growth in reading between Year 3 and Year 5 is likely to be a better indicator of the contribution a school is making than reading results for a single year level. Rates of growth can, however, also reflect influences beyond the control of schools, including non-attendance, high rates of student mobility and learning difficulties.

Direct measures

In contrast, direct measures of school performance are based on what a school is currently doing. The focus is on establishing the extent to which the school is pursuing strategies that are known from research to lead to better student outcomes. Direct measures require direct observations of the school and its work,