The following was submitted as an assignment for a Waikato University paper on Using Evidence for Effective Practise.
Education, like many other modern endeavours, is rich with sources of data. But educators are often too busy with the work of educating to take the time to reflect on the usefulness of the data collected, let alone to analyse it. This essay investigates what the current literature says about the selection of data and its purpose in the education setting. To begin this investigation, we must first define data.
What is data? At its most basic, data is a piece of information. In the educational setting it is a representation of a measurement or quality: for example, a test score, the reading age of a student, or the number of behavioural referrals a student has. In other words, something that can be collected and organised. Mandinach (2009) proposes that educators should go beyond this simplistic definition and use data to make meaning, then translate this meaning into action.
Further, Schildkamp (2012, p. 11) describes four different types of data: output, context, input and process. This is a useful framework for understanding data in the educational setting, as we often focus on output data: a test score, the result of some assessment, something measured at the end of a unit of work. We often ignore input data (the socio-economic status of the learner, prior knowledge, etc.) and very rarely look at context and process data. In my own practice, I think of the test results of my Year 9 and 10 classes in algebra (output) and how I used to interpret these as simply 'algebra is difficult' rather than looking at how I taught the topic (process). As Unger (2013) argues, teachers need to link achievement data (output) with teaching practice data (process) and ask whether patterns in one can be explained by patterns in the other.
Many of the authors not only promote the use of data to inform learning but also provide evidence that the use of data makes a difference (Chick & Pierce, 2013; Mandinach & Gummer, 2016; Schildkamp, 2011; Unger, 2013). Indeed, the consistent theme is data-based decision making. As Schildkamp (2012) explains:
“By data-based decision making, we mean that schools make decisions about students, about instruction, and about school and system functioning based on a broad range of evidence, such as scores on students’ assessments and observations of classroom teaching.” (p. 1)
One of the barriers to more effective data-based decision making is a lack of data literacy amongst educators, who need to be upskilled in this area (Chick & Pierce, 2013; Mandinach & Gummer, 2013, 2016; Schildkamp, 2011). Mandinach and Gummer (2013) describe data literacy as the ability to understand and use data effectively to inform decisions. Many teachers are too caught up in the busyness of teaching to step back and reflect on what the results of that recent test actually mean. As Unger (2013, p. 51) explains: “I sometimes feel that we are all working very, very hard, but we are not always sure of what we are doing and why.”
Further, they may not have the understanding of statistical tools such as correlations, effect sizes and sample populations needed to make informed decisions from the data they are looking at. Nor is there a simple fix. As Mandinach and Gummer (2013, p. 31) state: “Educators need multiple experiences to develop data literacy across their careers …”. A further way to improve this is to change the culture of schools and support teachers with data coaches (Unger, 2013).
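To make two of these statistical tools concrete, the following is a minimal sketch in Python of an effect size (Cohen's d, one common measure) and a Pearson correlation. The test scores are invented for illustration only; they are not drawn from any study cited here.

```python
from statistics import mean, stdev

# Hypothetical end-of-unit test scores for two classes
# (invented numbers for illustration only).
class_a = [45, 52, 60, 48, 55, 63, 50, 58]
class_b = [55, 61, 70, 58, 66, 72, 60, 64]

def cohens_d(x, y):
    """Effect size: the difference between two group means,
    standardised by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2)
    return (mean(y) - mean(x)) / pooled_var ** 0.5

def pearson_r(x, y):
    """Correlation between paired measures,
    e.g. each student's pre-test and post-test score."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"Effect size (Cohen's d): {cohens_d(class_a, class_b):.2f}")
print(f"Correlation: {pearson_r(class_a, class_b):.2f}")
```

A teacher reading such numbers needs exactly the literacy discussed above: an effect size describes how large a difference is in practical terms, while a correlation describes how strongly two measures move together, and neither by itself explains why.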
Implied in this need for greater data literacy is a shift in the use of data from compliance and accountability purposes towards continued learning and improving student outcomes. A good example is Hipkins, Hodgen, and Dingle's (2011) study of Albany Senior High School. While a student survey was the main data set, it was interesting to note that whenever the authors drew conclusions from the results, they supported them with qualitative observations or other explanations. This made the findings more useful for moving forward than the processed numerical survey data alone would have been.
One reason educators may not engage in these multiple experiences to develop their data literacy is expressed by NASUWT Scotland, a Scottish teachers' union. In a paper on the use of pupil performance data, they note that “These tasks [keeping and filing records] not only risk undermining the work/life balance of teachers but also distract teachers and school leaders from their core responsibilities for teaching and leading teaching and learning” (p. 6). Again we see this notion that teachers are too busy to step back and interrogate what all the data we collect actually means and how it can improve outcomes for students.
Another theme was the use of technology to analyse data, and the observation that such analysis is “not easily available on existing school analyses software” (Schildkamp, 2011, p. 40). I think of the behemoth that is KAMAR (the software that my school uses), which is very powerful in terms of how you can analyse data, provided you are an expert in relational databases.
Mandinach and Gummer (2013) detail investment by US states in data systems to handle the data ($610 million) without an equivalent investment in developing the human capacity to use them. The situation may be similar in the New Zealand context: we have expensive systems (think of online PAT testing, costing $2,700 for 800 students across two subjects) without the same investment in helping teachers become data literate enough to make the most of that data.
In conclusion, the literature is consistent in its message: the collection of data without action is a waste of time and resources, while action not based on data is ill informed. The combination of data and action leads to informed decisions that are more likely to have a positive impact on student outcomes. In my own teaching, I want to be more selective about when I formally assess, and then make sure I analyse the results, share that analysis with students, and help them interpret their own results in the context of the data.
Boyd, S., & McDowall, S. (2001). Techno magic, whizz or fizz?: The relationship between writing mode, editing process, and writing product. Wellington: New Zealand Council for Educational Research.
Chick, H., & Pierce, R. (2013). The statistical literacy needed to interpret school assessment data. Mathematics Teacher Education and Development, 15(2), 19.
Hipkins, R., Hodgen, E., & Dingle, R. (2011). Students’ experiences of their first two years at Albany Senior High. Wellington: NZCER.
Mandinach, E. B., & Gummer, E. S. (2013). A systemic view of implementing data literacy in educator preparation. Educational Researcher, 42(1), 30-37. doi:10.3102/0013189X12459803
Mandinach, E. B., & Gummer, E. S. (2016). Every teacher should succeed with data literacy. Phi Delta Kappan, 97(8), 43-46.
NASUWT-The Teachers’ Union. (n.d.). The use of pupil performance data in target setting and in the evaluation of the effectiveness and capability of teachers (Scotland) (Rep.). Edinburgh: NASUWT Scotland.
Schildkamp, K. (2012). Data-Based Decision Making in Education: Challenges and Opportunities.
Unger, J. (2013). Flex your school’s data muscles: Leadership strategies strengthen data’s impact. JSD, 34(4), 50-54.