So, having recently gained my Google Trainer certification, I thought it might be a good idea to embark on some postgraduate study – let’s see how this goes over the next few years…
I have enrolled in Waikato University’s School of Education’s Master of Education (Professional Learning) programme. It has two compulsory papers, one of which I am starting on Friday – Using Evidence for Effective Practice.
Our first assigned reading was:
Chapter one was a good introduction to the paper and to data-driven decision making: “By data-based decision making, we mean that schools make decisions about students, about instruction, and about school and system functioning based on a broad range of evidence, such as scores on students’ assessments and observations of classroom teaching” (p. 1).
We often think of analysing data as relating only to outcome (test) results, so Chapter 1 gave a good explanation of the importance of other types of data – context, input and process (p. 11) – and how schools can use this full range to best improve student learning.
I also found myself reflecting on how the authors reinforced the underpinnings of the Teaching as Inquiry process, e.g. a “synthesis of the literature on professional learning that makes a difference to student achievement found that schools that used data to inquire into the effectiveness of their teaching and school practices made significant improvements in achievement (Timperley et al., 2007)” (p. 15). In fact, Fig. 2.1 on p. 16 is another way of illustrating the TAI process.
At my school I’m responsible for guiding 2nd-year teachers through a TAI, so this chapter was useful – in fact, I emailed one of my colleagues the flow chart to highlight the purpose of his Inquiry. The idea of ‘instrumental’ use of data (p. 19), which “involves analyzing and interpreting data as well as taking actions to improve based on the analysis and interpretation”, means we need to do more than just give kids a test and record a number in a mark book.
I was also intrigued by the section on how data can be used and abused. An example is roll-based vs participation-based pass rates at NCEA, and how that data can be used to overstate actual student achievement.
Chapter 3 was a good description of how to analyse achievement data in the context of classroom practice data – the two are obviously linked. The concept of the ‘ill-structured problem’ (p. 27) succinctly defines the challenge we face in the classroom, where there are “no definable procedures for reaching a solution and uncertainty about the information required to solve the problem.”
Another point that resonated with me is how data can be used to ‘blame’ a student’s family circumstances rather than analysed to improve teaching (p. 32). With the multiple facets of student achievement (socio-economic status, family, peer group, mindset, teacher, school…), we as teachers can often feel more comfortable attributing low achievement to factors we can’t control rather than looking at our own practice.
I was also interested to see the authors acknowledge that one challenge for schools is not only staff being unfamiliar with analysing data and with aspects of Pedagogical Content Knowledge (how students understand and misunderstand their subjects, p. 42), but also that such analysis is “not easily available on existing school analyses software” (p. 40). I think of the behemoth that is KAMAR (the software my school uses), which is very powerful in terms of how you can analyse data – if you are an expert in relational databases…
My main takeaway from the two chapters was that we need to link achievement data (outcome) with teaching practice data (process) and ask whether patterns in one can be explained by patterns in the other (p. 36). In my own practice, I think of the results of my Y9 & 10 classes with Algebra, and how I used to look at this as just ‘Algebra’s tough’ rather than looking at how I taught the topic.