When @teachertoolkit asked me this earlier, it got me thinking. What follows is my attempt to unpick further the journey I summarised in just one paragraph in the original post.
2010 – Triangulating evidence bases / whole school coaching model
In 2010, we began to recognise that it was inherently flawed to evaluate teaching based on one-off lesson observations. We were clear that it didn't promote a developmental or rigorous approach to improving the quality of teaching. Nor did it actually improve outcomes or the learning experience for our students.
Instead, we moved to a system of triangulating a range of evidence bases to come to a decision about how teachers were graded.
Hindsight is a wonderful thing, and I can now see the issues with this, although at the time I was convinced it was innovative and helpful. The issues were as follows:
1) Although the evidence base was wider and we removed the artificiality of grading one-off lessons, we were still grading teachers, so regardless of anything developmental layered on top, the grading culture persisted. Looking back, it was also unhelpful to try to reach an almost scientific conclusion about something that doesn't lend itself to a simple algorithm.
2) Workload – the whole school coaching model required time, and in reality the paperwork accompanying it was unhelpful. The methodology was right, but the way the process was set up meant that, if I am being critical, much of the time it was simply given lip service and had little impact on initiating change across the whole school.
3) The main driver came from the top down. Although we had a team of 27 coaches, responsibility for learning and teaching rested very much with me as the senior leader in charge of it.
2013 – Driving the quality of teaching from where it matters most
Three years on, the underpinning principles of coaching and collaboration still existed, but it was clear that the whole school coaching approach was not having the impact it could have had. To cut a long story short, this was ultimately because there was no real ownership from the most important change agents in the school – the middle leaders.
Over the year we worked together, meeting each week for a TLR breakfast to discuss, share, trial, collaborate, refine and adapt the way we both monitored and developed the quality of teaching. This work resulted in the following (with particular thanks to Jane Phillipson for what transformed the way we evaluated the impact of feedback):
Over the next two years, this consistent approach and shared set of values and beliefs came to underpin our faculty work on learning and teaching. In reality, this approach embedded the coaching cycle in a way that never happened with the whole school approach. The expectation that the fundamental role of our TLR holders is to develop teaching in an individualised and collaborative way is now simply part of how our faculties operate.
But there was still a problem. In 2013 we still reported and attached grades to teachers, even though we had refined the evidence bases we used to arrive at these judgements, which were agreed through discussion of the evidence by the DoL and SLT line manager. I was happier than I had been in 2010, and I could see the culture shifting so that grading was no longer the most important thing. But I still couldn't reconcile what I perceived as a discord between the whole school evaluation of the quality of teaching and our actual day-to-day practices in developing it. We used a forensic approach to explore the quality of teaching based on our agreed common language, and this enabled us to share good practice and create directories of expertise, but it still 'felt' inauthentic to me.
2014 onwards – Squaring the circle
From 2014, inspired by Leverage Leadership and various other books, conversations and thinking, I finally began to square the circle. I line manage the Director of Learning for science, and on many occasions he challenges my thinking – but never more so than when we got into a debate in the Autumn Term of 2014 about evaluating the quality of teaching within the faculty. His argument was that he wanted his team to have collaborative responsibility for the faculty's quality of teaching, meaning everyone shared ownership of student outcomes. I argued that I wasn't sure how this would work in practice, as I wondered whether it might provide a hiding place for someone who was underperforming, or prove frustrating for someone who was excelling. Underpinning all this conflict, again, was the issue that we still graded teachers and reported faculty percentages.
In 2015, a resolution was finally reached, which led to this post:
I finally believe we have an authentic model which puts into practice the following set of principles:
We no longer report faculty percentages for the quality of teaching. Instead, we use the teacher standards to identify where each and every one of us has strengths and areas for development in our teaching practice. This is used to inform PPD and cross-faculty collaboration. It allows us to articulate whole school teaching strengths and areas for development, and gives us all the evidence we would ever need to show any external scrutiny that we have a real handle on the quality of teaching in our school. @benbainesSLE is going to blog about this soon.
The real beauty of the 'evaluating teaching, not teachers' cycle, for me, is that I can finally see a process which is fit for purpose for leaders, teachers and students.
It was worth the wait.