CIE0528

Written evidence submitted by Amanda Pike

 

 

Predicted Exam Grade Debacle

The evidence of an experienced maths teacher and tutor

 

The recent critical media commentary on the predicted grades our GCSE and A level students are about to receive this summer highlights a wholly inadequate process, but it also, perhaps unwittingly, points to deeper systemic failures in teaching.

 

As an experienced maths teacher and tutor, I wondered how many students, parents and commentators really understood the realities of the process being applied. It is no surprise that the grades predicted by teachers were very optimistic, or that the standardised grades actually awarded will be inflated as well. It is a brave teacher who predicts a grade other than the one a student needs for their next step! These grades will sadly not reflect the attainment of many students, particularly as many failed to do any significant work, even to complete their course content, once the exam cancellation was announced. This leads to the laughable outcome of this year’s cohort attaining grades significantly higher than last year’s while having learned probably 10% less course material.

 

Concerns about unconscious bias affecting students’ predicted grades this summer are inevitable. The prediction process, whereby schools are required to rank students by predicted grade in each subject, is practically impossible, particularly in larger schools. A typical state comprehensive has 240 students per year, and the likely spread of outcomes for a core subject such as maths means that a given grade, say a level 6, could be the exam outcome for students in perhaps six out of ten teaching sets. How can these students be compared and ranked when they have different teachers with varying levels of expertise? The problem is exacerbated for levels 4 and 5, which can be awarded on different tiers of paper with different content. No head of department can possibly moderate objectively unless there is assessment evidence to support decisions. As an experienced maths tutor I can assure you that there is huge variability in mock assessments between schools: some use the most recent past papers (three for maths), some use only two of the three, and some use practice or shadow papers whose question content and mark schemes are untrialled.

 

Marking is perfunctory, rarely moderated (only really embraced by independent schools, in my experience) and often wrong, with students usually under-marked because gaps in teacher subject knowledge prevent full recognition of the possible responses. Most shocking of all are the schools that did not manage to mark their March mocks before lockdown and the announcement of cancellation, and simply accepted the highly unreliable self-marked scores that their students had produced in class! Truly shocking!

 

The assumption in Ofqual’s guidance that, for example, teacher assessment could be used to support predictions is naive and demonstrates how out of touch government is with current practice. In state schools it is rare to see any book marking of homework that has been “numerically graded” and is therefore comparable. Instead we have a categorical system of colours (red, amber and green) or simply formative comments; this has been recommended practice for some years. The weakest schools over-rely on an online maths website system which, again, largely colour-codes understanding. Categorical systems are largely useless for the purposes of statistical analysis and comparison, and the judgements behind them can never be independently reviewed.

 

The lack of consistency in the quality and standards of subject content and homework marking between different teachers would again make any rank ordering on a large scale wholly impractical and unfair. Again, the independent sector shows rigour and discipline in this process and would have more knowledge of each student to enable ranking.

 

If Ofqual had been brave enough to demand more evidence from schools about their assessment processes and quality control, they would have had a much easier task in executing their ‘standardisation model’ across all schools and justifying their decisions. They would also have exposed just how far teaching standards have declined, and why the public feeling that examinations have been progressively “dumbed down” is justified. The application of a standardisation model will be difficult, as for many subjects there is very little past evidence of attainment for the recently reformed, ‘more challenging’ exams.

 

The pandemic has clearly affected our young students profoundly, but what a shame that Ofqual did not see this as an opportunity to level up the standards of teaching across the education sector. Without demands for verifiable past performance, the proposed grading process is clearly exposed to bias, and it is highly likely that there will be many innocent casualties on results days. This year’s students deserve better, as do future years’ students and the nation as a whole.

 

September 2020