The New York State Education Department (SED) proudly announced this week that it has lifted the embargo on last year's state test scores in reading and math. Does this mean the public gets to see them now?
No. That won’t be happening until December.
A sneak peek came in the form of a data release by the New York City Department of Education; students there scored at proficient levels in ELA and math at higher rates than last year. However, there's more to the story. School year 2023-24 marks a historic change in the state's definition of "proficient."
The new, lowered cut score on grades 3-8 state assessments means that lower test scores will be considered proficient this year. Because of this, one can expect all schools, districts, and indeed the entire state, to demonstrate growth—no matter what happened in the classroom.
But the purpose of testing is not to demonstrate growth. It is to determine where students stand against the grade-level proficiency required to move on to the next grade. If that scale continuously shifts, whether through a change in cut scores or something else, is it possible to make accurate determinations at all? The question is especially pressing over time, and especially after years of disrupted learning from the pandemic and a decade or more of poor proficiency in these very skill areas.
This is the third year in a row that SED has advised against direct, comparative analysis of state test results: first due to pandemic disruptions and emergency flexibility policies, and now due to an adjustment to the scoring criteria themselves.
As for how students are doing in the rest of the state, it will likely be months before we have the full picture. SED provided school-level data to individual school leaders weeks ago, but barred them from sharing it with the public, or even with their own elected board members, until now.
As the embargo was lifted this week, school and district leaders now have the freedom to talk about and share their results with the community (depending on their capacity to work with and visualize large datasets).
The test score data given to schools has not yet undergone final formatting, meaning that big school districts like New York City, which have resources dedicated to analysis, can disseminate information to the public and offer a glimpse into how students performed.
But for the vast majority of individual schools and districts across the state, test scores will not be seen by the public until the final, statewide data is released in December. Until then, meaningful analysis is nearly impossible.
Other states, even those making their own adjustments to state assessment criteria, provided data to schools before the end of the year, and to the public before the start of the next.
There are additional factors holding up this process that SED is not mentioning, like the fact that the department is currently operating with an outdated assessment framework, as well as inconsistent and unreliable reporting procedures.
Most states now use modern, high-tech assessments to address learning loss and ensure their students are ready for life after graduation. These assessments are often characterized as adaptive, diagnostic, and continuous: they adapt to students' ability level to get them through the test quickly, offer accessibility options for students with disabilities, and intentionally do not change over time, so results can be compared from year to year. They diagnose English language learning needs, potential learning differences, and grade-level knowledge gaps. And they are scored immediately and transparently, providing meaningful and timely data to teachers and parents.
New York is one of only three states using the assessment system known as Questar, along with Alabama and Tennessee (which recently chose to replace it after several years of missteps). Students in New York are still largely taking tests by filling in bubbles on Scantron sheets with #2 pencils and producing handwritten responses in paper booklets, while students who need adaptations don't have them. In some cases, tests are even being scored by the schools' own teachers.
The final presentation of statewide student data is not user-friendly for parents and the public, and the raw files often contain inconsistencies in formatting and final counts. Our local and statewide data collection methods provide no longitudinal measurement capabilities for meaningful analysis of what has worked and what has not over time (especially as we have spent more than any other state in the nation, while demonstrating little improvement over the past decade or more).
These factors make every process harder than it needs to be, especially those that would otherwise contribute to efficiency, accountability, and transparency in a functional statewide education system. They manifest in very real ways, like the delay of the statewide report on student achievement until December of the following year. And that's to say nothing of what those results are likely to show: that most students in New York have not mastered grade-level skills in reading and math, a pattern dating back a decade in this state.
At the very least, the delay prevents timely intervention for the students and schools that may be falling behind, and keeps the public from seeing how their school stacks up against the rest of the district, the state, and the country before making decisions for the following year.