Assessment in K–12 Schools This Fall: What Education Leaders Should Expect

It’s difficult to talk about the state of assessment for back-to-school season 2021 when the picture of the next school year still seems so unclear. Although states were given waivers for their assessment requirements in spring 2020, the U.S. Department of Education is calling for state testing to resume in spring 2021. To respond to the challenges states now face, the federal government is allowing greater flexibility in how and when testing takes place.

The absence of testing last spring, and the specter of a perilous testing season in spring 2021, has left many educators wondering how to address gaps in their understanding of student progress. The challenge is understanding the role of, and need for, formal testing, and what those assessment systems should look like come this fall.

When the school year resumes, regardless of where we are, the disruptions of the past year will continue to affect instruction in different ways. How each state implements its testing this spring will also have implications for assessment and instructional decisions this fall. Given this uncertainty, here is a set of key questions school administrators should ask as they begin their preparations.

How will we assess eligibility for special programs?

With the possibility of modified testing conditions and the potential for significant numbers of missing students this spring, districts that usually use state assessment data to determine eligibility for special programs, such as intervention, may need to rely instead on alternative sources: for instance, teacher recommendations, course grades, and locally administered interim benchmark assessments like HMH Growth Measure with Waggle.
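
As a rough sketch of how a district might combine these alternative sources into a screening rule, the Python snippet below flags a student for review when at least two indicators raise concern. The field names, cutoffs, and two-of-three rule are entirely hypothetical, not HMH or district policy:

```python
# Hypothetical multi-measure screen for intervention eligibility.
# Field names, cutoffs, and the two-of-three rule are illustrative only.

def flag_for_intervention(student: dict) -> bool:
    """Flag a student for review when at least two indicators raise concern."""
    signals = 0
    if student.get("teacher_recommendation", False):   # teacher flagged the student
        signals += 1
    if student.get("course_grade_avg", 100) < 70:      # low course grades
        signals += 1
    if student.get("interim_percentile", 100) < 25:    # low interim benchmark score
        signals += 1
    return signals >= 2

# Low grades plus a low interim benchmark score triggers the flag.
print(flag_for_intervention(
    {"teacher_recommendation": False, "course_grade_avg": 64, "interim_percentile": 18}
))  # True
```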

How will programs that relied on student performance data be evaluated?

As with questions of eligibility, users of existing educational programs, tools, or curricula such as READ 180 or Into Reading, who previously relied strictly on state assessment data to gauge efficacy, may need to identify additional key performance indicators. These include, but are not limited to, grades, interim assessments, Social-Emotional Learning (SEL) constructs, and attendance. The specific indicators chosen may vary from district to district, but regardless, they should be measurable, meaningful, and valid for the intended outcomes of the program.
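
As a minimal sketch of what such a comparison could look like, the snippet below contrasts program participants with a comparison group on a few local KPIs. The column names and values are invented; a real evaluation would need careful group matching and validity checks for each indicator:

```python
# Hypothetical KPI comparison for gauging program efficacy without state
# test data. Column names and values are invented; a real evaluation would
# need careful group matching and validity checks for each indicator.
import pandas as pd

df = pd.DataFrame({
    "group":        ["program"] * 3 + ["comparison"] * 3,
    "grade_avg":    [78, 82, 75, 74, 70, 72],
    "attendance":   [0.95, 0.92, 0.97, 0.88, 0.91, 0.85],
    "interim_gain": [12, 9, 15, 7, 5, 8],
})
print(df.groupby("group")[["grade_avg", "attendance", "interim_gain"]].mean())
```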

How will researchers conduct studies to evaluate the impact of COVID-19?

The true impact of COVID-19 has yet to be fully realized. Although there have been some attempts to quantify the effects on student learning, there’s still a lot we don’t know. Early evidence points to students who are traditionally at risk feeling the greatest negative effects. Unfortunately, it’s still unclear how much ground they are losing, because they make up a disproportionate share of the students who aren’t taking assessments. We need to understand where students are in their educational achievement to recognize the impacts of COVID and identify solutions.

How will a lack of test scores from the current and previous school years affect growth model calculations this fall?

Testing this spring will provide some vital information for educators, but addressing the loss of a single year still poses challenges. Some growth models, such as the popular Student Growth Percentiles (SGPs), are better equipped than others to estimate growth without a prior spring test score. Even so, in the best of circumstances, a missing test score from the prior year can wreak havoc on the estimation of student growth.

Damian Betebenner and Adam Van Iwaarden, co-authors of the SGP package, discussed a number of considerations for educators measuring student growth post-COVID. Importantly, when it comes to educational equity, measuring growth can help identify the impacts of COVID on student learning and ultimately narrow the widening achievement gap. Interim benchmark assessments tend to excel at measuring growth, given that they are administered multiple times during the school year.
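
To make the idea behind an SGP concrete, here is a toy sketch: a student’s growth percentile is the conditional percentile of their current score among peers with similar prior scores. The data are simulated and the percentile steps are coarse; the actual SGP package fits more sophisticated spline-based quantile regressions over multiple prior years:

```python
# Toy illustration of a Student Growth Percentile: the conditional percentile
# of a student's current score among peers with similar prior scores. Data
# are simulated; the real SGP package fits spline-based quantile regressions
# over multiple prior years and reports percentiles from 1 to 99.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"prior": rng.normal(500, 50, 2000)})
df["current"] = df["prior"] + rng.normal(20, 30, 2000)  # simulated growth

# Fit conditional quantiles of the current score given the prior score
# (coarse 5-point steps keep this toy example fast).
quantiles = np.round(np.arange(0.05, 1.00, 0.05), 2)
fits = {q: smf.quantreg("current ~ prior", df).fit(q=q) for q in quantiles}

def sgp(prior: float, current: float) -> int:
    """Highest fitted percentile at or below the student's current score."""
    new = pd.DataFrame({"prior": [prior]})
    met = [q for q, fit in fits.items() if current >= float(fit.predict(new)[0])]
    return round(100 * max(met)) if met else 1

print(sgp(prior=500, current=540))  # roughly 70-75 for this simulated cohort
```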

How will multi-year growth models account for a missing year of data?

A strength of a consistent yearly assessment model is the ability to compare student performance from the previous year with the current year to gauge whether schools and districts achieved their stated goals or at least moved in the right direction. When multiple years of longitudinal data exist, the model becomes even stronger and more useful for evaluating growth and learning-gap trends.

When a single year is missing from this trend data, methodologies exist to provide estimates via statistical modeling. However, when multiple years of data are missing, which could be the case if state testing results are not comparable to previous years, continuing to evaluate these trends becomes much more challenging.
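
As a deliberately simplified illustration of the single-missing-year case, linear interpolation can stand in for more defensible model-based estimates. The scores and years below are invented, and with multiple missing years any such reconstruction becomes far less reliable:

```python
# Illustrative only: filling a single missing year in a cohort trend with
# linear interpolation. Scores and years are invented; real analyses would
# use more defensible statistical models, and multiple missing years make
# any reconstruction far less reliable.
import pandas as pd

trend = pd.Series(
    [241.0, 244.0, None, 249.0, 252.0],   # None marks the untested year
    index=[2017, 2018, 2019, 2020, 2021],
    name="mean_scale_score",
)
print(trend.interpolate(method="linear"))  # fills 2019 with 246.5
```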

What alternative testing formats can educators use to offset the lack of state accountability testing?

District leaders regularly turn to state tests as a critical, though flawed, way to understand their schools’ strengths and weaknesses. But for the first time in many years, state summative assessment data was unavailable or, at best, unreliable, and districts turned to interim benchmark assessments for the data they needed to inform their decision-making. Interim benchmark assessments have been on the market for some time; the newly developed HMH Growth Measure is one example. They can provide achievement data at multiple points throughout the year, which allows schools and districts not only to track year-to-year gains but, more importantly, to track gains within the school year.
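
As a minimal sketch of within-year tracking, the snippet below computes fall-to-spring gains from three hypothetical administration windows. The student IDs, window names, and scores are invented, not an actual HMH Growth Measure export:

```python
# Sketch of within-year gain tracking across three interim administrations.
# Student IDs, windows, and scores are hypothetical, not an actual HMH
# Growth Measure export.
import pandas as pd

scores = pd.DataFrame({
    "student": ["A", "A", "A", "B", "B", "B"],
    "window":  ["fall", "winter", "spring"] * 2,
    "score":   [210, 218, 227, 195, 199, 208],
})
wide = scores.pivot(index="student", columns="window", values="score")
wide["fall_to_spring_gain"] = wide["spring"] - wide["fall"]
print(wide[["fall", "winter", "spring", "fall_to_spring_gain"]])
```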

Despite their flexibility, these assessments are limited when it comes to the rigor and accuracy of their results, and state assessments offer certain advantages. State assessments focus on an individual state’s instructional content standards, whereas most popular interim benchmark assessments offer only varying degrees of content-standard coverage. State assessments also tend to have more comprehensive testing accommodations available for students.

However, interim benchmark assessments have their advantages as well. Because of their adaptive configurations, they can be tailored to a given state’s specific standards, and if this alignment is done with a high degree of specificity, it can improve the validity of the results. These assessments can also give instructional time back to teachers: although they are typically administered up to three times within a school year, they generally interfere less with classroom instruction than traditional state accountability assessments. As long as the content these assessments administer is close to that of the state assessment, they can play a key role in filling the void left by missing state accountability data.

Are test scores from in-class testing comparable to those from remote testing?

Districts are currently using a mix of instructional models to deliver lessons to their students. Variability inherently exists within a district or school when some students opt to attend in person and others opt to learn away from school. A similar test-taking environment, however, is an important factor in delivering assessments.

There’s a certain amount of standardization that can be accomplished at home, but it has its limits and can be difficult to verify. Allowing some students to test online and others in person can create serious threats to the validity of these scores. It’s important to account for this when comparing student performance across different modes of administration.
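
One simple starting point, sketched below with simulated data, is to compare the score distributions of the two groups. A real comparability study would also control for prior achievement and student characteristics rather than rely on a raw gap:

```python
# Simulated check for a mode effect: compare in-person vs. remote score
# distributions. A real comparability study would control for prior
# achievement and student characteristics, not just test the raw gap.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
in_person = rng.normal(500, 50, 400)  # simulated in-person scores
remote = rng.normal(492, 55, 400)     # simulated remote scores

t, p = stats.ttest_ind(in_person, remote, equal_var=False)  # Welch's t-test
pooled_sd = np.sqrt((in_person.var(ddof=1) + remote.var(ddof=1)) / 2)
d = (in_person.mean() - remote.mean()) / pooled_sd  # standardized gap (Cohen's d)
print(f"Welch t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```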

Should state testing proceed without the high stakes embedded in ESSA legislation?

One feature that raises the stakes of state assessments is their connection to the Every Student Succeeds Act (ESSA) and the availability of federal funds. Perhaps we should consider removing this element from the equation and freeing districts to focus solely on using test scores to monitor their students’ progress toward educational goals. This appears to be happening in some state education departments, which are evaluating having their vendors develop through-course assessments, such as adaptive interim benchmark assessments, to serve as revised accountability programs.

The national educational accountability landscape will surely change, and it will be interesting to see how flexible the USDOE will be in approving ESSA accountability plans. District leaders may focus instead on growth within the school year (as opposed to across years) and, in effect, place even more emphasis on student growth as part of the accountability measure. Growth based on through-course assessments can be measured at a more direct, individual level. Moreover, teacher accountability will likely shift toward student growth measures rather than state assessments.

Some Final Thoughts

Regardless of the testing outlook, the need for assessment has not changed. Assessment data is still needed to drive instructional decision-making and accelerate learning, and that data is as valuable as ever, considering we don’t yet know the full effects of this pandemic on our educational system. We must continue to adapt as the educational landscape evolves to meet the needs of all our students.

***

Learning didn’t stop during the pandemic. It just took new forms. This article is part of a series of resources focused on COVID learning recovery and designed to help you plan now for summer school and next year.

Get a free guide to choosing the right assessments for your district.
