Compliance or Quality?

There’s been so much discussion over the past several years about how to define and measure classroom quality. Most would agree that both the environment and human interactions factor into creating quality programming. It seems assessment companies and researchers are competing for market share with these much-needed tools. But rushing tools to publication does not advance quality. A recent QRIS-related statistical analysis of nationwide program data actually challenges the metrics and validity of some instruments our field has used for over 20 years, calling into question whether these tools have been measuring quality (defined as whatever brings about the best child outcomes) at all.

Our country is spending millions of dollars each year on “advancing quality.” And it’s true: the only way we can get a handle on whether quality exists and is growing is to measure it. I once heard Dr. Robert Pianta, lead investigator and author of the Classroom Assessment Scoring System (CLASS), say in an interview, “What gets measured, gets done.” So, do assessments measure quality, or drive it? Maybe both, but I’m afraid that in some cases the answer may be neither!

I think that many programs and even some state leaders have forgotten the WHY of the assessment and monitoring processes: TO CREATE AND EXTEND QUALITY. States that are not using an assessment instrument backed by substantial validity research are doing their own children a disservice. State leaders should do their research and insist on high levels of assessment validity! Furthermore, in our efforts to “get everything covered” and “check off all the boxes” for the sake of meeting rigid monitoring timelines and following strict assessment protocols, the entire purpose behind the process can get lost.

We’ve all worked with teachers and supervisors who are stressed out and overwhelmed by these assessments, which, in too many programs, are only discussed and implemented when it’s time for a monitoring visit. I think that many teachers view tools like ECERS and CLASS as an extreme burden, something to “get through.” We all know of teachers and programs who can “turn it on” for the evaluation and go right back to shady practices the next day. Why is this? I think it’s because the connection between the “what” and the “why” of each checked indicator has been forgotten (or was never evident in the first place). The human element of quality ratings cannot be an afterthought, but it often is. Somehow, our field needs to strike a balance between the need for data to drive policy and funding decisions and the need for programs and teachers to earn their quality ratings authentically and to receive their feedback in humane and useful ways.

Program leaders, state leaders, higher education faculty, and coaches/consultants can all play a role in addressing this crazy, confusing system by taking the time to consider teachers’ perspectives. Do teachers understand WHY they are being observed? WHY they only received a 4 instead of a 7? Has someone been there to explain why their block area was not deemed “ample,” or how it could be that “higher-order thinking was not observed”? If program leaders don’t have the answers and the state’s report doesn’t spell it out, is there a process for getting definitive answers to these very legitimate questions? If not, teachers are being disrespected and will not learn from these ratings. I would maintain that this is probably one of the biggest reasons they stop caring about authentically striving for quality.

There really are answers; we can all take some responsibility for striking a better balance between compliance and true quality. Here are a few ideas to get us started:

1. Program Directors: Designate an Instructional Leader who can put teachers’ learning needs first. Teachers need (and deserve) time and support to learn and understand the tools, items, and indicators they’re being assessed on. Provide regular, job-embedded opportunities to discuss them.
2. Colleges: Integrate assessment indicators into coursework and practicums. Place a heavy focus on integrating best practices and child development while students are still learning about both (the WHY of assessments). Finally, give teachers authentic experiences with these assessments, without the high stakes, and a chance to deconstruct their meaning.
3. State leaders: When creating standards, guidelines, credentials, professional development, and so on, help teachers out by connecting the assessments commonly required by the state or Head Start to these initiatives in as many explicit ways as possible (e.g., a CCR&R training on Creative Curriculum could include strong connections to ECERS and CLASS indicators, helping teachers understand how these tools provide insight into lesson planning).

If you would like to continue this dialogue, please enter a comment below, and send your contact information. Let’s tame this beast!