A Look into Program Evaluation | Knowledge Development Tool

George Noell is a professor at Louisiana State University and a nationally sought-after expert on educator evaluation systems. His interest in “schools as a malleable context where virtually all children participate” grew out of his work as a service provider in a psychiatric hospital for children with severe mental illness. George realized that children’s life outcomes could be improved by focusing on the interconnectedness of instruction, mental health, and behavior. Working with kids in intensive settings, he discovered early on that “so many of the problems we deal with could possibly have been better addressed by primary prevention rather than interventions after a kid is already in trouble.”

Through supporting teacher preparation redesign and evaluating those redesign efforts, George learned how little data were being collected on student outcomes for the purposes of accrediting and evaluating teacher preparation programs. He began working with key stakeholders in Louisiana to develop a statewide value-added model, which led other states and researchers to examine similar issues. His visibility in the state and in Washington, D.C., led state leaders to seek his assistance with their work on teacher preparation and teacher evaluation. The CEEDAR Center team sought him out to author a literature synthesis on educator evaluation with Mary Brownell (director of the CEEDAR Center), Heather Buzick (Educational Testing Service), and Nate Jones (Boston University). This paper is now available for download.

George acknowledges that educator evaluation is context specific and that implementation brings both great opportunities and challenges. He believes “the big opportunity is really pretty fundamental in the sense that it has created this moment of focus in which hopefully substantial stakeholders are coming away with the observation that it is incumbent on teacher preparers, educational leaders, and educator associations to develop evidence of the effective teachers that is clear and dispositive and affirmative.” The broad-based challenges include determining how to do this (a) in different grades, (b) in different subjects, and (c) with kids with different needs. George added that policy is moving the field in the right direction, pushing all of us to focus on student outcomes and educator effectiveness. The construction and implementation, however, are complex, contextual, and local. To illustrate the complexity, he offered the following examples: “Some states are at the place where they have some policy will to move forward with a value-added assessment looking at their teacher preparation, but they have fundamental data and infrastructure problems that they are working on solving. Other states have policy in place and are in the process of developing both analytic models and the policy construct around that, and they have very specific challenges such as what to do with the data once we have it? And what is the right policy construction around that?”

In terms of stakeholder engagement, George regularly sees involvement from the governor and legislature, state boards of education, educator associations, and school board associations. He asserted, “The process is always fundamentally more sound if you engage parent advocacy groups and get them sitting at the table. They are uniquely vocal about the voice of the children being heard. They are invaluable and far too often left out of the conversation.” George also suggests that issues of teacher preparation must involve higher education leaders. He acknowledged the trade-off between assembling as broad a coalition as possible—to be informed by the views of as many as is practical—versus having the entire process collapse under the weight of a group that is too large, or having such a large group that individuals do not feel heard.

Finally, George provided the following three primary considerations gleaned from the literature synthesis on measuring teacher effectiveness written for the CEEDAR Center:

  • There is tension between having made significant, rapid progress in recent years and still having a long way to go with evaluation and accountability.
  • There is a clear need for alternative and thoughtful assessments to measure the effectiveness of educators of students with special needs whose needs and gains are not adequately captured by widely available standardized measures.
  • We have the possibility of capturing multiple types of indicators, which will be informative about different dimensions of the problem. Our ability to effectively act is going to be so strongly influenced by the kinds of indicators (e.g., student learning outcomes, practice data) that we have available from the field.

The full synthesis offers a deeper understanding of the issues and needs related to educator evaluation and how to begin applying this information in your context.

Click here to download Dr. Noell’s Literature Synthesis on Program Evaluation


This website was produced under U.S. Department of Education, Office of Special Education Programs, Award No. H325A120003. David Guardino serves as the project officer. The views expressed herein do not necessarily represent the positions or policies of the U.S. Department of Education. No official endorsement by the U.S. Department of Education of any product, commodity, service, or enterprise mentioned in this website is intended or should be inferred.