The case for integrating teacher preparation into K–12 schools

By: Thomas Arnett

Mar 24, 2016

One well-known philosophy for improving teaching in K–12 schools is to define clear and rigorous standards for teacher licensure and teacher preparation programs—and then hold teachers and programs to those standards. Our research on the theory of modularity and interdependence, however, suggests that this approach is not likely to succeed until we develop a stronger understanding of and a greater consensus around what constitutes good teaching.

First, let’s consider what the theory tells us about standards.

Standards define the interfaces or connections between the various parts of a system. They “standardize” how different components or parts must fit and work together so that those components can be developed by independent providers. In computing, USB and Bluetooth standards specify the interfaces for how devices from different manufacturers connect and interact. In teacher development, the intent of CAEP standards, National Board standards, InTASC standards, and specific state licensure standards is to ensure that teachers from different preparation programs interface effectively with the school environments where they work to advance student learning.

According to the theory, standards must meet certain conditions to ensure smooth interoperability between different parts of a system or value network—such as between K–12 schools and teacher preparation programs. Clayton Christensen and his co-authors describe these conditions in their book, Seeing What’s Next. First, all parties involved must specify which aspects of the interface are important and which are not. Second, managers must be able to measure or verify that the parameters and exchanges that comprise this interface are correct and are what they need. Lastly, the interactions across the interface must be well understood and predictable.

For example, in the early days of the mainframe computer industry, IBM could not have existed as an independent manufacturer of mainframe computers because manufacturing was unpredictably interdependent with the design process for the operating systems, core memory, and logic circuitry of the mainframe system. At the outset, specifying how these discrete features should fit together was impossible because IBM’s understanding of the technology had not advanced to the point where verifiable and reliable standards could be specified. If the company had tried prematurely to create standards to subcontract each of the components, the product would have suffered.

Similarly, the problem with teaching standards is that they do not meet the conditions of specifiability, verifiability, and predictability. In an article on teacher credentialing, Rick Hess argues that teacher certification standards fall short of the specifiability requirement because the field has not yet reached consensus on what attributes make a good teacher or on the effective methods for developing those attributes. He writes:

The theory behind certifying or licensing public school teachers is that … aspiring teachers master a well-documented and broadly accepted body of knowledge and skills important to teaching. … The problem is that no comparable body of knowledge and skills exists in teaching. Debate rages over the merits of various pedagogical strategies, and even teacher educators and certification proponents have a hard time defining a clear set of concrete skills that makes for a good teacher.

Hess also argues that teacher certification standards fail to meet the verifiability requirement because the education field does not yet agree on how to judge good teaching. He writes, “Educational ‘experts’ themselves argue that teaching is so complex that it can be difficult to judge a good teacher outside of a specific classroom context. This makes it difficult, if not impossible, to determine abstractly which aspirants possess satisfactory ‘teaching skills.’”

Additionally, a research paper by the Brookings Institution demonstrates that teacher standards fail to meet the predictability requirement because performance varies widely among teachers who meet the standards. The study found that “paper qualifications,” such as meeting the teacher standards required by states’ teacher credentialing policies, “have little predictive power in identifying effective teachers.”

The challenge of creating teacher standards that meet the requirements of specifiability, verifiability, and predictability is further exacerbated by the fact that K–12 schools are not all the same. Schools with non-traditional instructional models—such as project-based learning or blended learning—likely need teachers to fill different roles than traditional schools do, and these different roles require different modes of teacher preparation.

So, if teacher standards fall short in defining a reliable interface between independent teacher preparation programs and U.S. K–12 schools, how can the field ensure that teacher preparation programs produce effective teachers?

Christensen explains in The Innovator’s Solution that when product performance—or school and teacher performance—is not good enough, there are complex, reciprocal, and unpredictable interdependencies in the system that keep the conditions of specifiability, verifiability, and predictability from being met. Christensen further points out that when there are complex interdependencies between the parts of a system, “a single organization’s boundaries must span those interfaces [because] [p]eople cannot efficiently resolve interdependent problems while working at arm’s length across an organizational boundary.”

For IBM to succeed in the manufacture and sale of mainframe computers, it had to integrate backward through all the parts of the value chain of its production that were not yet well understood and established. This meant that IBM had to design the logic circuitry, the application software, the memory systems, and so on because each of those systems had to be designed interdependently with the other systems. A change in one part of a memory system might necessitate a tweak in the application software, which could in turn cause a change in how all the pieces fit together. In short, IBM had to do everything in order to do anything.

In a similar way, the need for a single organization to span the unpredictable interfaces in a system explains some of the developments in teacher preparation. For example, a number of charter school networks—such as High Tech High, Success Academy, Aspire Public Schools, and the schools that founded the Relay Graduate School of Education—have taken teacher preparation into their own hands to ensure that the pipelines preparing their teachers align with the needs of those teachers and the schools where they work. It also explains why some university-based teacher preparation programs have set up lab schools.

The theory of interdependence and modularity makes clear that adherence to teacher standards likely will not produce better teachers until we have a better understanding of the attributes of good teachers and the means for determining whether teachers have those attributes. Relying prematurely on standards to guarantee quality before we know how to specify and measure quality puts the cart before the horse. Instead, improvements in teacher preparation will likely come from organizations that bring teacher preparation and K–12 school operation under the same roof. Organizations that do this can advance their understanding of how different approaches to teacher preparation affect teacher performance outcomes—and by so doing can develop reliable teacher preparation standards.

Thomas Arnett is a senior research fellow for the Clayton Christensen Institute. His work focuses on using the Theory of Disruptive Innovation to study innovative instructional models and their potential to scale student-centered learning in K–12 education. He also studies demand for innovative resources and practices across the K–12 education system using the Jobs to Be Done Theory.