Susanna Loeb: Don’t Let Data Get in the Driver’s Seat
Stanford GSB professor Susanna Loeb warns against an over-reliance on research in education decision-making and argues for a more nuanced approach that blends data and key practitioner insights.
In recent years, the education innovation community has embraced the use of data in decision-making and program design — a welcome shift to academic researchers, no doubt, but also a trend that some think can be taken too far.
In a 2012 essay published by the Association for Education Finance and Policy, Professor Susanna Loeb, who teaches an education policy course at Stanford GSB, warns against an over-reliance on research in education decision-making, arguing for a more nuanced approach that blends empirical data with the keen insights of on-the-ground decision makers who can put the findings in context.
According to Loeb, “More information and more accurate information can improve decision making, but consequential education decisions are almost uniformly and inherently gray. That is, even with the best available information about the present and the past, decisions about the future rely on judgments as well as knowledge.”
Loeb goes on to describe the importance of who makes education decisions, writing that “improvement in the education system may depend partially on increasing the available information and refining frameworks for processing that information, but it also depends on wisely choosing the person who is making the decisions.” In Loeb’s opinion, “the optimal distribution of decision-making authority depends on who has access to the five features of good decision-making,” which she identifies as “knowledge of context, knowledge of options, an understanding of the world, time and other resources for processing information, and aligned goals.”
While Loeb was writing primarily about the use of academic research in program design, her observations would likely resonate with the many teachers and educational administrators who fear that data could replace their expertise and permit others less familiar with the school context to dictate what happens in the classroom.
Loeb’s point about putting data in the hands of the right decision-makers is also relevant to organizations conducting internal assessments. Consider two rapidly scaling, highly innovative education organizations — Bridge International Academies and Khan Academy — each of which aims to provide low-cost, global access to high-quality education.
Bridge International Academies is a mission-driven company that designs its “School in a Box” curriculum for low-income children in the developing world. The enterprise is currently operating only in Kenya but has plans to expand to a dozen countries over the next ten years. To scale its model cost-effectively, Bridge International relies on a highly standardized, scripted curriculum, developed by U.S.-based educators, which can be delivered by individuals who are not trained teachers.
According to a 2010 HBS Case study, “Regular high-quality assessments standardized across Bridge International schools [are] delivered on a monthly basis to provide checkpoints on progress against specific learning objectives for students, teachers, school management, parents, and Bridge International itself.” These data contribute to the ongoing development of the Bridge International model, including both curriculum development and teacher training and support.
On a recent study trip to Kenya, where they observed the School in a Box firsthand, Stanford GSB students expressed concern that the data collected through the assessments were used not by local educators to make their own creatively inspired changes, but by outsiders who lacked a full understanding of community contexts. While such a standardized model may be necessary at first in places that lack properly trained teachers and educational infrastructure, over the long run — particularly as Bridge International expands into new geographies — cultural context and the feedback of on-the-ground teachers will be important inputs to curriculum development.
A contrasting case is Khan Academy, which aims to “provide a free world-class education for anyone, anywhere.” Khan offers teachers the opportunity to supplement their curriculum with an array of online lessons customized to student needs. Teachers can then track the progress of their students, both at the class level and at the individual level, to see where they should focus their efforts, which students are struggling with certain topics, and which students are ready for new challenges.
Khan Academy’s model puts tools and data in the hands of qualified teachers, enabling them to drive their classrooms, adjust their curricula, and personalize their teaching. Many would agree that under this model, decision-making power rests with the appropriate individuals, although it has not been studied whether, in the absence of qualified teachers, the platform’s automated feedback mechanism does a better job than the Bridge International model.
Each year at Stanford GSB, dozens of students enroll in the joint MA-MBA program, a unique partnership with Stanford Graduate School of Education. Many of these students aspire to start the next Khan Academy or Bridge International. Under the guidance of Professor Loeb and other research faculty, they will develop an appreciation not only for the importance of good data — but also for the pivotal role of the decision-makers in whose hands the data ultimately rests.