Digital Learning

Why Digital Learning Testing & Evaluation Are Broken

Welcome to the fourth post in a series on why the digital learning media market is broken. (For an introduction and links to the rest of the posts, click here.) Today we’re talking about testing and evaluation: what, in theory, people and organizations would use to find out whether children are actually learning something (or learning more) by using digital tools.

Resource gaps: The folks with the money, from foundations to venture capitalists to the federal government, fail to invest enough to properly ascertain whether projects and products actually teach and engage children. Buyers, for their part, lack the resources to test the quality of digital learning products, which leaves underutilized the billions of dollars[i] school districts spend on computers and technology infrastructure. There’s a lot that could be gleaned from the data systems investments spurred by Race to the Top, but in most states that just isn’t happening yet in a way that’s useful.

Information gaps: Rigorous research evaluating the efficacy of digital learning products is scarce,[ii] and there is no consensus on which aspects of these tools are worth testing. Several magazines, websites, and industry organizations rate or review digital media, but their rubrics and methods vary drastically and are seldom based on the products’ learning value. Government agencies with the power to serve as arbiters of quality are only beginning to think about these problems in a serious way.

Infrastructure gaps: With so many players, the market for rating or certifying high-quality products is fragmented; no single system is universally accepted. Several organizations offer valuable services that test the learning value of specific products, but the costs are often too high for start-up companies (or even established firms), and most of those services are geared toward the traditional K-12 “ed tech” market. The Common Core Standards will help here: once you have a common standard for what is to be learned, it’s much easier to ask whether a given product helps children learn it. But the Standards leave much to be desired on complex skills like systems thinking, and they are extremely weak on important social-emotional skills like resilience, persistence, and empathy.

Misaligned incentives: Because the sector lacks a common quality standard, much less a trustmark like the Good Housekeeping Seal, many purchasing decisions are based on marketing muscle and sales relationships, not on what works. Businesses therefore have little incentive to pay to rigorously test whether their products are effective for learning.

It’s hard to write about this without mentioning that SCE and our partners are about to launch a major project aimed at addressing these issues, but this series is about what’s wrong. We’ll get to how to capitalize on what’s right soon enough.

Disagree? Find us on Twitter or reach out via our site. We’re a learning foundation and we believe in iterative knowledge and action. Everything we know (or think we know) we know because people like you have told us.


[i] Disrupting Class puts the figure at $60 billion over the last two decades, although depending on how you look at the numbers, it could be far larger. Much of this money came from foundations, especially during the 1990s, when “access” to technology was the watchword. It’s worth noting that many of the “let’s give kids computers” organizations have evolved with the times in positive ways (e.g., CFY then vs. now).

[ii] One example is this U.S. Department of Education/SRI meta-study, which notes the lack of solid research into online learning (although it does find that, on average, online and blended learning have advantages).