Using Pilots to Inform Purchasing Decisions

October 15, 2014

In the not-so-distant past, decisions about textbooks dominated education procurement discussions. Schools would create committees of educators, and after reviewing the merits of a half dozen textbooks, they’d land on something the grade or subject team would use for the next five years. Today’s procurement landscape has shifted dramatically, and decisions about what’s used in our classrooms are far more complex; educators may have access to hundreds of software products, and licenses are typically renewed every year.

So how do today’s educators decide what products to buy in such a massive marketplace? Many resources exist to help educators narrow down the list of possible choices (Common Sense Education is a great place to start), and long-term studies like those reviewed by the What Works Clearinghouse are the gold standard in education research. And while these resources are useful for many educators, in New York City we’re leveraging our role at the district level to develop something in between product reviews and randomized controlled trials. Our goal is to create reliable measures of leading indicators and provide the results to school leaders so they can make more informed decisions about what they’re buying.

The iZone
The iZone serves as an incubation lab for the New York City school system. We started this specific strand of work, incubating new approaches to pilots that support educators' purchasing decisions, two years ago when we launched the Gap App Challenge and received nearly 200 submissions from app and game developers seeking to improve middle school math learning and engagement. We subsequently matched 13 companies with schools for a yearlong prototyping experience. Based on what we learned from the Gap App, we launched the Short-Cycle Evaluation Challenge (SCEC) with a call for cutting-edge personalized learning programs and innovative NYC teachers. The SCEC is pairing 12 tools with schools over the 2014-15 school year based on the defined needs of the schools’ teacher teams, and then rapidly evaluating who each product works for, when, and under what circumstances.

Short-cycle evaluations are semester-long studies that assess the effects of an edtech program in terms of implementation quality, user feedback, and preliminary indications of learning outcomes. The SCEC is an opportunity for edtech developers to receive product evaluations from an independent research team and improve their programs by working with New York City educators. For NYC educators, it’s an opportunity to access cutting-edge technology and develop professional skills. After launching both the Gap App and the SCEC, we’ve learned a few lessons we hope to share with other districts.

Five considerations for districts and administrators using pilots to inform purchasing decisions.

1. Put in place enabling conditions.
Before you begin a pilot, a set of conditions should be in place to enable success. Most important, teachers should have prior experience with blended or personalized learning and with implementing edtech tools. Participating educators should also have sufficient access to technology, including devices and bandwidth. The school leader and the school’s culture should support teachers and give them the space to try new things. By making sure these conditions exist, you will better understand whether and how the product works under ideal conditions.

Keep in mind that this also means the pilot conditions will have to be replicated to get the same results in the future. This may require additional support for educators who aren't early adopters or who have limited experience implementing edtech tools.

2. Encourage like-minded teachers to form teams.
Research suggests that providing teachers with more time to collaborate is one of the best ways to support them, and pilots similarly benefit when educators collaborate. We’ve seen the most effective collaborations happen when teachers choose their own teams; they know how to find like-minded educators in their schools. Teams also need time, space, and compensation to do the work. Providing these supports signals the importance of their role in the pilot process.

3. Start with the problem.
Often, we see schools and districts purchase products based on a great sales pitch, the way the product looks, or word of mouth. Instead, start by having educators define a problem of practice they think technology might support, and then seek products to solve that problem. The problems they define might sound something like, "It’s hard to differentiate content and pathways for all learners," "We struggle to give parents easy ways to understand their child’s progress," or "It’s not easy to organize and analyze the data we capture." Starting with the problem makes the pilot more relevant for the educators, increases buy-in, and focuses the team on a specific challenge or set of challenges.

4. Emphasize communication and empathy.
The best product developers seek to gain insight from many different end users. But open communication channels between developers and educators are hard to establish when the typical relationship between these two stakeholders is focused on selling products. To put teams on equal footing and to catalyze a relationship built on common goals, it’s important to build empathy and create a shared vocabulary of innovation. We often do this through pilot-wide workshops that include improv games, user-centered design activities, and plenty of work time with access to experts and mentors. This can put in motion an important shift in mindset.

5. Determine measures of success.
First, think about all the stakeholders who care about the outcomes of your pilot. This may include your school or district administrators, parents, teachers, and the broader edtech community. Then, think about what types of data those stakeholders need. For example, what will it take to convince the district or administrators to renew licenses or expand to the rest of the school or district? It’s also critical to document the underlying conditions, so schools that want to try the product later understand what must be in place before attempting to repeat your successes. Next, identify how you’re going to measure those data points. In NYC, we’re working with a group of researchers from Johns Hopkins University to develop leading indicators of product efficacy, looking specifically at measures of engagement, Net Promoter Score, and student learning. If you don’t have a university partner, other groups like Panorama Ed are developing open-source, validated measures of non-cognitive skills that may prove useful for edtech pilots.
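
To make one of these indicators concrete: a Net Promoter Score comes from asking users how likely they are, on a 0-10 scale, to recommend a product, then subtracting the percentage of detractors (scores 0-6) from the percentage of promoters (scores 9-10). Here is a minimal sketch in Python; the survey responses are invented for illustration and are not drawn from our pilots or the Johns Hopkins instruments:

    # Net Promoter Score (NPS) from 0-10 "how likely are you to
    # recommend this product?" survey responses.
    # NPS = (% promoters) - (% detractors); promoters score 9-10,
    # detractors score 0-6. The result ranges from -100 to +100.
    def net_promoter_score(ratings):
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100.0 * (promoters - detractors) / len(ratings)

    # Hypothetical teacher responses from a semester-long pilot:
    # 5 promoters, 2 detractors out of 10 -> NPS of +30.
    ratings = [10, 9, 8, 7, 9, 4, 10, 6, 8, 9]
    print("NPS: {:+.0f}".format(net_promoter_score(ratings)))  # NPS: +30

A score like this is only one leading indicator; it tells you whether users would recommend the tool, not whether students learned more, which is why we pair it with engagement and learning measures.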

Keep in mind that every pilot is going to be different. Even if you follow these five suggestions, be prepared to iterate, adapt, reflect, and refine the process.