Keeping Pace 2011, Part 2: Does Online Learning Work?

“Just because online learning can work does not mean that online learning will work.” (Keeping Pace, 2011)

No truer words about eLearning have been spoken, but you wouldn’t know it from the variety of recent reports and blogs about online learning. To summarize them, online learning is either all bad, consisting of for-profit companies churning out students who are far below grade level, or all good, with eLearning transforming teaching and learning.

The truth, I’m afraid, is somewhere in the middle, and I’m somewhat disappointed that many eLearning advocates, those who believe in the promise and potential of online learning, are not more forthcoming about acknowledging some of the problems that do exist in virtual schools. Michael Barbour, in his excellent Virtual Meanderings post, Politics Of K-12 Online Learning?, framed it well: “…not only is K-12 online learning a political/partisan issue, but anyone who claims otherwise is either naive or intentionally trying to mislead you.” Keeping Pace 2011 tackles this issue, pointing out some of eLearning’s problems while providing steps to correct them. This post reviews their findings and recommendations regarding quality, accountability, and research.

Data doesn’t lie. (Although data can be both misleading and misinterpreted.) Given our country’s obsessive compulsion to judge quality based on test scores, all good journeys begin with data.

Keeping Pace 2011 (KP) honestly accepts that “…online learning can be beneficial, but there are quite a few poorly performing schools.” It cautions that data sets should not be compared to each other and worries that some blogs and articles compare virtual students’ proficiency levels to state averages (which I have done). However, KP warns that because many online schools work with at-risk students, we shouldn’t be surprised when student scores are below state averages. They believe a better measure would be to track individual growth over time. While I agree with the latter, the assumption that many virtual students are at-risk is anecdotal. I, and many others, assume that many full-time virtual students are at-risk, but we have no data to back up our assumption. And frankly, if we’re going to make brick-and-mortar schools suffer penalties when their students perform badly, particularly when many have similar levels of at-risk, poverty-stricken, or language-challenged students, I see no reason to excuse virtual schools from similar penalties.

Keeping Pace 2011 begins its data journey with the now famous Minnesota report, “Evaluation Report: K-12 Online Learning,” which was published last September. KP quotes the same data both sides have been writing about since September: 1) Students are less likely to finish the courses they start; 2) Full-time online students dropped out much more frequently than students in general; and 3) Full-time students had significantly lower proficiency rates. However, rather than disputing the results, as some advocates have done, the Evergreen Group instead lauds the report because “it looks not only at student proficiency, but also at student growth.”

This is how smart people advocate for online learning. You promote, you recognize flaws, and you suggest solutions.

KP rightly places the responsibility for virtual school quality on education providers, including schools, teachers, content providers, students, and families. It all comes down to accountability, though, and Keeping Pace outlines some of the problems and solutions regarding our current assessment system in its “Toward improved accountability systems” section. KP begins this section by recognizing that some current assessment systems, like required state tests, apply to both face-to-face and virtual charter schools, so you can compare the two and hold each accountable. However, KP points out three stumbling blocks that are specific to online (virtual) schools: 1) Online students have high mobility, so it’s nearly impossible to compare a virtual school’s progress from year to year; 2) Some school districts create virtual school programs that are part of a face-to-face school, and because the virtual school and its students don’t have a separate school ID, it’s impossible to measure virtual student proficiency; and 3) Online schools often serve at-risk students whose needs haven’t been met by traditional schools, and our current system isn’t set up to measure their growth on a year-by-year basis.

Clouding the issue is a rising number of virtual schools, public institutions receiving public education dollars, that are run by for-profit education management organizations (EMOs). In those cases, KP asks, who should be accountable?

What separates the Evergreen Group from other blogs and articles about online learning is that they not only shine a light on the problem, they also point out solutions.  They conclude their accountability section by recommending three changes to our assessment systems.

1.    Accountability should be based on outcomes, not on inputs.

Here they’ve adopted Michael Horn’s thesis that we must always look at the end results, a strong statement that results matter. However, I do appreciate their attention to “quality” inputs, which include teacher credentials, student-teacher ratios, course design, and (I would add) the quality of the course materials. Garbage in. Garbage out. Yes, a highly trained teacher can overcome bad instructional materials, but it makes more sense to begin with quality instructional materials.

2.    Data from online and blended schools must be disaggregated from overall district numbers.

Back in KP’s executive summary, they found that “Single district programs are the fastest growing segment of online and blended learning.” Many of these are small virtual schools embedded within a traditional school, so it’s impossible for anyone to determine how each group performs. I agree that this would be an important data point that states should require and track.

3.    Accountability must exist at the course level if students are choosing courses among multiple providers.

While it’s easy to hold traditional schools and their teachers accountable for student progress, we face a dilemma when considering online courses that are purchased. If a district hires an education management organization to provide the course and teacher, whose feet should be held to the fire when students don’t advance? KP suggests we need to consider tracking scores by individual course so we can identify weaknesses. They acknowledge that this is a confusing area, but suggest that we must strive to identify responsible entities.

Given a variety of data showing both virtual learning failures and stunning successes, Keeping Pace nicely frames the most important question: “Therefore, the challenge accepted by many researchers is to change the question from ‘does online learning work?’ to ‘under what conditions does online learning work?’” Online learning can work, and while all advocates want to celebrate the many stunning successes among virtual and blended programs, we must also acknowledge the many stunning failures among them, not by making excuses but by providing solutions. Keeping Pace 2011 does not disappoint here either.
