Sunday, May 03, 2015

The Study, and Other Stuff

There are three separate threads in Siemens's response to my last post, all of which are fascinating:

  • The thread concerning whether or not the study he published was bad,
  • The thread examining the question of whether universities can be a valuable force for social equity, and
  • My own experiences of the university system.
Though the latter two threads are of endless interest, I'd really rather focus only on the first for today.

Whether or not the study he published was bad

Siemens writes, "Stephen expands on his primary concerns which are about educational research in general." Let me be clear: I was making this statement about this study in particular. That's why I cited work from the study itself. Yes, I believe that educational research in general is pretty poor. But my focus was on this particular example.

I think he agrees with me, in part:

Educational research is often poorly done. Research in social systems is difficult to reduce to a set of variables and relationships between those variables. Where we have large amounts of data, learning analytics can provide insight, but often require greater contextual and qualitative data. ... The US Department of Education has a clear articulation of what they will count as evidence for grants. It’s a bit depressing, actually, a utopia for RCTs (Randomized Controlled Trials).

And he says:
Stephen then makes an important point and one that needs to be considered that the meta-studies that we used are “hopelessly biased in favour of the traditional model of education as practiced in the classrooms where the original studies took place.” This is a significant challenge. How do we prepare for digital universities when we are largely duplicating classrooms? Where is the actual innovation? (I’d argue much of it can be found in things like cMOOCs and other technologies that we address in chapter 5 of the report). Jon Dron largely agrees with Stephen and suggests that a core problem exists in the report in that it is a “view from the inside, not from above.”
So, from this, it appears that he agrees with my criticisms.

He nonetheless persists with his defense, focusing on the fifth paper in the study, first suggesting I don't find a lot to disagree with about it, and second, suggesting it is a vehicle for a conversation between two versions of myself. He also finds fault with some other criticisms:
The names listed were advisors on the MOOC Research Initiative – i.e. they provided comments and feedback on the timelines and methods. They didn’t select the papers. The actual peer review process included a much broader list, some from within the academy and some from the outside. 

Who selected the review committee? Who are the people 'from the outside' that were on it? Here's the best we have on the review process itself. Here are the project reports. All of this was set in motion by the committee I named in my previous post. If there's another list of names of people who were responsible for the outcome, they should be named. Otherwise, the people named are the people responsible. You can't name a list of names and then say it wasn't them.

In his defense of the fifth paper (he seems not to defend the first four studies, the 'histories', at all) he also writes:
In my previous post, I stated that we didn’t add to citations. We analyzed those that were listed in the papers that others submitted to MRI. Our analysis indicated that popular media influenced the MOOC conversation and the citations used by those who submitted to the grant.
I recognize this. What I am saying is that it seems to me that the 28 winners of a major education research grant competition would have demonstrated more depth of understanding than is apparent from the summary study that resulted. Maybe I should not have expected more from what was essentially an automated and quantitative analysis of the papers (because there are individually some bright spots). But when we look at the citations - which is essentially what we were provided - the results overall are not reassuring.

That's it for Siemens's defense of the study. The core of my criticism, which is addressed mostly at the first four chapters, is not addressed. Let me reiterate those criticisms here:
  • They all have very small sample sizes, usually less than 50 people, with a maximum size less than 200 people
  • The people studied are exclusively university students enrolled in a traditional university course
  • The method being studied is almost exclusively the lecture method
  • The outcomes are assessed almost exclusively in the form of test results
  • Although many are 'controlled' studies, most are not actually controlled for "potential confounders"
  • All these criticisms apply if you think this is the appropriate sort of study to measure educational effectiveness, which I do not.
I would now like to add that my criticisms are reinforced by two additional authors.

Although Jon Dron says "as such reports go, I think it is a good one," he writes:

For the most part, this report is a review of the history and current state of online/distance/blended learning in formal education. This is in keeping with the title, but not with the ultimate thrust of at least a few of the findings. That does rather stifle the potential for really getting under the skin of the problem. It's a view from the inside, not from above. 

And additionally, George Veletsianos writes,

One of Downes' criticisms is the following: “the studies are conducted by people without a background in education.” This finding lends some support to his claim, though a lot of the research on MOOCs is from people affiliated with education, but to support that claim further one could examine the content of these papers and identify whether an educational theory is guiding their investigations.

I don't think it matters whether the investigation is informed by an educational theory - all I care about is that the studies contribute in a useful, relevant and credible way to the field.

Finally, Siemens says, "The appeal to evidence is to essentially state that opinions alone are not sufficient."

It can be allowed that Siemens's use of "we" in the Chronicle article "is about the academy’s embrace of MOOCs." But as I pointed out, there's no mistaking his suggestion that the people outside the academy, the Alt-ac people, do not rely on evidence. That is the implication when he says, "Another approach, and one that I see as complimentary and not competitive, is to emphasize research and evidence."

I have never suggested that opinion alone is sufficient, and never would. But he has to stop characterizing the alternatives as not being evidence-based, because I believe the opposite. I believe that the controlled trials offered in the study misrepresent what little evidence they provide, and I believe that the alternative approaches offer substantially more evidence than is allowed.

Siemens says, "While Stephen says our evidence is poor, he doesn’t provide what he feels is better evidence." I did once author a Guide to the Logical Fallacies, where I discuss the statistical problems. I've also talked about the same issue of evidence as it related to public policy. I've talked about research methodologies a number of times. And just the other day, I linked to a study I felt did pass muster (and indeed, over the years, I've linked to lots of things that I felt met the appropriate standards of research and evidence). And the body of my work, grounded in practical application and observation, stands as an example of what I feel constitutes "better evidence."

The Other Stuff

It's late and I don't want to linger on the off-topic stuff. But I do want to address a few things.

It's true that I am not a fan of universities and do not feel they support our common objective of "an equitable society with opportunities for all individuals to make the lives that they want without institutions (and faculty in this case) blocking the realization of those dreams."

This does not mean that I want to see them eliminated. And (contrary to Sebastian Thrun) I expect their numbers will multiply exponentially in the future.

But they need to be reformed, and they need to be brought around to the idea that social and economic equity are important. Because as it stands, they are one of the largest bastions in society standing against that idea. Here are a few of the ways:

- universities foster the perpetuation of a social elite, especially through exclusive institutions (Harvard, Yale, etc), legacy admissions, and perpetuation of a private social society consisting pretty much only of the one-percent

- universities bleed those outside the upper classes by consistently responding to society's demand for access with higher and higher tuition fees

- universities have fostered the creation of a low-paid academic underclass in order to support the students that pay these higher fees, and resist any suggestion that they should be fairly compensated, and actively resist unionization

- universities and professors continue to contribute to mechanisms which keep academic research behind expensive paywalls - indeed, they are so indifferent to these costs that they must be required by mandates and laws to open access to their research

- private universities operate tax-free, raise substantial endowment funds (sometimes in the billions), yet always plead poverty, and are typically the prime recipient of funding provided by governments and foundations attempting to support projects leading to the betterment of social and economic conditions

- they then waste that money, and a lot of other money, padding their own resumes and producing research such as the body of work I find myself criticizing today

Yes, perhaps universities could act as a force that promotes social and economic equity. They certainly have the talent and resources. But they don't, they don't want to, and they resist any attempt to make them do it.

It is true that I was badly treated by my PhD committee. But this is not a case of "today affirming that the Stephen in front of the phd committee made the right decision – that there are multiple paths to research, that institutions can be circumvented and that individuals, in a networked age, have control and autonomy." Why not? A couple of reasons:

On the idea that individuals, in a networked age, (should) have control and autonomy: I have always believed that. I believed that long before I ever stood before a PhD committee.

On the idea that "the Stephen that today has exceeded the impact of members on that committee through blogging, his newsletter, presentations, and software writing." This may or may not be true. But I have never believed that I have been more influential because I have worked outside of academia.

I have been influential despite being outside academia. I have been influential despite not having a professor's wages, the support of grad students, a year off every seven, tenure, funding from foundations, grants and agencies, book contracts, and the rest. No university in the world would ever hire me, because they consider me unqualified. I don't regard any of this really as an upside.

Because that's what academia does. It wields huge sums of money and support to achieve certain social and economic outcomes. I just wish it were wielding this power for good, rather than indifference. But I don't think it ever will.

Saturday, May 02, 2015

Research and Evidence

I wrote the other day that the study released by George Siemens and others on the history and current state of distance, blended, and online learning was a bad study. I said, "the absence of a background in the field is glaring and obvious." In this I refer not only to specific arguments advanced in the study, which to me seem empty and obvious, but also the focus and methodology, which seem to me to be hopelessly naive.

Now let me be clear: I like George Siemens, I think he has done excellent work overall and will continue to be a vital and relevant contributor to the field. I think of him as a friend, he's one of the nicest people I know, and this is not intended to be an attack on his person, character or ideas. It is a criticism focused on a specific work, a specific study, which I believe well and truly deserves criticism.

And let me be clear that I totally respect this part of his response, where he says that "in my part of the world and where I am currently in my career/life, this is the most fruitful and potentially influential approach that I can adopt." His part of the world is the dual environments of Athabasca University and the University of Texas at Arlington, and he is attempting to put together major research efforts around MOOCs and learning analytics. He is a relatively recent PhD and now making a name for himself in the academic community.

Unfortunately, in the realm of education and education theory, that same academic community has some very misguided ideas of what constitutes evidence and research. It has in recent years been engaged in a sustained attack on the very idea of the MOOC and alternative forms of learning not dependent on the traditional model of the professor, the classroom, and the academic degree. It is resisting, for good reason, incursions from the commercial sector into its space, but as a consequence it is clinging to antiquated models and approaches to research.

Perhaps as a result, part of what Siemens has had to do in order to adapt to that world has been to recant his previous work. The Chronicle of Higher Education, which for years has advanced the anti-technology and anti-change argument on behalf of the professoriate, published (almost gleefully, it seemed to me) this abjuration in an article that formed part of the marketing campaign for the new study.
When MOOCs emerged a few years ago, many in the academic world were sent into a frenzy. Pundits made sweeping statements about the courses, saying that they were the future of education or that colleges would become obsolete, said George Siemens, an author of the report who is also credited with helping to create what we now know as a MOOC.

“It’s almost like we went through this sort of shameful period where we forgot that we were researchers and we forgot that we were scientists and instead we were just making decisions and proclamations that weren’t at all scientific,” said Mr. Siemens, an academic-technology expert at the University of Texas at Arlington.

Hype and rhetoric, not research, were the driving forces behind MOOCs, he argued. When they came onto the scene, MOOCs were not analyzed in a scientific way, and if they had been, it would have been easy to see what might actually happen and to conclude that some of the early predictions were off-base, Mr. Siemens said.
This recantation saddens me for a variety of reasons. For one thing, we - Siemens and myself and others who were involved in the development of the MOOC - made no such statements. In the years between 2008, when the MOOC was created, and 2011, when the first MOOC emerged from a major U.S. university, the focus was on innovation and experimentation, approached with a cautious though typically exuberant attitude.

Yes, we had long argued that colleges and education had to change. But none of us ever asserted that the MOOC would accomplish this in one fell swoop. Those responsible for such rash assertions were established professors with respected academic credentials who came out of the traditional system, set up some overnight companies, and rashly declared that they had reinvented education.

It's true, Siemens has moved over to that camp, now working with EdX rather than the connectivist model we started with. But the people at EdX are equally rash and foolish:
(Anant) Agarwal (who launched EdX) is not a man prone to understatement. This, he says, is the revolution. "It's going to reinvent education. It's going to transform universities. It's going to democratise education on a global scale. It's the biggest innovation to happen in education for 200 years." The last major one, he says, was "probably the invention of the pencil". In a decade, he's hoping to reach a billion students across the globe. "We've got 400,000 in four months with no marketing, so I don't think it's unrealistic."
Again, these rash and foolish statements are coming from a respected university professor, a scion of the academy, part of this system Siemens is now attempting to join. As he recants, it is almost as though he recants for them, and not for us. But the Chronicle (of course) makes no such distinction. Why would it?

But the saddest part is that we never forgot that we were scientists and researchers. As I have often said in talks and interviews, there were things before MOOCs, there will be things after MOOCs, and this is only one stage in a wider scientific enterprise. And there was research, a lot of it, careful research involving hundreds and occasionally thousands of people, which was for the most part ignored by the wider academic community, even though peer reviewed and published in academic journals. Here's a set of papers by my colleagues at NRC, Rita Kop, Helene Fournier, Hanan Sitlia, Guillaume Durand. A similarly impressive body of papers has been authored and formally published by people like Frances Bell, Sui Fai John Mak, Jenny Mackness, and Roy Williams. This is only a sampling of the rich body of research surrounding MOOCs, research conducted by careful and credible scientists.

I would be remiss in not citing my own contributions, a body of literature in which I carefully and painstakingly assembled the facts and evidence leading toward connectivist theory and open learning technology. The Chronicle has never allowed the facts to get in the way of its opinions, but I have generally expected much better of Siemens, who is (I'm sure) aware of the contributions and work of the many colleagues that have worked with us over the years.

Here's what Siemens says about these colleagues in his recent blog post on the debate:
One approach is to emphasize loosely coupled networks organized by ideals through social media. This is certainly a growing area of societal impact on a number of fronts including racism, sexism, and inequality in general. In education, alt-ac and bloggers occupy this space. Another approach, and one that I see as complimentary and not competitive, is to emphasize research and evidence. (My emphasis)

In the previous case he could have been talking about the promulgators of entities like Coursera, Udacity and EdX, and the irresponsible posturing they have engaged in over the years. But in this case he is talking very specifically about the network of researchers around the ideas of the early MOOCs, connectivism, and related topics.

And what is key here is that he does not believe our work was based in research and evidence. Rather, we are members of what he characterizes as the 'Alt-Ac' space - for "Bethany Nowviskie and Jason Rhody, 'alt-ac' was shorthand for 'alternative academic' careers." Or: "the term was, in Nowviskie’s words, 'a pointed push-back against the predominant phrase, "nonacademic careers."' 'Non-academic' was the label for anything off the straight and narrow path to tenure." (Inside Higher Ed). Here's Siemens again:

This community, certainly blogs and with folks like Bonnie Stewart, Jim Groom, D’Arcy Norman, Alan Levine, Stephen Downes, Kate Bowles, and many others, is the most vibrant knowledge space in educational technology. In many ways, it is five years ahead of mainstream edtech offerings. Before blogs were called web 2.0, there was Stephen, David Wiley, Brian Lamb, and Alan Levine. Before networks in education were cool enough to attract MacArthur Foundation, there were open online courses and people writing about connectivism and networked knowledge. Want to know what’s going to happen in edtech in the next five years? This is the space where you’ll find it, today.
He says nice things about us. But he does not believe we emphasize research and evidence.

With all due respect, that's a load of crap. We could not be "what’s going to happen in edtech in the next five years" unless we were focused on evidence and research. Indeed, the reason why we are the future, and not (say) the respected academic professors in tenure track jobs is that we, unlike them, respect research and evidence. And that takes me to the second part of my argument, the part that states, in a nutshell, that what was presented in this report does not constitute "research and evidence." It's a shell game, a con game.

Let me explain. The first four chapters of this study are instances of what is called a 'tertiary study' (this is repeated eight times in the body of the work). And just as "any tertiary study is limited by the quality of data reported in the secondary sources, this study is dependent on the methodological qualities of those secondary sources." (p. 41) So what are the 'secondary sources'? You can find them listed in the first four chapters (the putative 'histories'; see, for example, the list on pp. 25-31). These are selected by doing a literature search, then culling them to those that meet the study's standards. The secondary surveys round up what they call 'primary' research, which are direct reports from empirical studies.

Here's a secondary study that's pretty typical: 'How does tele-learning compare with other forms of education delivery? A systematic review of tele-learning educational outcomes for health professionals'. The use of the archaic term 'tele-learning' may appear jarring; although many of the studies it reviews are from the early 2000s, I selected it as an example because the review itself is relatively recent, from 2013. This study (and again, remember, it's typical, because the methodology in the tertiary study specifically focuses on these types of studies) describes its scope as follows:
The review included both synchronous (content delivered simultaneously to face-to-face and tele-learning cohorts) and asynchronous delivery models (content delivered to the cohorts at different times). Studies utilising desktop computers and the internet were included where the technologies were used for televised conferencing, including synchronous and asynchronous streamed lectures. The review excluded facilitated e-learning and online education models such as the use of social networking, blogs, wikis and BlackboardTM learning management system software.

Of the 47 studies located by the search methods, 13 were deemed useful for the purposes of this paper. It is worth looking at the nature of this 'primary literature':

(Sorry about the small size - you can view the data in the original study, pp. 72-73)

Here's what should be noticed from these studies:
  • They all have very small sample sizes, usually less than 50 people, with a maximum size less than 200 people
  • The people studied are exclusively university students enrolled in a traditional university course
  • The method being studied is almost exclusively the lecture method
  • The outcomes are assessed almost exclusively in the form of test results
  • Although many are 'controlled' studies, most are not actually controlled for "potential confounders"
This is what is being counted as "evidence" for "tele-learning educational outcomes." No actual scientific study would accept such 'evidence' for any conclusion, however tentative. But this is typical and normal in the academic world Siemens is attempting to join, and this is by his own words what constitutes "research and evidence."

Why is this evidence bad? The sample sizes are too small for quantitative results (and the studies themselves are inconsistent, so you can't simply sum the results). The sample is biased in favour of people who have already had success in traditional lecture-based courses, and consists of only that one teaching method. A very narrow definition of 'outcomes' is employed. And other unknown factors may have contaminated the results. And all these criticisms apply if you think this is the appropriate sort of study to measure educational effectiveness, which I do not.
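To make the sample-size complaint concrete, here is a rough back-of-the-envelope sketch (my own illustration, not drawn from the report or the studies it reviews) of how wide the statistical uncertainty is in a study of 50 students. Assuming a simple two-arm comparison of mean test scores measured in standard-deviation units, the approximate 95% confidence interval around the measured difference looks like this:

```python
import math

def ci_half_width(n_per_group, sigma=1.0, z=1.96):
    """Approximate 95% confidence-interval half-width for a difference
    in two group means, assuming equal group sizes and a common
    standard deviation sigma (scores in SD units when sigma=1)."""
    standard_error = math.sqrt(2 * sigma**2 / n_per_group)
    return z * standard_error

# A 'controlled' study of 50 students (25 per arm) versus a larger one:
small = ci_half_width(25)    # roughly +/- 0.55 SD
large = ci_half_width(500)   # roughly +/- 0.12 SD
print(f"n=25 per arm:  +/- {small:.2f} SD")
print(f"n=500 per arm: +/- {large:.2f} SD")
```

In other words, under these assumptions a 50-person study cannot reliably distinguish any effect smaller than about half a standard deviation, which is a very large effect by educational standards; smaller (and more typical) effects simply vanish into the noise.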

I said above it was a con game. It is. None of these studies is academically rigorous. They are conducted by individual professors running experiments on their own (or sometimes a colleague's) classes. The studies are conducted by people without a background in education, subject to no observational constraints, employing a theory of learning which has been for decades outdated and obsolete. These people have no business pretending that what they are doing is 'research'. They are playing at being researchers, because once you're in the system, you are rewarded for running these studies and publishing the results in journals specifically designed for this purpose.

What it reminds me of is the sub-prime mortgage crisis. What happened is that banks earned profits by advancing bad loans to people who could not afford to pay them. The value of these mortgages was sliced into what were called 'tranches' (which is French for 'slice', if you ever wondered) and sold as packages - so they went from primary sources to secondary sources. These then were formed into additional tranches and sold on the international market. From secondary to tertiary. By this time they were being offered by respectable financial institutions and the people buying them had no idea how poorly supported they were. (I'm not the first to make this comparison.)

Not surprisingly, the reports produce trivial and misleading results, producing science that is roughly equal in value to the studies that went into it. Let's again focus on the first chapter. Here are some of the observations and discussions:
it seems likely that asynchronous delivery is superior to traditional classroom delivery, which in turn is more effective than synchronous distance education delivery. (p. 38)

both synchronous and asynchronous distance education have the potential to be as effective as traditional classroom instruction (or better). However, this might not be the case in the actual practice of distance education (p. 39)

all three forms of interaction produced positive effect sizes on academic performance... To foster quality interactions between students, an analysis of the role of instructional design and instructional interventions planning is essential.

In order to provide sufficient academic support, understanding stakeholder needs is a main prerequisite alongside the understanding of student attrition (p.40)

I'm not saying these are wrong so much as I am saying they are trivial. The field as a whole (or, at least, as I understand it) has advanced far beyond talking in such unspecific generalities as 'asynchronous', 'interaction' and 'support'. Because the studies themselves are scientifically empty, no useful conclusions can be drawn from the metastudy, and the tertiary study produces vague statements that are worse than useless (worse, because they are actually pretending to be new and valuable, to be counted as "research and evidence" against the real research being performed outside academia).

Here is the 'model' of the field produced by the first paper:

It's actually more detailed than the models provided in the other papers. But it is structurally and methodologically useless, and hopelessly biased in favour of the traditional model of education as practiced in the classrooms where the original studies took place. At best it could be a checklist of things to think about if you're (say) using PowerPoint slides in your classroom. But in reality, we don't know what the arrows actually mean, the 'interaction' arrows are drawn from Moore (1989), and the specific bits (e.g. "use of LMS") say nothing about whether we should or whether we shouldn't.

The fifth chapter of the book is constructed differently from the first four, being a summary of the results submitted to the MOOC Research Initiative (MRI). Here's how it is introduced:
Massive Open Online Courses (MOOCs) have captured the interest and attention of academics and the public since fall of 2011 (Pappano, 2012). The narrative driving interest in MOOCs, and more broadly calls for change in higher education, is focused on the promise of large systemic change.

The unfortunate grammar obscures the meaning, but aside from the citation of that noted academic, Laura Pappano of the New York Times, the statements are generally false. Remember, academics were studying MOOCs prior to 2011. And the interest of academics (as opposed to hucksters and journalists) was not focused on 'the promise of large systemic change' nearly so much as it was to investigate the employment of connectivist theory in practice. But of course, this introduction is not talking about cMOOCs at all, but rather, the xMOOCs that were almost exclusively the focus of the study.

Indeed, it is difficult for me to reconcile the nature and intent of the MRI with what Siemens writes in his article:
What I’ve been grappling with lately is “how do we take back education from edtech vendors?”. The jubilant rhetoric and general nonsense causes me mild rashes. I recognize that higher education is moving from an integrated end-to-end system to more of an ecosystem with numerous providers and corporate partners. We have gotten to this state on auto-pilot, not intentional vision.

Let's look at the MOOC Research Initiative to examine this degree of separation:
MOOC Research Initiative (MRI) is funded by the Bill & Melinda Gates Foundation as part of a set of investments intended to explore the potential of MOOCs to extend access to postsecondary credentials through more personalized, more affordable pathways.
To support the MOOC Research Initiative Grants, the following Steering Committee has been established to provide guidance and direction:
Yvonne Belanger, Gates Foundation
Stacey Clawson, Gates Foundation
Marti Cleveland-Innes, Athabasca University
Jillianne Code, University of Victoria
Shane Dawson, University of South Australia
Keith Devlin, Stanford University
Tom (Chuong) Do, Coursera
Phil Hill, Co-founder of MindWires Consulting and co-publisher of e-Literate blog
Ellen Junn, San Jose State University
Zack Pardos, MIT
Barbara Means, SRI International
Steven Mintz, University of Texas
Rebecca Petersen, edX
Cathy Sandeen, American Council on Education
George Siemens, Athabasca University
With a couple of exceptions, these are exactly the people and the projects that are the "edtech vendors" Siemens says he is trying to distance himself from. He has not done this; instead he has taken their money and put them on the committee selecting the papers that will be 'representative' of academic research taking place in MOOCs.

Why was this work necessary? We are told:
Much of the early research into MOOCs has been in the form of institutional reports by early MOOC projects, which offered many useful insights, but did not have the rigor — methodological and/or theoretical expected for peer-reviewed publication in online learning and education (Belanger & Thornton, 2013; McAuley, Stewart, Siemens, & Cormier, 2010).

We already know that this is false - and it is worth noting that this study criticizing the lack of academic rigour cites a paper titled 'Bioelectricity: A Quantitative Approach' (Belanger & Thornton, 2013) and an unpublished paper from 2010 titled 'The MOOC model for digital practice' (McAuley, Stewart, Siemens, & Cormier, 2010). A lot of this paper - and this book - is like that. Despite all its pretensions of academic rigour, it cites liberally and lavishly from non-academic sources in what appears mostly to be an effort to establish its own relevance and to disparage the work that came before.

I commented on this paper in my OLDaily post:

The most influential thinker in the field, according to one part of the study, is L. Pappano (see the chart, p. 181). Who is this, you ask? The author of the New York Times article in 2012, 'The Year of the MOOC'. Influential and important contributors like David Wiley, Rory McGreal, Jim Groom, Gilbert Paquette, Tony Bates (and many many more)? Almost nowhere to be found.

Here is the chart of citations collated from the papers selected by the committee for the MOOC Research Initiative (p. 181):

Here are the citation frequencies from the same papers (p. 180):

What is interesting to note in these citations is that the people who Siemens considers to be 'Alt-Ac' above - Mackness, Stewart, Williams, Cormier, Kop - all appear in this list. Some others - Garrison (I assume they mean Randy Garrison, not D.D.) and Terry Anderson, notably - are well known and respected writers in the field. The research we were told several times does not exist apparently does exist. The remainder come from the xMOOC community, for example, Pritchard from EdX, Chris Piech from Stanford, Daniel Seaton (EdX). Tranches.

But what I say about the rest of the history of academic literature in education remains true. The authors selected to be a part of the MOOC Research Initiative produced papers with only the slightest - if any - understanding of the history and context in which MOOCs developed. They do not have a background in learning technology and learning theory (except to observe that it's a good thing). The incidences of citations arise from repeated references to single papers (like this one) and not from a depth of literature in the field.

What were the conclusions of this fifth paper? Nothing more substantial than those of the first four (quoted, pp. 188-189):
  • Research needs to create with theoretical underpinnings that will explain factors related to social aspects in MOOCs
  • Novel theoretical and practical frameworks of understanding and organizing social learning in MOOCs are necessary
  • The connection with learning theory has also been recognized as another important feature of the research proposals submitted to MRI
  • The new educational context of MOOCs triggered research for novel course and curriculum design principles
This is why I said in my assessment of the paper that "the major conclusion you'll find in these research studies is that (a) research is valuable, and (b) more research is needed." These are empty conclusions, suggesting that either the authors of the original papers, or the authors summarizing the papers, had almost nothing to say.

In summary, I stand by my conclusion that the book is a muddled mess. I'm disappointed that Siemens feels the need to defend it by dismissing the work that most of his colleagues have undertaken since 2008, and by advancing this nonsense as "research and evidence."