posted Dec 5, 2012, 4:00 AM by Yishay Mor
updated Dec 5, 2012, 10:08 AM
There's a fascinating discussion going on on the MOOC partners' mailing list. I thought it raised some issues of general interest, so with the permission of the authors, I'm sharing it here:
From: Thomas C Reeves <email@example.com>
As the OLDS MOOC deadline looms, I am increasingly concerned about the coherence and quality of the eventual offering, especially with respect to its alignment within and across the nine weeks of the course.
Whenever I evaluate a course, I begin by constructing a matrix that describes the learning objectives (or expected outcomes), the activities in which learners will be engaged that will allow them to accomplish the stated objectives, and the assessment strategies that will be used to indicate to both the instructor and the learners that the objectives have been accomplished.
In analyzing the portrayal of a course in such a matrix, I try to answer these types of questions:
- How clear are the objectives within any given week?
- To what extent are the objectives within each week actually learning objectives or enabling objectives? (The former describes a learning outcome whereas the latter describes a learning activity.)
- To what extent do the objectives from one week to another build on each other systematically, in a manner that enables the accomplishment of the overall goals of the course (e.g., to become a competent learning designer)?
- How well do the various activities in each week provide adequate opportunities to accomplish the objectives for that week?
- How well do the various activities from week to week relate to each other systematically so that learners can perceive they are engaged in meaningful activities and making progress toward the ultimate accomplishment of the course goals?
- To what extent do the assessment strategies for each week provide evidence that the learning objectives for that week were accomplished?
- To what extent are the assessment strategies across the whole course building a portfolio of evidence that the ultimate course goals have been accomplished?
The attached document is the beginning of such a matrix. I have not included activities yet, and there are major gaps where I could not find information, primarily about time expectations.
But an analysis of even this sketchy matrix is worrying. The objectives vary quite a bit in format and substance... some are clearly learning objectives (e.g., "Define learning design as a field of research and a practice"), but others seem more like enabling objectives (e.g., "Explore approaches to understanding and using context in learning design").
The alignment between objectives and assessment strategies within weeks is also worrisome, as it is not very clear in some weeks. Also, I don't perceive an adequate development toward the overall goal of the course, which I think (but I may be wrong) is to enable people to become better learning designers. It is especially unclear what people will actually be designing... an OER, a course, a module, or all of the above.
All the best,
From: Diana Laurillard <D.Laurillard@ioe.ac.uk>
Interesting. I think we may have a problem with the alignment of approaches to learning design as well.
The approach we have taken in developing this MOOC is very different from the one I'm used to, i.e. the old ed tech/OU course team approach – no idea if it's still done that way. That was a more collaborative and iterative approach, whereas this one is more like a workflow sequence – inquire, ideate, connect, etc. – and we've more or less followed that sequence in the way we developed the MOOC. It was intended to be collaborative, I think, but that's difficult to manage online with very few meetings (though admittedly I missed most of them, so I guess that's why I feel it was less collaborative).
Tom's analysis is based, I think, on the ed tech approach, where you begin with the needs analysis, learning outcomes and assessment, then develop draft outlines for content and activities, then critique across initial drafts, revise, develop D2s, and refine in light of other D2s. Designers are involved from the start, but they then take over from the academics' drafting, and the workflow to publication follows from that, with the academic designers coming in again to approve final designs.
We have the overall LOs, but then everyone did their own week and assessment, and we didn't really do the Connect stage as a formal milestone, where we would achieve alignment as in the D1 stage. So Tom was looking for something that had not yet happened, I think. And weeks are still incomplete – at least mine certainly are, I'm afraid – still way behind, for which apologies. At any rate, the learning design approach we are using here is different from his, I think.
Tom's questions 3, 5 and 7 are the ones to discuss across the whole course once all the weeks are complete. The others are just what we should all be doing within our own weeks – does he think we have not done that for all/some weeks?
So if we have a general discussion I think we should focus on the alignment questions across weeks, rather than within weeks as we have to leave that to the teams for each week.
From: Thomas C Reeves <firstname.lastname@example.org>
Dear OLDS MOOC Team Members,
Diana hit the nail on the head when she wrote: "I think we may have a problem with the alignment of approaches to learning design as well." In a follow-up note to Yishay, Simon, and Rebecca that I sent after my initial alignment concerns message, I actually wrote: "I may be just thinking like an old fashioned instructional designer, but I think the pieces need to fit together much more neatly than they do now."
Diana is also correct in discerning that questions 3, 5, and 7 of the ones I raised are focused on the coherence of the learning experience and outcomes across weeks within the whole MOOC. Some of my worries (truth be told, I may worry too much!) stem from the fact that we are only a few weeks (weeks that include three weeks at the beach when I promised my wife I would not work... much) from the MOOC launch and so many pieces are incomplete. Along with Yishay, I am leading the effort on Week 7 (Review) and I am struggling to figure out what I am going to be helping the participants actually review/evaluate that week. And I must produce a video introduction to Week 7 by December 5.
Stepping back even more, we also may not all share the same vision of what kind of MOOC this will be. MOOC pioneer George Siemens (2012) wrote that the MOOCs that he, Stephen Downes and other Canadians (God love them!) offered are sometimes referred to as cMOOCs to reflect their connectivist and constructivist pedagogical origins, whereas the MOOCs offered by people and entities associated with certain elite universities in the USA (most notably Stanford, MIT, and Harvard) are sometimes referred to as xMOOCs to reflect what some see as their roots in behaviorist or "transmissionist" (teaching by telling) pedagogy and/or their stated goals of making profits.
I view the OLDS MOOC as much more in the cMOOC camp, but with a significantly greater focus on learning assessment. The cMOOC folks are not overly concerned with assessing participant learning. For example, Kop, Fournier, and Mak (2011) describe two connectivist MOOCs originating in Canada in which neither course included “formal assessments of learning outcomes as the learning objectives for each learner on the MOOCs was different, dependent on his or her context” (p. 74). The basic philosophy of these types of cMOOCs seems to be enabling opportunities to learn as well as to contribute to the learning of others.
The providers and sponsors of xMOOCs, on the other hand, are very concerned with assessing participant learning. At this time, the options for assessing student learning within an xMOOC are not unlike the methods used in traditional face-to-face courses as well as online and blended courses, although there are arguably more complex challenges in MOOCs involving issues such as cheating and plagiarism (Oliver, 2012; Young, 2012). Assessment choices in xMOOCs at this time appear to be mainly machine-graded programming tasks, multiple-choice tests, and peer or crowd-sourced assessment of short essays, blog postings, discussion contributions, etc.
The OLDS MOOC assessments will include peer assessment to be sure, but the ultimate assessment will be based on an authentic task (designing and producing an OER... I think). Oliver (2012) wrote that Coursera is utilizing the crowd-sourcing strategy of peer assessment to assess learning that cannot easily be marked by a computer algorithm. She questioned this strategy and stated "crowd sourcing can't be relied upon when self-interest is at play." In her brief essay, Oliver promoted "authentic assessment" strategies whereby the methods used to assess learning closely approximate the actual activities people face in the real world. My Aussie friends and I (Herrington, Reeves, & Oliver, 2010) promote this idea, and in our book we describe examples of how whole higher education courses have been designed around significant authentic tasks in which assessment is inherent in the tasks. This is essentially the strategy I think we should be using in our MOOC.
But I may be wrong in this conception. If I am correct, however, then I feel we need to come to some sort of consensus, both within and across the weeks of the OLDS-MOOC, about the nature of what we are helping people to learn to design and produce through this learning experience, and that we need to do this as soon as possible. I also think we need more agreement on a common set of terms for what will be developed by the participants, and some design specifications regarding the size and scope of the OER to be developed. But I also recognize that this type of agreement may not be palatable to everyone on our team, and that others may be much more comfortable with a more relaxed "Let It Be" approach to having the MOOC emerge in the way it has to date.
I look forward to hearing from our team about these issues. As I was writing this response I received an update from my Kiwi friend, Richard Elliott. He wrote this about one of our team members who spoke at the ASCILITE conference held in Wellington, NZ this week: "The effervescent Grainne Conole was an invited speaker, ran a full day workshop and also launched her new book 'Designing for Learning in an Open World'." Coincidentally, Diana and I were both keynoters at the ASCILITE conference held in Auckland, NZ in 2002! I wish I'd been able to go this year to hear Grainne!
Herrington, J., Reeves, T. C., & Oliver, R. (2010). A guide to authentic e-learning. New York: Routledge.
Kop, R., Fournier, H., & Mak, J. S. F. (2011). A pedagogy of abundance or a pedagogy to support human beings? Participant support on massive open online courses. International Review of Research in Open and Distance Learning, 12(7), 74-93.
From: Joshua Underwood <email@example.com>
If the main thread of the design studio is formulating and addressing a meaningful design challenge (which is what I had understood?), might we state upfront two kinds of outcomes for each week:
- design outcomes
- learning outcomes
Might this help clarify how we envisage design work progressing across the weeks and lead to better alignment?
From: Yishay Mor <Yishay.Mor@open.ac.uk>
What a fascinating discussion! For me, this is exactly the kind of thing I was hoping to get out of this project. You touch on several issues which are specific to our work (i.e. a particular week's statement of objectives and alignment of tasks), issues that seem to be specific but I suspect are generic ("I was hoping for an intense, collaborative, iterative process, but that happened only partially, to an extent due to my constraints" – I'm willing to bet that almost anyone engaged in designing a MOOC experiences that), and some issues that are clearly MOOC-generic, e.g. the tension between pre-set learning objectives and learners' emerging agendas, and the tension between our recognition of the importance of assessment and the limited resources we have to implement it.
A quick response to the cMOOC/xMOOC question: I see ours as a pMOOC (project-based). Sadly, I learnt just yesterday that we won't be the first.
In the meantime, I suggest we all do the following:
- Review your weeks, and use Tom's matrix to improve your alignment. It may mean tweaking activities, and it may mean tweaking objectives. No shame in that!
- Review the weeks before and after yours, and discuss the flow between weeks with the leaders of those weeks.
- Focus our immediate attention on weeks 1-3, and make sure we get those right, and that we establish some coherence across these weeks. If we lose people in week 1, they won't come back. If we figure out weeks 1-3, it will solve a lot of our problems for later on.
- Also, in response to the discussion on objectives, I added an activity to week 1: "define your objectives for this week", and at the end, I asked participants to reflect back and note how they met those objectives. Have a look, and consider whether you would like to adopt this pattern across the MOOC.
And finally, Tom - I think authentic assessment is a powerful idea. Can you have a look at weeks 1-3, and suggest how you would apply this idea to those weeks? I think I tried to capture a sense of authentic assessment in the pMOOC activity pattern, but I don't know if I succeeded.
all the best,
Comments are closed, but please contribute to the discussion here: