Mapping del.icio.us with Anthracite and OmniGraffle

Several people have asked me how I constructed the visualizations that I used in my talk on del.icio.us. My own approach involved a fair amount of hand-rolled custom code – not fun for people unaccustomed to writing their own software and working from the command line. So as to give non-programming-savvy researchers a chance to explore del.icio.us – or other online systems with implicit network structures – for themselves, I have put together a short guide on using two Mac OS X applications, Anthracite and OmniGraffle, for this purpose.
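
For readers curious about what the hand-rolled route looks like, here is a minimal sketch in R (the file name, column names, and co-occurrence threshold are purely illustrative – my own code differed): it builds a tag co-occurrence network from a table of bookmark/tag pairs and writes it out in DOT format, which Graphviz and a number of diagramming tools can read.

  # Minimal sketch (illustrative only): build a tag co-occurrence network from
  # del.icio.us-style bookmark data and export it in DOT format.
  # Assumed input: a tab-delimited file with columns "url" and "tag",
  # one row per (bookmark, tag) pair.
  bookmarks <- read.delim("bookmarks.tsv", stringsAsFactors = FALSE)

  # Bookmark-by-tag incidence matrix
  incidence <- unclass(table(bookmarks$url, bookmarks$tag))

  # Tag-by-tag co-occurrence: two tags are linked if they appear on the same bookmark
  cooc <- crossprod(incidence)
  diag(cooc) <- 0

  # Write an undirected DOT graph, keeping only pairs that co-occur at least twice
  con <- file("tags.dot", "w")
  writeLines("graph delicious_tags {", con)
  tags <- rownames(cooc)
  for (i in seq_along(tags)) {
    for (j in seq_along(tags)) {
      if (j > i && cooc[i, j] >= 2) {
        writeLines(sprintf('  "%s" -- "%s" [weight=%d];',
                           tags[i], tags[j], as.integer(cooc[i, j])), con)
      }
    }
  }
  writeLines("}", con)
  close(con)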

Comments and suggestions are welcome.

A Moveable Feast: the del.icio.us web

I have added to the resources page a talk on del.icio.us, presented within the context of the Horizon Project VCOP. If you’ve heard about del.icio.us, but aren’t quite sure what the fuss is all about, or if you’ve already tried it out, but would like to know how to get the most out of it, this talk is for you.

As always, I’m interested in any comments or feedback people might have.

The OECD Education Report, Redux

The results from the OECD PISA 2003 study of learning skills among 15-year-olds are now out. As could be expected from last year’s coverage of the PISA 2000 results, news reports have tended to misrepresent the information contained in the new report. I have already covered these misrepresentations in two previous posts, so I will not rehash that material here; however, I will spend some time looking at the new data conveyed by the report.

PISA 2003 incorporates a new (very welcome) category for student evaluation in the form of a set of questions covering Problem Solving. An analysis of this category would be worth a separate post unto itself; since my main goal here is to update the results I had obtained for PISA 2000, I will omit it from consideration in the discussion that follows.

A statistical analysis similar to that from my previous post yields, as before, four main groups with the labels shown in the table below. A new classification resulting from the PISA 2003 analysis is the subdivision of the “Substantially Below Average” group into a better-performing “High Group”, and a lower-performing “Low Group”. Countries that did not participate in PISA 2000 are highlighted in gray; countries that improved their results sufficiently to be promoted from one group to the next higher group are highlighted in green. The United Kingdom, which participated in PISA 2000, was excluded from PISA 2003 due to noncompliance with OECD response rate standards. The following table, with countries arranged in alphabetical order within groups, summarizes these results:

Performance of 15-Year-Old Students in Reading, Mathematics, and Science

Better than Average: Australia, Canada, Finland, Hong Kong – China, Japan, Korea, Liechtenstein, Netherlands, New Zealand

Average: Austria, Belgium, Czech Republic, Denmark, France, Germany, Hungary, Iceland, Ireland, Latvia, Luxembourg, Macao – China, Norway, Poland, Slovak Republic, Spain, Sweden, Switzerland, United States

Below Average: Greece, Italy, Portugal, Russian Federation

Substantially Below Average (High Group): Serbia, Thailand, Turkey, Uruguay

Substantially Below Average (Low Group): Brazil, Indonesia, Mexico, Tunisia

As we can see, only four countries (Liechtenstein, Latvia, Luxembourg, and Poland) improved their results substantially from 2000. Of these, the result for Luxembourg has to be discarded from consideration, since (as noted on page 30 of the OECD report) assessment conditions in this country were changed significantly between 2000 and 2003. In the case of Liechtenstein, only 332 students were assessed, due to the small size of the country. Because of this small sample size, changes at the individual school level are just as likely to affect the final results as national policy decisions. Hence, it is difficult to ascertain the cause of the observed improvement. Finally, in the cases of Latvia and Poland, it is tempting to attribute the improvement to their respective large-scale educational reforms, which started in 1998. However, data that would allow for the determination of cause-and-effect relationships in these two cases is currently lacking.

It is unsurprising that little has changed between PISA 2000 and PISA 2003 – after all, only three years have elapsed between the two studies. However, news reports – and, I fear, some public officials – have made much of minor increases or decreases in scores that are not significant. What still stands is my conclusion from my previous post: no country can be said to have provided a solid educational floor in these categories for all of its citizens. Getting to the point where this educational floor can be guaranteed will require more than slight changes to expenditures, school year duration, or class sizes – it will require a significant rethinking of how the educational process occurs at all levels.

Achieving Fairness and Excellence in Social Software

I have added to the resources page a talk on social software, presented at the NMC Online Conference on Social Computing. If you’re interested in using tools such as wikis, forums, and blogs in education, but want to avoid some of the potential pitfalls, you might find this talk – and the free software that accompanies it – rather useful.

Needless to say, comments and suggestions are always welcome.

New Resources for Digital Storytelling and Learning Object Development

Hippasus is growing nicely. While this is wonderful, it has prevented me from making regular posts to this weblog. Having just completed two major projects, I now have the time to post more frequently. To celebrate this, I would like to highlight two new sets of resources on the Hippasus website, covering digital storytelling and learning object development.

As always, I welcome all comments people might have on these resources.

The Meaning and Implications of the OECD Literacy Measurements

In the wake of my previous post about the recent OECD education report, several people have asked for further analysis of the OECD evaluation tools. More specifically, they have been interested in how these tools differ from the usual standardized tests (with their known pitfalls and blind spots), and how the results obtained should be interpreted. In order to discuss this, I have to preface the body of this post with a brief definition of two terms commonly used within the educational community: content and competency.

Broadly defined, content can be viewed as the “stuff” that makes up a subject area. For instance, part of the content involved in a basic health class might be the definition of viruses and bacteria, with their different characteristics and modes of reproduction. Competencies can then be viewed as what the student can do with this content. Thus, at a very basic level of competency, the student will be able to reproduce the definitions acquired. However, this level of competency is generally not the only thing that would be desired – one would hope that the student would be able to integrate this with other knowledge, and thereby be able to follow simple cause-and-effect procedures supplied to them in order to keep themselves healthy. Even better, one would hope that they would be able to create new knowledge for themselves, so that they could take an active role in their own health decisions. Clearly, the evaluation tool to be used in determining these progressively higher levels of competency has to go well beyond “fill in the blank”-type tests.

Take, for instance, the content area of basic algebra. An evaluation tool that measures whether the student can carry out the basic operations involved in a standard “solve for x”-type problem is indeed providing some information about how the student performs in this content area – but at a very low level of competency, namely that of mimicry of a standard algorithmic procedure. A tool that evaluates how the student performs at applying these basic operations to a standard word problem, where the necessary information is already provided to the student, corresponds to the measurement of a somewhat higher level of competency. Replacing the standard word problem with one using nonstandard phrasing provides an evaluative tool for a substantially higher level of competency. At even higher levels of competency we would find tools that evaluate how the student performs when presented with a problem solvable using the algebraic procedures known to them, but where the data needed to solve the problem is not provided to the student a priori, and must be requested by them. Finally, at the highest level of competency, it becomes of interest to evaluate how the student performs when applying the tools of basic algebra to a real-world problem defined autonomously by the student.

Most standardized tests operate on only the first two levels of our algebra example, and hence miss a large – perhaps the most important – part of the competency picture. The OECD evaluation tools are designed to provide a broad picture of the competency spectrum within the context of an equally broad content area. Furthermore, the “top level” of the OECD competencies corresponds to a “competency floor” that the majority of a country’s population can reasonably be expected to achieve. In other words, high scores on the OECD tests should be attainable by most people, and not just a privileged elite. The fact that no country came close to this result indicates the distance yet to be covered in terms of educational quality and equity throughout the world.

To better understand the conclusions that can be derived from the OECD report, we need to take a look at how scores relate to competency levels. A difference of a few points in the OECD results between two students means absolutely nothing in terms of the relative competencies achieved; instead, the OECD provides rough categorical scales that correspond to broad swaths of competencies. For the three areas studied, these categories, listed here with a few sample representative tasks, are:

Categories and Representative Tasks

Reading Literacy
Level 1 (335 to 407 points): locating a single piece of information; identifying the main theme of a text
Level 2 (408 to 480 points): locating straightforward information; deciding what a well-defined part of the text means
Level 3 (481 to 552 points): locating multiple pieces of information; drawing links between different parts of the text
Level 4 (553 to 625 points): locating embedded information; critically evaluating a text
Level 5 (over 625 points): locating difficult to find information; building new hypotheses based upon texts

Mathematical Literacy
Level 1 (around 380 points): carrying out single-step mathematical processes
Level 2 (around 570 points): carrying out multiple-step mathematical processes for predefined problems
Level 3 (around 750 points): creating new mathematical processes as required by problems

Scientific Literacy
Level 1 (around 400 points): recalling simple scientific facts
Level 2 (around 550 points): using scientific concepts to make predictions or provide explanations
Level 3 (around 690 points): creating new conceptual models to make predictions or provide explanations

The source for this table is the OECD Education at a Glance 2003 report, from which most of the language describing representative tasks is drawn. More detailed information can be found on the OECD Programme for International Student Assessment (PISA) website – the executive summary of the Knowledge and Skills for Life 2001 report is particularly useful in this regard.

In my previous post, I had identified a group of seven countries that could reasonably be said to perform “better than average” in the context of these literacies. Looking at New Zealand as a representative country in this group, we find that its averages are 529, 537, and 528 for reading, mathematical, and scientific literacy respectively – still a ways from providing the majority of its population with the desired “competency floor”.

As I mentioned at the end of my last post, remedying the deficiencies revealed by the OECD report will take more than minor changes in expenditures or classroom organization. Instead, it will take educational redesign that directly addresses the idea that higher-level competencies are what is desirable and achievable in all content areas. Interestingly, I believe that this redesign can be accomplished within the context of existing educational structures – I have yet to see any data that indicates, for instance, that dramatically changing the mix of public and private educational institutions in any given country would fundamentally transform the results measured by the OECD.

What can play a crucial role is the set of technological transformations that can be brought about by the availability of networked computers to all participants in the educational process. A serious discussion of why this is the case will have to wait until a later post. However, let me provide the following example as a tidbit in the meantime. Returning to our earlier algebra example, researchers working from a constructivist learning perspective have found that in mathematics – and in fact, in most subject areas – project-oriented learning has the potential to work wonderfully as a way of addressing both the teaching and evaluation of the highest competency level described above. However, this potential can be effectively sabotaged in situations where a scarcity of information resources works against the student’s capacity to research, define, and present a project. Additionally, at the evaluation stage, it is very important that teachers have access to a broad range of student projects, and to serious collegial discussion about them across a broad range of institutions – otherwise, it is too easy for evaluative inbreeding to take place. Intelligent use of networked computers can address both these issues efficiently in ways no other resource can, both by providing low-cost access to resources, and by allowing students and teachers alike to share and discuss projects. Of course, just throwing technology at the schools will not accomplish this result – both new research and new solutions derived from existing research will be necessary to change the landscape described in the OECD report. However, the early results of experiments such as the Maine Learning Technology Initiative make me hopeful that this is indeed the most promising direction for change.

Some Comments on the Recent OECD Education Report

About a week ago, the OECD (Organisation for Economic Co-operation and Development) Education at a Glance 2003 report was released to the press. The main thrust of the report was portrayed in the press as follows:

Report: U.S. No. 1 in school spending
Test scores fall in middle of the pack

WASHINGTON (AP) — The United States spends more public and private money on education than other major countries, but its performance doesn’t measure up in areas ranging from high-school graduation rates to test scores in math, reading and science, a new report shows.

(taken from the September 16th article on the CNN website)

This rather damning lead was followed in the body of the article by a quote from Barry McGaw, education director for the OECD:

“There are countries which don’t get the bang for the bucks, and the U.S. is one of them.”

The rest of the press report cited a figure of $10,240 spent per student in the U.S., and included tables showing listings for 15-year-olds’ performance in math, reading, and science that rank the U.S. below thirteen to eighteen other countries.

Whenever I see a report from a reasonably serious organization such as the OECD described in sensationalistic terms with potential for malicious use, I get suspicious. And when I get suspicious, I go to the source and check out the numbers. Which is what I did in this case. Not to spoil the rest of the story, but while I found many interesting and worthwhile nuggets of data in the OECD report (many of which are summarized in the briefing notes for the U.S., downloadable in PDF format), I found nothing to substantiate the explicit and implicit allegations of the news report.

Let’s start out with the figure of $10,240 spent per student. This figure is not as simple as it might seem at first. First, it represents adjusted U.S. dollars – in other words, the figures it is being compared to are not actual dollar amounts spent in each country, but amounts adjusted for purchasing power parity (PPP) so as to provide a better basis for comparison. While some correction of this type is needed for cross-country comparisons to be meaningful, the adjustment formula used can artificially inflate or deflate the actual magnitudes involved. In other words, while the numbers obtained from this adjustment can reasonably be used to claim that country A spends more than country B per student on education, it would be foolhardy to claim that the ratio of expenditures between the two countries is more than a rough estimate.

More importantly, the $10,240 figure includes expenditures per student from primary school through college inclusive. In other words, while the performance of fifteen-year-old high school students is being used as the yardstick for educational quality comparisons, the monetary amount being referenced includes expenditures for college education. As anyone living in the U.S. knows, the ways colleges are funded differ drastically from those for high schools. To measure the “bang for the buck” being obtained would require some equivalent performance measure for college students, which is nowhere to be found in the report. A figure more relevant to the critique would be the total secondary school expenditure per student. Using Table B1.1, we obtain a figure of $8,855 – high, but far from the highest in this category (Switzerland, at $9,780), and comparable to that of other countries such as Austria ($8,578) and Norway ($8,476).

So much for the dollar amount. What about those tables showing the U.S. trailing the pack in the knowledge demonstrated by fifteen-year-olds in reading, mathematics, and science? As before, the story is more complex than these tables would seem to show. While the rankings published are “correct” inasmuch as they follow the published scores, they neglect to take into account the fact that in many cases, score differences between countries are too small to be significant. For instance, the U.S. indeed trails Norway in science scores – by all of 0.18%. A more useful way to think about data such as this is to look for “clusters” of countries that perform in like fashion. Using the data from Tables A5.2, A6.1, and A6.2, and the cluster analysis tools from R, I find that the data can reasonably be clustered into four groups. The first group, made up of seven countries, exhibits performance demonstrated by fifteen-year-olds that is better than average. The second group, which includes the U.S., exhibits performance that is average. The third group exhibits performance below average, and the fourth group exhibits performance that is substantially below average. The following table, with countries arranged in alphabetical order within groups, summarizes these results:

Performance of 15-Year-Old Students in Reading, Mathematics, and Science

Better than Average: Australia, Canada, Finland, Japan, Korea, New Zealand, United Kingdom

Average: Austria, Belgium, Czech Republic, Denmark, France, Germany, Hungary, Iceland, Ireland, Liechtenstein, Norway, Spain, Sweden, Switzerland, United States

Below Average: Greece, Italy, Latvia, Luxembourg, Poland, Portugal, Russian Federation

Substantially Below Average: Brazil, Mexico

While this indicates that the U.S. is not in an optimal position, it is far from indicating results as dire as those implied by the press report. The secondary school systems in the seven countries in the first group are worth studying further – while the difference in performance between the first and second groups is not dramatic, it is certainly significant and noticeable.

What does this tell us, then, about the appropriateness of the adjusted expenditures? It tells us that we cannot, at this point, and based upon these numbers, make any judgment about the appropriateness of per-student adjusted educational expenditures for any given country. Expenditures per secondary school student do not correlate in any significant way with the observed grouping. Nor does coupling these numbers to any other data included in the report yield any particularly insightful results: percentage of GDP spent on education, class size, number of hours of classroom instruction, and teacher pay all fail to yield any significant correlations with our observed clustering, either when taken alone or when taken in groups. Again, this does not mean that none of these factors matter – rather, it means that predictive models for educational success require the study of additional variables not considered in the current report.
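
For readers who would like to try this kind of grouping themselves, here is a minimal sketch of the approach in R. The input file name and column names are assumptions for the example – the scores would need to be entered by hand from Tables A5.2, A6.1, and A6.2 – and the exact clustering settings I used may differ.

  # Minimal sketch (illustrative, not the exact code behind this post).
  # Assumed input: a CSV prepared by hand from Tables A5.2, A6.1, and A6.2,
  # with columns country, reading, math, science.
  scores <- read.csv("pisa2000_scores.csv", stringsAsFactors = FALSE)

  # Standardize the three literacy scores so that each carries equal weight
  m <- scale(scores[, c("reading", "math", "science")])
  rownames(m) <- scores$country

  # Hierarchical clustering on the distances between country profiles
  hc <- hclust(dist(m), method = "ward.D")
  plot(hc)                       # inspect the dendrogram

  # Cut the tree into four groups and list the members of each
  groups <- cutree(hc, k = 4)
  split(names(groups), groups)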

Finally, a cautionary note about the interpretation of the results for the seven better-than-average performers: the data in the report simply points to something “interesting” happening in these seven countries, worthy of further investigation. It does not point to these countries as occupying a pinnacle that other countries should strive to achieve and then rest on their laurels. I chose the label for this group carefully: “better than average” implies just that – not an ultimate target in any sense of the word. The instruments used for the evaluation of 15-year-old student proficiency in reading, mathematics, and science are only intended to provide a rough picture of what could reasonably be expected as universal knowledge in these areas. No country even approached a near-perfect score on these tests for a majority of its tested population; thus, no country could be said to have provided a solid educational floor in these categories for all of its citizens. Getting to the point where this educational floor can be guaranteed will require more than slight changes to expenditures, school year duration, or class sizes – it will require a significant rethinking of how the educational process occurs at all levels.

Tools for Thinking About Social Networks

In the past few years, there has been a burst of interest in the topic of social networks outside the traditional confines of the field. Some of this interest comes, of course, as a result of new research published in the academic press, but it has also been fueled by at least three other factors:

  • the publication of several well-written popular accounts of current research, such as Malcolm Gladwell’s The Tipping Point, Albert-Laszlo Barabasi’s Linked, and Duncan J. Watts’ Six Degrees;
  • the availability of cheap computer power;
  • the existence of the ultimate playground for inexpensive and original social network research – the Internet.
Many of the topics currently being discussed in the social networks arena have the potential to transform how we think about the design of educational structures. I’ll come back to where I see this potential being realized most fruitfully at a later date, but for now I would like to focus on some of the (free!) tools available for people to explore for themselves the concepts discussed in the books mentioned above.
There exist three free tools that cover quite nicely the spectrum of visualization and analysis that newcomers to the subject might find useful. Agna has a gentle learning curve and is easy to use – it is probably the ideal choice for someone looking for a simple analysis and visualization tool to explore the concepts outlined in the books by Gladwell, Barabasi and Watts. The statistical analysis tool R, when coupled to add-on packages such as sna, allows for greater depth in the exploration of social networks, but does so at the price of a far steeper learning curve and a less friendly user interface. In between these two packages, both in terms of ease of use and exploratory power, is the free version of UCINET. Unlike Agna and R, both of which are cross-platform, this version of UCINET is DOS-based; the good news is that it runs just fine under many of the free DOS emulators available for Mac OS X or Linux, such as Bochs coupled to the FreeDOS operating system. Even if you decide not to use UCINET, it is worth downloading for the sample network files that accompany it – to decompress it on any platform, simply change the .exe ending on the downloaded file to .zip, and run it through your favorite decompression program. Additional sample data can be found on the INSNA site.
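
To give a flavor of the R route, here is a minimal sketch using the sna package; a small random network stands in for real data, and in practice an adjacency matrix from the UCINET or INSNA sample files would be read in instead.

  # Minimal sketch of basic network measures with the sna package.
  library(sna)

  set.seed(42)
  net <- rgraph(12, tprob = 0.2)   # 12 actors; each tie present with probability 0.2

  degree(net)        # number of ties for each actor
  betweenness(net)   # how often each actor lies on shortest paths between others
  gplot(net)         # draw the network
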
For anything beyond the simplest explorations, some additional instruction in the science of social networks will be necessary. Several excellent tutorials by active researchers are available on the Web: Valdis Krebs has a simple yet effective introduction to the subject. Steve Borgatti’s slide-show overview of the basics of network analysis is available in PDF format. Finally, Robert Hanneman’s well-written and thorough introductory textbook on social network methods can also be downloaded in PDF format.

On Learning Objects

The MERLOT conference provided an excellent opportunity to share ideas with other educators, and listen to some thought-provoking presentations on the subject of learning objects. Rather than rehash my favorite presentations (since the materials from all the talks will be available within the next few weeks on the MERLOT website), I would like to share some thoughts about learning objects with an audience that might not have heard of them.
A good starting place is the definition of a learning object: it can be defined as a core consisting of a content object (which could be as small as a single image or video fragment, or as large as a set of books), wrapped in a layer that contains information relevant to its educational use (e.g., pedagogical goals, knowledge prerequisites, forms of assessment), with this information structured in standardized fashion. The core need not be digital – it could be a physical book, or a particular geographic location for use in an ecology lesson – but since the wrapper is digital, all sorts of fun things regarding the collection, sharing, and evaluation of these learning objects can now take place. It is important to realize that learning objects are defined by their pedagogical purposes and context – a famous painting could form the core of a learning object, but would not be a learning object by itself. A more detailed discussion of the structure of learning objects can be found in this paper by Larry Johnson.
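
To make the core-plus-wrapper idea concrete, here is a minimal sketch, written in R purely for illustration – the field names are invented for this example, and a real wrapper would follow a standard metadata format rather than this ad hoc structure.

  # Minimal sketch (not any formal metadata standard; field names are purely
  # illustrative): a learning object as a content "core" plus a pedagogical
  # metadata "wrapper".
  learning_object <- list(
    core = list(
      type     = "image",
      location = "http://example.org/mitosis.png"  # could equally point to a physical resource
    ),
    wrapper = list(
      pedagogical_goals = c("describe the phases of cell division"),
      prerequisites     = c("basic cell structure"),
      assessment        = "short structured observation task",
      audience_level    = "secondary"
    )
  )

  str(learning_object)   # inspect the nested structure
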
Simple as the concept might seem in theory, quite a bit of work is needed to make it a reality in practice. Among the things required are standards for semantic annotation, tools for creating learning objects, databases for storing and searching these objects, ways of sharing the objects, and structures for evaluating the pedagogical quality and effectiveness of those objects. MERLOT is one of several institutions providing a framework for the sharing of research on learning objects, as well as a repository for learning objects and their evaluation. Many other projects have taken on the task of providing end-to-end solutions for the creation, storage, and sharing of learning objects. One of the most interesting in this regard is eduSource Canada, for its comprehensiveness, thoughtfulness, adherence to open standards, and (particularly important, in my view) the attention it has paid to scalability – this bodes well for the products of this effort being usable by institutions and individuals without massive financial and hardware resources.
The learning objects movement is still in its infancy – for instance, the wrapper and evaluation tools provided for most objects on the current MERLOT website are primitive at best – but development is proceeding rapidly. There are many potential and unique advantages to be realized by the use of learning objects, but also (unfortunately) some pitfalls. Of these, I would like to highlight five key advantages, and three potentially perilous pitfalls.
Advantages:
Aiding in the democratization of learning: wealthy institutions (e.g., MIT) are now sharing their course materials with the world. Learning objects provide a way to make these products available, usable, and digestible – the materials for a full MIT course might not be particularly usable in raw form, but could be readily incorporated by other institutions into their teaching practice if broken down into learning object-style components. At the K-12 level, where instructor training and materials creation can become a particularly pressing problem in less wealthy institutions, the use of a learning object-type approach would allow instructors to use well-evaluated components, while simultaneously reducing the problem of content creation and training to a manageable scale.
Assuming a (truly) creative role for the learner: the structure of learning objects is such that learners are not restricted to using objects passively, but can create their own learning objects to share with others as part of the educational process. A simple yet powerful example of the type of tool that can assist in doing this is given by Pachyderm – templates of the type used in Pachyderm would allow learners to express their understanding of the material in ways that are both deeper and more active than standardized testing. In fact, the very process of choosing among, using, discussing, and evaluating learning objects by learners can be viewed as an essential portion of the learning object creation methodology – a recent presentation by Ulrich Rauch and Warren Scott (summarized by Sarah Lohnes here) argued just this point.
Providing a basis for real discussion: the creation and use of learning objects implies a “theory into practice” approach – any given object is intimately tied to a particular point of instructional practice, but requires clear understanding of its related theory (as, for instance, when creating its semantic tags). This could have a very salutary effect on pedagogical discussion: theoretical conversations in the area of pedagogy without actual examples tend to devolve into fluffy wordplay with little or no relevance to actual teaching practice. However, the choice that is frequently made to schematize or omit relevant theory results in narrowly technical solutions that are copied across institutions with little understanding and less success. Learning objects sidestep the divorce between theory and practice, and could provide educators with tangible objects for productive discussion.
Respecting flexibility in learning styles without sacrificing content: in some applications of current pedagogical thought, differences in learning styles have been mistakenly taken as the equivalent of exclusion from areas of knowledge. I have been present – although not silently, I can assure you – at meetings where instructors insisted that “student X, being primarily a visual learner, could not be expected to understand mathematical abstractions”. This is dangerous, condescending, elitist nonsense, and a thorough misrepresentation of the research conducted into learning styles. Learning objects allow for the creation of multiple approaches to the same objectives, which the learner can choose to tailor by selecting different paths based on their individual learning style – a superb example of this was presented at the conference by Laura Franklin as part of a joint talk with Cathy Simpson.
Allowing for greater potential integration of content across levels (K-12, college, adult learners, etc.): because learning objects need not be tied to a given course or lesson plan, they can be recontextualized by different instructors and learners at different levels in varying fashion. For instance, the learning objects on the senses on Tutis Vilis’ website could be readily used (with varying degrees of instructor contextualization) by learners of all levels.
Pitfalls:
The pitfalls I see do not emanate from anything intrinsic to learning objects, but rather from the fallacies that can arise when enthusiasm for a tool crosses over the line into zealotry. In all fairness, I have not heard these voiced frequently within the learning objects community – but I have heard them voiced often enough to be worth a cautionary note. The three fallacies are:
The fallacy of the LEGO™ bricks: this can best be expressed as “snap a course together from learning object bricks – presto, you’re done”. The LEGO metaphor for learning objects can be useful in conceptualizing their interchangeability and multiplicity – up to a point. When taken too literally, it implies both an excess of structure and passivity in the instructor/learner roles. Additionally, learning objects lack the right features to be literally LEGO-like: the scope of any given object is not uniform, different objects may overlap or leave gaps between them, and the objects themselves need not be immutable. The only way to make LEGO-brick learning objects is to artificially constrain both the production of these objects and the learning contexts within which they are to be used, in ways that are, if anything, less interesting than the least creative aspects of current teaching practice.
The fallacy of the experts: summarizable as “ok, I’ll put in the content, you put in the usability, they put in the accessibility, someone else puts in the semantic markup – presto, a new learning object”. This viewpoint is far more widespread than the previous one – even some people who acknowledge that this type of super-specialized multiple-expert development is probably financially infeasible seem to be nostalgic for it. Beyond financial considerations, however, I view this as an example of the malady of overspecialization that affects many sectors of the educational establishment. As someone who has taught courses in usability and accessibility, I can assure you that the material in these areas required to create learning objects does not demand years of study – one or two courses of the same scope and duration as those routinely taken by teachers for recertification will more than suffice. Additionally, a well-designed learning object requires attention to all aspects of its construction from the start – while it is possible to “bolt on” a tolerable interface to a learning object where usability was not a primary design concern of the content creator, doing so tends to yield mediocre results at best. The experts should be able to focus on those tasks for which deep expertise is required – the creation of tools for the creation of learning objects, research and development in particularly difficult areas of user accessibility, etc.
The fallacy of authoritarianism: which can be simply put as “this is the only worthwhile way to do things – join us or be marginalized”. Whenever I have heard this viewpoint expressed, it has had a particularly dramatic chilling effect upon its listeners. I can think of few things that can kill off a promising pedagogical tool faster than this type of attitude. Learning objects have great pedagogical potential – but only if combined with a broad range of other new and existing tools, and an equally wide scope of critical opinions – none of which are likely to flourish in a “do it my way or else” type of atmosphere.

A Matrix Model for Designing and Assessing Network-Enhanced Courses

I have added to the Resources section of the Hippasus website my paper summarizing a matrix model for course design and evaluation that formed the basis for two recent presentations at the NMC and MERLOT conferences. Extensions of this model are at the heart of Hippasus’ approach to pedagogical design – I’ll have more to say about this in later posts. In the meantime, I welcome all comments people might have on the current paper.