The Meaning and Implications of the OECD Literacy Measurements

In the wake of my previous post about the recent OECD education report, several people have
asked for further analysis of the OECD evaluation tools. More specifically,
they have
been interested in how these tools differ from the usual standardized tests
(with their known pitfalls and blind spots), and how the results obtained
should be
interpreted. In order to discuss this, I have to preface the body of this post
with a brief definition of two terms commonly used within the educational community:
content and competency.

Broadly defined, content can be viewed as the “stuff” that
makes up a subject area. For instance, part of the content involved in a basic
biology class might be the definition of viruses and bacteria, with their different characteristics
and modes of reproduction. Competencies can then be viewed as “what
can the student do” with this content. Thus, at a very basic level of
competency, the student will be able to reproduce the definitions acquired.
However, this
level of competency is generally not the only thing that would be desired –
one would hope that the student would be able to integrate this content with other
knowledge, and thereby be able to follow simple cause-and-effect procedures supplied to them
in order to keep themselves healthy. Even better, one would hope that they
would be able to create new knowledge for themselves, so that they could take
an active
role in their health decisions. Clearly, the evaluation tool to be used in
determining these progressively higher levels of competency has to go well
beyond “fill
in the blank”-type tests.

Take, for instance, the content area of basic
algebra. An evaluation tool that measures whether the student can carry out
the basic operations involved in
a standard “solve for x”-type problem is indeed providing some information
about how the student performs in this content area – but at a very low level
of competency, namely that of mimicry of a standard algorithmic procedure.
A tool that evaluates how the student performs at applying these basic operations
to a standard word problem where the necessary information is already provided
to the student corresponds to the measurement of a somewhat higher level of
competency. Replacing the standard word problem with one using nonstandard phrasing provides
an evaluative tool for a substantially higher level of competency. At even
higher levels of competency we would find tools that evaluate how the student
performs when presented with a problem solvable using the algebraic procedures known
to them, but where the data needed to solve the problem is not provided to
the student
a priori, and must be requested by them. Finally, at the highest level of competency,
it becomes of interest to evaluate how the student performs when applying the
tools of basic algebra to a real-world problem defined autonomously by the
student.

Most standardized tests operate on only the first two levels of our
algebra example, and hence miss a large – perhaps the most important – part
of the
picture. The OECD evaluation tools are designed to provide a broad picture
of the competency spectrum within the context of an equally broad content area.
Furthermore, the “top level” of the OECD competencies corresponds
to a “competency floor” that the majority of a country’s population
can reasonably be expected to achieve. In other words, high scores on the OECD
evaluations should be attainable by most people, and not just a privileged elite. The fact
that no country came close to this result indicates the distance yet to be
covered in terms of educational quality and equity throughout the world.

To better understand the conclusions that can be derived from the OECD report,
we need to take a look at how scores relate to competency levels. A difference
of a few
points in the OECD results between two students means absolutely nothing in
terms of the relative competencies achieved; instead, the OECD provides rough
scales that correspond to broad swaths of competencies. For the three areas
studied, these categories, listed here with a few sample representative tasks, are as follows:

Categories and Representative Tasks

Reading Literacy
  • Level 1 (335 to 407 points): locating a single piece of information; identifying the main theme of a text
  • Level 2 (408 to 480 points): locating straightforward information; deciding what a well-defined part of the text means
  • Level 3 (481 to 552 points): locating multiple pieces of information; drawing links between different parts of the text
  • Level 4 (553 to 625 points): locating embedded information; critically evaluating a text
  • Level 5 (over 625 points): locating difficult-to-find information; building new hypotheses based upon texts

Mathematical Literacy
  • Level 1 (around 380 points): carrying out single-step mathematical processes
  • Level 2 (around 570 points): carrying out multiple-step mathematical processes for predefined problems
  • Level 3 (around 750 points): creating new mathematical processes as required by problems

Scientific Literacy
  • Level 1 (around 400 points): recalling simple scientific facts
  • Level 2 (around 550 points): using scientific concepts to make predictions or provide explanations
  • Level 3 (around 690 points): creating new conceptual models to make predictions or provide explanations

The source for this table is the OECD
Education at a Glance 2003
report, from which most of the language
describing representative tasks is drawn. More detailed information can be
found on the OECD Programme
for International Student Assessment (PISA) website
– the executive summary of the Knowledge
and Skills for Life 2001
report is particularly useful in
this regard.
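As a side note, the reading literacy bands above lend themselves to a simple score-to-level lookup. Here is a minimal Python sketch using the point boundaries listed in the table; the treatment of scores below 335 (reported as below Level 1) is my own convention for the sketch:

```python
# Map an OECD/PISA reading literacy score to the level bands
# listed in the table above. Scores below 335 fall below Level 1,
# which this sketch reports as level 0.
READING_BANDS = [
    (335, 407, 1),  # locating a single piece of information
    (408, 480, 2),  # locating straightforward information
    (481, 552, 3),  # locating multiple pieces of information
    (553, 625, 4),  # locating embedded information
]

def reading_level(score: float) -> int:
    """Return the reading literacy level for a score (0 = below Level 1)."""
    if score > 625:
        return 5  # building new hypotheses based upon texts
    for low, high, level in READING_BANDS:
        if low <= score <= high:
            return level
    return 0

# New Zealand's reading average, cited later in this post:
print(reading_level(529))  # Level 3
```

This makes concrete why a difference of a few points is meaningless: two scores only carry different interpretations when they land in different bands.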

In my previous post, I had identified a group of seven countries that could
reasonably be said to perform “better than average” in the context
of these literacies. Looking at New Zealand as a representative country in
this group, we find that
its averages are 529, 537, and 528 for reading, mathematical, and scientific
literacy respectively – still a ways from providing the majority of its population
with the desired “competency floor”.

As I mentioned at the end of
my last post, remedying the deficiencies revealed by the OECD report will
take more than minor changes in expenditures or classroom
organization. Instead, it will take educational redesign that directly addresses
the idea that higher-level competencies are what is desirable and achievable
in all content areas. Interestingly, I believe that this redesign can be
accomplished within the context of existing educational structures – I have
yet to see any
data that indicates, for instance, that dramatically changing the mix of
public to private educational institutions in any given country would fundamentally
transform the results measured by the OECD.

What can play
a crucial role is the set of technological transformations that can be
brought about by the availability of networked
computers to all participants
in the educational process. A serious discussion of why this is the case
will have to wait until a later post. However, let me provide the following
as a tidbit in the meantime. Returning to our earlier algebra example,
researchers working from a constructivist learning perspective have found that
in mathematics
– and in fact, in most subject areas – project-oriented learning has the
potential to work wonderfully as a way of addressing both the teaching
and evaluation of
the highest competency level described above. However, this potential
can be effectively sabotaged in situations where a scarcity of information
works against the student’s capacity to research, define, and present a
project. Additionally, at the evaluation stage, it is very important that teachers
have access to a broad range of student projects, and engage in serious collegial
discussion about them across a broad range of institutions – it is otherwise too easy for
evaluative inbreeding to take place. Intelligent use of networked computers
can address both these issues efficiently in ways no other resource can,
by providing low-cost access to resources, as well as by allowing students
and teachers alike to share and discuss projects. Of course, just throwing
computers at the
schools will not accomplish this result – both new research and new solutions
derived from existing research will be necessary to change the landscape
described in the OECD report. However, the early results of experiments
such as the Maine
Learning Technology Initiative
make me
hopeful that this is indeed the most promising direction for change.

Some Comments on the Recent OECD Education Report

About a week ago, the OECD (Organisation
for Economic Co-operation and Development) Education
at a Glance 2003
report was released to the press. The main thrust of
the report was portrayed in the press as follows:

Report: U.S. No. 1 in school spending
Test scores fall in middle of the pack

WASHINGTON (AP) — The United States spends more public and private money
on education than other major countries, but its performance doesn’t measure up
in areas ranging from high-school graduation rates to test scores in math,
reading and science, a new report shows.

(taken from the September
16th article
on the CNN website)

This rather damning lead was followed in the body of the article by a quote from
Barry McGaw, education director for the OECD:

“There are countries which don’t get the bang for the bucks, and the U.S. is one of them.”

The rest of the press
report cited a figure of $10,240 spent per student in the U.S., and included
tables of 15-year-olds’ performance in math, reading, and science that rank
the U.S. below thirteen to eighteen other countries.

Whenever I see a report from a reasonably serious organization such as the
OECD described in sensationalistic terms with potential for malicious use,
I get suspicious.
And when I get suspicious, I go to the source and check out the numbers. Which
is what I did in this case. Not to spoil the rest of the story, but while I
found many interesting and worthwhile nuggets of data in the OECD report (many of which
are summarized in the briefing notes for the U.S., downloadable in PDF
format), I found nothing to substantiate
the explicit and implicit allegations of the news report.

Let’s start out with
the figure of $10,240 spent per student. This figure is not as simple as it might
seem at first. First, it represents adjusted U.S. dollars – in other words, the figures it is being compared to are
not actual dollar amounts spent in each country, but have been adjusted for
purchasing power parity (PPP) so as to provide a better basis for comparison. While some
correction of this type is needed for cross-country comparisons to
be meaningful, the adjustment formula used can artificially inflate or deflate
the actual magnitudes involved. In other words, while the numbers obtained
from this adjustment can be reasonably used to claim that country A spends more than
country B per student on education, it would be foolhardy to claim that the
ratio of expenditures between the two countries is more than a rough estimate.
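A toy example with invented numbers may make the distinction clearer: PPP adjustment divides local spending by an estimated conversion factor, so ordinal comparisons survive the adjustment better than exact ratios do. All figures below are hypothetical:

```python
# Illustrative PPP adjustment with invented numbers. The adjusted
# figures support ordinal comparisons ("A spends more than B"),
# but exact ratios inherit the error in the estimated PPP factors.
nominal_spend = {"CountryA": 9000, "CountryB": 60000}  # local currency units
ppp_factor = {"CountryA": 1.0, "CountryB": 8.0}        # local units per PPP dollar

adjusted = {c: nominal_spend[c] / ppp_factor[c] for c in nominal_spend}
print(adjusted)  # {'CountryA': 9000.0, 'CountryB': 7500.0}

# Reasonable claim: CountryA spends more per student than CountryB.
# Foolhardy claim: CountryA spends exactly 1.2x what CountryB does --
# the PPP factors are themselves estimates, so the ratio is only rough.
```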

More importantly,
the $10,240 figure includes expenditures per student from primary school through
college inclusive. In other words, while the performance of
fifteen-year-old high school students is being used as the yardstick for educational
quality comparisons, the monetary amount being referenced includes expenditures
for college education. As anyone living in the U.S. knows, the ways colleges
are funded differ drastically from those for high schools. To measure the “bang
for the buck” being obtained would require some equivalent performance measure
for college students, which is nowhere to be found in the report. A more relevant
figure to the critique would be the total secondary school expenditure
per student. Using the corresponding expenditure table in the report,
we obtain a figure of $8,855 – high, but far from the highest in this category
(Switzerland, at $9,780), and comparable to other countries such as Austria
($8,578) and Norway ($8,476).

So much for the dollar amount. What about those
tables showing the U.S. trailing the pack in the knowledge demonstrated by
fifteen-year-olds in reading, mathematics,
and science? As before, the story is more complex than these tables would seem
to show. While the rankings published are “correct” inasmuch as they follow
the published scores, they neglect to take into account the fact that in many cases,
score differences between countries are too small to be significant. For instance,
the U.S. indeed trails Norway in science scores – by all of 0.18%. A more useful
way to think about data such as this is to look for “clusters” of countries
that perform in like fashion. Using the data from Tables
A5.2, A6.1, and A6.2
and the cluster analysis tools from R,
I find that the data can reasonably be clustered into four groups. The first group, made
up of seven countries, exhibits fifteen-year-old performance that
is better than average. The second group, which includes the U.S., exhibits
performance that is average. The third group exhibits performance below average,
and the fourth group exhibits performance that is substantially below average. The
following table, with countries arranged in alphabetical order within groups,
summarizes these results:

Performance of 15-Year-Old Students in Reading, Mathematics, and Science

Better than Average: Australia, Canada, Finland, Japan, Korea, New Zealand, United Kingdom
Average: Austria, Belgium, Czech Republic, Denmark, France, Germany, Hungary, United States
Below Average: Greece, Italy, Latvia, Luxembourg, Poland, Portugal, Russian Federation
Substantially Below Average: Brazil, Mexico

While this indicates that the U.S. is not in an optimal position, it is far
from indicating results as dire as those implied by the press report. Secondary
school systems in the seven countries in the first group are worth
studying further – while the difference in performance between the first and
second groups is not dramatic, it is certainly significant and noticeable.
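The grouping step can be sketched in miniature. I used R's cluster analysis tools for the actual analysis; the sketch below uses a simple single-linkage agglomerative clustering in Python instead, and the country names and (reading, math, science) score triples are invented placeholders, not values from Tables A5.2, A6.1, and A6.2:

```python
# Sketch of the clustering step: group countries by their
# (reading, math, science) score triples using single-linkage
# agglomerative clustering. All scores here are invented placeholders.
from math import dist

scores = {
    "CountryA": (529, 537, 528),
    "CountryB": (527, 533, 530),
    "CountryC": (504, 493, 499),
    "CountryD": (505, 495, 500),
    "CountryE": (396, 334, 375),
}

def cluster(points, n_clusters):
    """Repeatedly merge the two closest clusters until n_clusters remain."""
    clusters = [[name] for name in points]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: distance between the closest pair of members.
                d = min(dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters.pop(j))
    return [sorted(c) for c in clusters]

print(cluster(scores, 3))
```

With these placeholder numbers, the two high-scoring countries, the two middling ones, and the outlier fall into three separate groups; the point is that nearby scores cluster together regardless of their published rank order.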

What does this tell us,
then, about the appropriateness of the adjusted expenditures? It tells us that
we cannot, at this point, and based upon these numbers, make any judgment
about the appropriateness of per student adjusted educational expenditures
for any given country. Expenditures per secondary school student do not in any significant
way correlate to the observed grouping. Nor does coupling these numbers to
any other data included in the report yield any particularly insightful results:
percentage of GDP spent on education, class size, number of hours of classroom
instruction, and teacher pay all fail to yield any significant correlations
with our observed clustering either when taken alone or when taken in groups. Again,
this does not mean that none of these factors matter – rather it means that
predictive models for educational success require the study of additional variables
not considered in the current report.

Finally, a cautionary note about the interpretation
of the results for the seven better-than-average performers: the data in the
report simply points to something “interesting” happening
in these seven countries, worthy of further investigation. It does not point
to these countries as occupying a pinnacle that other countries should strive
to achieve and then rest on their laurels. I chose the label for this group
carefully: “better than average” implies just that – not an ultimate target in any sense of the
word. The instruments used for the evaluation of 15-year-old student proficiency
in reading, mathematics, and science are only intended to provide a rough picture
of what could reasonably be expected as universal knowledge in these areas.
No country even approached a near-perfect score on these tests for a majority
of its tested population; thus, no country could be said to have provided a solid
educational floor in these categories for all of its citizens. Getting to the
point where this educational floor can be guaranteed will require more than
slight changes to expenditures, school year duration, or class sizes – it will
require a significant rethinking of how the educational process occurs at all levels.

Tools for Thinking About Social Networks

In the past few years, there has been a burst of interest in the topic of social networks outside the traditional confines of the field. Some of this interest comes, of course, as a result of new research published in the academic press, but it has also been fueled by at least three other factors:

  • the publication of several well-written popular accounts of current research, such as Malcolm Gladwell’s The Tipping Point, Albert-Laszlo Barabasi’s Linked, and Duncan J. Watts’ Six Degrees;
  • the availability of cheap computer power;
  • the existence of the ultimate playground for inexpensive and original social network research – the Internet.
Many of the topics currently being discussed in the social networks arena have the potential to transform how we think about the design of educational structures. I’ll come back to where I see this potential being realized most fruitfully at a later date, but for now I would like to focus on some of the (free!) tools available for people to explore for themselves the concepts discussed in the books mentioned above.
There exist three free tools that cover quite nicely the spectrum of visualization and analysis that newcomers to the subject might find useful. Agna has a gentle learning curve and is easy to use – it is probably the ideal choice for someone looking for a simple analysis and visualization tool to explore the concepts outlined in the books by Gladwell, Barabasi and Watts. The statistical analysis tool R, when coupled to add-on packages such as sna, allows for greater depth in the exploration of social networks, but does so at the price of a far steeper learning curve and a less friendly user interface. In between these two packages, in terms of both ease of use and exploratory power, is the free version of UCINET. Unlike Agna and R, both of which are cross-platform, this version of UCINET is DOS-based; the good news is that it runs just fine under many of the free DOS emulators available for Mac OS X or Linux, such as Bochs coupled to the FreeDOS operating system. Even if you decide not to use UCINET, it is worth downloading it for the sample network files that accompany it – to decompress it on any platform, simply change the .exe ending on the downloaded file to .zip, and run it through your favorite decompression program. Additional sample data can be found on the INSNA site.
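To give a flavor of what these packages compute, here is a minimal Python sketch of one of the most basic measures they all offer, degree centrality, on a tiny invented friendship network (the names and ties are made up for illustration):

```python
# Degree centrality on a small invented friendship network:
# the fraction of all other actors each actor is directly tied to.
edges = [
    ("Ann", "Bob"), ("Ann", "Carl"), ("Ann", "Dee"),
    ("Bob", "Carl"), ("Dee", "Ed"),
]

nodes = sorted({n for e in edges for n in e})

def degree_centrality(nodes, edges):
    degree = {n: 0 for n in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    # Normalize by the maximum possible degree, n - 1.
    return {n: degree[n] / (len(nodes) - 1) for n in nodes}

for name, c in degree_centrality(nodes, edges).items():
    print(f"{name}: {c:.2f}")  # Ann, the "connector", scores highest
```

Tools like Agna and UCINET report exactly this kind of measure (plus many richer ones) for networks loaded from data files, which is why they make a good playground for the "connector" ideas popularized by Gladwell.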
For anything beyond the simplest explorations, some additional instruction in the science of social networks will be necessary. Several excellent tutorials by active researchers are available on the Web: Valdis Krebs has a simple yet effective introduction to the subject. Steve Borgatti’s slide-show overview of the basics of network analysis is available in PDF format. Finally, Robert Hanneman’s well-written and thorough introductory textbook on social network methods can also be downloaded in PDF format.

On Learning Objects

The MERLOT conference provided an excellent opportunity to share ideas with other educators, and listen to some thought-provoking presentations on the subject of learning objects. Rather than rehash my favorite presentations (since the materials from all the talks will be available within the next few weeks on the MERLOT website), I would like to share some thoughts about learning objects with an audience that might not have heard of them.
A good starting place would be the definition of a learning object. A learning object can be defined as being made up of a core consisting of a content object (which could be as small as a single image or video fragment, or as large as a set of books), wrapped in a layer that contains information relevant to its educational use (e.g., pedagogical goals, knowledge prerequisites, forms of assessment), with this information structured in standardized fashion. The core need not be digital – it could be a physical book, or a particular geographic location for use in an ecology lesson – but since the wrapper is digital, all sorts of fun things regarding the collection, sharing, and evaluation of these learning objects can now take place. It is important to realize that learning objects are defined by their pedagogical purposes and context – a famous painting by itself could form the core of a learning object, but would not be a learning object by itself. A more detailed discussion of the structure of learning objects can be found in this paper by Larry Johnson.
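The core-plus-wrapper structure can be made concrete with a small sketch. The field names below are my own illustration of the idea, not drawn from any actual metadata standard:

```python
# A toy sketch of the learning object structure described above:
# a content core (digital or physical) wrapped in a layer of
# standardized pedagogical metadata. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class LearningObject:
    # The core: a pointer to the content, which need not be digital --
    # it could be a URL, an ISBN, or the coordinates of a field site.
    content_ref: str
    content_is_digital: bool
    # The wrapper: information relevant to educational use.
    pedagogical_goals: list[str] = field(default_factory=list)
    prerequisites: list[str] = field(default_factory=list)
    assessment_forms: list[str] = field(default_factory=list)

# A famous painting becomes a learning object only once it is wrapped
# in a pedagogical purpose and context:
painting_lesson = LearningObject(
    content_ref="urn:example:famous-painting",
    content_is_digital=False,
    pedagogical_goals=["analyze composition in Renaissance art"],
    prerequisites=["basic color theory"],
    assessment_forms=["short critical essay"],
)
print(painting_lesson.pedagogical_goals[0])
```

Because the wrapper is structured data, collections of such objects can be searched, shared, and evaluated programmatically even when the cores themselves are physical.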
Simple as the concept might seem in theory, quite a bit of work is needed to make it become a reality in practice. Among the things required are standards for semantic annotation, tools for creating learning objects, databases for storing and searching these objects, ways of sharing the objects, and structures for evaluating the pedagogical quality and effectiveness of those objects. MERLOT is one of several institutions providing a framework for the sharing of research on learning objects, as well as a repository for learning objects and their evaluation. Many other projects have taken on the task of providing end-to-end solutions for the creation, storage, and sharing of learning objects. One of the most interesting in this regard is eduSource Canada for its comprehensiveness, thoughtfulness, adherence to open standards, and (particularly important, in my view) the attention they have paid to scalability – this bodes well for the products of this effort being usable by institutions and individuals without massive financial and hardware resources.
The learning objects movement is still in its infancy – for instance, the wrapper and evaluation tools provided for most objects on the current MERLOT website are primitive at best – but development is proceeding rapidly. There are many potential and unique advantages to be realized by the use of learning objects, but also (unfortunately) some pitfalls. From these, I would like to highlight five key advantages, and three potentially perilous pitfalls.
Aiding in the democratization of learning: wealthy institutions (e.g., MIT) are now sharing their course materials with the world. Learning objects provide a way to make these products available, usable and digestible – the materials for a full MIT course might not be particularly usable in raw form, but could be readily incorporated by other institutions into their teaching practice if broken down into learning object-style components. At the K-12 level, where instructor training and materials creation can become a particularly pressing problem in less wealthy institutions, the use of a learning object-type approach would allow for instructors to use well-evaluated components, while simultaneously reducing the problem of content creation and training to manageable scale.
Assuming a (truly) creative role for the learner: the structure of learning objects is such that learners are not restricted to using objects passively, but can create their own learning objects to share with others as part of the educational process. A simple yet powerful example of the type of tool that can assist in doing this is given by Pachyderm – templates of the type used in Pachyderm would allow learners to express their understanding of the material in ways that are both deeper and more active than standardized testing. In fact, the very process of choosing among, using, discussing, and evaluating learning objects by learners can be viewed as an essential portion of the learning object creation methodology – a recent presentation by Ulrich Rauch and Warren Scott (summarized by Sarah Lohnes here) argued just this point.
Providing a basis for real discussion: the creation and use of learning objects implies a “theory into practice” approach – any given object is intimately tied to a particular point of instructional practice, but requires clear understanding of its related theory (as, for instance, when creating its semantic tags). This could have a very salutary effect on pedagogical discussion: theoretical conversations in the area of pedagogy without actual examples tend to devolve into fluffy wordplay with little or no relevance to actual teaching practice. However, the choice that is frequently made to schematize or omit relevant theory results in narrowly technical solutions that are copied across institutions with little understanding and less success. Learning objects sidestep the divorce between theory and practice, and could provide educators with tangible objects for productive discussion.
Respecting flexibility in learning styles without sacrificing content: in some applications of current pedagogical thought, differences in learning styles have been mistakenly taken as the equivalent of exclusion from areas of knowledge. I have been present – although not silently, I can assure you – at meetings where instructors insisted that “student X, being primarily a visual learner, could not be expected to understand mathematical abstractions”. This is dangerous, condescending, elitist nonsense, and a thorough misrepresentation of the research conducted into learning styles. Learning objects allow for the creation of multiple approaches to the same objectives, which the learner can choose to tailor by selecting different paths based on their individual learning style – a superb example of this was presented at the conference by Laura Franklin as part of a joint talk with Cathy Simpson.
Allowing for greater potential integration of content across levels (K-12, college, adult learners, etc.): because learning objects need not be tied to a given course or lesson plan, they can be recontextualized by different instructors and learners at different levels in varying fashion. For instance, the learning objects on the senses on Tutis Vilis’ website could be readily used (with varying degrees of instructor contextualization) by learners of all levels.
The pitfalls I see emanate not from anything intrinsic to learning objects, but rather from the fallacies that can arise when enthusiasm for a tool crosses over the line into zealotry. In all fairness, I have not heard these voiced frequently within the learning objects community – but I have heard them voiced often enough to be worth a cautionary note. The three fallacies are:
The fallacy of the LEGO™ bricks: this can best be expressed as “snap a course together from learning object bricks – presto, you’re done”. The LEGO metaphor for learning objects can be useful in conceptualizing their interchangeability and multiplicity – up to a point. When taken too literally, it implies both an excess of structure and passivity in the instructor/learner roles. Additionally, learning objects lack the right features to be literally LEGO-like: the scope of any given object is not uniform, different objects may overlap or leave gaps between them, and the objects themselves need not be immutable. The only way to make LEGO-brick learning objects is to artificially constrain both the production of these objects and the learning contexts within which they are to be used, in ways that are, if anything, less interesting than the least creative aspects of current teaching practice.
The fallacy of the experts: summarizable as “ok, I’ll put in the content, you put in the usability, they put in the accessibility, someone else puts in the semantic markup – presto, a new learning object”. This viewpoint is far more widespread than the previous one – even some people who acknowledge that this type of super-specialized multiple-expert development is probably financially infeasible seem to be nostalgic for it. Beyond financial considerations, however, I view this as an example of the malady of overspecialization that affects many sectors of the educational establishment. As someone who has taught courses in usability and accessibility, I can assure you that the material in these areas required to create learning objects does not demand years of study – one or two courses of the same scope and duration as those routinely taken by teachers for recertification will more than suffice. Additionally, a well-designed learning object requires attention to all aspects of its construction from the start – while it is possible to “bolt on” a tolerable interface to a learning object where usability was not a primary design concern of the content creator’s, it tends to yield mediocre results at best. The experts should be able to focus on those tasks for which deep expertise is required – the creation of tools for the creation of learning objects, research and development in particularly difficult areas of user accessibility, etc.
The fallacy of authoritarianism: which can be simply put as “this is the only worthwhile way to do things – join us or be marginalized”. Whenever I have heard this viewpoint expressed, it has had a particularly dramatic chilling effect upon its listeners. I can think of few things that can kill off a promising pedagogical tool faster than this type of attitude. Learning objects have great pedagogical potential – but only if combined with a broad range of other new and existing tools, and an equally wide scope of critical opinions – none of which are likely to flourish in a “do it my way or else” type of atmosphere.

A Matrix Model for Designing and Assessing Network-Enhanced Courses

I have added to the Resources section of the Hippasus website my paper summarizing a matrix model for course design and evaluation that formed the basis for two recent presentations at the NMC and MERLOT conferences. Extensions of this model are at the heart of Hippasus’ approach to pedagogical design – I’ll have more to say about this in later posts. In the meantime, I welcome all comments people might have on the current paper.

In Defense of Ephemerality

A couple of weeks ago, while reading through some weblogs, I came across the following quote in Don Park’s weblog:

“Blogs will fade away within two years. What we know now as blogs will not be recognized by web users of tommorrow, not as blogs, but as websites. Website technologies and blogging technologies will converge into one.”

When I first read this, I had a fairly clear-cut reaction to the statement – it went something like “here’s hoping you’re completely, totally, and absolutely wrong, Don”. The reason for this reaction has to do with today’s topic – ephemerality and education.
Much of the worrying taking place on the Internet today has to do with issues of ephemerality and its prevention – what do you do about newspaper archives that become pay-only after a while? How do you react when someone objects to their content being archived on Google or the Wayback Machine? How do you prevent permalinks on weblogs from breaking? In all of these discussions, there seems to be an unspoken assumption that permanence=good, ephemerality=bad. Now, it is absolutely true that in many of the discussions I’ve mentioned other important issues are at stake – for instance, some of the groups trying to dearchive their content from Google are doing so as a way of covering up evidence about some rather unsavory activities. That being said, though, the preceding dichotomous equation always seems to be taken as a given. This is very unfortunate, since I believe that ephemerality is not only not always negative, but is in fact essential to many aspects of life that are now mediated by the Internet, not least of all education.
Consider the following scenario: you are at the neighborhood watering hole, and you’ve run into someone who shares your interest in early blues music. You have some fairly unorthodox ideas about the genealogy of the field, but when you mention them to your newfound friend, they react with enthusiasm, and make their collection of recordings available to you for your research. Now, replay the preceding scenario, but this time have your newfound friend pull out a tape recorder as soon as you start to talk, and announce enthusiastically that every word you say will be archived for the ages to come. How likely are you now to share that unorthodox idea that could potentially make you look foolish? How likely is it now that that research partnership could come into existence? Somehow, ephemerality is starting to look much more like a virtue than a vice here…
Anyone who has ever worked in education knows that a similar dynamic operates in the context of a successful classroom. For an instructor to stimulate thoughtful and creative discussions, they have to provide an environment that encourages risk taking on the part of the students. Risk taking does not occur in environments where every single act is permanent, indelible, registered for the ages. Rather, there needs to exist a range of possibilities that can accommodate everything from the truly ephemeral (comments in a brainstorming session) to the permanent (a final project) and everything in between, with the possibility that elements can increase or decrease in ephemerality (for instance, allowing a set of comments from a brainstorming session to be selectively archived so that they can form the basis for a project).
Where do weblogs come into all of this? The richness of opportunity in face-to-face classroom interaction deserves an equally rich set of options in the electronic tools now available. To cite just three examples, chat rooms belong to the realm of the highly ephemeral, traditional architected web pages are perceived as highly nonephemeral, and weblogs are somewhere in between. You’ll notice that I used the term “perceived as” in the previous sentence – this is actually an important point. While, as Don correctly points out, weblogs are just another form of web page, technologically distinct only because of the way they are currently created, they are perceived at this point in time as quite distinct from traditional websites. A traditional website is expected to grow and change, but retain a core of stability in its content; again, at this point in time, weblogs carry much weaker expectations in this regard. If Don’s vision comes to pass (which, speaking from a technological viewpoint, is not unlikely), and the concept of a weblog as a distinct entity becomes merged with that of the traditional website, we will be the poorer for it.
It is beginning to sound like I’m in favor of incorporating ephemerality as an explicit design constraint in the networked tools arena. Which I am, in a sense. The issue is not just one of incorporating an “archive after time x, and delete after time y” feature in weblog software, but rather incorporating tokens of intent within the tools that are clearly and visibly communicated to users. As with all other issues regarding tools for social interaction, I do not believe blunt interdictions on forms of use are the way to go; rather, thought needs to be given to the issue by software designers so that social norms and tool features can coevolve. There are generally no laws barring you from bringing a tape recorder to a public gathering place and recording everyone’s conversations – but in most societies it would be viewed as unspeakably rude, and could very quickly make you a social pariah.
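To make the preceding a little more concrete, the “archive after time x, and delete after time y” idea, together with visible tokens of intent, can be sketched as a small data structure. This is a minimal illustration of the design principle, not a description of any real weblog software – the Posting class, the intent labels, and the status method are all hypothetical names of my own invention:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical ephemerality intents a posting tool might display to its users.
EPHEMERAL, ARCHIVABLE, PERMANENT = "ephemeral", "archivable", "permanent"

@dataclass
class Posting:
    text: str
    intent: str                                 # visible token of intent, set by the author
    created: datetime
    archive_after: Optional[timedelta] = None   # "archive after time x"
    delete_after: Optional[timedelta] = None    # "delete after time y"

    def status(self, now: datetime) -> str:
        """Report what should happen to this posting at a given moment."""
        age = now - self.created
        if self.delete_after is not None and age >= self.delete_after:
            return "delete"
        if self.archive_after is not None and age >= self.archive_after:
            return "archive"
        return "keep"

    def promote(self) -> None:
        """Decrease ephemerality: e.g., a brainstorming comment is
        selectively archived to become the basis for a project."""
        self.intent = PERMANENT
        self.archive_after = None
        self.delete_after = None
```

The point of the sketch is that the intent label travels with the content and is clearly visible, and that ephemerality can move in both directions – a promote operation models a brainstorming comment being rescued for permanence, rather than the tool imposing a single fixed lifespan.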
My research and experience in using networked tools for education point to ephemerality as one of the most crucial issues in this area of pedagogical design – and, unfortunately, one of the most neglected. If enough people start discussing this topic and using it as an explicit component of how they plan their work, I’m hopeful that this situation can be remedied.
Postscript: on Wednesday, August 6, at 4:30pm I’ll be presenting a talk at the MERLOT International Conference in Vancouver – some of the material I’ll be discussing there relates directly to the topic of ephemerality.

Creating Cognitive Art

I had to give some careful thought to today’s weblog before committing it to the electron stream – I wanted to make sure that it was more than a list of nice tools for creating analytical images, and instead provided a framework for educators to develop new digital storytelling approaches incorporating elements such as graphs, charts, diagrams, and maps.
The toolset should not be viewed as a “one solution fits all” dictum, but rather a means to achieve a broad range of expressive explorations in this domain. An essential portion of this process is a workflow designed for all the parts to fit together – a lovely tool for creating graphs is not much good if, when the graphs are brought into the drawing tool for further annotation, they become a horrid pixelated mess. In the tools selected below, I found that PDF on Mac OS X, WMF/EMF on Windows, and SVG on Linux are the preferred file formats to avoid interchange issues.
Those preliminaries taken care of, there are some essential components that I would suggest should be a part of the toolkit:

  • a tool for creating tables, the simplest form of analytical images, and one that can provide structure for embedding the results of other tools;

  • a general drawing tool that can be used both for semi-freeform drawing and to edit and touch up the results from other tools;

  • a structured drawing tool, oriented towards the creation of diagrams where the parts bear systematic relationships to each other, and where modifications to one part are correspondingly reflected in the other parts;

  • a data plotting tool that can produce multiple visualizations of the quantitative relationships among different types of data;

  • a map generation and analysis tool that can present the spatialized relationships among different types of data.

Of course, one of these tools can do the job of many – a general drawing tool can in fact substitute for any of the others – but will generally do so clumsily at best. The tools I suggest below are not meant to be the only possible ones – in fact, I will be very happy to hear suggestions others may bring to the table. I chose them for reasonable ease of use, suitability to the task at hand, and free or reasonably low cost (i.e., under fifty dollars at academic pricing for any one tool, with free try-before-you-buy periods available for all). Also, this is not intended as an exercise in denying the usefulness of some of the more expensive tools – for instance, one of the best data graphing packages for Windows is well worth the five-hundred-odd dollars it sells for. Instead, the goal is to compile a lean toolkit that still allows for significant exploration of the expressive space outlined in Tufte, Monmonier, and MacEachren.
An Additional General Consideration: for many of these tasks, standard office software suites provide an appropriate point of departure. For those people who lack such a suite, the free OpenOffice is fully comparable to the commercial offerings, and suitable for many of these tasks.
Tables: the standard office software suites are generally adequate to the task of producing expressive and communicative tables – once their presets, generally laden with ugly and unnecessary graphical elements, have been overridden, that is. Fortunately, this can be done rather easily in most cases.
General Drawing: while the office suites usually include a minimal set of drawing tools, a dedicated application will tend to provide a more graceful drawing experience. On Mac OS X, iDraw is an inexpensive and elegant tool. For Windows, DrawIt is a surprisingly powerful tool for the price, capable of integrating and exporting to many different file formats. For Linux, the free Sodipodi has a very nice feature set, even though it is strictly limited to SVG for its file import capabilities at this point in time.
Structured Drawing: this is a far less common feature within the office suites – OpenOffice, interestingly enough, is one of the few to include some diagramming features. Fortunately, excellent alternatives are available on all three platforms. For the Mac, OmniGraffle is powerful and easy to use, with very strong integration features with other applications such as outliners and presentation software. On Windows, EDGE Diagrammer has one of the richest feature sets available for software of this type. For Linux, the free Dia provides most of the features of the commercial software packages with a particularly compact and efficient interface.
Data Graphing Tools: as was the case for tables, basic graphing types are reasonably well covered by the office suites. However, the presets are, if possible, even worse than those for tables – many of the defaults could serve as perfect examples of Tufte’s chartjunk. The best results tend to be obtained by turning off unneeded or ill-designed features, and adding any necessary elements in a drawing application. When a range of graphs beyond what most of the suites provide is desired, there exist several options. On Mac OS X, the free trial version of pro Fit is not time limited, and its scope is perfectly well suited to most educational needs. On Windows, DPlot provides a reasonable subset of pro Fit’s capabilities, albeit not for free. For Linux, Grace is currently the most mature interactive graphing program, although its interface can take some getting used to. If an even greater range of options is desired, three free non-interactive programs are available on all three platforms: gnuplot, R, and ploticus. All three of these programs require learning a series of commands for producing graphs, but the range of creative options they provide far exceeds that of the previous programs – ploticus is particularly well suited to developing some of the ideas presented in Tufte’s books.
Map Generation Tools: this is perhaps the trickiest area in terms of both cost and complexity of tools. The best way to start is not with the more complex standalone GIS (Geographic Information Systems) software, but instead by exploring some of the online options. Two free sites stand out in this regard. The first, the David Rumsey Map Collection, has over 8,800 maps that can be explored online via its specialized GIS software. The richness of this resource cannot be overstated – the maps range from a 1657 map of Osaka, Japan to the 1970 USA National Atlas, and in many cases can be overlaid with contemporary geospatial data. The second, ESRI’s Geography Network, allows for the exploration of additional mapping and GIS concepts via the online ArcExplorer application, coupled to a broad range of free data. More adventurous readers may want to try their hand at using a full-blown GIS application, the free cross-platform TNTlite. TNTlite is a very powerful, but quite complex, piece of software; fortunately, the program is accompanied by a generous complement of tutorials that constitute one of the best introductions to GIS I have ever encountered. These tutorials can be digested in small bites, with good rewards at each stage – I would strongly recommend starting at the first one, and progressing through the set.
Thus ends this exploration of a basic toolkit for bringing cognitive art into digital storytelling – if it helps anyone find a new way to tell their stories, please let me know – I’d be delighted to hear about it.

Thinking About Cognitive Art

Much of the richness of digital storytelling is due to the use of a wide range of images as an integral component of the narrative. There is, however, a type of image essential to education that is underrepresented in many of the current digital storytelling projects. This class of images could be called “analytical images” – images that are structured in such a way as to enhance the systematic investigation of a subject. These include – but are not limited to – graphs, charts, diagrams, and maps, a group described by Philip Morrison as “cognitive art”. Unfortunately, these tools are used in much of education in an excessively compartmentalized and narrow fashion that negates their broader expressive potential. Thus, while graphs are used in math class, diagrams in biology class, and maps in geography class, very little is done in terms of teaching students how to conceptualize any of these tools as interrelated members of a wider set of tools for thinking. Some of the materials detailed below might help remedy this situation.
The best sources I have found for clear thinking about analytical images are offline. I would recommend starting with a trilogy of books by Edward R. Tufte: The Visual Display of Quantitative Information, Envisioning Information, and Visual Explanations. Rather than focusing on the technical details of a particular graphical tool for the presentation of information, Tufte develops a rigorous theory of communication via analytical imagery. The first volume in the trilogy is probably the most important – the basic components of Tufte’s theory are laid out here, from the identification of those elements that interfere with visual communication (e.g., the commonly encountered forms of visual clutter that he terms “chartjunk”), to those that promote it (for instance, ways of optimizing the data-ink ratio). The second volume extends Tufte’s thought from the realm of quantitative information into a broader sphere of concepts to be represented, including spatial, chronological and part-to-whole relations. Volume three in turn places these concepts in narrative and evidentiary contexts. It is important to keep in mind that Tufte’s theories can (and should) be thought of separately from the specific examples he proposes – in most instances the examples only represent one particular instantiation of some of his principles, and not a general set of graphical design dictates. In fact, translating Tufte’s thought from the printed page to the computer screen yields results that can look quite different from his examples.
Complementing Tufte’s approach are three books from a specific subset of the cognitive arts – the discipline of mapmaking. While it might seem to run counter to the spirit of this commentary to highlight mapmaking by itself, these books are rich with implications and ideas that stretch well beyond their disciplinary confines. Additionally, they also embody a definition of mapmaking practice that is far more expressive than the “turn left at the gas station, then go for another mile and a half” images that are commonly evoked in educational contexts. Two books by Mark Monmonier – How to Lie With Maps and Mapping It Out: Expository Cartography for the Humanities and Social Sciences – act as an outstanding introduction to the subject. The first, despite its ironic title, spends at least as much time exploring how to use maps to communicate as it does warning about their possible misuse. The second book is an efficient guide to cartographic techniques, accessible to even the least experienced mapmakers, and rich in examples of the use of maps to visualize data, make arguments, and tell stories. As an added bonus, both of these books are available in inexpensive paperback editions. A final volume by Alan M. MacEachren, How Maps Work: Representation, Visualization, and Design, parallels Tufte’s work in proposing a theory of maps that goes beyond a particular graphic practice, while developing a theoretical backdrop with applications to all uses of analytical imagery. The scope of MacEachren’s work is outstanding, incorporating topics ranging from cognitive psychology to the theory of signs; while it may take more than one reading to digest all the material presented here, the effort will be richly repaid with original and powerful conceptual tools.
Having a strong set of conceptual tools is good; being able to bring this set into active practice via technological tools is even better. More later on software that does just that…

Digital Storytelling and Education

Some time back I came across the suggestion (by Richard Feynman, I think?) that if a scientist could not explain what they did to a nonscientist in 15 minutes or less, they were a quack. This may be a little too harsh – in my experience, the difficulty many scientists have in communicating what they do has less to do with quackery and more to do with the fact that, unlike Feynman, they are poor or downright bad storytellers. Which brings me to the subject of today’s post: digital storytelling.
Digital storytelling can best be viewed as an expansion of traditional storytelling arts and techniques. My own introduction to the subject came some years back via Joe Lambert, Nina Mullen, and the late Dana Atchley. Their creation, the Center for Digital Storytelling, can be reached at I strongly recommend checking this site out – it contains excellent examples of the craft, as well as resources for those people interested in implementing digital storytelling programs. More recently, Scott Rosenberg has started a site called Storyvine (at with a good collection of links, materials, and news on digital storytelling.
I would argue that digital storytelling has an important role to play in education at all levels. For one thing, it provides students and teachers with a rich and interesting range of concepts and tools to express their ideas in ways they might not have thought possible before. For another, it has the potential to revive the interest of jaded students – perhaps worn out by one too many of those lethal “What I Did During My Summer Vacation” assignments – in telling stories, and telling them well.
Many people have fascinating stories to tell about their work that deserve a better audience, both within and without the bounds of their own disciplines – this is one way to teach them how to tell these stories.

An Introduction

Welcome to my weblog – allow me to introduce myself. My name is Ruben Puentedura, and I’m the Founder and President of Hippasus, the consulting company that hosts the weblog you are reading. After teaching for eighteen years – six as a teaching fellow at Harvard, and twelve as a faculty member at Bennington College – and after directing the New Media Center at Bennington College for nine years, I decided it was time to try something new. Hence – Hippasus – a consulting company designed to make the best use of the experience I garnered via teaching, administration, and research in the physical, biological, and social sciences, and to bring together some of the most interesting minds I have encountered in those years.
From here on, I will let Hippasus speak for itself. This weblog is designed to continue the research I have carried out over the years in the theories and practice of pedagogy, and to comment on the work done by others. I’ll try to keep the tone more conversational than professorial – I’ve always preferred discussions in small groups to master lectures anyway. At any rate, once again – welcome.