A Gift From a Master

Every now and then, a book comes out that is so good that I can only view it as a gift from its author. Two books of this caliber have appeared recently: Edward Tufte’s Beautiful Evidence and Scott McCloud’s Making Comics. I’ll save the Tufte for another day, and talk about the McCloud for now.

I’ve been working on projects using Digital Storytelling techniques since the nineties, and McCloud’s books have been a key tool in that work. Why? Well, first of all, digital comics are one of the many possible manifestations of digital storytelling. Setting that aside for a moment, though, there’s a deeper reason that McCloud’s work is important: his analysis of the craft is so rich and deep that it provides both a guide to broader topics in communication and the visual arts, and an exemplar for how to communicate about the workings of these fields.

All of McCloud’s books analyzing comics have been written as comics. His first book, Understanding Comics, dealt with the core syntactic and semantic elements that make comics work; the followup, Reinventing Comics, covered the potential for change and new directions in comics, including their transformation as they entered the digital sphere. Sounds pretty thorough — so why is this third book needed?

Making Comics fills in the gap between the general theory covered in Understanding Comics and the translation of that theory into actual comics-making practice. In other words, what is covered here is how the elements of comics are harnessed in the process of actually making them. This does not refer to the “here’s how artist X draws character Y” approach taken by a million dreary “You can draw comics too!” tutorials, but rather refers to how symbolic elements and aspects of person and place are chosen and translated into an actual rendering for the purpose of telling a story.

The audience for this book is most emphatically not just budding comics artists and comics enthusiasts — McCloud’s analysis of process in comics creation sheds light on a broad range of topics in the study of media and communication. In particular, any educators who are serious about these issues in the context of their own practice should definitely consider picking up this book — and its two predecessors. Me, I think I’ll try to make it to one of McCloud’s talks to thank him in person for his wonderful gift…

Transformation, Technology, and Education in the State of Maine

For the past few years, I have had the good fortune to work closely on a number of projects with the Maine Learning Technology Initiative. The MLTI has provided all middle school students in the state of Maine with one-to-one access to laptops and software. The software bundle is rather interesting, since it encompasses far more than the traditional office suite: it includes software for music composition, systems modeling, digital storytelling, lab data acquisition, and structured information processing and sharing.

In recent weeks, I have had several people ask me about the current status of the project and its future directions. In particular, there has been considerable interest in how schools involved in the project plan to keep “pushing forward” to significantly enhance the quality of education that children receive. One part of the answer to this question is described in my slides and audio from a series of workshops conducted with Maine superintendents, which outline a model currently being used for this purpose. This same model has also been used in sessions with school principals throughout the state — the goal is to make sure that all schools use the laptops as an engine for educational transformation, rather than just a fancy textbook or typewriter.

As always, I welcome all questions and feedback.

Images of a Parade

It’s the day after Thanksgiving here in the US — usually called “Black Friday”, but which I’ve heard better described as “Sleep Off The Turkey Day” — and it seems as good a day as any other to get back in the blogging saddle.

I was looking over the New York Times’ coverage of the Thanksgiving Day Macy’s parade, and noticed that their slide show, while competently shot, was, well, somewhat lacking in the narration and emotion departments. I decided to try an experiment: what would happen if I ran a Flickr search for photos of the parade?

The results were astounding — not only were many photos far more interesting and compelling than the Times’ slide show, but many were better composed and executed in formal terms as well. Compare this photo to the Times’ photo of Garfield — which do you think does a better job of telling a story?

What’s more, the Flickr search will only get better as time goes by — more people will post their photos of the parade, and more people will comment on them, pushing the interesting/unusual/powerful ones to the top of the stack.

Now, I am not suggesting that the Times should get rid of their photographers, nor that the quality of their work is subpar — but I am suggesting that something very interesting happens when a community (and Flickr is most definitely a community) shares its creative work in an open social space. And since this blog focuses on education, I would like to gently urge educators to overcome some long-held prejudices about work that takes place in informal spaces, and think about how these mechanisms can be harnessed for learning.

Avoiding Self-Delusion: New Technologies and Expectation Effects

I have posted the slides and audio for my Horizon Project VCOP talk on the subject of expectation effects. If you are interested in finding out how to separate a new technology’s true pedagogical merits from other factors that might influence its reception in the classroom, this talk may be of use to you.

As always, I’ll be happy to hear any comments or feedback people might have.

Mapping del.icio.us with Anthracite and OmniGraffle

Several people have asked me how I constructed the visualizations that I used in my talk on del.icio.us. My own approach involved a fair amount of hand-rolled custom code – not fun for people unaccustomed to writing their own software and working from the command line. To give researchers who are not programming-savvy a chance to explore del.icio.us – or other online systems with implicit network structures – for themselves, I have put together a short guide on using two Mac OS X applications, Anthracite and OmniGraffle, for this purpose.
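
For readers who are comfortable with a little code, here is a minimal sketch of one way to approach the problem programmatically: build a tag co-occurrence network from a set of bookmarks, and write it out in Graphviz DOT format (which, as I recall, OmniGraffle Pro can import as well). To be clear, this is not the code I used for the talk, and the bookmark data below is made up for illustration – real data would be exported or scraped from del.icio.us:

# Build a tag co-occurrence network from bookmark data and emit it in
# Graphviz DOT format. The bookmarks below are illustrative placeholders.
from collections import Counter
from itertools import combinations

bookmarks = [
    {"url": "http://example.com/a", "tags": ["education", "blogs", "socialsoftware"]},
    {"url": "http://example.com/b", "tags": ["education", "visualization"]},
    {"url": "http://example.com/c", "tags": ["blogs", "socialsoftware", "visualization"]},
]

# Two tags are linked whenever they appear on the same bookmark; the
# link weight counts how often that happens across all bookmarks.
edge_weights = Counter()
for bm in bookmarks:
    for pair in combinations(sorted(set(bm["tags"])), 2):
        edge_weights[pair] += 1

# Write an undirected graph; heavier co-occurrence draws a thicker edge.
with open("delicious_tags.dot", "w") as f:
    f.write("graph tags {\n")
    for (a, b), weight in edge_weights.items():
        f.write(f'  "{a}" -- "{b}" [penwidth={weight}];\n')
    f.write("}\n")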

Comments and suggestions are welcome.

A Moveable Feast: the del.icio.us web

I have added to the resources page a talk on del.icio.us, presented within the context of the Horizon Project VCOP. If you’ve heard about del.icio.us, but aren’t quite sure what the fuss is all about, or if you’ve already tried it out, but would like to know how to get the most out of it, this talk is for you.

As always, I’m interested in any comments or feedback people might have.

The OECD Education Report, Redux

The results from the OECD PISA 2003 study of learning skills among 15-year-olds are now out. As could be expected from last year’s coverage of the PISA 2000 results, news reports have tended to misrepresent the information contained in the new report. I have already covered these misrepresentations in two previous posts, so I will not rehash that material here; however, I will spend some time looking at the new data conveyed by the report.

PISA 2003 incorporates a new (very welcome) category for student evaluation in the form of a set of questions covering Problem Solving. An analysis of this category would be worth a separate post unto itself; since my main goal here is to update the results I had obtained for PISA 2000, I will omit it from consideration in the discussion that follows.

A statistical analysis similar to that from my previous post yields, as before, four main groups with the labels shown in the table below. A new classification resulting from the PISA 2003 analysis is the subdivision of the “Substantially Below Average” group into a better-performing “High Group” and a lower-performing “Low Group”. Countries that did not participate in PISA 2000 are marked with a dagger (†); countries that improved their results sufficiently to be promoted from one group to the next higher group are marked with an asterisk (*). The United Kingdom, which participated in PISA 2000, was excluded from PISA 2003 due to noncompliance with OECD response rate standards. The following table, with countries arranged in alphabetical order within groups, summarizes these results (for readers who would like to experiment with groupings of this kind, a short code sketch follows the table):

Performance of 15-Year-Old Students in Reading, Mathematics, and Science

Better than Average: Australia, Canada, Finland, Hong Kong – China, Japan, Korea, Liechtenstein*, Netherlands, New Zealand

Average: Austria, Belgium, Czech Republic, Denmark, France, Germany, Hungary, Iceland, Ireland, Latvia*, Luxembourg*, Macao – China†, Norway, Poland*, Slovak Republic†, Spain, Sweden, Switzerland, United States

Below Average: Greece, Italy, Portugal, Russian Federation

Substantially Below Average (High Group): Serbia†, Thailand, Turkey†, Uruguay†

Substantially Below Average (Low Group): Brazil, Indonesia, Mexico, Tunisia†
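
As promised, here is a minimal sketch of one way to produce groupings of this kind yourself, in Python with SciPy. To be clear about my assumptions: this is not the exact procedure behind the table above, just one reasonable approach (hierarchical clustering on the per-country mean scores), and the file name and format are hypothetical:

# Group countries by hierarchical clustering on their mean PISA scores.
# Assumes a hypothetical CSV file with columns: country,reading,math,science
import csv
from scipy.cluster.hierarchy import linkage, fcluster

countries, scores = [], []
with open("pisa2003.csv") as f:
    for row in csv.DictReader(f):
        countries.append(row["country"])
        scores.append([float(row["reading"]), float(row["math"]), float(row["science"])])

# Ward linkage favors compact clusters; cutting the tree into four
# clusters mirrors the four main groups discussed above.
labels = fcluster(linkage(scores, method="ward"), t=4, criterion="maxclust")

for group in sorted(set(labels)):
    print(group, ", ".join(sorted(c for c, g in zip(countries, labels) if g == group)))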

As we can see, only four countries (Liechtenstein, Latvia, Luxembourg, and Poland) improved their results substantially from 2000. Of these, the result for Luxembourg has to be discarded from consideration, since (as noted on page 30 of the OECD report) assessment conditions in this country were changed significantly between 2000 and 2003. In the case of Liechtenstein, only 332 students were assessed, due to the small size of the country. Because of this small sample size, changes at the individual school level are just as likely to affect the final results as national policy decisions. Hence, it is difficult to ascertain the cause of the observed improvement. Finally, in the cases of Latvia and Poland, it is tempting to attribute the improvement to their respective large-scale educational reforms, which started in 1998. However, data that would allow for the determination of cause-and-effect relationships in these two cases is currently lacking.
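
A quick back-of-the-envelope computation shows why the Liechtenstein sample is so problematic. PISA scales are constructed with a standard deviation of roughly 100 points; even under the optimistic simplifying assumption of simple random sampling (students actually cluster within schools, which inflates the error further), the uncertainty in a mean based on 332 students is considerable:

# Rough sampling error of a mean score for n = 332 students, assuming
# simple random sampling and the PISA scale standard deviation of ~100.
from math import sqrt

n, sd = 332, 100.0
se = sd / sqrt(n)
print(f"standard error ~ {se:.1f} points")            # ~5.5 points
print(f"95% margin of error ~ +/- {1.96 * se:.1f}")   # ~10.8 points

A swing of ten points or so is comparable to the score changes that news reports routinely treat as meaningful.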

It is unsurprising that little has changed between PISA 2000 and PISA 2003 – after all, only three years elapsed between the two studies. However, news reports – and, I fear, some public officials – have made much of minor increases or decreases in scores that are not statistically significant. My conclusion from my previous post still stands: no country can be said to have provided a solid educational floor in these categories for all of its citizens. Getting to the point where this educational floor can be guaranteed will require more than slight changes to expenditures, school year duration, or class sizes – it will require a significant rethinking of how the educational process occurs at all levels.

Achieving Fairness and Excellence in Social Software

I have added to the resources page a talk on social software, presented at the NMC Online Conference on Social Computing. If you’re interested in using tools such as wikis, forums, and blogs in education, but want to avoid some of the potential pitfalls, you might find this talk – and the free software that accompanies it – rather useful.

Needless to say, comments and suggestions are always welcome.

New Resources for Digital Storytelling and Learning Object Development

Hippasus is growing nicely. While this is wonderful, it has prevented me from making regular posts to this weblog. Having just completed two major projects, I now have the time to post more frequently. To celebrate this, I would like to highlight two new sets of resources on the Hippasus website, covering digital storytelling and learning object development.

As always, I welcome all comments people might have on these resources.

The Meaning and Implications of the OECD Literacy Measurements

In the wake of my previous post about the recent OECD education report, several people have asked for further analysis of the OECD evaluation tools. More specifically, they have been interested in how these tools differ from the usual standardized tests (with their known pitfalls and blind spots), and how the results obtained should be interpreted. In order to discuss this, I have to preface the body of this post with a brief definition of two terms commonly used within the educational community: content and competency.

Broadly defined, content can be viewed as the “stuff” that makes up a subject area. For instance, part of the content involved in a basic health class might be the definition of viruses and bacteria, with their different characteristics and modes of reproduction. Competencies can then be viewed as “what can the student do” with this content. Thus, at a very basic level of competency, the student will be able to reproduce the definitions acquired. However, this level of competency is generally not the only thing that would be desired – one would hope that the student would be able to integrate this with other knowledge, and thereby be able to follow simple cause-and-effect procedures supplied to them in order to keep themselves healthy. Even better, one would hope that they would be able to create new knowledge for themselves, so that they could take an active role in their health decisions. Clearly, the evaluation tool to be used in determining these progressively higher levels of competency has to go well beyond “fill in the blank”-type tests.

Take, for instance, the content area of basic algebra. An evaluation tool that measures whether the student can carry out the basic operations involved in a standard “solve for x”-type problem is indeed providing some information about how the student performs in this content area – but at a very low level of competency, namely that of mimicry of a standard algorithmic procedure. A tool that evaluates how the student performs at applying these basic operations to a standard word problem, where the necessary information is already provided to the student, corresponds to the measurement of a somewhat higher level of competency. Replacing the standard word problem with one using nonstandard phrasing provides an evaluative tool for a substantially higher level of competency. At even higher levels of competency we would find tools that evaluate how the student performs when presented with a problem solvable using the algebraic procedures known to them, but where the data needed to solve the problem is not provided to the student a priori, and must be requested by them. Finally, at the highest level of competency, it becomes of interest to evaluate how the student performs when applying the tools of basic algebra to a real-world problem defined autonomously by the student.

Most standardized tests operate on only the first two levels of our algebra example, and hence miss a large – perhaps the most important – part of the competency picture. The OECD evaluation tools are designed to provide a broad picture of the competency spectrum within the context of an equally broad content area. Furthermore, the “top level” of the OECD competencies corresponds to a “competency floor” that the majority of a country’s population can reasonably be expected to achieve. In other words, high scores on the OECD tests should be attainable by most people, and not just a privileged elite. The fact that no country came close to this result indicates the distance yet to be covered in terms of educational quality and equity throughout the world.

To better understand the conclusions that can be derived from the OECD report, we need to take a look at how scores relate to competency levels. A difference of a few points in the OECD results between two students means absolutely nothing in terms of the relative competencies achieved; instead, the OECD provides rough categorical scales that correspond to broad swaths of competencies. For the three areas studied, these categories, listed here with a few sample representative tasks, are:

Categories and Representative Tasks

Reading Literacy
Level 1 (335 to 407 points): locating a single piece of information; identifying the main theme of a text
Level 2 (408 to 480 points): locating straightforward information; deciding what a well-defined part of the text means
Level 3 (481 to 552 points): locating multiple pieces of information; drawing links between different parts of the text
Level 4 (553 to 625 points): locating embedded information; critically evaluating a text
Level 5 (over 625 points): locating difficult-to-find information; building new hypotheses based upon texts

Mathematical Literacy
Level 1 (around 380 points): carrying out single-step mathematical processes
Level 2 (around 570 points): carrying out multiple-step mathematical processes for predefined problems
Level 3 (around 750 points): creating new mathematical processes as required by problems

Scientific Literacy
Level 1 (around 400 points): recalling simple scientific facts
Level 2 (around 550 points): using scientific concepts to make predictions or provide explanations
Level 3 (around 690 points): creating new conceptual models to make predictions or provide explanations

The source for this table is the OECD Education at a Glance 2003 report, from which most of the language describing representative tasks is drawn. More detailed information can be found on the OECD Programme for International Student Assessment (PISA) website – the executive summary of the Knowledge and Skills for Life 2001 report is particularly useful in this regard.
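
To make the reading literacy cut points concrete, here is a trivial Python lookup that maps a score to its level, using the boundaries from the table above (scores below 335 fall below Level 1, which the OECD reports separately):

# Map a PISA reading literacy score to its level, using the cut points
# from the table above; 0 denotes "below Level 1".
def reading_level(score: float) -> int:
    if score > 625:
        return 5
    if score >= 553:
        return 4
    if score >= 481:
        return 3
    if score >= 408:
        return 2
    if score >= 335:
        return 1
    return 0

print(reading_level(529))  # prints 3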

In my previous post, I had identified a group of seven countries that could reasonably be said to perform “better than average” in the context of these literacies. Looking at New Zealand as a representative country in this group, we find that its averages are 529, 537, and 528 for reading, mathematical, and scientific literacy respectively – still a ways from providing the majority of its population with the desired “competency floor”.

As I mentioned at the end of my last post, remedying the deficiencies revealed by the OECD report will take more than minor changes in expenditures or classroom organization. Instead, it will take educational redesign that directly addresses the idea that higher-level competencies are what is desirable and achievable in all content areas. Interestingly, I believe that this redesign can be accomplished within the context of existing educational structures – I have yet to see any data that indicates, for instance, that dramatically changing the mix of public to private educational institutions in any given country would fundamentally transform the results measured by the OECD.

What can play a crucial role is the set of technological transformations that can be brought about by the availability of networked computers to all participants in the educational process. A serious discussion of why this is the case will have to wait until a later post; however, let me provide the following example as a tidbit in the meantime. Returning to our earlier algebra example, researchers working from a constructivist learning perspective have found that in mathematics – and in fact, in most subject areas – project-oriented learning has the potential to work wonderfully as a way of addressing both the teaching and evaluation of the highest competency level described above. However, this potential can be effectively sabotaged in situations where a scarcity of information resources works against the student’s capacity to research, define, and present a project. Additionally, at the evaluation stage, it is very important that teachers have access to a broad range of student projects, and to serious collegial discussion about them across a broad range of institutions – it is too easy otherwise for evaluative inbreeding to take place. Intelligent use of networked computers can address both these issues efficiently in ways no other resource can, both by providing low-cost access to resources and by allowing students and teachers alike to share and discuss projects. Of course, just throwing technology at the schools will not accomplish this result – both new research and new solutions derived from existing research will be necessary to change the landscape described in the OECD report. However, the early results of experiments such as the Maine Learning Technology Initiative make me hopeful that this is indeed the most promising direction for change.