In the wake of my previous post about the recent OECD education report, several people have asked for further analysis of the OECD evaluation tools. More specifically, they have been interested in how these tools differ from the usual standardized tests (with their known pitfalls and blind spots), and how the results obtained should be interpreted. In order to discuss this, I have to preface the body of this post with a brief definition of two terms commonly used within the educational community: content and competency.
Broadly defined, content can be viewed as the "stuff" that makes up a subject area. For instance, part of the content involved in a basic health class might be the definition of viruses and bacteria, with their different characteristics and modes of reproduction. Competencies can then be viewed as what the student can do with this content. Thus, at a very basic level of competency, the student will be able to reproduce the definitions acquired. However, this level of competency is generally not the only thing that would be desired - one would hope that the student would be able to integrate this with other knowledge, and thereby be able to follow simple cause-and-effect procedures supplied to them in order to keep themselves healthy. Even better, one would hope that they would be able to create new knowledge for themselves, so that they could take an active role in their health decisions. Clearly, the evaluation tool to be used in determining these progressively higher levels of competency has to go well beyond "fill in the blank"-type tests.
Take, for instance, the content area of basic algebra. An evaluation tool that measures whether the student can carry out the basic operations involved in a standard "solve for x"-type problem is indeed providing some information about how the student performs in this content area - but at a very low level of competency, namely that of mimicry of a standard algorithmic procedure. A tool that evaluates how the student performs at applying these basic operations to a standard word problem where the necessary information is already provided to the student corresponds to the measurement of a somewhat higher level of competency. Replacing the standard word problem with one using nonstandard phrasing provides an evaluative tool for a substantially higher level of competency. At even higher levels of competency we would find tools that evaluate how the student performs when presented with a problem solvable using the algebraic procedures known to them, but where the data needed to solve the problem is not provided to the student a priori, and must be requested by them. Finally, at the highest level of competency, it becomes of interest to evaluate how the student performs when applying the tools of basic algebra to a real-world problem defined autonomously by the student.
Most standardized tests operate on only the first two levels of our algebra example, and hence miss a large - perhaps the most important - part of the competency picture. The OECD evaluation tools are designed to provide a broad picture of the competency spectrum within the context of an equally broad content area. Furthermore, the "top level" of the OECD competencies corresponds to a "competency floor" that the majority of a country's population can reasonably be expected to achieve. In other words, high scores on the OECD tests should be attainable by most people, and not just a privileged elite. The fact that no country came close to this result indicates the distance yet to be covered in terms of educational quality and equity throughout the world.
To better understand the conclusions that can be derived from the OECD report, we need to take a look at how scores relate to competency levels. A difference of a few points in the OECD results between two students means absolutely nothing in terms of the relative competencies achieved; instead, the OECD provides rough categorical scales that correspond to broad swaths of competencies. For the three areas studied, these categories, listed here with a few sample representative tasks, are:
Categories and Representative Tasks

Reading Literacy
- Level 1 (335 to 407 points): locating a single piece of information; identifying the main theme of a text
- Level 2 (408 to 480 points): locating straightforward information; deciding what a well-defined part of the text means
- Level 3 (481 to 552 points): locating multiple pieces of information; drawing links between different parts of the text
- Level 4 (553 to 625 points): locating embedded information; critically evaluating a text
- Level 5 (over 625 points): locating difficult-to-find information; building new hypotheses based upon texts

Mathematical Literacy
- Level 1 (around 380 points): carrying out single-step mathematical processes
- Level 2 (around 570 points): carrying out multiple-step mathematical processes for predefined problems
- Level 3 (around 750 points): creating new mathematical processes as required by problems

Scientific Literacy
- Level 1 (around 400 points): recalling simple scientific facts
- Level 2 (around 550 points): using scientific concepts to make predictions or provide explanations
- Level 3 (around 690 points): creating new conceptual models to make predictions or provide explanations
The source for this table is the OECD Education at a Glance 2003 report, from which most of the language describing representative tasks is drawn. More detailed information can be found on the OECD Programme for International Student Assessment (PISA) website - the executive summary of the Knowledge and Skills for Life 2001 report is particularly useful in this regard.
In my previous post, I had identified a group of seven countries that could reasonably be said to perform "better than average" in the context of these literacies. Looking at New Zealand as a representative country in this group, we find that its averages are 529, 537, and 528 for reading, mathematical, and scientific literacy respectively - still a ways from providing the majority of its population with the desired "competency floor".
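Since the reading literacy scale above is the only one given as explicit point bands, the mapping from a country average to a categorical level can be sketched as a simple lookup. This is an illustration of how to read the table, not an official OECD procedure; the band boundaries are taken directly from the table, and the classification of scores below 335 as "Below Level 1" is my own labeling.

```python
# Reading literacy bands from the table above: (lower bound, upper bound, label).
READING_BANDS = [
    (335, 407, "Level 1"),
    (408, 480, "Level 2"),
    (481, 552, "Level 3"),
    (553, 625, "Level 4"),
]

def reading_level(score):
    """Map a PISA reading score to its categorical level."""
    if score < 335:
        return "Below Level 1"  # below the lowest defined band
    for low, high, label in READING_BANDS:
        if low <= score <= high:
            return label
    return "Level 5"  # over 625 points

print(reading_level(529))  # New Zealand's reading average falls in Level 3
```

Applied to the New Zealand reading average of 529, this lookup lands in Level 3 (481 to 552 points), which makes concrete the point above: even a "better than average" country sits well short of Level 5 as a population-wide floor.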
As I mentioned at the end of my last post, remedying the deficiencies revealed by the OECD report will take more than minor changes in expenditures or classroom organization. Instead, it will take educational redesign that directly addresses the idea that higher-level competencies are what is desirable and achievable in all content areas. Interestingly, I believe that this redesign can be accomplished within the context of existing educational structures - I have yet to see any data that indicates, for instance, that dramatically changing the mix of public to private educational institutions in any given country would fundamentally transform the results measured by the OECD.
What can play a crucial role is the set of technological transformations that can be brought about by the availability of networked computers to all participants in the educational process. A serious discussion of why this is the case will have to wait until a later post. However, let me provide the following example as a tidbit in the meantime. Returning to our earlier algebra example, researchers working from a constructivist learning perspective have found that in mathematics - and in fact, in most subject areas - project-oriented learning has the potential to work wonderfully as a way of addressing both the teaching and evaluation of the highest competency level described above. However, this potential can be effectively sabotaged in situations where a scarcity of information resources works against the student's capacity to research, define, and present a project. Additionally, at the evaluation stage, it is very important that teachers have access to a broad range of student projects, along with serious collegial discussion of them across a broad range of institutions - it is too easy otherwise for evaluative inbreeding to take place. Intelligent use of networked computers can address both these issues efficiently in ways no other resource can, both by providing low-cost access to resources and by allowing students and teachers alike to share and discuss projects.

Of course, just throwing technology at the schools will not accomplish this result - both new research and new solutions derived from existing research will be necessary to change the landscape described in the OECD report. However, the early results of experiments such as the Maine Learning Technology Initiative make me hopeful that this is indeed the most promising direction for change.
Posted by Ruben at October 15, 2003 8:02 AM