Not Wholly Bad (or Good)

We are not wholly bad or good,
Who live our lives under Milk Wood
– Dylan Thomas, Under Milk Wood

I frequently get asked these days: what’s next for SAMR? Are all its details set in final form? After all, the model is about twenty years old now, and its complementary partner, the EdTech Quintet, has been around for somewhat under a decade.

When I first introduced the SAMR model, many applications of IT in education were in comparatively early stages. Fast forward to today, and that toolset has matured considerably, although its components have not changed significantly. However, one area has emerged in the last few years in ways that were not visible at the birth of SAMR: AI and its applications, not just in education, but in the world as a whole.

The changes introduced by AI should not be underestimated: the more robust estimates of its impact upon the workforce, for instance, point to a majority of all jobs undergoing significant task changes, replacements, and redesign as a result. Adoption has been rapid, not just in fields like medicine and law enforcement, but also in education. As one example, AI-driven advising and tutoring systems have become commonplace in higher education.

All of which could be viewed as a net positive, were it not for one small detail: the workings of the new AI systems tend to be opaque to the individuals charged with deciding whether to implement them, and even more so to those whose careers and lives will be directly affected by them. The results are not pretty: in one recent study, for instance, an AI tool widely used to help manage healthcare decisions exhibited significant racial bias, even though explicitly prejudicial decisions were never a component of its design.
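How can a system discriminate when no one designed it to? One well-documented mechanism is the proxy variable. The sketch below is a minimal synthetic illustration in Python, not a reconstruction of the actual study; every group label, number, and variable name is invented for the purpose. It shows how ranking patients by observed spending, used as a stand-in for healthcare need, can systematically disadvantage a group that faces barriers to care, even though group membership is never an input to the model:

```python
import random

random.seed(0)

def make_patient(group):
    # True healthcare need is identically distributed in both groups.
    need = random.uniform(0, 10)
    # Group "B" faces barriers to care, so the same need produces
    # lower observed spending. (Purely illustrative numbers.)
    access = 1.0 if group == "A" else 0.6
    cost = need * access  # observed spending: the proxy label
    return {"group": group, "need": need, "cost": cost}

patients = [make_patient(g) for g in ("A", "B") for _ in range(5000)]

# The "model": flag the top 20% of patients by spending for extra care,
# exactly as a risk score trained on cost would. Group is never used.
threshold = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]

# Audit: among patients with the same (high) true need,
# how often does each group get flagged?
high_need = [p for p in patients if p["need"] > 8]
for g in ("A", "B"):
    grp = [p for p in high_need if p["group"] == g]
    rate = sum(p["cost"] >= threshold for p in grp) / len(grp)
    print(f"group {g}: share of high-need patients flagged = {rate:.2f}")
```

Run as written, the sketch flags essentially all high-need patients in group A and almost none in group B, despite both groups having identical distributions of true need. The bias lives in the choice of proxy, not in any explicit rule, which is precisely why it stays invisible to anyone who cannot inspect the system.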

Much of the reaction in the popular press has tended to cast AI as a fundamentally inscrutable savior or demon, which is obscurantist nonsense. As serious researchers know, nothing makes AI essentially opaque: complex or challenging to analyze, yes, but not inscrutably mysterious.

Which is where SAMR reenters the picture. As educators and learners who have used the model know, when tasks shift from S (Substitution) to R (Redefinition), an interesting process takes place: the impact of technology use upon learning outcomes is enhanced, and learners generally gain agency as a result of the shift. And that is exactly what is called for in the context of AI: processes that increase agency relative to AI for those most likely to be affected by it.

The good news is that the core structure of SAMR works well in this new context, but it needs to be supplemented by some new tools to fulfill its role. One key component of this toolset is the introduction of aspects of AI into learning experiences in such a way that learners gain true creative skills and understanding relative to AI, and not just a superficial cocktail-party familiarity with some of its features. At the recent AMEE conference in Vienna, I highlighted aspects of this approach in the education of future physicians.

A second important component of the toolset is the introduction of thinking tools to deal with the rapid and unforeseen changes that are likely to result from the ways AI is being introduced: what are known (in one incarnation) as black swans. These thinking tools have value beyond AI, of course; other components of today’s world, such as climate change and social media interactions, likewise call upon this toolset for effective understanding and policy definition. Supported by the ShapingEDU team at ASU, I will be presenting a series of sessions on this topic, both in the context of SAMR and independent of it; the first session is scheduled to take place on November 20.

As in the quote from Under Milk Wood that opened this blog post, AI, and what we do with it, is neither wholly bad nor good. But realizing its better side will require a wealth of approaches that might dwarf even the richly diverse cast of characters inhabiting Dylan Thomas’ mythical Welsh town.