ChatGPT and Systemic Change Resistance in Education

ChatGPT is not the first digital-age disruption to challenge our systems of industrial education. I can identify at least three systemic shocks that have occurred since the proliferation of the internet in the 1990s: Web 2.0, remote teaching, and now AI. These were not assaults on learning; they were assaults on the systems of education. Learning has been under assault for much longer than that.

Fifty years ago, Ivan Illich recognized the gulf between learning and systems of education when he wrote:

The pupil is thereby “schooled” to confuse teaching with learning, grade advancement with education, a diploma with competence, and fluency with the ability to say something new. His imagination is “schooled” to accept service in place of value. (Ivan Illich, Deschooling Society, 1970)

Illich understood that the purposes of educational systems were diverging from the practice of learning even then. Those systems of education have persisted and solidified since he wrote Deschooling Society. Since then, the gulf between substance and performance has grown.

Web 2.0 posed a challenge to the systems of education that had emerged as we automated learning and made grades the core of the system. Web 2.0 technologies made it easy for anyone to contribute to the conversation on the internet.

With Web 2.0 tools, communities could form around just about anything, including gaming the systems of education. If these systems had been focused on the goal of learning, their members would have perceived Web 2.0 as an opportunity to grow communities, not a threat.

Unsurprisingly, educational institutions focused on protecting the systems of education, not the goal of learning. That purpose, as Illich observed, had long been relegated to secondary status. In Thinking in Systems, Donella Meadows refers to this as “seeking the wrong goal”:

System behavior is particularly sensitive to the goals of feedback loops. If the goals – the indicators of satisfaction of the rules – are defined inaccurately or incompletely, the system may obediently work to produce a result that is not really intended or wanted.

[The Way Out is to] Specify indicators and goals that reflect the real welfare of the system. Be especially careful not to confuse effort with result or you will end up with a system that is producing effort, not result. (p. 140)

The system reacted to Web 2.0 by implementing technologies such as anti-plagiarism and proctoring software to “protect the integrity of grades.” There was little movement in the paradigmatic logic at the higher levels of the system. The system did not explore the “way out.”

The alternative approach would have been to create communities of practice using these new tools. While communities of practice would not have eliminated the threat of cheating, they would have helped move the focus toward learning, not gaming the system. The very Web 2.0 technology that made the cheating possible could have been turned into a facilitator of learning.

This was not the path taken. Only a few institutions considered the paradigmatic shifts necessary to create true communities of practice using the new technology.

Another shock was the sudden need for remote teaching during the pandemic. Most institutions failed to use the maturation of video conferencing software mated with Web 2.0 platforms to explore what these new modes of interaction could do to augment practice, both during and after the pandemic.

Instead, we’ve seen a rush back to “normality” as pandemic restrictions have eased. During the pandemic, we saw the effects of building walls and hunkering down on both learning outcomes and the overall quality of the learning experience in the absence of physical classrooms. We are still seeing the aftereffects of our collective choices in the face of this crisis in terms of diminished enrollment, particularly in on-campus environments.

Remote teaching was a different kind of shock than Web 2.0 (or AI). It demanded a lot of improvisation as the crisis hit. Some very interesting approaches emerged and were tested under difficult circumstances.

Some of these innovations have persisted and carry within them the seeds of further growth. In many institutions, online-on-a-schedule and other kinds of blended learning experiences that don’t threaten the core logic of the system persist.

AI is the latest chapter in this story. The AI “Crisis” is more like Web 2.0 in its evolutionary nature than remote teaching, but slow fuses often lead to bigger explosions.

The fuse that was lit by Web 2.0 didn’t explode until confronted with the requirements of remote teaching. Even then, the focus was more on damage control than evolving systems capable of withstanding future explosions.

Educational systems are already beginning to hunker down in the face of this challenge. However, this strategy is showing signs of decay. Students increasingly see through the fiction of learning and are seeking alternatives to traditional instruction.

It is no surprise that educational systems are inflexible. Any practice based on perceived legitimacy is going to be resistant to change, because change calls that past legitimacy into question. Nassim Nicholas Taleb points this out in Antifragile:

Education, in the sense of the formation of character, personality, and acquisition of true knowledge, likes disorder; label-driven education and educators abhor disorder. Some things break because of error, others don’t. Some theories fall apart, not others. Innovation is precisely something that gains from uncertainty: and some people sit around waiting for uncertainty and using it as raw material, just like our ancestral hunters. – Taleb, Nassim Nicholas. Antifragile: Things That Gain from Disorder (Kindle Edition), p. 550.

Education’s reliance on past legitimacy for much of its value generates its own unique contribution to Clayton Christensen’s innovator’s dilemma, which argues that you have to be willing to threaten your existing product every few years in the service of innovation.

Few companies are capable of this. Even fewer educational institutions are desperate enough to engage in it. Legislative or accreditation restrictions may also constrain their ability to pivot.

Teachers are at the thin edge of the wedge here. They are being asked to defend practices that are no longer viable. It is also profoundly human of them to resist change. It is easier to retreat to the methods used to teach you than it is to strike out onto unfamiliar ground.

It’s scary to reinvent yourself under the best of circumstances. That reinvention becomes almost impossible in the face of institutional and structural resistance. Couple that with a systemic crisis and it’s no wonder so many institutions are diving for their bunkers in the face of AI.

And so, we find ourselves in the third shock. We have institutions that are rigid, working on borrowed time, and are not very antifragile. AI presents us with a slow-boiling crisis. Its eventual impact remains difficult to predict.

Educational systems should not look to students to drive change. We have so perverted their preferences in deference to the old system that most of them have almost no understanding of how it shapes what they want. Their only choice is to opt in or opt out of the game. More and more are opting out.

Healthy systems, per Donella Meadows, “aim to enhance total systems properties, such as creativity, stability, diversity, resilience, and sustainability — whether they are easily measured or not.” (Meadows, Dancing with Systems). Does this describe the current state of education?

Based on the education system’s reactions to Web 2.0 and remote teaching, its reaction to AI is likely to resemble the one mounted against Web 2.0. We are already seeing “AI detection” software, including one from OpenAI itself. Building walls is not a good solution to any challenge, especially one where the residents (students) can simply choose never to enter the walled garden.

ChatGPT Exposes the False Economics of Learning Systems

ChatGPT challenges the systems of industrial education by undermining the accepted economics of learning. I’m not talking about whether college is worth it, but about how we reward value for effort at all levels of our educational systems. In Learn at Your Own Risk, I describe this as “transactional teaching,” but its impact goes far beyond any specific interactions between student and teacher.

In brief, transactional teaching is the idea that students exchange work for a grade. Grades lead to degrees and certifications, but none of this shows the true value (or lack thereof) of what the student takes home.

Transactional teaching cheapens education. It exchanges valueless currency for meaningless experiences. ChatGPT exposes this reality because it threatens to provide students with a means of exchange potentially as valueless as the grades they receive in return.

Transactional education is susceptible to the same kinds of theft and fraud that occur in any economic system. Transparency of transactions is the only remedy to illicit activity. Most educational transactions, as well as their underlying logic, are far from transparent.

Defense is no answer here either. Efforts to crack down on and centralize an economic system produce the same outcome they always do: a black market.

ChatGPT is not the first fake ID to emerge in the educational landscape. It is merely the most elaborate of them. As I pointed out in a recent blog, the internet threatens the logic of a transactional educational system. It significantly expands the resources of students as they navigate the game that is set up for them.

Up to now, students have been wealthy in information but poor in the application of that information. ChatGPT reduces that poverty of application to the point where, using traditional assessment methods, it is much harder to perceive.

Like healthcare, the economics of education have never made sense because we do such a poor job valuing the intangibles of what it means to get a college education. Completion is an easy metric and grades are the building blocks of completion in the current system.

We have much better technology than these crude metrics to communicate achievement these days. These tools make possible new ways of communicating achievement that are far richer, and harder to falsify, than grades or other unidimensional metrics can provide.

However, we can’t simply ignore the extensive systems and cultural practices we have built around anachronistic assessment methods. Most faculty are not well trained in anything beyond summative assessment based on tests and essays. That alone is a huge barrier to a quick pivot to richer assessment methods. On top of that, a vast credentialing network depends on grade-based course outcomes.

Academic freedom has turned most classes into what are essentially black boxes that spit out a grade at the end of the process. There are exceptions, but most classes work this way, mine included.

I have used this freedom in my class to upend notions of grading. I am not naïve about how well this works. Swimming against the cultural systems of grading and “achievement” makes it hard for students to wrap their heads around different approaches to assessment.

I have considered carefully how ChatGPT might enter the workflow of my class. I am less interested in how well my students write than in how writing disciplines their minds to break down and analyze problems. It’s helpful to have a “student” who is less good at this process than they are. ChatGPT provides an infinite variety of poor students for my live students.

My approach to teaching is unusual among my colleagues. Those who engage in transactional teaching often build walls around eroding kingdoms of practice. I still see courses in our faculty development portal on Respondus LockDown Browser and other “defensive” tactics designed to preserve meaningless and outdated assessment practices.

However, it’s the institutions themselves that put pressure on already overburdened faculty to stay the course. The ultimate metric for a class is a “grade” and this is true even in my class.

This reality perverts the focus of learning in my class, and it is something I cannot get around. I have spent countless hours trying to game out how to pull my students’ focus away from these systemic factors, but it’s really tough.

Most faculty have neither the time nor the inclination to engage in similar reflections and, ultimately, my quest may be quixotic. Institutions need to create pathways that lead to non-graded outcomes if we want to get away from transactional teaching. It’s not fair to put this burden on the shoulders of faculty alone.

ChatGPT is the product of a collectivization of learning. It skims vast amounts of data and mashes that all together to create its outputs. That’s essentially what we ask our students to do when we assign them generic research papers. It should come as no surprise that this non-imaginative process is easy to automate.

The solution to this is to value individual learning over conformity. We should encourage students to apply their uniqueness to their learning products and journeys. ChatGPT fails miserably when we ask it to do this, for it is not human. AI can only hoodwink us if we lose sight of the human in the learning process. Grades are a way of automating humans.

There are many ways that institutions could devalue grades in their internal processes, but this involves embracing individualistic learning and the enabling technology that allows us to scale that to a viable level. These systems need to be built and implemented.

Institutions have a responsibility to both faculty and students to train faculty to think differently about how they structure their classes. This is not hard from a content perspective; much of it is common sense. However, common sense is often difficult to implement, especially in the face of cultural and systemic barriers.

The character of this training is just as important as the techniques being taught. We need to get away from increasingly futile defensive tactics and reimagine the kingdom. We need to create a culture of responsive teaching, not one of reactionary teaching. This will involve some tough conversations.

Throughout history, but particularly in the last century, technology has challenged humanity’s capacity for adaptation. For instance, thoughtful predictions of doom accompanied the dropping of the atomic bomb. Humanity seemed to be too immature to wield the Sword of Damocles.

In the end, it was a combination of technology with the careful reconstruction of human systems that gradually built up our ability to turn data into sound decisions and avoid Armageddon. The human-technology systems that emerged made it easier to avoid brinkmanship as a tactic and ultimately made the world a safer place. We slowed time down to a human pace.

AI is going to force a similar reckoning of our human processes and the creation of new human-technology systems. This will take time. Human systems are slow to change.

Compared to the Cold War, the stakes are lower in the immediate future (AI won’t blow up the planet) but higher in the long term. Humans need to stand on the shoulders of AI. We also need to learn how to do that.

How we respond to ChatGPT will be a good marker, and a lesson, for the next technology that comes down the pike. Education must develop a new flexibility to pivot and grow. Diving into a bunker will not save us.

ChatGPT is Coming to Get You (and it’s okay)

I do believe that we waste countless opportunities to make ourselves, our families, and our societies better because of the phobias we have about technology. We have done this to ourselves through poor design, breeding false mythologies, and the accretion of power to those who would perpetuate them. I continue to believe that technology, especially information technology, offers us unprecedented possibilities for liberation, both on a personal and societal level. There will be dislocations and political challenges, but we can overcome them with a clear-eyed view of the limitations and opportunities that our technologies provide us. Technology is neither moral nor immoral. It is amoral. It is a canvas upon which we paint. The picture we create depends entirely on us. It’s time to pick up the brush. – Discovering Digital Humanity, pp. 15-16.

Source: xkcd 1289

The New York Times is the latest media outlet to cast ChatGPT as “technology” undermining education as we understand it. The fear that familiar institutions are being undermined is entirely justified; it is the inevitable consequence of the erosion of systems of power built on Industrial Age technology. In recent chats about the technology with colleagues on Bryan Alexander’s Future Trends Forum, I compared the alarm over ChatGPT and AI to the general alarm that greeted writing in ancient Greece.

And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality. – Plato, Phaedrus

Plato/Socrates are not wrong here. It is hard to argue with the impact that literacy has had on human augmentation, but there is a lot of nuance to unpack. We have all met humans who are well read but unwise because they cannot properly apply the technology of reading and writing to the practice of life. However, as I wrote in Learn at Your Own Risk, education is mired in a mindset that favors precisely this kind of book-service over knowledge-service.

All too often, the educational establishment (and those who seek to regulate it) equates the “reminiscence” of words with an understanding of those words. What we have witnessed over the history of industrial education has been a gradual scaling of access to writing, first through the mass production of books and then through the mass production of readers.

The reading and repetition of these words has become the bedrock of what we understand as “education.” Even those who teach critical thinking believe that without forcing our students to address this foundation first, we cannot get them to the analysis level of understanding.

This connective tissue of analysis, however, has proven to be a far more elusive (and harder to measure) goal, especially at the lower levels of higher education. I have spent a career trying to teach (and to figure out how to teach) “critical thinking.” It almost always founders on the shoals of trained practice and a lack of meaning in the student experience. I have seen many faculty (myself included) who claim to teach critical thinking through writing but who have capitulated to lower expectations.

We have trained most of our students to play along with the education game without really understanding what it is for. This invites them to “cheat” the game because there is no opportunity cost, especially if no one catches them. More pernicious is the tendency to do the “minimum necessary” to pass the class.

ChatGPT threatens this construct. It has the critical thinking skills of a toddler, but we find that hard to distinguish from the efforts of our own students because we have such low expectations of them (derived, in my case, from long, hard experience). It may finally force us to address the loss of meaning many students experience when asked to conform to the existing educational paradigm.

After experimenting with it, I was confident that ChatGPT couldn’t do what I was asking my students to do. What I wasn’t confident about was whether I could distinguish between what it produced and what my students actually produced. It met the “minimum necessary” standard in many aspects of prose (though it lacked proper citations). ChatGPT’s results weren’t far off from the kinds of submissions I routinely get from my students.

This realization didn’t make me toss out my prompts or assessment strategy, however. Instead, I seized the opportunity to use ChatGPT to get my students to practice the critical thinking skills I claim to be teaching them.

My plan for this semester is to have them submit their prompts to ChatGPT as a draft for the first blog and then ask them to critique and build upon the AI’s results. This forces them to augment their approach by using the technology critically.
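For those curious about the mechanics, here is a minimal sketch of how such drafts could be generated programmatically rather than through the chat interface. It assumes the official openai Python package (v1 or later) and an OPENAI_API_KEY environment variable; the model choice and function name are illustrative, not a description of my actual classroom workflow.

    # Generate a ChatGPT first draft from a student's prompt.
    # Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY automatically

    def draft_from_prompt(student_prompt: str) -> str:
        """Return ChatGPT's draft response to a student's blog prompt."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[{"role": "user", "content": student_prompt}],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        # The draft is the starting point; the critique is the assignment.
        print(draft_from_prompt("How do grades distort the incentives to learn?"))

The generation step is the least interesting part; the assignment lives in what students do with the fluent but shallow draft that comes back.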

On a larger scale, however, ChatGPT is yet another chink in the armor of what we’ve been doing in industrial education for over a century. The technological threats have been mounting since the early days of the public internet in the 90s. Google search, crowdsourced papers, paper mills, question banks, and the like are all technologies enabled by distributed collective intelligence. The resources at students’ fingertips have advanced exponentially even as faculty practice has not.

ChatGPT is moving so fast that most of my colleagues don’t even know it exists yet. However, they have been aware of the last two decades of internet-enabled technologies that have threatened our legacy assessment techniques, such as multiple-choice exams and standardized regurgitation essays.

In most cases, this realization has not forced them to re-evaluate their practice. Instead, defense has been the preferred strategy. Efforts to police the use of technology through proctoring and anti-plagiarism software are doomed to failure. If anything, pandemic remote teaching should have taught us that.

As I write in both of my books, the problem here is not one of technology, but of adapting ourselves, our practices, and our systems to new realities. And, to take this further, there are tremendous opportunities in these adaptations. We need real thinkers at all levels to tackle the complex problems of today and tomorrow.

Technology can connect us and augment us if we design and use it to do so. Applied properly, ChatGPT can form part of a suite of tools to make us better and more critical thinkers.

A friend of mine asked me the other day what I thought the purpose of society should be. My response was: we have a responsibility to our children to make the world better than when we came into it and societies should strive toward that goal. I then referred him to the Platonic concept of the philosopher-king. Technology has the possibility of making us all kings. Education has a duty to make us all philosophers.
