Immature students; immature AIs

As a new school year approaches, the tide of think pieces on AI is rising again. Here’s my contribution. Put simply, the challenge is that AIs are immature. But so are students.

There is plenty of doomerism about AI in education right now. I think it is probably true that many professors (and high school teachers, for that matter) will find their course assessments far too easy to game with AI. This fact is regarded, in some places, as the “end of the world” for various classic assessments. Perhaps it is.

The standard response: if the assessments are that easy to game with AI, then they’re not very good. Again, something is right about this critique, but I want to press back on two fronts.

First, the students are immature. A lot of education, including college education, really is about telling students things that “everyone knows”. Of course, not everyone knows these things. Those who haven’t been taught don’t know them yet. It’s true, in principle, that an ambitious autodidact could learn these things on their own, but actually doing so is hard. Teachers are supposed to curate materials from the blooming, buzzing confusion of the world’s ideas and texts, structure their curricula into logical sequences, evaluate whether students have actually understood well enough to avoid subtle errors, etc.

Critical thinking, the usual story goes, is what learning these things in a classroom adds over picking them up on the job. Two caveats, though. First, one cannot be a critical thinker with nothing to think about. Facts, and systems for organizing them, have to come first. Criticism is a higher-order thinking skill, and without the subject matter readily available to the mind, there isn’t much to criticize. Second, the advocates of “on the job training” often fail to appreciate the many ways in which a broad education is valuable. It’s not just that understanding the tasks of a particular job requires a sense of the broader cultural and social context. Thinking of knowledge in purely mercenary terms is bad in itself. Sometimes it’s good to know things just because they’re true, and part of a flourishing human life involves knowing things that aren’t immediately useful. (It is true that colleges and universities have not made this case well in recent decades.)

The problem with AI is that most of these tools are also pretty good at regurgitating what “everyone knows”. Because they (approximately) reproduce the consensus on a subject, they say what everyone already says (even when that’s wrong). In this respect, they are about as good as a typical student, and they’re doing what students need to do.

Thus, the “solution” for education can’t be as simple as teaching (and assessing) in ways that LLMs can’t match. We can’t just route around the very thing LLMs are good at, because transmitting what “everyone knows” is central to the job. Instead, we have to explain why doing the hard work without the AI is worthwhile.

Alan Jacobs supplies an insightful analogy to this end. A culinary school that teaches its students to hack HelloFresh isn’t really a culinary school. Part of the aim is to teach the students how to do for themselves what they can buy in the market. And this requires “pointless” work along the way, in the sense that learners must do tasks whose products they could more easily acquire from someone else. But not everyone needs to go to culinary school. For most of us, HelloFresh is fine, and possibly an improvement over our own cooking. The challenge for higher ed, then, is to provide an education that seems valuable in its own right, including in those parts whose products can be purchased.

(Jacobs also points out another feature of the AI world: it’s not really free, and by losing the ability to do the thing for yourself, you’re caught in a market with producers who can fleece you. OpenAI, for example, has put its high-quality GPT-4 behind a paywall, but it’s still a lot cheaper than college. Competition may further reduce costs, and once a model is trained, it’s not particularly expensive to run. The shelf life of this critique might be pretty short.)

I think I can say for myself why I find my education valuable, but that’s partly because I’ve actually had it (a lot of it). My subjective sense of its value isn’t the kind of thing I can communicate to someone else. They have to experience it for themselves. But it seems perfectly reasonable for many students to be doubtful about the intrinsic value of (any particular bit of) knowledge. They can say, “I’ll take your word for it” and then go use the AI tools when they need them.

Second, the AIs are immature. A common response to the explosion of AI tools is that they’re actually not very good at more complicated activities. For example, they often have trouble with basic arithmetic. If you give them extended tasks (beyond their context windows), they can veer off topic. And so on.

But this is also a problem with students. Every teacher can give examples of students who make boneheaded errors in assignments. LLMs hallucinate facts, but so do students. LLMs lose focus, but so do students.

Indeed, Timothy Lee’s really helpful LLM explainer uses the term “attention” to describe what these tools are doing in the depths of the algorithm, as far as we can tell. Sometimes LLMs fail because they don’t pay attention properly. Ahem.
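For readers who want the mechanism rather than just the metaphor, here is a minimal, illustrative sketch of the scaled dot-product attention computation that the term names. The code is mine (plain Python with NumPy), not from Lee’s explainer, and it leaves out everything that makes real models work (learned projections, multiple heads, masking):

```python
# Toy sketch of scaled dot-product attention: each token scores every other
# token, and the scores decide how much each one contributes to the output.
import numpy as np

def attention(Q, K, V):
    """Compute softmax(QK^T / sqrt(d)) V for row-vector tokens."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # relevance of each token to each other token
    # Softmax over each row (shifted by the row max for numerical stability),
    # so every token's attention weights sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # blend the value vectors by those weights

# Three "tokens", each represented by a 4-dimensional vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(attention(x, x, x))  # self-attention: tokens attending to one another
```

Each row of the output is a weighted blend of all the tokens’ vectors. When the weights land on the wrong tokens, the model has, quite literally, failed to pay attention.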

So if the immature students illustrate that there could be value in education even in a world of super AI tools, the immaturity of AIs suggests that the current (accurate) criticisms of their abilities aren’t quite sufficient. AIs aren’t perfect right now, and perhaps will never be. But might they become much better than humans? I don’t see why not.

A lot of the critiques of AI’s competence seem to me to apply to children just as well. We don’t demean children for their ignorance and foolishness because we expect them to learn as they grow up. We also don’t put them in charge of things until they’ve established their abilities. We accept their limitations for the time being and then expect them to improve with age and experience. But why wouldn’t we expect AI tools to grow up as well?

The problem, then, is that teachers who rework their courses to resist AI will end up having to do it all again in a few years when the AI has gotten better. If a student can’t improve by retaking a course, then something is wrong with the course. But AIs will have lots of opportunities to “retake” courses, and eventually the AI tools might perform the way an average student would perform after taking the course a dozen times. I’m not persuaded by the critique that AI isn’t and never will be very good at this stuff. Many of its current limitations seem like the same kind of limitations that immature humans have. Absent a good account of the technical limits on AI, or a good theory of what “intelligence” consists in, I don’t see how to avoid the possibility that AI tools might become unequivocally superior to the vast majority of human intelligences.

This is a complicated problem for education, but I think it’s really the same problem we’ve had ever since we started expecting basically everyone to go to school. For a reasonably large subset of society, school doesn’t have much value. AI is making that subset grow. An adequate response will probably require rethinking the purpose of education, which in turn might require rethinking the goods of human life. And where better to do that than in school?