Immature students; immature AIs

As a new school year approaches the tide of think pieces on AI is rising again. Here’s my contribution. Put simply, the challenge is that AIs are immature. But so are students.

There is plenty of doomerism about AI in education right now. I think it is probably true that many professors (and high school teachers, for that matter) will find their course assessments far too easy to game with AI. This fact is regarded, in some places, as the “end of the world” for various classic assessments. Perhaps it is.

The standard response: if the assessments are that easy to game with AI, then they’re not very good. Again, something is right about this critique, but I want to press back on two fronts.

First, the students are immature. A lot of education, including college education, really is about telling students things that “everyone knows”. Of course, not everyone knows these things. Those who haven’t been taught don’t know them yet. It’s true, in principle, that an ambitious autodidact could learn these things on their own, but actually doing so is hard. Teachers are supposed to curate materials from the blooming, buzzing confusion of the world’s ideas and texts, structure their curricula into logical sequences, evaluate whether students have actually understood well enough to avoid subtle errors, etc.

Critical thinking, the rejoinder goes, is what adds value to learning these things in a classroom rather than picking them up on the job. Nevertheless, first, one cannot be a critical thinker with nothing to think about. Facts and systems of organizing them have to come first. Criticism is a higher order thinking skill, and without the subject matter, readily available to the mind, there isn’t much to criticize. Second, the advocates of “on the job training” often fail to appreciate the many ways in which a broad education is valuable. It’s not just that understanding the task for a particular job requires a sense of a broader cultural and social context. Moreover, thinking of knowledge in purely mercenary terms is bad. Sometimes it’s good to know things just because they’re true, and part of a flourishing human life involves knowing things that aren’t immediately useful. (It is true that colleges and universities have not made this case well in recent decades.)

The problem with AI is that most of these tools are also pretty good at regurgitating what “everyone knows”. Because they (approximately) reproduce the consensus on a subject, they say what everyone already says (even when that’s wrong). In this respect, they are about as good as a typical student, and they’re doing what students need to do.

Thus, the “solution” for education can’t be as simple as teaching (and assessing) in ways that LLMs can’t. We can’t simply skip over the very things that LLMs are good at. Instead, we have to explain why doing the hard work without the AI is worthwhile.

Alan Jacobs supplies an insightful analogy to this end. A culinary school that teaches its students to hack HelloFresh isn’t really a culinary school. Part of the aim is to teach the students how to do for themselves what they can buy in the market. And this requires “pointless” work along the way, in the sense that learners must do tasks whose products they could more easily acquire from someone else. But not everyone needs to go to culinary school. For most of us, HelloFresh is fine, and possibly an improvement over our own cooking. The challenge for higher ed, then, is to provide an education that seems valuable in its own right, including in those parts whose products can be purchased.

(Jacobs also points out another feature of the AI world: it’s not really free, and by losing the ability to do the thing for yourself, you’re caught in a market with producers who can fleece you. OpenAI, for example, has put its high-quality GPT-4 behind a paywall, but it’s still a lot cheaper than college. Competition may further reduce costs, and once a model is trained it’s not particularly expensive to run. The shelf life of this critique might be pretty short.)

I think I can say for myself why I find my education valuable, but part of that is because I’ve actually done it (a lot of it). My subjective sense of its value isn’t the kind of thing I can communicate to someone else. They have to experience it for themselves. But for many students, it seems perfectly reasonable for them to be doubtful about the intrinsic value of (any particular bit of) knowledge. They can say, “I’ll take your word for it” and then go use the AI tools when they need them.

Second, the AIs are immature. A common response to the explosion of AI tools is that they’re actually not very good at more complicated activities. For example, they often have trouble with basic arithmetic. If you give them extended tasks (beyond their context windows) they can veer off topic. And so on.

But this is also a problem with students. Every teacher can give examples of students who make bone-headed errors in assignments. LLMs hallucinate facts; but so do students. LLMs lose focus; but so do students.

Indeed, Timothy Lee’s really helpful LLM explainer uses the term “attention” to describe what these tools are doing in the depths of the algorithm, as far as we can tell. Sometimes LLMs fail because they don’t pay attention properly. Ahem.
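
For readers who want a more concrete sense of what “attention” means inside these models, here is a minimal sketch of scaled dot-product attention in Python. The shapes and numbers are made up for illustration; this is the generic mechanism, not any particular model’s implementation or Lee’s own example.

```python
# A minimal, illustrative sketch of scaled dot-product attention, the
# mechanism the word "attention" names in transformer LLMs. Shapes and
# values are invented for demonstration only.
import numpy as np

def attention(Q, K, V):
    """Each query attends to every key; the softmax weights decide how
    much each value contributes to the output."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                                        # weighted mix of values

# Three "tokens", each represented by a 4-dimensional vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
print(attention(X, X, X))  # self-attention: tokens attend to one another
```

When those weights spread over the wrong tokens, the model “loses focus” in a way that is at least loosely analogous to a distracted student.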

So if the immature students illustrate that there could be value in education even in a world of super AI tools, the immaturity of AIs suggests that the current (accurate) criticisms of their abilities aren’t quite sufficient. AIs aren’t perfect right now, and perhaps will never be. But might they become much better than humans? I don’t see why not.

A lot of the critiques of AI’s competence seem to me to apply to children just as well. We don’t demean children for their ignorance and foolishness because we expect them to learn as they grow up. We also don’t put them in charge of things until they’ve established their abilities. We accept their limitations for the time being and then expect them to improve with age and experience. But why wouldn’t we expect AI tools to grow up as well?

The problem, then, is that teachers who rework their courses to resist AI will end up having to do it all again in a few years when the AI has gotten better. If a student can’t improve by retaking a course, then something is wrong with the course. But AIs will have lots of opportunities to “retake” courses, and eventually the AI tools might perform the way an average student would perform after taking the course a dozen times. I’m not persuaded by the critique that AI isn’t and never will be very good at this stuff. Many of its current limitations seem like the same kind of limitations that immature humans have. Absent a good account of the technical limits on AI, or a good theory of what “intelligence” consists in, I don’t see how to avoid the possibility that AI tools might become unequivocally superior to the vast majority of human intelligences.

This is a complicated problem for education, but I think it’s really the same problem we’ve had ever since we started expecting basically everyone to go to school. For a reasonably large subset of the society, school doesn’t have much value. AI is making that subset grow. An adequate response will probably require rethinking the purpose of education, which in turn might require rethinking the goods of human life. And where better to do that than in school?

I am not an expressivist, but…

This old blog post seems quite accurate, even prescient. I’m writing this mostly so that I have an easy link back to it.

The argument claims that the whole online medium is aiming at a basic emotional response. The ur-interaction online is the equivalent of a thumbs up or a thumbs down. This makes people feel powerful, as if they deserve to be consulted about the value of whatever it is they’re looking at, or as if everyone else needs to hear their opinion of how things might have been done differently.

But it’s remarkable, from a philosophical perspective, how nicely this analysis fits with expressivist theories of value. As I say, I am not convinced that expressivism is the correct theory of value, but it sure looks like you get something like it when you boil down a lot of our social life.

Perhaps the blog post suggests one of the reasons I dislike expressivism (though not a reason to disagree with it). The broad expressivist program that encourages quasi-emotional votes on everything valuable makes it too easy to render an opinion. And so people have opinions about everything, even things that they really have no business opining on.

What is a public health ‘guideline’?

We are now entering year two of COVID-tide, and an effective vaccine to stop the pandemic appears to be close at hand. It has been a tough year, and things will probably continue to be abnormal for a while, if public health and epidemiological experts are to be believed.

The pandemic has forced many people to learn a lot about the world very rapidly. Many of us have had a crash course in epidemiology, immunology, and public health over the last year. One thing we seem to have also learned is how complex and impotent our public health institutions are.

One of the problems in public health is that many public health experts have no real practical authority. They’re academics, and most of their conversations in ordinary circumstances are among themselves. The public health officials do have some authority, but it is often limited to medical providers and the adjacent industries (think CDC, FDA, etc.). In ordinary circumstances few people would think there is anything strange about this constrained mandate. Indeed, there is a vigorous (though small) cottage industry of ferreting out strange make-work regulations from these agencies, thereby indicating that even the limited mandate may be too broad in ordinary times.

The trouble now is that these public health agencies (and even more, academics) can’t really make rules for the general public. They issue "guidelines". Both a rule and a guideline are a kind of norm, and so it is common to see the words used interchangeably, particularly in these quasi-medical contexts. There is, however, an important difference between a guideline and (what I will call) a rule, and I want to think out loud about this difference for a bit.


Put simply, a rule requires enforcement, whereas a guideline is merely advice. If we distinguish these two concepts in this way, it helps illuminate the problems we’re having with all the various pieces of public health and medical advice we’ve gotten over the last year.

Start with a rule. If we don’t enforce compliance with a rule, it is hard to see what practical import it has. Enforcing public health rules is really, really hard. Enforcing rules in general is hard, but in this case, we’re trying to deal with many kinds of behaviors performed by many kinds of people in many kinds of situations. It is implausible that a single one-size-fits-all rule could cover every case. And so it is hard to enforce the rule in a non-draconian way.

Consider, for example, the recent news that NY governor Andrew Cuomo was going to levy fines and other penalties for not following the state’s vaccination "guidelines". This action is totally reasonable and incredibly stupid all at once. It is being made by the proper person: only Cuomo, or some elected official like him, plausibly has the authority to punish in this way. It also gives a powerful extrinsic motivation to comply. However, it is far too powerful, and in this way quite stupid. In a time when vaccines have an extremely short shelf life and are in very limited supply, while also being extremely effective, making people second-guess their use of the vaccine is a bad idea, for it makes it more likely that vaccines will be wasted. Better for it to go in the wrong arm than in no arm at all.

Other public health measures have proven very hard to enforce. Mask-wearing is a notable case. I can’t think of much actual argument in favor of the moral or civil right to go unmasked. (This essay complicates matters somewhat, though I don’t think it makes a case for a right.) Preventing social gatherings has also been difficult, not least because there are many different sorts of them, and some have been "approved" for political or religious reasons.

Any rule that is simple enough to remember will necessarily have some exceptions in the wide variety of relevant contexts. This adds another difficulty to enforcement, because it is not the case that every instance of non-mask-wearing (for example) is wrong, or even against the rules. Trying to suss out every possible case is a fool’s errand, and a waste of political or moral authority.


Because the rules are hard to enforce, and because they are often issued by people who have no practical authority, they often come in the form of "guidelines." A guideline is basically a kind of structured advice, given in the style and tone of a rule. It is notable in that it is effectively unenforceable by the one making it; if it were enforceable, it would be a rule, and it would require real enforcement.

Some public health "guidelines" are actually rules, particularly when they constrain the actions of various other actors. Medical guidelines, for example, are often really rules for medical professionals. Failure to comply can earn one a hefty penalty. Cuomo’s order mentioned earlier is like this. The NY public health officials promulgated "guidelines", but Cuomo’s actions reveal that these are really rules, since there are penalties for non-compliance.

The trouble with true guidelines is that they have only as much authority as advice does. Guidelines about the size of gatherings, for example, depend on groups deciding to follow the norm. Other groups may decide that they care about their fellowship more than the guideline, and it is hard to clearly say what is wrong about this. (To be clear, I think there often is something wrong about flouting the guidelines, for reasons I’ll get to momentarily.)

Advice is a peculiar thing. Agnes Callard offers a helpful distinction between three different things: "instructions", "coaching", and "advice". Asking for advice, in this trichotomy, is "instructions for self-transformation." That is, it is asking for coaching delivered in the form of instructions.

I think a lot of guidelines are trying to do almost exactly this. And this is why they fail. When public health experts issue guidelines, they are appealing to epistemic authority rather than practical authority.

The transformation that people are seeking in public health guidelines is increased knowledge of what to do in a novel public health emergency. Few of us have any personal understanding of all of the complex features of a global pandemic. We need to know what to do for our own safety and well-being, and we look to experts to give us insight. But the experts can’t give us a graduate-level education ("coaching") in epidemiology or any other technical field. (And often they are unable to even explain their own field to non-experts—a real weakness of many kinds of expertise.) They’re forced to give fairly generic and vague bits of practical wisdom. Fundamentally, they’re trying to supply an education in the form of practical instructions. They need us to think differently about various activities, but they lack the time and opportunity to teach us how to understand. So they give instructions—practical maxims that look more like rules.

It turns out that a lot of people seem to be genuinely looking for just this sort of thing. They want to know what they can do, and generally are willing to follow the instructions, even without external enforcement. Yet because the instructions are generic and impersonal, individuals can gain knowledge without complying. There is some evidence that this is how a lot of people are operating.

(A topic for a different post: some people already have the relevant knowledge, and often it’s far deeper and broader than the guidelines can provide. E.g., people who live in E. Asian countries and have past experience with pandemics. It isn’t unreasonable to listen to them for advice rather than or in addition to "science.")

Yet when people use guidelines to increase their knowledge, but then supplement that information with their knowledge of their own particular circumstances, sometimes they choose to not obey the guidelines—the instructions—even as they benefit from them. Thus the guidelines "fail" to change behavior, which is what they are intended to do. How then is it ever possible to promote compliance with the public health norms without turning them into rules with official enforcement?


There is a large class of norms that aren’t enforced (or enforceable) by the state, and yet substantially constrain our actions. We might call these "manners". Having bad manners won’t get you fined or put in jail, but it will have consequences, most notably your exclusion from certain types of society.

Manners are famously opaque to those outside the society that uses them. They seem pointless or excessively fussy, and often the social opprobrium directed at mannerless behavior seems to far exceed the immediate practical consequences of the faux pas.

Something similar seems to be true of many current public health guidelines. There seems to be little public health need to wear a mask while jogging, for example. Yet at least in some places, appearing outdoors without a mask for any reason is treated as a serious error. To those who care, this treatment is enough to promote compliance (and, crucially, to perpetuate the norm by "enforcing" it against others). To those who don’t care, there is little one can say. If someone doesn’t want to be part of the mannered society, it is hard to justify complying with any of its norms. In this way, manners resemble instructions. Instructions are useful only if you want what they aim at. Some people just don’t care whether others (usually described as "elites" or "liberals") approve of them, and so the informal social enforcement mechanisms just don’t engage.

Further, as I hinted, mannered behavior gets perpetuated by being enforced by the participants, rather than by some central authority. We’ve seen this too. Ordinary citizens berate one another for not following certain guidelines, as if the guidelines empower the man-on-the-street to enforce the norms. If you asked these self-appointed police whether they believe they have any legal authority, they would say of course not (most of the time). But they clearly think they have some right to demand compliance with the norms. This makes a lot more sense if the norms are like manners, where there is no central enforcing authority and each participant is at least somewhat empowered to police the standards of right behavior.

Finally, there are reasons to comply with manners, even if you think they’re stupid. Often manners are the way that a society demonstrates respect for its members. There may be many different possible systems of norms that indicate respect or care for one’s neighbors, but within a given context the individual usually doesn’t have a choice about which system to follow. Following public health guidelines often takes this form. It may be true that in a particular situation a mask is unnecessary (e.g., while jogging), but wearing it demonstrates that one is willing to limit one’s own freedom out of care for others, and that is a useful message on its own. Similarly, forgoing group gatherings to limit the rapid spread of infectious disease may demonstrate respect for the health care workers that are physically, mentally, and morally exhausted, even if you know that no one in your group actually has the disease.


In sum, I think there are good reasons to comply with public health guidelines, but I also think there are some real limits on how much we can say to those who don’t want to. Fundamentally, most of the norms coming from the public health and other science-tinged domains are just advice. It’s probably mostly good advice, but it’s also not irrational or immoral to ignore it. The same is not true for rules that public officials have issued. If you think you are obligated to obey the government, then you should obey their public health mandates. But public officials should be clear too. For those norms that are really important, public officials who have the relevant authority need to use that authority and actually enforce the norms. Though, as I suspect many have realized, doing so may cost them their job. So be it.

A duty to be informed

Philosophers are discovering a host of new arguments for the value of their discipline these days. COVID-19 has pushed to the front a variety of topics that philosophers think about frequently, though often in bloodless, abstract terms. Ethics of triage and scarcity, for example, has moved from models of trolleys and organ donors to real-life questions about who should get limited medical resources.

Epistemology and philosophy of science are also getting their day in the sun. Much of the anxiety about COVID-19 arises because we just don’t know much about it, so the range of reasonable beliefs about the outcome of this all is very wide. People are discovering that science involves more than crude applications of a technique, and that real scientific expertise includes practiced judgment about hard-to-quantify uncertainties.

I suggest that this crisis illustrates an interesting combination of ethics and epistemology: a duty to be informed. For some, this duty is quite extensive, but I think there is a case to be made that anyone making or influencing decisions right now has some degree of a duty to be informed about what is going on. A duty to be informed is not a duty to be right, for that would be impossible. Instead, it is a duty to sincerely and virtuously seek to acquire more knowledge—to be a good knower; to apportion belief according to evidence; to reason well; to avoid bias and remain open to correction.

I’ll start with the obvious cases: those in positions of authority. Our public officials are making huge, life-changing, society-altering decisions every day. They already have extensive public duties; that’s what the job requires. (Actually, one might say that they have public obligations, since they "volunteered" for their positions.) I think it is obvious that public officials should seek to be informed about the facts of the situation.

But we can say a little more about what being informed requires. First, it requires that they take into account the facts. Whatever we know about COVID-19 should be included in their deliberations. Facts are true or false. If two public officials disagree about some fact, then at least one of them is wrong.

Second, they should be actively seeking better information. Jason Brennan has been arguing that a lot of our public officials are making huge decisions without trying to improve their knowledge, and just falling back on facile "trust the experts" platitudes. The initial response to COVID-19 has been very strict, in order to account for uncertainty, and let us grant that strict rules were at least initially justified. (They almost certainly have been.) Yet severe measures may lose their justification as we learn more. So much is uncertain or unknown, but knowable, and public officials are uniquely poised to accelerate our learning. It seems as if there are daily updates to the best estimate of COVID-19 infection rates, fatality rates, treatment capacities and strategies, etc. Some of this information can’t be updated overnight, but the process can at least be underway, and it isn’t obvious that we’re actually making a lot of progress on this front (or that our public officials are leading and coordinating it).

Third, public officials should be reasoning well. The duty to be informed includes not just acquiring lots of true facts, but thinking about them effectively. They need to reason correctly about scientific and mathematical facts, such as sampling error, uncertainty, Bayesian conditionals, endogenous and exogenous variables, lagging indicators, and even basic arithmetic. (From the beginning, politicians, media personalities, and—sadly—some scientists have been making elementary errors even in multiplication and division.) They also need to have some basic awareness of how to evaluate scientific research, or at least have trustworthy advisers who can do so. Here we can include economists among the scientists, for many decisions are not merely medical decisions.
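
To make the Bayesian point concrete, here is a tiny worked example in Python. The prevalence, sensitivity, and specificity figures are hypothetical, chosen only to show how easily intuition about a positive test result goes wrong; they are not estimates for any real test or population.

```python
# Hypothetical numbers, for illustration only.
prevalence = 0.01     # 1% of the population currently infected
sensitivity = 0.90    # P(test positive | infected)
specificity = 0.95    # P(test negative | not infected)

# Bayes' rule: P(infected | positive)
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_infected_given_positive = sensitivity * prevalence / p_positive
print(f"P(infected | positive test) = {p_infected_given_positive:.2f}")
# Roughly 0.15 with these numbers, far lower than the 90% many people
# instinctively expect, because the disease is rare in the population.
```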

Public officials also need to think well about ethics. Some seem to think that preventing any loss of life from COVID-19 justifies any amount of public restrictions. Others seem to think that having 1-2% of a country’s population die from this disease is an acceptable tradeoff, even though for most countries that would make this disease the deadliest event in the last few centuries. Or they think that it’s OK to let older and sicker people die, because…? It is usually a mistake to put a dollar value on a life, but when making public policy we have to do this all the time. Refusing to acknowledge the tension is just bad reasoning, and thinking simplistically about what makes a life good won’t help either. Perhaps more common are officious public officials, who appear to think that crises permit them to be "punitive and capricious". A crisis does not change what the government can legitimately do, and if anything, a crisis is a good opportunity for showing patience and forbearance.

Other public figures bear some of these same obligations, though perhaps to a lesser degree. I suggest that our media figures are nearly as responsible as our public officials. Because media types don’t actually have to decide, they are uniquely positioned to be critical. Yet being merely critical shirks responsibility, for it is easy to get attention just by being contrary. At the same time, many of our public officials desperately need their decisions challenged, if only to force them to improve their communications. The media can both inform the public, and also criticize the decision-makers. But to do so, they have to be as well-informed as anyone.

We can move on down the tree of responsibilities. Employers obviously have some duties toward their employees. Their capacities are much more limited, but so is their scope of concern. Pastors owe it to their congregations to be informed so that they can make good decisions (which might, at some point, involve disobeying poorly-informed public officials). Heads of households should know what will affect their own families.

Even a single individual has at least a mild duty to be informed. As this crisis has revealed in great detail, our actions affect others whether we intend them to or not. Complying with public policies, heeding medical advice, and caring for others around us requires us to understand to some degree the implications of our own decisions. We have to know enough to exercise good judgment, and at least for that we each have a duty to be informed.

One final word about duties: I don’t think duties are absolute. We all have many duties, and being informed is just one of them, and one that may compete with others. If someone starts forgetting to feed their kids because they’re trying to keep up with the latest research, that’s not good (definitely my temptation). But I think a duty to be informed is one of our duties, and so we ought to take account of it when deciding what would be the best use of our resources.

Crying wolf and doing our part

A number of people have noted that it is hard to persuade people to take COVID-19 seriously because it feels like the boy who cried wolf. Previous outbreaks of infectious disease (Ebola, SARS, MERS, etc.) have generally been contained to a few regions, so the cries of pandemic have seemed overblown. To some people’s minds, this latest outbreak is just another in a long line of cases where media and public officials have restricted liberties and spread what feels like unnecessary panic. More cynical observers might even say these crises are pretexts for greater government control over citizens’ everyday lives.

The trouble is that this particular outbreak looks a lot more like a real wolf. As of writing, the growth in cases around the world exhibits the classic signs of exponential growth, and there is compelling evidence that many countries are severely under-reporting the actual number of cases (including, it seems, the United States). Furthermore, this virus seems to be in the "sweet spot" for a public health concern, for it isn’t so deadly that it burns itself out (like Ebola), nor is it so mild that medical facilities can absorb it (like the common cold or the seasonal flu).
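
For anyone who wants the arithmetic behind the "classic signs of exponential growth," here is a back-of-the-envelope sketch in Python. The 20% daily growth rate and the starting count are assumptions chosen for illustration, not estimates of this virus’s actual trajectory.

```python
# Illustrative only: a constant 20% daily growth rate is an assumption
# for this sketch, not an estimate of COVID-19's actual growth rate.
initial_cases = 100
daily_growth = 0.20

for day in range(0, 31, 5):
    cases = initial_cases * (1 + daily_growth) ** day
    print(f"day {day:2d}: ~{round(cases):>7,} cases")

# The doubling time at 20%/day is log(2)/log(1.2), about 3.8 days, so a
# hundred cases becomes tens of thousands within a month if nothing changes.
```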

But there is another aspect of "crying wolf" to consider. The way to combat this virus is to create "social distance" so that the virus doesn’t spread as rapidly. This is a classic collective action problem, because for the vast majority of people, there is little personal benefit to social distancing, and often quite a lot of personal cost. Typically the government and media persuade by showing how a particular behavior is in an individual’s self-interest. In this case, mitigating the risk for those for whom this virus is very dangerous requires people who have almost no risk of their own to massively alter their behavior.
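
The structure of that collective action problem is easy to see with some stylized numbers, which are entirely made up for illustration:

```python
# Stylized payoffs, invented for illustration, showing why distancing is
# a collective action problem: privately a loss, socially a gain.
cost_to_me = 5.0          # inconvenience and lost income from distancing
benefit_to_me = 0.5       # my own (small) reduction in risk
benefit_to_others = 20.0  # reduced risk spread across people I might otherwise expose

private_payoff = benefit_to_me - cost_to_me
social_payoff = benefit_to_me + benefit_to_others - cost_to_me

print(f"My private payoff from distancing: {private_payoff:+.1f}")  # negative
print(f"Social payoff from my distancing:  {social_payoff:+.1f}")   # positive
```

Each individual, reasoning privately, prefers not to distance, even though everyone is better off if nearly everyone does.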

In short, we need everyone to pitch in and do their part, even if it doesn’t seem to benefit most people directly. But this kind of rhetoric is also common, and often has looked like crying wolf.

One can hardly walk through a museum or a zoo without being bombarded with claims about what dire things will happen if we don’t all contribute to the cause du jour, even when some of these causes are distinctly outdated and poor candidates for action by individuals. Individuals can only rarely affect many of these causes, such as reducing pollution, minimizing plastics, divesting from fossil fuels, mitigating acid rain, protecting the ozone layer, conserving water, preventing species extinction, etc. All of them may be good things to do, but the problems are ones of public policy or technology. Acid rain, for example, wasn’t reduced by ordinary citizens’ acting together; it was mitigated by better public policies and improved technology. The typical visitor to the museum can have only the tiniest effect on the problem, and often at a personal cost that makes this sort of ostensibly virtuous action available only to the relatively well-off (e.g., using less fossil fuel).

But now, with COVID-19, we seem to have a real case in which it actually is important that we generally act in a coordinated way, and for which we have no time for improved public policies or technological solutions. But to people who have learned to ignore the overstated "we all have to do this together" messages in our society, and who have internalized the "what’s in it for me" style of advertising, it’s hard to explain why this time it’s different.

If everything is a crisis, then nothing is. I think our cultural elites (not a pejorative) have too often made everything they care about into a public crisis, evangelizing for their current interests, only quietly revising their predictions, rarely moderating their confidence, and almost never conceding error. And then a real crisis comes along, and no one is willing to listen.

Exploiting emotional labor

Casey Newton’s article on The Verge about the lives of Facebook moderators likely only adds to the growing rage against social networks. It’s worth a read. Even if stories like this often make the work seem worse than it usually is, it’s not a pretty picture.

Other journalists and bloggers have recently been talking about work and about how online communities work. On work, see Derek Thompson’s recent Atlantic essay. Thompson observes the way in which work is expected to function as one’s entire life, making it more like a religion than a job. Scott Alexander’s post on his attempts to moderate comments in his own little community is worth considering.

These articles offer a chance to synthesize some varied thoughts about how our high-tech, information-rich, ultra-connected world is affecting us. Here is just one idea that these essays have made me think about.

As computers can do more and more, jobs will be more and more about what only humans can do. Firms will look for ways to extract value from distinctively human abilities. This is what a lot of “information” jobs actually look like. They are not traditional “white collar” jobs; they’re not in management or administrative support. Instead, they are ways of leveraging part of the human mind that computers can’t duplicate yet.

For a few months I worked at a company where the task was to correct errors that computers made in reading documents. The computer did pretty well with the initial read, but any characters it was not confident in got passed to a human reader. The software we used was built to make us work as fast as possible. We didn’t need to read the entire document, only the few parts the computer couldn’t read. We were carefully tracked for speed and accuracy. Nowadays machine-learning technology has likely surpassed even human abilities in this domain, but the basic function of the human in the system is much like the Facebook moderators’ function. It makes up the gap between what the machine can do and what the product requires.
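
The division of labor in that job was simple to describe, and something like the sketch below captures it. The names and the confidence threshold are my own inventions for illustration; they do not describe the actual software we used.

```python
# A hypothetical sketch of human-in-the-loop document reading. The 0.98
# threshold and all names are invented; this is not the real system.
from dataclasses import dataclass

@dataclass
class CharRead:
    char: str
    confidence: float  # the OCR engine's confidence, from 0.0 to 1.0

def ask_human(read: CharRead) -> str:
    # Stand-in for the screen a human corrector would see.
    return input(f"Machine guessed {read.char!r} ({read.confidence:.0%}). Correct char: ")

def transcribe(reads: list[CharRead], threshold: float = 0.98) -> str:
    """Accept high-confidence machine reads; route the rest to a person."""
    out = []
    for r in reads:
        if r.confidence >= threshold:
            out.append(r.char)        # the machine's answer is good enough
        else:
            out.append(ask_human(r))  # the human fills the gap
    return "".join(out)
```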

This gap-filling is what Newton’s article describes in the Facebook moderating company. Employees are asked to leverage their judgment in figuring out whether something is appropriate or not. Because judgments of this sort are hard to reduce to rules (note all the problems Facebook has in specifying the rules clearly), the task needs a tool that is good at interpreting and assessing an enormous amount of information. And human minds are just the thing.

Computers have gotten good at certain kinds of pattern recognition, but they are still not good at extracting meaning from contexts. Human beings do this all the time. In fact, we’re really, really good at it. So good, in fact, that people who aren’t better than the computer strike us as odd or different.

The problem is that this task of judging content requires the human machines to deploy what they have and computers don’t. In Facebook’s case, that thing is human emotions. Most of our evaluative assessments involve some kind of emotional component. The computer doesn’t have emotions, so Facebook needs to leverage the emotional assessments of actual people in order to keep their site clean.

These kinds of jobs are not particularly demanding on the human mind. Sometimes we call this kind of work “knowledge work,” but that’s a mistake. The amount of knowledge needed in these cases is little more than a competent member of society would have. It would be better to call these jobs human work, or more precisely emotional work, because what is distinctive about them is the way they use human emotional responses to assess information. Moderators need to be able to understand the actions of other humans. But we do this all the time, so it’s not cognitively difficult. In fact, this is why Facebook can hire lots of relatively young, inexperienced workers. The human skills involved are not unusual.

The problem is that as those parts of us that are distinctively human become more valuable, there is also a temptation to try to separate them off from the actual person who has them, then track them and maximize their efficiency. In ordinary manual labor, it’s not so hard to exchange some effort and expertise for a paycheck. Faster and more skilled workers are more productive, and so can earn more. Marx notwithstanding, my labor and expertise are not really part of who I am, and expending them on material goods does not necessarily diminish or dis-integrate me. In contrast, my emotions and capacity for evaluative judgments are much closer to who I am, and so constantly leveraging those parts of me does prompt me to split myself into my “job” part and my “not-job” part. We might call this “emotional alienation,” and it is a common feature of service economies. We’re paying someone to feel for us, so that we don’t have to do it.

All this doesn’t mean we should give up content moderation, or even that moderator jobs are necessarily bad jobs. I have little doubt that there is tons of stuff put online every day that ought to be taken down. I am an Augustinian and a Calvinist, and harbor no illusions about the wisdom of the crowd. But we should be more aware of what it actually costs to find and remove the bad stuff. We enjoy social networks that are largely free of seriously objectionable and disturbing content. But someone has to clean all that off for us, and we are essentially paying for that person to expend emotional labor on our behalf. Social media seems “free,” but as we’re being constantly reminded, it really isn’t—not to us, and not to those who curate it for us.

So suppose Facebook, or Twitter, or YouTube actually paid their moderators whatever was necessary for their emotional and spiritual health, and gave them the working conditions under which they could cultivate these online experiences for us without sacrificing their own souls. How much would that be worth? I doubt our tech overlords care enough to ask that question. Maybe the rest of us should. Though we cannot pay them directly, we can, perhaps, reduce their load, exercise patience with them, and apply whatever pressure we can to their employers. This is, after all, the future of work. It’s in all of our interests to set the norms for distinctively human labor right now, while we still can.

“Saving” baseball with game theory

The conventional wisdom this summer is that baseball is struggling. Games are boring and long, too many teams are really bad, and so no one is watching. Unsurprisingly, this supposed sorry state of things has prompted people to offer “advice.” 

The worst piece I’ve seen so far is this article from the Wall Street Journal. I’ll save you the read. The author reports on a proposal called the “Catch-Up Rule.” When a team is ahead, they only get two outs per inning instead of the usual three. This makes the games closer and faster, and this is supposed to make them more appealing.

The original proposal appears to come from a game theorist and a computer scientist at NYU. If you needed proof that “game theory” isn’t actually about what we usually think of as games, this is it. 

The proposal is absurd, but it’s worth considering just what is so bad about it. First, the common complaint against baseball these days is that there isn’t enough action. This proposal would reduce the amount of action by reducing the number of outs. Second, the authors propose that the rule would reduce inequality between teams by artificially hindering the ability of the good teams to succeed. I doubt this would happen. Instead, the good teams would assume even less risk, and thereby continue their dominance, just at a faster clip. More generally, a lot of baseball is about random chance–this is why there are 162 games–and reducing the number of baseball events will emphasize the randomness.
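
My hunch about risk and randomness can at least be poked at with a toy simulation. The sketch below uses a Poisson scoring model and made-up team strengths; it is my own simplification for the sake of illustration, not the NYU authors’ model, and its output should not be read as a prediction.

```python
# Toy Monte Carlo sketch of the "Catch-Up Rule": the team that is ahead
# gets only two outs in its half-inning. Scoring is modeled as Poisson
# with expected runs proportional to the outs available (a made-up
# simplification, not the proposal's actual model).
import numpy as np

rng = np.random.default_rng(42)

def sim_game(lam_strong=0.65, lam_weak=0.45, catch_up=False, innings=9):
    """lam_* are assumed expected runs per normal (3-out) half-inning."""
    strong = weak = 0
    for _ in range(innings):
        outs = 2 if (catch_up and strong > weak) else 3  # strong team bats
        strong += rng.poisson(lam_strong * outs / 3)
        outs = 2 if (catch_up and weak > strong) else 3  # weak team bats
        weak += rng.poisson(lam_weak * outs / 3)
    return strong, weak

def strong_win_rate(catch_up, n=20_000):
    # Ties are counted as non-wins, which is good enough for a sketch.
    wins = sum(s > w for s, w in (sim_game(catch_up=catch_up) for _ in range(n)))
    return wins / n

print("strong team win rate, normal rules  :", strong_win_rate(False))
print("strong team win rate, catch-up rule :", strong_win_rate(True))
```

Even a crude model like this makes it easy to check whether trimming outs mostly compresses games or mostly scrambles outcomes.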

But these are minor quibbles compared to the basic mistake the authors make. They seem to think that the purpose of playing a baseball game (and to be fair, they propose similar changes to basketball and football) is to see who wins. Rule changes that reach that end state more efficiently are therefore regarded as desirable.

This way of thinking confuses the goal with the point of the game. But the distinction between the goal and the point of a game is what makes it a game. A game is an activity in which we voluntarily, and for the purposes of playing the game, rule out the most efficient means to the goal. Consider soccer (a game that doesn’t seem friendly to a “catch-up” rule). Two of the most important rules of soccer specifically prevent the most efficient means of scoring: no hands, and no offside. People sometimes complain that soccer is too slow, there isn’t enough scoring, the attacks are opaque, etc. How much better would it be if you could just pick the ball up? Well, it wouldn’t be better soccer, because it wouldn’t be soccer. Though a game must have a target or goal of some sort, some action or event that is aimed at, the purpose of playing (or enjoying) the game is the joy of playing itself.

I think most serious baseball fans would object not for the sake of tradition, but because they enjoy the game, and not just the result. Reducing the number of things that happen isn’t desirable, even if it gets to an end faster. But let’s grant that serious fans aren’t bothered by the lethargic pace these days. (I’m not sure that’s true, but let’s grant it for argument.) Will the casual fans be better off? I kind of doubt it. First, to the casual fan, we’d be adding a rule that seems manifestly unfair. I’m not sure that it would be so easy to explain why competitive balance is more desirable than more exhibitions of baseball skill, but this is exactly the proposed tradeoff. Second, the rule would reduce the amount of skill displayed by limiting the opportunities for the better team to hit. Supposedly the problem is that there isn’t enough hitting, but the proposal suggests reducing it even more. And third, I think that a casual fan would likely intuit that something seems off when we have to redesign the game to finish it faster.

The authors point out that having more people watch a game would be good for baseball revenue. Shorter games would permit more watchers, and so shorter games mean more revenue. But baseball isn’t hurting for revenue, and changing the game so that it becomes not just unrecognizable as baseball but also a deficient game seems likely to be counterproductive.

But the proposal as a whole is a perfect illustration of how to suck the life out of something by making a theory of it. Baseball fans, like any sports fans, can get nerdy about the details of their passion, but fundamentally that obsession is driven by a love of the game, not a love for the theory of the game. And perhaps there’s a lesson in that for other things too.

Universal Basic Income and the Liberal Arts

Recent social fractures, combined with our society’s enormous wealth, have caused a fringe discussion in the public policy world to get more attention. The idea is that the society (the government) should supply everyone (every citizen?) with some kind of basic material support. This goes beyond what we usually think of as “welfare,” for it applies equally to everyone, without conditions on working, having disabilities, etc. Slate Star Codex has a useful post supplying the arguments in favor of a Universal Basic Income (UBI) instead of a basic jobs guarantee (BJG).

I find the arguments for a UBI over a BJG pretty convincing. The basic idea still seems to have a number of unknowns, and it isn’t obvious that a UBI would work as a policy. (See point iii on the SSC post for some worries about the economics.)

I’m interested here in one particular complaint: “iv) Without work, people will gradually lose meaning from their lives and become miserable.”

It seems to me that this objection makes a basic error that is nonetheless very common: it equates “work” with “what I’m paid for.”

At one level the assertion may be true. A life with literally nothing to do isn’t a lot of fun. It’s boring. (“Meaning of life” questions are fraught, so let’s focus on whether we would want such a life.) Even here there are caveats all around. It might not be easy to determine whether a life’s activity is “pointless” or whether someone ought to find it boring. Let’s just work with the intuition that mere aimlessness is not good.

The complaint against a UBI is that without work, life would have this kind of pointlessness. And there does seem to be some evidence for this. Consider the last few decades in the Rust Belt and Coal Belt. Many of the social issues there seem to track the loss of stable, decent-paying jobs. Or consider the people who “retire” several times from different jobs, only to find themselves stir-crazy with nothing to do.

SSC rightly observes that a lot of jobs are pretty boring themselves, so it isn’t clear that offering basic jobs is going to solve the problem. A UBI is better because it gives people freedom to do what interests them.

But that’s the other problem, because a lot of people don’t really know what interests them. And this is a failure of education.

The Liberal Arts(TM) were not the subjects that would make one free, but rather those that befitted free men–those domains of knowledge that were appropriate for citizens, freed from the demands of labor, whether slave or wage. One studied the liberal arts to know things that would make life interesting when there was no need to work.

This is the problem with linking “work” with “job”, and then saying that lack of “work” causes a lack of meaning. Some kinds of work are not easy to compensate, but are still valuable and interesting. There are many people I know who, as far as I can tell, would love to be freed from their day-to-day labors so that they could do what they really like. Some of them even work less than might be considered prudent so that they can do their side-gig. Their real work is not paid, but no less valuable for it.

Our society is probably rich enough that it could support, at some basic level, everyone who doesn’t want to work in a job. Those who do want a job can produce enough marginal value to support those who want to do other things. (Lest you think these other things are themselves pointless, remember that caring for family, volunteering for charity, etc. are all things that might fall into this category. Imagine being able to decide whether to be a stay-at-home parent without having to seriously worry about making ends meet.)

This economic freedom is impressive, probably unprecedented in the history of the world. It really does seem as if, at least in the West, we are quickly approaching a time when large portions of the society don’t need a job. If having nothing to do is so bad for one’s soul, then how should we prepare for this coming freedom?

The liberal arts have two answers. First, we can prepare to do more of these liberal arts. We shouldn’t think of the liberal arts as the so-called “humanities.” It’s not that everyone should write more poetry (though perhaps some should). It might be that some should do more science or math, or more arts and crafts, or more politics, or more cooking, or more gardening. (Chad Wellmon has been critiquing this distinction recently.)

Second, studying the liberal arts gives us something to be interested in. SSC’s examples on this point seem slightly off. Folks who already feel their interests squeezed out by their responsibilities would be fine. Most people benefiting from a UBI wouldn’t know what to do with their time. The UBI would give them time to pursue their interests, but for the most part their interests aren’t worth pursuing. And often they know it, at least a little. They’d be bored because their current time-wasting distractions aren’t interesting enough to sustain an entire life of leisure. But the liberal arts are interesting enough.

The problem is that our recent educational trends have favored purely technical education–job preparation–when it seems likely that there will be no job to prepare for. One might say that this technical preparation has enabled our society’s wealth, and perhaps that is partly true. But the cost in the long run might be very high.

The liberal arts have tried to justify their existence in purely utilitarian terms: writing gets you a better job; reading comprehension helps you understand your work; etc. I wonder if the better way to defend them is that the liberal arts give you something to do when you don’t need to do anything. This was David Foster Wallace’s point in his famous commencement address at Kenyon College.

Ironically, then, the biggest problem with a UBI might be that a society with the material means for it will not have the moral means. But that is what the liberal arts are supposed to fix, and the best part of the UBI would be to supply the modest material means to participate in the life formerly restricted to a tiny fraction of society. SSC thinks that a UBI would be close to utopia. I’m less sure, because I’m not confident that most of us are ready to live there yet.