Immature students; immature AIs

As a new school year approaches the tide of think pieces on AI is rising again. Here’s my contribution. Put simply, the challenge is that AIs are immature. But so are students.

There is plenty of doomerism about AI in education right now. I think it is probably true that many professors (and high school teachers, for that matter) will find their course assessments far too easy to game with AI. This fact is regarded, in some places, as the “end of the world” for various classic assessments. Perhaps it is.

The standard response: if the assessments are that easy to game with AI, then they’re not very good. Again, something is right about this critique, but I want to press back on two fronts.

First, the students are immature. A lot of education, including college education, really is about telling students things that “everyone knows”. Of course, not everyone knows these things. Those who haven’t been taught don’t know them yet. It’s true, in principle, that an ambitious autodidact could learn these things on their own, but actually doing so is hard. Teachers are supposed to curate materials from the blooming, buzzing confusion of the world’s ideas and texts, structure their curricula into logical sequences, evaluate whether students have actually understood well enough to avoid subtle errors, etc.

Critical thinking does add value to learning these things in a classroom rather than on the job. Nevertheless, two things are worth saying. First, one cannot be a critical thinker with nothing to think about. Facts and systems of organizing them have to come first. Criticism is a higher-order thinking skill, and without the subject matter, readily available to the mind, there isn’t much to criticize. Second, the advocates of “on the job training” often fail to appreciate the many ways in which a broad education is valuable. It’s not just that understanding the task for a particular job requires a sense of a broader cultural and social context. It’s also that thinking of knowledge in purely mercenary terms is bad. Sometimes it’s good to know things just because they’re true, and part of a flourishing human life involves knowing things that aren’t immediately useful. (It is true that colleges and universities have not made this case well in recent decades.)

The problem with AI is that most of these tools are also pretty good at regurgitating what “everyone knows”. Because they (approximately) reproduce the consensus on a subject, they say what everyone already says (even when that’s wrong). In this respect, they are about as good as a typical student, and they’re doing what students need to do.

Thus, the “solution” for education can’t be as simple as teaching (and assessing) in ways that LLMs can’t. We can’t shortcut the very thing that LLMs are good at. Instead, we have to explain why doing the hard work without the AI is worthwhile.

Alan Jacobs supplies an insightful analogy to this end. A culinary school that teaches its students to hack HelloFresh isn’t really a culinary school. Part of the aim is to teach the students how to do for themselves what they can buy in the market. And this requires “pointless” work along the way, in the sense that learners must do tasks whose products they could more easily acquire from someone else. But not everyone needs to go to culinary school. For most of us, HelloFresh is fine, and possibly an improvement over our own cooking. The challenge for higher ed, then, is to provide an education that seems valuable in its own right, including in those parts whose products can be purchased.

(Jacobs also points out another feature of the AI world: it’s not really free, and by losing the ability to do the thing for yourself, you’re caught in a market with producers who can fleece you. OpenAI, for example, has put its high-quality GPT-4 behind a paywall, but it’s still a lot cheaper than college. Competition may further reduce costs, and once a model is trained it’s not particularly expensive to run. The shelf life of this critique might be pretty short.)

I think I can say for myself why I find my education valuable, but part of that is because I’ve actually done it (a lot of it). My subjective sense of its value isn’t the kind of thing I can communicate to someone else. They have to experience it for themselves. But for many students, it seems perfectly reasonable for them to be doubtful about the intrinsic value of (any particular bit of) knowledge. They can say, “I’ll take your word for it” and then go use the AI tools when they need them.

Second, the AIs are immature. A common response to the explosion of AI tools is that they’re actually not very good at more complicated activities. For example, they often have trouble with basic arithmetic. If you give them extended tasks (beyond their context windows) they can veer off topic. And so on.

But this is also a problem with students. Every teacher can give examples of students who make bone-headed errors in assignments. LLMs hallucinate facts; but so do students. LLMs lose focus; but so do students.

Indeed, Timothy Lee’s really helpful LLM explainer uses the term “attention” to describe what these tools are doing in the depths of the algorithm, as far as we can tell. Sometimes LLMs fail because they don’t pay attention properly. Ahem.
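(An aside for the curious: here is a minimal sketch of the “attention” idea, with every number and dimension invented purely for illustration. Each token scores every other token for relevance and then takes a weighted average of their representations; when the weights land in the wrong place, the model has, quite literally, failed to pay attention. This is only a gesture at the mechanism Lee’s explainer describes, not anyone’s actual model.)

```python
# Toy scaled dot-product self-attention. All sizes and values are made up;
# real models use learned projections and thousands of dimensions.
import numpy as np

def attention(queries, keys, values):
    """Each query scores every key; the output is a weighted mix of the values."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)            # relevance of each key to each query
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax: each row sums to 1
    return weights @ values, weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))     # three toy "tokens", four dimensions each
mixed, w = attention(tokens, tokens, tokens)
print(np.round(w, 2))                # each row shows where a token is "paying attention"
```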

So if the immature students illustrate that there could be value in education even in a world of super AI tools, the immaturity of AIs suggests that the current (accurate) criticisms of their abilities aren’t quite sufficient. AIs aren’t perfect right now, and perhaps will never be. But might they become much better than humans? I don’t see why not.

A lot of the critiques of AI’s competence seem to me to apply to children just as well. We don’t demean children for their ignorance and foolishness because we expect them to learn as they grow up. We also don’t put them in charge of things until they’ve established their abilities. We accept their limitations for the time being and then expect them to improve with age and experience. But why wouldn’t we expect AI tools to grow up as well?

The problem, then, is that teachers who rework their courses to resist AI will end up having to do it all again in a few years when the AI has gotten better. If a student can’t improve by retaking a course, then something is wrong with the course. But AIs will have lots of opportunities to “retake” courses, and eventually the AI tools might perform the way an average student would perform after taking the course a dozen times. I’m not persuaded by the critique that AI isn’t and never will be very good at this stuff. Many of its current limitations seem like the same kind of limitations that immature humans have. Absent a good account of the technical limits on AI, or a good theory of what “intelligence” consists in, I don’t see how to avoid the possibility that AI tools might become unequivocally superior to the vast majority of human intelligences.

This is a complicated problem for education, but I think it’s really the same problem we’ve had ever since we started expecting basically everyone to go to school. For a reasonably large subset of society, school doesn’t have much value. AI is making that subset grow. An adequate response will probably require rethinking the purpose of education, which in turn might require rethinking the goods of human life. And where better to do that than in school?

I am not an expressivist, but…

This old blog post seems quite accurate, even prescient. I’m writing this mostly so that I have an easy link back to it.

The argument claims that the whole online medium aims at a basic emotional response. The ur-interaction online is the equivalent of thumbs up or thumbs down. This makes people feel powerful, as if they deserve to be consulted about the value of whatever it is they’re looking at, or as if everyone else is owed their opinion of how things might have been done differently.

But it’s remarkable, from a philosophical perspective, how nicely this analysis fits with expressivist theories of value. As I say, I am not convinced that expressivism is the correct theory of value, but it sure looks like you get something like it when you boil down a lot of our social life.

Perhaps the blog post suggests one of the reasons I dislike expressivism (though not a reason to disagree with it). The broad expressivist program that encourages quasi-emotional votes on everything valuable makes it too easy to render an opinion. And so people have opinions about everything, even things that they really have no business opining on.

Persuading the right audience

It’s hard to persuade someone if you don’t understand what they’re already thinking. There’s a strange debate going on in the media right now about whether the FDA/CDC’s “pause” on the Johnson & Johnson vaccine for COVID is likely to improve vaccine hesitancy or make it worse. As of now, there have been six serious blood clots among people who have had the shot, and one person has died. Proponents of the “pause” say that confirming the safety of the vaccine will eventually make people feel more confident about it. Opponents say that this just adds ammunition to the anti-vaxxers’ stock.

As far as I can tell, people who think that this pause will increase vaccine confidence just don’t understand who they’re trying to persuade. Some media and academic types have observed that many anti-vaxxers right now are Trump voters, and this is true. But I don’t think I’ve seen much about what can be done to persuade them to get the shot. These are not typical anti-vaxxers; their objections are not necessarily about the safety of the vaccine. Rather, their objections are about the necessity and sufficiency of the vaccine.

Folks who work in public health are sort of by definition going to think differently than very populist libertarian types who think public health is at best meddlesome, and at worst a subtle (or not so subtle these days) means of illegitimate control. So consider this brief post an attempt to reconstruct the populist view, and then see if you think the pause will help convince these people to get vaccinated.

Because it’s necessary these days, let me pause <wink> and put some cards on the table. I think it is hard to conceive of any moral justification for the FDA/CDC’s decision. I grant that they have a reason to act as they did. But merely having “a reason” is not good enough. No moral theory or system that I can think of would justify this decision. (And yes, I do think I have the relevant expertise to say this.) I am also generally well-disposed toward public health efforts and good healthcare, and I think public health is a legitimate function of government. I am…not a fan of Trump.

For some data to support the following analysis, refer to this latest YouGov poll. (Yes I know it’s one poll; it’s just an illustration.) Yet most of my reconstruction below comes from actually talking to some of these people about this.

(Partial update: YouGov compared results from people who took the poll before the announcement with those who took it after. There’s a big drop in confidence in J&J’s vaccine. I’d be interested to see the demographics of the change, or even a question like “After hearing the CDC’s announcement, are you more or less confident in the safety of the vaccine?” I suspect Trump voters’ opinions aren’t the ones changing.)

These populist anti-vaxxers (PAVs) think that vaccines are unnecessary. They think that COVID in general has been overblown. The risk is nowhere near as high as the “liberals” in government have made it out to be. Most people who get COVID won’t get terribly sick, most sick people won’t infect anyone else, and the economic and social fallout from a year of tight restrictions will be far worse than the disease. (Note that these beliefs aren’t obviously false.) COVID has just become an excuse for the government to impose on citizens’ lives and to infringe on their liberties. (See poll question #6.)

The “pause” just boosts this belief. If the government were really worried about COVID, they would be doing everything possible to get shots in arms as fast as they can. But now they’re willing to stop giving one of the shots—the one that is the most convenient on several dimensions—possibly for a few weeks to assess the risks implied by one death.

If you’re disposed to think that this is really about government control, and that vaccines would end the pandemic and thus the political cover for that control, it is hard to see how the decision would do anything but strengthen your belief.

(Again, for clarity, I think these folks are wrong about the risks of COVID. And also unfair about the motivations of public health officials.)

PAVs also think vaccines are not sufficient. (Poll questions 22-24.) A decent number of Trump voters (and ideological conservatives) believe that it is not necessary to wear a mask right now. They also think that travel is safe. Some of these beliefs imply that the vaccines aren’t necessary. But they also show that these voters think getting vaccinated won’t really change anything. The FDA, CDC, and other public health figures have not helped on this point. Refusing to describe an “end-game” to the pandemic has been a constant source of frustration to these folks. The goal posts keep moving. Remember “15 days to slow the spread”? Yeah.

So suppose that the CDC and FDA do the review and determine that the vaccine really is safe after all. Yay! (Does anyone really think that there will be a different result?) What has changed for the PAVs? Not much, it seems. They didn’t really doubt that the vaccine was safe. (See poll questions 11-19.) They just doubt that there’s any point in taking it, so what’s the rush? And look! The CDC agrees that there’s no rush!

The trouble throughout is that public health types believe that these anti-vaxxers don’t trust the vaccine. But that’s not right. They don’t trust the government, and especially the unelected federal bureaucracy.

So I think the “pause” will have little notable effect on populist hesitancy. (Though compare poll questions 14 and 18. It will be interesting to see how #18 changes in the next couple of weeks. And see this NY Times piece on the worldwide impact.) It won’t do anything to address their actual concerns. If anything, the pause will likely entrench their belief that their government really doesn’t have their interests in mind, and doesn’t really believe its own messaging. And honestly, I’m not sure how to show they’re wrong about that.

When is it OK to break the rules?

Last night I was part of a panel at Regis College on various matters related to COVID. My task was to talk about some of the ethics issues that the last year has presented. I decided to focus on how rules and exceptions work, and this post is a kind of follow-up and elaboration of one of my points.

I posed this question to the audience.

Imagine you’re a nurse working the COVID vaccine administration desk. A young woman shows up in the vaccine clinic and asks if she can have the shot. She is not eligible according to the current phase of vaccinations, and in fact probably will not be eligible for another 6 weeks. But she begs for the shot, saying that she’s virtually a single mom, with three kids under 8 yrs old at home and a husband who works long and difficult hours. She can’t afford to get sick, and she’s terrified that she might get COVID and be unable to care for her kids, who are themselves exhausted and frazzled from months in semi-quarantine. You can easily tell just by looking at her that she has had a very hard year.

Do you give her the shot?

One nice feature of having to do this over Zoom is that I could get immediate responses via a poll. About 40% of attendees said Yes, and 60% said No. This was almost exactly what I expected, because it’s supposed to be a hard case with no obvious answer. Indeed, I think that if you believe the answer is obvious, you’re probably not thinking about the case very well.

It would be easy to adjust the story to make one answer or the other more likely. For example, suppose it’s the end of the day, and each day for the last few weeks there have been a few open vials of vaccine. You might just suggest that the young mom wait for a while in hopes that there will be some “extra” doses. Or you might know that you have been struggling to get enough patients to get the shot—they just won’t come in—and if you don’t use your supply you’ll lose the next shipment.

In the other direction, perhaps your clinic is subject to detailed audits, and if it’s discovered that you didn’t follow the rules, you’ll be fired. Maybe your boss will be too. Or maybe you’ll have other penalties levied against your clinic. Or perhaps there are dozens of other people in the line all hoping for the same kind of special treatment.

The point is that the myriad facts of the case will probably affect your decision. And you have to make a decision. “I don’t know” isn’t an option, for it implies “No” in this case.

I’m interested in those who are very sure of their answer, given the limited details.


Start with the confident “Yes”. Even given the abbreviated story, I don’t think Yes is obvious. The argument for it is that the mom’s life would genuinely be better with the shot; she needs it possibly more than others who are currently eligible. A 65-year-old who is relatively healthy, lives at home, likes being alone, and can carefully manage their COVID risk might be eligible, but it would be hard to say that they need the vaccine more than the young mom. Basic human sympathy should at least give some weight to the mom’s request, and the rules are too crude to get every case right.

So I think Yes is defensible, but there are at least two good reasons to hesitate anyway. First, it really is against the rules, and the mom doesn’t have an agreed-upon moral excuse to jump the line. Below I’m going to criticize a kind of rule fetishism, but it is possible to err in the other direction and think that the rules should easily fall to immediate, evident neediness. Rules that admit too many exceptions, especially unpredictable ones, no longer serve the purpose of allowing us to coordinate our actions. Sometimes rules need to be followed even if it’s inconvenient or somewhat suboptimal, if only to ensure that we all still recognize the value of having rules.

Second, by asking you for an exception—to break the rules—the mom is putting you in a moral dilemma. She may have good reasons to want the vaccine, but just as you have direct moral duties toward her (more below), so she has duties toward you. And one of those duties is to not needlessly place you in a moral dilemma. The precise details of the case will matter a lot on this point, but the general principle holds. It is at least ungenerous and sometimes unfair to behave in a way that forces an authority to have to exercise their control over your situation when you could avoid this by simply doing what you know you should. It is not always OK to even ask for an exception. Doing so may cause moral distress in the person who has to say No.


Now for the confident “No”. There are two main arguments for saying “No.” Let’s take each in turn.

First, it’s unfair. It’s not the mom’s turn, and it would be unfair to everyone else if she could just skip the line. This is true, but it’s just the nature of exceptions. You and the mom might agree that it would be unfair, but in a sense that’s exactly what’s at issue. Not everything will be perfectly fair. The question is what justifies the unfairness. Pointing out that it’s not fair just restates the case. Moreover, obsession with precise fairness is childish and naïve. It is childish to ungraciously demand that every rule be followed perfectly, and every good be distributed equally. Suppose you said Yes and gave the mom the shot. Later she tells some friends that she got her first shot. One of them says, “That’s great! I’m so happy for you.” The other says, “What?! How did you get it when I can’t? That’s not fair!” I take it that the first response is the better one, particularly given the kind of case. (Vaccines aren’t that scarce.)

Second, it’s against the rules. This was, I think, the most common reason that the audience last night said “No.” It’s worth exploring in some extra detail.

In a sense, this response also kind of misses the point of the example. Of course it’s against the rules. But why think that you have to follow the rules exactly in this case? Yet I think that for a lot of people, and especially a lot of people in professional settings, the fact that there is a rule is supposed to conclude the debate and deliberation. There is no moral dilemma anymore, because there is a rule. I barely need to think; I can just apply the rule. It might be uncomfortable to tell the mom she can’t have the shot, but it’s clearly the right thing to do.

I think this attitude is one of the reasons that many laymen dislike interacting with the medical community. They find the rules confusing and complicated, designed more for the smooth functioning of the hospital or clinic than as actual moral standards that need to be respected. The rules seem to protect the establishment, rather than the patient. This complaint is unfair in many ways, but I think there is a kernel of truth to it, and more than a kernel when it comes to rules about vaccine distribution and other similarly unusual events.

The trouble with woodenly enforcing rules is that it sidesteps moral judgment. In many cases, moral rightness really is encoded in rules, and following the rules yields the right result. Having a rule not only sets a moral standard, it also makes moral deliberation easier and faster. Speed is often valuable in medical settings, so by establishing good rules, decisions can happen faster. Furthermore, clinicians’ moral sensibilities and judgment are shaped by the rules, so even those clinicians who slept through their ethics classes (perhaps with good reason), but have internalized many of the procedural norms of the clinic, will generally do the right thing.

But in novel situations, as with many of our COVID-specific rules, the rules themselves have not had the kind of extensive real-world testing that long-standing norms have. Many of them have been created with partial information (e.g., fomite transmission vs. aerosols) or by analogy with other epidemics. There are still notable gaps in our understanding of the disease. Moreover, the basic social aims that the rules intend to serve are disputed and disputable. There really is a substantive, good-faith debate about who should have priority in the vaccination schedules, as well as about many of the other high-profile rules (e.g., mask wearing situations, outdoor events, etc.).

Sometimes we have to have a rule so that we can generally predict each other’s behavior, even when we aren’t sure what the best rule would be. Good enough is sufficient for the moment. The rule serves a purely pragmatic function. The problem is that rules justified on merely pragmatic grounds can look a lot like rules that are justified on moral grounds, and often the people enforcing the rules don’t distinguish between these two types very well.

When the rule is merely pragmatically justified, it gives little guidance on the moral situation. The moral status of a particular case can’t be determined directly from the rules. For example, there is a rule in American society about which side of the road one should drive on. This is a pragmatically necessary rule, given the nature of driving. But there is no moral fact about which side of the road is better in general. Driving on the right is not morally correct because something about the right side of the road is intrinsically better. People in the UK who drive on the left are not doing something wrong. The right thing to do is to follow the rule, whatever it happens to be. But knowing which side of the road to drive on doesn’t really require driving judgment. It merely involves following the norm that everyone else is following. There is no “deeper” explanation.

Thus, the question for the vaccine case is this: Is the distribution rule a merely pragmatic rule, or does it encode a moral principle? I think it pretty clearly has to be the first one. A rule that is supposed to cover so many possible cases, in such a novel situation, almost certainly cannot perfectly align with the morally right thing to do in every situation. Refusing to admit that there could be special, exceptional cases in which doing the morally right thing requires breaking the general rule just mischaracterizes the rule itself. Now that there is a rule, there is some reason—perhaps a fairly strong one—to follow the rule in most cases. This reason is like the reason you should drive on the right in the US: not because it’s morally right on its own, but because it’s the way to avoid the real moral dangers of uncoordinated traffic.

The moral relationship between you and the vaccine-seeking mom requires you to give her a reason for not giving her the shot. “It’s against the rules” is an inadequate reason when the rules are merely pragmatic. It has the same moral value as responding “It’s the law” to the question “Why do you drive on the right instead of the left?” It’s not false, exactly; it’s just not an answer. By shortcutting the moral judgment, the moral justification for saying “No” disappears. But if you exercise your moral judgment, you might decide that the rule is getting this case wrong, and determine that you should give her the shot.

In this kind of situation, where the rule is a mere tool for coordination and does not (clearly) encode a moral requirement, it is much easier to justify breaking the rule. Indeed, I think that in this particular case, it is probably easier to justify breaking the vaccine distribution rule than the moral rule about giving a sufficient justification for saying “No.” But the case is still hard, and I think it is still not obvious what one should do.


I think a danger of working in highly rule-governed (i.e. regulated) environments is that they encourage people to forgo their moral and practical judgment in favor of following the rules. Cases that are in fact relatively difficult can seem to have obvious answers because of the shortcuts that the rules provide. This has at least two bad effects. First, it causes practical reasoning skills to atrophy, which in turn can make people less sensitive to ways in which policies really can be actively bad, and not just suboptimal. Sometimes particular rules are bad, and not merely because they are ineffective. It isn’t good to lose a conceptual vocabulary by which we can critique the rules on non-pragmatic grounds. Second, it can encourage a kind of vicious Pharisaism in which all of the boundaries are policed with equal severity. Rule-followers can think themselves better than others, including those who actually use good moral judgment, merely because they follow the rules well. I have been worried for most of this last year of COVID that people will face recriminations for exercising their best judgment contrary to the established norms, but it is precisely this kind of novel situation that requires people who have good judgment to make good decisions.

What is a public health ‘guideline’?

We are now entering year two of COVID-tide, and an effective vaccine to stop the pandemic appears to be close at hand. It has been a tough year, and life will probably continue to be abnormal for a while, if public health and epidemiological experts are to be believed.

The pandemic has forced many people to learn a lot about the world very rapidly. Many of us have had a crash course in epidemiology, immunology, and public health over the last year. One thing we seem to have also learned is how complex and impotent our public health institutions are.

One of the problems in public health is that many public health experts have no real practical authority. They’re academics, and most of their conversations in ordinary circumstances are among themselves. The public health officials do have some authority, but it is often limited to medical providers and the adjacent industries (think CDC, FDA, etc.). In ordinary circumstances few people would think there is anything strange about this constrained mandate. Indeed, there is a vigorous (though small) cottage industry of ferreting out strange make-work regulations from these agencies, thereby indicating that even the limited mandate may be too broad in ordinary times.

The trouble now is that these public health agencies (and even more, academics) can’t really make rules for the general public. They issue "guidelines". Both a rule and a guideline are a kind of norm, and so it is common to see the words used interchangeably, particularly in these quasi-medical contexts. There is, however, an important difference between a guideline and (what I will call) a rule, and I want to think out loud about this difference for a bit.


Put simply, a rule requires enforcement, whereas a guideline is merely advice. If we distinguish these two concepts in this way, it helps illuminate the problems we’re having with all the various pieces of public health and medical advice we’ve gotten over the last year.

Start with a rule. If we don’t enforce compliance with a rule, it is hard to see what practical import it has. Enforcing public health rules is really, really hard. Enforcing rules in general is hard, but in this case, we’re trying to deal with many kinds of behaviors performed by many kinds of people in many kinds of situations. It is implausible that a single one-size-fits-all rule could cover every case. And so it is hard to enforce the rule in a non-draconian way.

Consider, for example, the recent news that NY governor Andrew Cuomo was going to levy fines and other penalties for not following the state’s vaccination "guidelines". This action is totally reasonable and incredibly stupid all at once. It is being made by the proper person: only Cuomo, or some elected official like him, plausibly has the authority to punish in this way. It also gives a powerful extrinsic motivation to comply. However, it is far too powerful, and in this way quite stupid. At a time when vaccines have an extremely short shelf life and are in very limited supply, while also being extremely effective, making people second-guess their use of the vaccine is a bad idea, for it makes it more likely that vaccines will be wasted. Better for the vaccine to go in the wrong arm than in no arm at all.

Other public health measures have proven very hard to enforce. Mask-wearing is a notable case. I can’t think of much actual argument in favor of the moral or civil right to go unmasked. (This essay complicates matters somewhat, though I don’t think it makes a case for a right.) Preventing social gatherings has also been difficult, not least because there are many different sorts of them, and some have been "approved" for political or religious reasons.

Any rule that is simple enough to remember will necessarily have some exceptions in the wide variety of relevant contexts. This adds another difficulty to enforcement, because it is not the case that every instance of non-mask-wearing (for example) is wrong, or even against the rules. Trying to suss out every possible case is a fool’s errand, and a waste of political or moral authority.


Because the rules are hard to enforce, and because they are often issued by people who have no practical authority, they often come in the form of "guidelines." A guideline is basically a kind of structured advice, given in the style and tone of a rule. It is notable in that it is effectively unenforceable by the one making it; if it were enforceable, it would be a rule, and it would require real enforcement.

Some public health "guidelines" are actually rules, particularly when they constrain the actions of various other actors. Medical guidelines, for example, are often really rules for medical professionals. Failure to comply can earn one a hefty penalty. Cuomo’s order mentioned earlier is like this. The NY public health officials promulgated "guidelines", but Cuomo’s actions reveal that these are really rules, since there are penalties for non-compliance.

The trouble with true guidelines is that they have only as much authority as advice does. Guidelines about the size of gatherings, for example, depend on groups deciding to follow the norm. Other groups may decide that they care about their fellowship more than the guideline, and it is hard to clearly say what is wrong about this. (To be clear, I think there often is something wrong about flouting the guidelines, for reasons I’ll get to momentarily.)

Advice is a peculiar thing. Agnes Callard offers a helpful distinction between three different things: "instructions", "coaching", and "advice". Asking for advice, in this trichotomy, is "instructions for self-transformation." That is, it is asking for coaching delivered in the form of instructions.

I think a lot of guidelines are trying to do almost exactly this. And this is why they fail. When public health experts issue guidelines, they are appealing to epistemic authority rather than practical authority.

The transformation that people are seeking in public health guidelines is increased knowledge of what to do in a novel public health emergency. Few of us have any personal understanding of all of the complex features of a global pandemic. We need to know what to do for our own safety and well-being, and we look to experts to give us insight. But the experts can’t give us a graduate-level education ("coaching") in epidemiology or any other technical field. (And often they are unable to even explain their own field to non-experts—a real weakness of many kinds of expertise.) They’re forced to give fairly generic and vague bits of practical wisdom. Fundamentally, they’re trying to supply an education in the form of practical instructions. They need us to think differently about various activities, but they lack the time and opportunity to teach us how to understand. So they give instructions—practical maxims that look more like rules.

It turns out that a lot of people seem to be genuinely looking for just this sort of thing. They want to know what they can do, and generally are willing to follow the instructions, even without external enforcement. Yet because the instructions are generic and impersonal, individuals can gain knowledge without complying. There is some evidence that this is how a lot of people are operating.

(A topic for a different post: some people already have the relevant knowledge, and often it’s far deeper and broader than the guidelines can provide. E.g., people who live in E. Asian countries and have past experience with pandemics. It isn’t unreasonable to listen to them for advice rather than or in addition to "science.")

Yet when people use guidelines to increase their knowledge, but then supplement that information with their knowledge of their own particular circumstances, sometimes they choose to not obey the guidelines—the instructions—even as they benefit from them. Thus the guidelines "fail" to change behavior, which is what they are intended to do. How then is it ever possible to promote compliance with the public health norms without turning them into rules with official enforcement?


There is a large class of norms that aren’t enforced (or enforceable) by the state, and yet substantially constrain our actions. We might call these "manners". Having bad manners won’t get you fined or put in jail, but it will have consequences, most notably your exclusion from certain types of society.

Manners are famously opaque to those outside the society that uses them. They seem pointless or excessively fussy, and often the social opprobrium directed at mannerless behavior seems to far exceed the immediate practical consequences of the faux pas.

Something similar seems to be true of many current public health guidelines. There seems to be little public health need to wear a mask while jogging, for example. Yet at least in some places, appearing outdoors without a mask for any reason is treated as a serious error. To those who care, this treatment is enough to promote compliance (and, crucially, to perpetuate the norm by "enforcing" it against others). To those who don’t care, there is little one can say. If someone doesn’t want to be part of the mannered society, it is hard to justify complying with any of its norms. In this way, manners resemble instructions. Instructions are useful only if you want what they aim at. Some people just don’t care whether others (usually described as "elites" or "liberals") approve of them, and so the informal social enforcement mechanisms just don’t engage.

Further, as I hinted, mannered behavior gets perpetuated by being enforced by the participants, rather than by some central authority. We’ve seen this too. Ordinary citizens berate one another for not following certain guidelines, as if the guidelines empower the man-on-the-street to enforce the norms. If you asked these self-appointed police whether they believe they have any legal authority, they would say of course not (most of the time). But they clearly think they have some right to demand compliance with the norms. This makes a lot more sense if the norms are like manners, where there is no central enforcing authority and each participant is at least somewhat empowered to police the standards of right behavior.

Finally, there are reasons to comply with manners, even if you think they’re stupid. Often manners are the way that a society demonstrates respect for its members. There may be many different possible systems of norms that indicate respect or care for one’s neighbors, but within a given context the individual usually doesn’t have a choice about which system to follow. Following public health guidelines often takes this form. It may be true that in a particular situation a mask is unnecessary (e.g., while jogging), but wearing it demonstrates that one is willing to limit one’s own freedom out of care for others, and that is a useful message on its own. Similarly, forgoing group gatherings to limit the rapid spread of infectious disease may demonstrate respect for the health care workers that are physically, mentally, and morally exhausted, even if you know that no one in your group actually has the disease.


In sum, I think there are good reasons to comply with public health guidelines, but I also think there are some real limits on how much we can say to those who don’t want to. Fundamentally, most of the norms coming from the public health and other science-tinged domains are just advice. It’s probably mostly good advice, but it’s also not irrational or immoral to ignore it. The same is not true for rules that public officials have issued. If you think you are obligated to obey the government, then you should obey their public health mandates. But public officials should be clear too. For those norms that are really important, public officials who have the relevant authority need to use that authority and actually enforce the norms. Though, as I suspect many have realized, doing so may cost them their job. So be it.

A political difference of note

Much digital ink has been spilled on the increasing political divide in the United States. Yet on this election day in 2020, it struck me that there is a difference I haven’t seen discussed, and that may be relevant for explaining different kinds of political enthusiasm and for thinking about the next few years. I, like so many others, would love to see the political temperature of the nation lowered. However, it is hard to see how this is likely to happen so long as the citizenry fails to obey the Psalmist’s injunction to “put not your trust in princes”.

It seems to me that the typical Democratic partisan has no personal memory of a “bad” Democratic president. The last two Democratic presidents were Bill Clinton and Barack Obama. Democrats largely regard both as generally successful chief executives. (Obviously Republicans disagree.) You have to go all the way back to the 1970s to find a “bad” Democrat—Jimmy Carter. Though Mr. Carter is still alive, few enthusiastic Democratic voters will remember his administration. I suspect they’d have to be at least in their 50s to have any meaningful political memory of that era.

In contrast, the typical Republican voter can think of at least one Republican president that they would regard as bad. Trumpists likely regard George W. Bush as a bad president (at least in some key respects), and every time John Roberts fails to give them what they want, their opinion of Bush diminishes a little more. Hurricane Katrina capped a string of seemingly significant policy failures, whose archetype was the Iraq War. Never-Trump Republicans think Trump himself is a bad president. Reasons for this belief are too many to enumerate here. Even those Republicans who don’t think either Bush or Trump are all that bad may be able to remember George H. W. Bush’s four years, with their mix of global upheaval and failed promises. (“Read my lips”—look it up.) Ross Perot, the O.G. Trumpist, got such traction because Bush Sr. was disliked by Republicans.

Now, when I say “bad” president, I don’t mean morally bad. I don’t want to register an opinion on that aspect. (That’s not to say that I lack an opinion on it…) I just mean that they were not particularly good at the job. Whether through administrative or political errors, often combined with bad luck, they didn’t accomplish what they set out to do.

(Note too that this is about how their administrations are perceived by their supporters. Republicans could give a long list of policy failures they would attribute to recent Democratic administrations, and vice versa. Nor do I intend to say that these administrations actually are inept about everything. Sometimes they can accomplish good things that never really make it to the general political consciousness. This applies to Trump too.)

Given this history, it seems comparatively easy to understand how Trump voters favor him as a kind of totem—a way to “own the libs” or “drain the swamp”—rather than as a skillful chief executive. Republicans are familiar with having bad chief executives, so it’s easier to ignore all of Trump’s failings on that score. They expect relatively little of him on the policy front, and instead relish how he makes the political class act insane.

Democrats think their presidents have been pretty good chief executives. Though there are some that wish Clinton and Obama could have pushed an even more left-wing agenda, Democrats seem to regard those administrations fondly. (After all, the latest two Democratic presidential nominees were very much part of both administrations.) The country was generally pretty prosperous and relatively peaceful. Major policy changes got put in place. And so on. These successes feed an existing technocratic impulse that shades to the left.

I fear that Mr. Biden has all the markings of someone who might not be a good executive. He has little experience in that kind of role, and in his few opportunities it seems that his achievements have been slight at best. Technocrats would seem to tend toward good government, though sometimes they are better at policy than at politics. Perhaps a Biden administration would find good subordinates to handle the essential executive tasks. (Merely filling key administrative posts would be welcome.)

I do not think ineptitude in the White House is a good thing, regardless of which party its occupant represents. Right now we need an administration that can get some things done. An effective White House would be a nice change, and in the moment, a critical necessity. (See Kevin Williamson on the crisis of political competence in the 21st century.) I strongly suspect that basic executive competence from our national government would do much to reduce the political rancor in our society.

But for the sake of lowering the political temperature, I wonder if it would be salutary to have a relatively inept Biden administration. (And, to be clear, as I write this, I have no idea whether a Biden administration is going to occur.) It might be useful for Democrats to be reminded that just having a D after one’s name on the TV screen doesn’t make you a good politician or a good administrator. It is good for us to be disappointed by our political leaders every so often. A great many Biden supporters seem to think that his election would bring profound change to the country. I doubt that this is actually so. I’d be very glad if a change in administration would restore some executive competence, and I can’t really hope for something else. But I do wonder.

A duty to be informed

Philosophers are discovering a host of new arguments for the value of their discipline these days. COVID-19 has pushed to the front a variety of topics that philosophers think about frequently, though often in bloodless, abstract terms. Ethics of triage and scarcity, for example, has moved from models of trolleys and organ donors to real-life questions about who should get limited medical resources.

Epistemology and philosophy of science are also getting their day in the sun. Much of the anxiety about COVID-19 arises because we just don’t know much about it, so the range of reasonable beliefs about the outcome of this all is very wide. People are discovering that science involves more than crude applications of a technique, and that real scientific expertise includes practiced judgment about hard-to-quantify uncertainties.

I suggest that this crisis illustrates an interesting combination of ethics and epistemology: a duty to be informed. For some, this duty is quite extensive, but I think there is a case to be made that anyone making or influencing decisions right now has some degree of a duty to be informed about what is going on. A duty to be informed is not a duty to be right, for that would be impossible. Instead, it is a duty to sincerely and virtuously seek to acquire more knowledge—to be a good knower; to apportion belief according to evidence; to reason well; to avoid bias and remain open to correction.

I’ll start with the obvious cases: those in positions of authority. Our public officials are making huge, life-changing, society-altering decisions every day. They already have extensive public duties; that’s what the job requires. (Actually, one might say that they have public obligations, since they "volunteered" for their positions.) I think it is obvious that public officials should seek to be informed about the facts of the situation.

But we can say a little more about what being informed requires. First, it requires that they take into account the facts. Whatever we know about COVID-19 should be included in their deliberations. Facts are true or false. If two public officials disagree about some fact, then at least one of them is wrong.

Second, they should be actively seeking better information. Jason Brennan has been arguing that a lot of our public officials are making huge decisions without trying to improve their knowledge, and just falling back on facile "trust the experts" platitudes. The initial response to COVID-19 has been very strict, in order to account for uncertainty, and let us grant that strict rules were at least initially justified. (They almost certainly have been.) Yet severe measures may lose their justification as we learn more. So much is uncertain or unknown, but knowable, and public officials are uniquely poised to accelerate our learning. It seems as if there are daily updates to the best estimate of COVID-19 infection rates, fatality rates, treatment capacities and strategies, etc. Some of this information can’t be updated overnight, but the process can at least be underway, and it isn’t obvious that we’re actually making a lot of progress on this front (or that our public officials are leading and coordinating it).

Third, public officials should be reasoning well. The duty to be informed includes not just acquiring lots of true facts, but thinking about them effectively. They need to reason correctly about scientific and mathematical facts, such as sampling error, uncertainty, Bayesian conditionals, endogenous and exogenous variables, lagging indicators, and even basic arithmetic. (From the beginning, politicians, media personalities, and—sadly—some scientists have been making elementary errors even in multiplication and division.) They also need to have some basic awareness of how to evaluate scientific research, or at least have trustworthy advisers who can do so. Here we can include economists among the scientists, for many decisions are not merely medical decisions.
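To make the point concrete, here is a small worked example of the kind of conditional-probability reasoning I have in mind. Every number below is invented for the sake of illustration; none of them are actual COVID-19 statistics.

```python
# Bayes' theorem with made-up numbers: how likely is infection given a positive test?
prevalence = 0.01        # assume 1% of the tested population is infected
sensitivity = 0.90       # assumed P(positive test | infected)
false_positive = 0.05    # assumed P(positive test | not infected)

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_infected_given_positive = sensitivity * prevalence / p_positive

print(f"P(infected | positive test) = {p_infected_given_positive:.2f}")
# Roughly 0.15, not 0.90: conflating P(positive | infected) with
# P(infected | positive) is exactly the sort of elementary error at issue.
```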

Public officials also need to think well about ethics. Some seem to think that preventing any loss of life from COVID-19 justifies any amount of public restrictions. Others seem to think that having 1-2% of a country’s population die from this disease is an acceptable tradeoff, even though for most countries that would make this disease the deadliest event in the last few centuries. Or they think that it’s OK to let older and sicker people die, because…? It is usually a mistake to put a dollar value on a life, but when making public policy we have to do this all the time. Refusing to acknowledge the tension is just bad reasoning, and thinking simplistically about what makes a life good won’t help either. Perhaps more common are officious public officials who appear to think that crises permit them to be "punitive and capricious". A crisis does not change what the government can legitimately do, and if anything, a crisis is a good opportunity for showing patience and forbearance.

Other public figures bear some of these same obligations, though perhaps to a lesser degree. I suggest that our media figures are nearly as responsible as our public officials. Because media types don’t actually have to decide, they are uniquely positioned to be critical. Yet being merely critical shirks responsibility, for it is easy to get attention just by being contrary. At the same time, many of our public officials desperately need their decisions challenged, if only to force them to improve their communications. The media can both inform the public, and also criticize the decision-makers. But to do so, they have to be as well-informed as anyone.

We can move on down the tree of responsibilities. Employers obviously have some duties toward their employees. Their capacities are much more limited, but so is their scope of concern. Pastors owe it to their congregations to be informed so that they can make good decisions (which might, at some point, involve disobeying poorly-informed public officials). Heads of households should know what will affect their own families.

Even a single individual has at least a mild duty to be informed. As this crisis has revealed in great detail, our actions affect others whether we intend them to or not. Complying with public policies, heeding medical advice, and caring for others around us requires us to understand to some degree the implications of our own decisions. We have to know enough to exercise good judgment, and at least for that we each have a duty to be informed.

One final word about duties: I don’t think duties are absolute. We all have many duties, and being informed is just one of them, and one that may compete with others. If someone starts forgetting to feed their kids because they’re trying to keep up with the latest research, that’s not good (definitely my temptation). But I think a duty to be informed is one of our duties, and so we ought to take account of it when deciding what would be the best use of our resources.

Crying wolf and doing our part

A number of people have noted that it is hard to persuade people to take COVID-19 seriously because it feels like the boy who cried wolf. Previous outbreaks of infectious disease, such as Ebola, SARS, and MERS, have generally been contained in a few regions, so the cries of pandemic have seemed overblown. To some people’s minds, this latest outbreak is just another in a long line of cases where media and public officials have restricted liberties and spread what feels like unnecessary panic. More cynical observers might even say these crises are pretexts for greater government control over citizens’ everyday lives.

The trouble is that this particular outbreak looks a lot more like a real wolf. As of writing, the growth in cases around the world exhibits the classic signs of exponential growth, and there is compelling evidence that many countries are severely under-reporting the actual number of cases (including, it seems, the United States). Furthermore, this virus seems to be in the "sweet spot" for a public health concern, for it isn’t so deadly that it burns itself out (like ebola), nor is it so mild that medical facilities can absorb it (like the common cold or the seasonal flu).
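To see why “the classic signs of exponential growth” is such an alarming phrase, here is a toy calculation. The doubling time is a made-up number chosen only to show the arithmetic; it is not an estimate for this virus.

```python
# Exponential growth with a hypothetical doubling time.
initial_cases = 100
doubling_time_days = 5          # assumption for illustration only

for day in range(0, 61, 10):
    cases = initial_cases * 2 ** (day / doubling_time_days)
    print(f"day {day:2d}: ~{cases:,.0f} cases")
# With a 5-day doubling time, 100 cases becomes roughly 400,000 in two months,
# which is why modest under-counting early on hides an enormous problem later.
```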

But there is another aspect of "crying wolf" to consider. The way to combat this virus is to create "social distance" so that the virus doesn’t spread as rapidly. This is a classic collective action problem, because for the vast majority of people, there is little personal benefit to social distancing, and often quite a lot of personal cost. Typically the government and media persuade by showing how a particular behavior is in an individual’s self-interest. In this case, mitigating the risk for those for whom this virus is very dangerous requires people who have almost no risk of their own to massively alter their behavior.

In short, we need everyone to pitch in and do their part, even if it doesn’t seem to benefit most people directly. But this kind of rhetoric is also common, and often has looked like crying wolf.

One can hardly walk through a museum or a zoo without being bombarded with claims about what dire things will happen if we don’t all contribute to the cause-du-jour, even when some of these causes are distinctly outdated and poor candidates for action by individuals. Individuals can only rarely affect many of these causes, such as reducing pollution, minimizing plastics, divesting from fossil fuels, mitigating acid rain, protecting the ozone layer, conserving water, preventing species extinction, etc. All of them may be good things to do, but the problems are ones of public policy or technology. Acid rain, for example, wasn’t reduced by ordinary citizens’ acting together; it was mitigated by better public policies and improved technology. The typical visitor to the museum can have only the tiniest effect on the problem, and often at a personal cost that makes this sort of ostensibly-virtuous action available only to the relatively well-off (e.g., using less fossil fuel).

But now, with COVID-19, we seem to have a real case in which it actually is important that we generally act in a coordinated way, and for which we have no time for improved public policies or technological solutions. But to people who have learned to ignore the overstated "we all have to do this together" messages in our society, and who have internalized the "what’s in it for me" style of advertising, it’s hard to explain why this time it’s different.

If everything is a crisis, then nothing is. I think our cultural elites (not a pejorative) have too often made everything they care about into a public crisis, evangelizing for their current interests, only quietly revising their predictions, rarely moderating their confidence, and almost never conceding error. And then a real crisis comes along, and no one is willing to listen.

Pulled in two ways

Events in the last couple of weeks have once again highlighted for me the tensions in the gun control debate. The United States has had yet another mass shooting—a real one, with lots of victims—and not just one event, but two. And just a few days earlier I had my own brush with illicitly-used firearms. Someone shot up my front door, either by mistake or bad aim, apparently intending to shoot at my neighbor.

(Photo: bullet hole.)

I completely understand why some folks would renew their cry for additional gun control. The more shootings of this sort there are, the more strident the cries will be, and the more powerful the emotional pull will be. Everyone seems to think that we ought to do something, but I don’t think the suggestions have really improved.

Gun control advocates struggle to convincingly claim that they aren’t after all guns, even those owned and operated legally. Often their suggestions betray basic ignorance about guns themselves, or propose policies that already exist, or that wouldn’t meaningfully affect the mass shootings that have recently plagued our society. They often don’t seem to appreciate that most meaningful restrictions on guns really will require a constitutional amendment, and that without an amendment, private gun ownership is a civil right. I think it’s wishful thinking to believe that the 2nd amendment was ever intended to be so narrow as some critics suggest, and relying on the courts to restrict guns would just add to the list of cultural hot-buttons that have been removed from the democratic process.

Now, I am not opposed to seeing a constitutional amendment. I increasingly think that a carefully constructed amendment might be just the right approach. A model that shows both the strategy and the danger might be Prohibition. The federal government, seeking to end the scourge of drunkenness in society, actually got the Constitution amended to ban the manufacture and sale of alcohol. Prohibition mostly had its intended effect, a fact not often admitted. Of course, it also had many unintended effects, sometimes in surprising places, which arguably outweighed its benefits. But as a one-time “surge” of enforcement to change the culture, it seems to have done the trick.

Perhaps something similar could be done for guns. Gun rights advocates also seem unserious about stopping mass shootings. They point out that the shooters have a variety of other issues, and they’re not wrong. (Most notably, these shooters are nearly always young, male, mentally unstable, and fatherless.) But the relatively easy access to guns is obviously also a factor. True, violent boys could do a lot of damage with knives or other deadly weapons. But a knife attack would be a lot slower and a lot easier to stop. It is also true that most of the proposals for restricting guns focus on cosmetic features, rather than actual deadly effectiveness. Yet insofar as one of the problems is the sheer number of guns in the society, any limitations, however arbitrary, might have a good effect.

Another possible advantage of a Prohibition-style constitutional amendment would be the possibility of varying local laws. Big cities like New York, Los Angeles, or Chicago might essentially ban guns entirely, while small towns in Wyoming, Utah, Texas, or Maine might not be so strict, thereby reflecting the different typical uses of firearms in these various places.

One reason I remain skeptical about gun control was highlighted for me by the events in my household over the last week. As I said at the top, someone shot my apartment door four times last Sunday morning. Two bullets went all the way through, and one of them ended up on the other side of my apartment, having gone through an interior window and hit a flashlight on my desk. It happened around 3:45 a.m. My wife got up right after it happened, thinking that someone was knocking on the door. She literally stood right in front of the door that had just had new holes punched in it before she realized what had happened. Thankfully, my kids mostly slept through it all and woke up a couple of hours later to police investigators in the living room.

We found out this week that this is not the first time this person has shot at my neighbor this month. At the beginning of the month, he shot at my neighbor in the parking lot in the middle of the day. We were out of town, and didn’t find out until after he had tried again.

This is what gives me pause about the Prohibition model: without some means of powerful, legal self-defense, we’d end up entirely dependent on the police for protection from these kinds of people, and I’m not convinced they’re up to the job. I’m not anti-police. They usually serve well, taking risks in my place, caring for people who are hard to care for, and so on. I’ll even stipulate for the sake of argument that the various high-profile cases of police misconduct are extreme outliers. My worry is that they might not be up to the task of enforcement for something so profoundly society-shaping as a huge gun-control program. The current track record of enforcing the laws that already exist isn’t great. I think it is perfectly reasonable to be not “pro-gun” but rather “government-skeptical.” Nor does it seem likely that someone like the shooter here would care much about rules forbidding gun ownership. It would matter a lot how exactly the law would be enforced.

None of this is to say that I could have done anything with a gun myself. The shooter didn’t even come all the way up to the level of my front door; he just shot from the steps. I assume he was long gone before I could have responded personally. Though it seems hard to find good information about how many crimes mere private possession of a firearm has prevented, my case couldn’t get added to the list regardless, since it was all over before I was even really awake.

As far as I can tell, my local police haven’t caught the guy who shot my door, even though they seem to be pretty sure who did it. They also haven’t been willing to talk to me about it. The officers who responded last Sunday were kind and helpful—just the sort of police you’d want. But since then it’s been crickets. I’ve learned more from my neighbors than from the men and women who asked me to waive some of my Constitutional rights so that they could collect evidence in my dwelling. I have nothing against their moral standards, but I’m not yet convinced of the organization’s competence.

For now, at least, I think this is my biggest hesitation about gun control. It’s not that it’s a bad idea, but that the mechanism for doing it relies too much on an institution that all too often doesn’t seem up to the tasks it already has. But stopping the slow-motion riot of mass shootings is a compelling aim too, so I am pulled in two ways.

Exploiting emotional labor

Casey Newton’s article on The Verge about the lives of Facebook moderators likely only adds to the growing rage against social networks. It’s worth a read. Even if stories like this often make the work seem worse than it usually is, it’s not a pretty picture.

Other journalists and bloggers have recently been writing about work and about how online communities function. On work, see Derek Thompson’s recent Atlantic essay. Thompson observes the way in which work is expected to function as one’s entire life, making it more like a religion than a job. Scott Alexander’s post on his attempts to moderate comments in his own little community is also worth considering.

These articles offer a chance to synthesize some varied thoughts about how our high-tech, information-rich, ultra-connected world is affecting us. Here is just one idea that these essays have made me think about.

As computers can do more and more, jobs will be more and more about what only humans can do. Firms will look for ways to extract value from distinctively human abilities. This is what a lot of “information” jobs actually look like. They are not traditional “white collar” jobs; they’re not in management or administrative support. Instead, they are ways of leveraging part of the human mind that computers can’t duplicate yet.

For a few months I worked at a company where the task was to correct errors that computers made in reading documents. The computer did pretty well with the initial read, but any characters it was not confident in got passed to a human reader. The software we used was built to make us work as fast as possible. We didn’t need to read the entire document, only the few parts the computer couldn’t read. We were carefully tracked for speed and accuracy. Nowadays machine-learning technology has likely surpassed even human abilities in this domain, but the basic function of the human in the system is much like the Facebook moderators’ function. It makes up the gap between what the machine can do and what the product requires.
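The routing logic behind that kind of system is simple to picture. Here is a minimal sketch of confidence-threshold routing in a human-in-the-loop reading pipeline; the names, data shapes, and the 0.95 threshold are my own illustrative assumptions, not the actual software we used:

```python
# Hypothetical sketch: the machine keeps what it is confident about
# and escalates low-confidence characters to a human reader.
# All names and the threshold value are assumptions for illustration.

CONFIDENCE_THRESHOLD = 0.95

def route_characters(ocr_output):
    """Split machine-read characters into accepted ones and ones
    that need human review, based on the model's confidence."""
    accepted, needs_review = [], []
    for item in ocr_output:  # each item: {"char": str, "confidence": float}
        if item["confidence"] >= CONFIDENCE_THRESHOLD:
            accepted.append(item["char"])
        else:
            needs_review.append(item)
    return accepted, needs_review

def ask_human(item):
    # Stand-in for the human reader's correction step.
    return input(f"Low-confidence character (machine guessed {item['char']!r}): ")

if __name__ == "__main__":
    sample = [
        {"char": "H", "confidence": 0.99},
        {"char": "e", "confidence": 0.98},
        {"char": "l", "confidence": 0.62},  # escalated to the human
    ]
    accepted, needs_review = route_characters(sample)
    corrections = [ask_human(item) for item in needs_review]
```

The point of the sketch is only that the human sits in the gap defined by the threshold: everything above it never reaches a person, and everything below it becomes someone’s job.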

This gap-filling is what Newton’s article describes in the Facebook moderating company. Employees are asked to leverage their judgment in figuring out whether something is appropriate or not. Because judgments of this sort are hard to reduce to rules (note all the problems Facebook has in specifying the rules clearly), the task needs a tool that is good at interpreting and assessing an enormous amount of information. And human minds are just the thing.

Computers have gotten good at certain kinds of pattern recognition, but they are still not good at extracting meaning from contexts. Human beings do this all the time. In fact, we’re really, really good at it. So good, in fact, that people who aren’t much better at it than the computer strike us as odd or different.

The problem is that this task of judging content requires the human “machines” to deploy something they have and computers don’t. In Facebook’s case, that thing is human emotion. Most of our evaluative assessments involve some kind of emotional component. The computer doesn’t have emotions, so Facebook needs to leverage the emotional assessments of actual people in order to keep its site clean.

These kinds of jobs are not particularly demanding on the human mind. Sometimes we call this kind of work “knowledge work,” but that’s a mistake. The amount of knowledge needed in these cases is little more than a competent member of society would have. It would be better to call these jobs human work, or more precisely emotional work, because what is distinctive about them is the way they use human emotional responses to assess information. Moderators need to be able to understand the actions of other humans. But we do this all the time, so it’s not cognitively difficult. In fact, this is why Facebook can hire lots of relatively young, inexperienced workers. The human skills involved are not unusual.

The problem is that as those parts of us that are distinctively human become more valuable, there is also a temptation to try to separate them off from the actual person who has them, then track them and maximize their efficiency. In ordinary manual labor, it’s not so hard to exchange some effort and expertise for a paycheck. Faster and more skilled workers are more productive, and so can earn more. Marx notwithstanding, my labor and expertise are not really part of who I am, and expending them on material goods does not necessarily diminish or dis-integrate me. In contrast, my emotions and capacity for evaluative judgments are much closer to who I am, and so constantly leveraging those parts of me does prompt me to split myself into my “job” part and my “not-job” part. We might call this “emotional alienation,” and it is a common feature of service economies. We’re paying someone to feel for us, so that we don’t have to.

All this doesn’t mean we should give up content moderation, or even that moderator jobs are necessarily bad jobs. I have little doubt that there is tons of stuff put online every day that ought to be taken down. I am an Augustinian and a Calvinist, and harbor no illusions about the wisdom of the crowd. But we should be more aware of what it actually costs to find and remove the bad stuff. We enjoy social networks that are largely free of seriously objectionable and disturbing content. But someone has to clean all that off for us, and we are essentially paying for that person to expend emotional labor on our behalf. Social media seems “free,” but as we’re being constantly reminded, it really isn’t—not to us, and not to those who curate it for us.

So suppose Facebook, or Twitter, or YouTube actually paid their moderators whatever was necessary for their emotional and spiritual health, and gave them the working conditions under which they could cultivate these online experiences for us without sacrificing their own souls. How much would that be worth? I doubt our tech overlords care enough to ask that question. Maybe the rest of us should. Though we cannot pay them directly, we can, perhaps, reduce their load, exercise patience with them, and apply whatever pressure we can to their employers. This is, after all, the future of work. It’s in all of our interests to set the norms for distinctively human labor right now, while we still can.