Grade Deflation: Maybe Unfair, Probably Just

Before I left school last fall, I graded one set of students’ papers in my role as a graduate instructor at UC Berkeley. It was a basic paper assigned in an introductory sociology course, so I assumed that a competent, complete answer deserved an “A.” When I submitted my grades and sample papers for the professor to check, she demanded that I re-grade every single one. A’s, she insisted, are for excellent work that goes above and beyond the norm.

Four years at the finest undergraduate institution in the country, and I had no sense of the difference between exceptional work and simply complying with instructions.

I learned yesterday that Princeton will most likely be ending its experiment in “grade deflation.” Most of the endless discussion that began before I set foot on campus has centered on claims that the specific way grade deflation was implemented—namely, a 35%-A target for each department, with a stringent only-55%-A standard for junior and senior independent work—was not “fair.” Maybe it isn’t: the stories about exams with A-’s erased and replaced with B+’s certainly give that impression. “Unfair,” though, is the term you use when you have a sense that you are not getting the advantages of others (e.g., students at Harvard or Yale) but have no deeper principle to back it up.

Since “fair” seems like an awfully subjective standard, and the faculty committee recommending an end to grade deflation put quite a bit of stock in such perceptions, I will offer my own. I’m reasonably sure that, with a small bit of introspection, most of us—myself included—would admit that we received A’s for courses at Princeton where we did not exactly give it our all. I was shocked at the consistency with which I could get A’s by simply doing what I would have assumed, prior to coming to campus, would be the minimum—that is to say, doing the reading, starting my papers more than a night before they were due, seeming vaguely interested in precept, and actually going to lecture. Yes, I was a sociology major—but, then again, sociologists were more “deflated” than Woody Woo majors and had lower grades to begin with.

Most Princeton students, apparently, would not agree with me. According to the grade deflation committee’s survey of students, 80% of Princeton students believe that they have at least “occasionally” had a grade “deflated,” and 40% think it has happened frequently. This must be a joke. The committee’s data suggests that the actual decline in grades due to the deflation policy was modest to non-existent. It’s mathematically possible but barely plausible to think that, during a period in which average GPAs went up .05 points, 80% of Princeton students at some point received “B+’s” for “A-”-quality work.

Let me offer an alternative explanation: grade deflation is a good excuse. It’s a good excuse for students, of course, to explain why they are no longer effortlessly succeeding like they did in high school. More importantly, though, grade deflation was an excuse for professors, who could hold their highly entitled students to some kind of standard, while preserving their teaching evaluations through displacing blame onto a third party (usually Dean Malkiel).

What this last point gets at is that there’s much more at stake in grading than “fairness” within the university. Grade inflation is one aspect of, although probably not a driving force behind, the ongoing transformation of American higher education. A recent experiment with grade deflation at Wellesley found that underperforming departments with underfunded students could compensate by pumping up their grades. Worse, grade inflation appeared to be a tool to mask racial disparities—that is to say, Wellesley dealt with concerns about its racial achievement gap by just offering artificially high grades to everyone. This is the Faustian bargain of modern higher education: professors, under the pressure of an increasingly competitive job market and rising non-teaching obligations, can reduce the quality of instruction by sating students with A’s and leaving them plenty of time for the real business of university life, which is to say, anything but learning.

Grade deflation is not just a matter of students’ feelings or fairness. It is an issue of justice – that is to say, the role of universities in either reinforcing or challenging structural inequalities. For one thing, as researchers like Annette Lareau have consistently shown, upper middle class students come to schools like Princeton not just advantaged in their academic skills, but also advantaged with extra-academic skills, particularly with respect to relating to authority and accessing services. Let me make this more concrete: we have every reason to believe that rich white kids are more likely to bitch about their B+ and get it raised to an A-. Working class kids are more likely to just take it, because that’s what we train working class kids to do—take what’s given to them.

Grade inflation worsens stratification not only within universities but also between them. Debates about grade deflation at Princeton nearly always contrast Princetonians’ GPAs to those of our “competitor institutions”—that is to say, the laughably high grades given out at Harvard and Yale. But Princeton students are not just “competing” with other Ivy Leaguers for Rhodes Scholarships and spots at U Penn Medical School. They are “competing” with other college graduates in the much broader universe of graduate school admissions and the labor market.

Most of Princetonians’ “competitors” come from public universities with lower grades. Although grades at public and private institutions were once comparable, and both have inflated grades significantly since the 1960s, private schools have done it more. This gap emerged precisely at the time that the position of expensive private colleges was threatened by well-funded, and cheaper, public ones. As one Dartmouth professor explained it, “we began systematically to inflate grades, so that our graduates would have more A’s to wave around.” It worked: admissions officers at graduate institutions systematically favor students who come from grade-inflated schools, even when candidates are otherwise equal. Although flagship public universities have subsequently followed suit, even after controlling for “talent level,” grades at private institutions are .1 to .2 points higher. The structural conditions of the modern public university—minimal face time with professors, huge classes, heavier reliance on testing over papers, pressures to weed out students universities can no longer afford to teach, less construction of students as paying private “consumers” who can be “dissatisfied”—make bargaining for grades more difficult.

Of course, many Princeton students predictably insist that they produce better work than students at other institutions where grades are lower. But I find this utterly unimpressive. Princeton students have access to resources and instruction way beyond those of the vast majority of American college students. Shouldn’t our grades reflect what we, as individuals, make of the very real advantages that Princeton offers us, rather than, say, rewarding us for having those advantages in the first place?

Waste Not, Want Not?

A small child, having eaten the tastier offerings on his plate, picks unenthusiastically at his vegetables. An exasperated parent tells him that he should eat his food because there are starving people in China.* The child points out that there is no way anyone can transport his broccoli to China, and thus his decision is not really related to world hunger.

Just last week, the UN Food and Agriculture Organization (FAO) released a report stating that “Latin America and the Caribbean Could Eradicate Hunger with Amount of Food Lost and Wasted.” Usually I don’t bother writing blog posts to pick holes in an argument that a truculent four-year-old could identify. Yet because commentators persist in not just seeing a connection between food waste and hunger, but asserting that in addressing one we could address the other, I feel the need to extend the pre-schooler’s logic a bit.

The argument that we could address hunger by directly redistributing wasted food crumples with a whiff of logic and data. For starters, what gets thrown out is not what people need: in the U.S., nearly fifty percent of discarded calories are added sweeteners and fats. The model of food banks which the FAO trumpets for Latin America has been developed to its zenith in the U.S.—and yet hunger has actually grown since the explosion of private charity in the 1980s. The recent National Geographic feature on hunger inadvertently offers a pretty damning portrait:

By whatever name, the number of people going hungry has grown dramatically in the U.S., increasing to 48 million by 2012—a fivefold jump since the late 1960s, including an increase of 57 percent since the late 1990s. Privately run programs like food pantries and soup kitchens have mushroomed too. In 1980 there were a few hundred emergency food programs across the country; today there are 50,000…One in six reports running out of food at least once a year. In many European countries, by contrast, the number is closer to one in 20.

Food banks are a terrible way to address hunger because, as sociologist Janet Poppendieck documents, the food they offer is often insufficient, culturally inappropriate, nutritionally inadequate, unreliable, and heavily stigmatized. Flooding food banks with subsidized, corn-and-sugar-based “edible food-like substances” will not change this.

The more sophisticated commentators—like Tristram Stuart—accept that food waste does not directly snatch food from the mouths of the hungry, but claim that it still indirectly causes food insecurity by raising global prices. This, at least, squares with the basics of economic research on hunger and famine: that poor people do not go hungry for lack of food but for lack of money to buy food. One in six Americans is not going hungry because they walk into a grocery store and find the shelves unstocked; it’s their pockets that are empty. Hypothetically, if all the food currently going to waste were instead put on supermarket shelves, the supply would be so huge (since the world produces 4,600 kcal/person/day) that prices would plummet, and the poor could eat. Huzzah!

Of course, basic micro-economics also tells us that if the price plummets, so does production. It is a common trope that food waste happens because food is too cheap; yet, in truth, the overproduction behind food waste—and the overproduction that would underpin any redistributive scheme—actually depends on the artificially high price of food. If producers, distributors, and retailers could no longer pass the cost of waste onto consumers by inflating the price of what they sell, they would simply produce less. Adam Przeworski plays this thought experiment out and convincingly shows that there is no scenario under which we could feed everyone through a free market mechanism, and that feeding everyone would invariably undermine the free market.

Thrift, non-wasting practices, eating your leftovers, faith in God, volunteerism and charity, and unbridled free markets do not feed people. Adult discussions should start from the premise that there are two basic ways to address hunger. One is to increase the purchasing power of the poor to buy commodified food. We already do this, to an extent, with food stamps, but do so by reinforcing an unjust private food system (and subsidizing retailers like Wal-Mart, which pay their workers so little they qualify for SNAP). The alternative is to de-commodify food—that is, create a right to food that does not depend on an individual’s capacity to pay or participation in the labor market. This has been tried in socialist countries and, more recently, in India. History suggests that it may help feed people, but at the cost of inefficiencies and the loss of the abundance, excessive choice, and convenience that a capitalist food system gives (some of) us.

“Food waste” is a powerful symbol of the dysfunction of our food system, and the coexistence of hunger and waste is as visceral a reminder as any of the insanity of free-market capitalism. But as a kind of “slack” which we could use to eradicate hunger, minimize our ecological footprint, and address socioeconomic inequality? Well, sometimes waste really is just garbage.

* I don’t know why it was always China for me. China ranks 42nd in food security. Better to say “Democratic Republic of the Congo,” or the post-industrial neighborhood by your suburb.

Should the Revolution Have a Vegan Option?

French people don’t give a shit about lifestyle politics.

Okay, it’s a generalization, but—despite my short residence and limited language skills—there’s some truth to it. I saw it on May Day, among the anarchists selling Coca-Cola products and candy bars, and I saw it at a “zero waste” conference where the attendees couldn’t manage to get their paper plates into the recycling bin. But mostly, I see it every time I go to an activist event with food—and, this being France, there’s always food—and am confronted by the wholesale lack of anything vegetarian. Even the incredibly low bar of having something that is not pork, in a country with a sizeable Muslim population, is rarely reached.

As someone coming from the Bay—the land of gluten-free, of re-useable shopping bags, of farmers’ market parking lots crammed with Priuses—this has been a bit of a jolt. But being in France has challenged me to re-evaluate my politics: in the face of climate change, of billions of animals slaughtered, of global labor exploitation, does the way I live actually matter? I’m pretty sure the chain-smoking, sweatshop-buying French communists I know would say “no.” In fact, in my (usually disastrous) forays into the French language, I’ve learned that talking about “la politique” makes no sense without referencing political parties or the state. In French, “lifestyle politics” is a bit like talking about “non-political politics.”

Much like universal healthcare, what is taken for granted in France is up for debate in the U.S. In fact, hating on lifestyle politics is totally huge on the left right now.* There’s the old Derrick Jensen article, “Forget Shorter Showers,” and the more recent “Stop Worrying About Your Carbon Footprint” that I’ve been sent three times. And—thankfully—sociologists have finally added their ever-weighty opinions to the matter. Exhibit one is Samantha MacBride’s (fantastic) Recycling Reconsidered, which pretty much dismembers the idea that recycling has any impact beyond making us feel good. Her conclusion that we need to “relegate notions of personal commitment and responsibility…to the backburner” pretty much sums up the post-lifestyle zeitgeist. It’s not just that buying local is useless—it’s that it actively detracts from useful things, be it direct action or harassing our Congressperson.

Okay, I get it: the old premise behind my veganism—that each of us could save 95 animals a year, starting today, thanks to the magical power of supply and demand—is bunk. Regardless of what the economists tell us, the world is not an aggregate of individual consumptive choices. But I still want to be vegan, and I’d like a reason a bit more utilitarian than just saying it’s the right thing to do.

Fortunately, even France managed to furnish a justification. I was at an anti-waste event the other week, sitting with a group of dumpster divers on the margins (scoffing at the respectable people in suits and their PowerPoint presentations and, you know, real political program for change), and one of the caterers on his break sat next to us. “Do you see how many beer bottles they’re leaving half-full?” he observed. “And they’re telling me I need to waste less?” Whatever we think of lifestyle politics, we have to acknowledge that, even as political actors, we are increasingly judged on personal criteria. We live in an age of heightened scrutiny, and our consistency matters—not because being consistent changes the world in itself, but because inconsistency is an easy excuse to discredit us.

There are other reasons, too, to keep taking shorter showers. We need to acknowledge the profound disempowerment that most people—even privileged people—feel today, and recognize that the one area where people do feel they have some efficacy is in their consumptive choices. If we are serious about the movement-building maxim of “starting where people are at,” we need to acknowledge that most people approach problems in their lives by purchasing something. What’s more, this glorying in the irrelevance of personal choices leaves me wondering how many activists actually want to live in the world they claim they’re trying to create through direct action and political organizing. Because guess what: you really are going to have to take shorter showers and eat less meat and buy less shit in our future utopia, even if you don’t see any of these as tactics to get us there.

Having a vegan option isn’t going to precipitate a revolution. But, to do a great injustice to Emma Goldman, it isn’t a revolution if I can’t have something ethical to eat.

- – – – -

* I mean this, of course, referring to the tiny leftist echo chambers in which I exist. It’s kind of like how my punk friends and I thought the Dropkick Murphys were a “huge” band in 2003 because more than two people at my high school had heard of them.

Freudian Shifts

Did you know that going on extremely long vacations and then forgetting about them used to be a socially legitimate—if not exactly socially acceptable—way to go crazy?

Maybe. It’s certainly not an original discovery of mine: it has come from reading Ian Hacking’s brilliant Mad Travelers, which explores the cultural niche, formed from anxiety about vagrancy and romantic celebration of tourism, within which “fugues”—long, aimless, unconscious journeys—flourished in 19th century France. My own version of madness (or maybe it’s just procrastination) has kept me from (re)embarking on any original research of my own. In the meantime, I’m content with methodically consuming the social-scientific literature on mental illness and vaguely imagining a future contribution to it in dissertation form.

In truth, I’ve never been as uncomfortable with social science as I am now. Perhaps that’s because I’ve previously always studied questions that, while important (“Will capitalism survive?” “Will we do anything about climate change?”), did not exactly bear on my day-to-day life. Reading about the “social construction” of mental illness, on the other hand, impinges on my own ongoing interpretation and processing of the last ten years of my life.

Prior to reading Hacking, I worked through Ethan Watters’ Crazy Like Us, which explores the globalization of American psychiatry through processes that range from clueless (telling post-tsunami Sri Lankans that they really must be suffering from PTSD, American-style) to sinister (convincing Japanese people that melancholy was a disease and that only U.S. pharmaceuticals could cure it). His most interesting vignette describes a more inadvertent sort of mental colonization. Anorexia virtually didn’t exist in Hong Kong—despite the long-running penetration of Western advertising and ideals of body type—until a 14-year-old collapsed and died in public. Although she exhibited virtually none of the symptoms that usually come alongside not-eating, such as an obsession with thinness or a fear of fatness, the press widely publicized her death as a case of “anorexia.” Almost overnight, an epidemic of anorexia emerged—not of people claiming to be anorexic, but of people who weren’t anorexic before and suddenly were.

The take-away lesson is that every culture opens up certain avenues for expressing distress and shuts down others. This notion worries me a bit, because it fits too easily into the don’t-talk-about-suicide-because-it-gives-people-the-idea-to-kill-themselves narrative (which, frustratingly, seems to have some sociological data to support it). More proximately, I am uncomfortable with this because it challenges the assumption—to which I cling rather dearly—that my own depression is biology, pure and simple, and that it was medication that pulled me out of it. Radical, Foucaultian critiques of psychiatry and “pharmaceuticalization” as forms of social control fit neatly in with my political worldview, but less easily with the fact that I believe—and, in a way, need to believe—that I have been saved by Big Pharma and Western medicine.

This week, I’ve been plodding through Freud. I had read Civilization and Its Discontents during my brief flirtation with anarcho-primitivism in college, but I doubt I would ever have seen reading hundreds of pages about phallic symbols and infantile sexuality as a good use of time had I not taken a year off. Most of what Freud said apparently is wrong—psycho-analysis has been shown to be no more effective than anti-depressants, which is to say, apparently not very effective—and yet I’m finding, just as the sociology of mental health tells me they should, that his theories about the manifestations of mental illness have a way of making themselves true.

When I was really depressed, I didn’t dream at all; sleep was a form of blissful oblivion, a well-earned respite prior to the mornings, which were inevitably the worst (the fact that I had previously read that depressed people don’t dream and feel worst in the mornings is, perhaps, not coincidental). As my waking life has improved, my nocturnal existence has deteriorated: I have nightmares almost every night. I usually don’t remember much from them, except that they are—surprise!—about being depressed again.

And then came this week, when I read Freud’s Interpretation of Dreams. All of a sudden, my dreams are full of symbols—of hints of repression and sublimation and transference, a horde of thoughts that day-to-day existence as a sane person requires I keep at bay. And the weird thing is, I’m convinced that it’s not just that I’m noticing these symbols now: I really think that my dreams were fairly meaningless and have now become meaningful, just as I have read that they should. And so I am reminded that studying oneself is a never-ending mindfuck, and that maybe it’d be more straightforward to crack capitalism than to crack my own brain.

 

Going back was the best of times, going back was the worst of times

Perhaps because the novelty—by which I mean an alcohol-accentuated tincture of horror and awe—has worn off, I’m not coming away from my fifth reunion with the same crazed list of stories as I had after, say, my Freshman year. There were no drunken alumni saving me from arrest at the hands of Mohawk-profiling P-safe officers; no rambling stories from Bill Fortenbaugh ’58 about the hookers we could expect at his 70th birthday party; no thieving of giant inflatable monkeys from the 35th (I’m still unclear about how that one happened).

Still, I think I “did” reunions pretty well. I went through the P-Rade with the band no less than three times and felt like I played my heart out despite dancing too energetically to read the music for songs I had never played before. I ran into my thesis adviser in a heavily inebriated state on Poe Field. I managed a temporary coup d’etat and convinced the percussion section to start “Children of Sanchez” for the umpteenth time. I swam in the fountain, got a 4:00 a.m. “Eggplant Parm without the Parm” from Hoagie Haven, and stayed up for a reunions sunrise (a first!). And my antics in the band office led one undergraduate officer—perhaps not realizing how much I would treasure the comment—to say that I really was the “hot mess” of band lore.

I list stories and antics and happenings because I always hope that, by adding them up, they will sum to three days of consistent and straightforward happiness. And, for most people, it seems like they do: my facebook feed has been dominated for days by comments about the “best damn place of all” and the sheer joy of revisiting our alma mater. I imagine there’s a certain amount of posturing in that, but I more-or-less believe the sentiments are genuine. I wish I shared them, though.

Somewhere between the moments of blasting away on trumpet and catching up with my best friend on the deck of Terrace, there were what seemed like interminable periods of wandering around alone at the 5th, avoiding eye contact and fearing conversation. I hadn’t initially expected to spend the entire weekend with the band—not even most band alums do that—but then I realized that the alternative was walking around campus by myself, not sure if I did or didn’t want anyone to see me. It’s not that I’m not incredibly fortunate to have great friends from my class: only that interacting with them, with the attendant sense of “losing” them again as soon as the weekend was over, was hard for me to bear.

Depression is, in so many ways, all about struggling with your past. For some, it’s past trauma. For me, it’s an idealized sense of past happiness that I alternate between desperately wanting to relive—not in the “telling stories with old friends” sense, more the “build a time machine” sense—and wanting to wipe from my mind. When I walk around Princeton, I’m not sad because I see the room where I used to cut myself, the health center where I had to commit myself freshman year, or the street where my roommate had to pull me away from oncoming traffic. No: I’m sad because I’m constantly thinking about the sense of wonder and meaning and community that I had there and yet never really managed to appreciate and which, at Berkeley, seems so impossibly out of reach.

Being me, I told myself this was my last reunion. Not in the sense that it’ll actually be my last, but the last where I feel like I can actually have conversations with undergraduates, play with the band, or dance drunkenly until 4 a.m. It also feels like my last because I’ve chosen to make coming back a logistical absurdity, whether I’m in France or California or England or anywhere else. I feel jealous of the people who can maintain a connection to Princeton after they graduate, and I frequently fantasize about coming back for a road trip or two each football season, but I’ve realized that I burn my bridges with the past every two years because I probably couldn’t get by any other way.

For me, at least, there’s wisdom that comes from the experience, and not just angst, which makes writing about it on my 27th birthday seem less pathetic and more edifying. When I first started to recover, I followed a pretty rigidly Benthamite pleasure-maximizing strategy, avoiding anything that might make me feel bad. Now that I know that I can break down a bit without falling off the deep end, though, I am realizing that depression can be part of the normal flow of experience—that it’s okay to go back and laugh and dance like an idiot and play trumpet and bask in the warmth of good friends and, yes, cry a little bit.

 

I, Too

In retrospect, I’m lucky: I actually got called out for the most racist thing I did in college.

I’ve noted passing references to the “I, too” campaigns at various universities cropping up on my facebook feed, but I only took the time to fully absorb the images when I saw a tumblr for “I, too, am Princeton.” The frustrations are the same as I’ve seen elsewhere: of being called upon to speak for an entire race, of being assumed to have gained entrance thanks to athletics or affirmative action, of having the validity of one’s experiences of racial discrimination constantly questioned.

They hit harder coming from Princeton students, though. I wonder—and I hope every white alumnus or alumna of Princeton is wondering—if I was one of those racist shitheads pontificating in seminar, mouthing off on the street, or whispering at a table in Frist. I don’t know if I ever said any of the things quoted on the “I, too” whiteboards. In a sense, that’s what it means to be white: to live in a maelstrom of racism to which you are contributing without even being aware of it. White is when you wait until some people set up a tumblr five years after you graduate to reflect, “Shit, whether or not I said that, I probably heard someone say it and didn’t do a damn thing about it.”

Then again, I actually know I did some terribly racist shit in college, because someone told me. I count myself lucky for it because so many micro- and macro-aggressions pass unmarked and unacknowledged. My junior year, the Princeton Animal Welfare Society—of which I was the Vice-President—brought a PETA campaign called the “Animal Liberation Project” to campus. The project compared the justifications for abusing animals—generally, a variant of, “They’re different from us, so we can do whatever we want to them”—to the justifications for abusing humans—generally, a variant of, “They’re different from us, so we can do whatever we want to them.” There were pictures of animals and humans—dark-skinned humans, usually—to drive the point home.

Just writing the sentences above, with six years of hindsight, is cringe-worthy. But it gets worse. I had a sense that the campaign was going to spark some controversy—particularly within the African-American community—and so I reached out to every black student group I could find. I penned an explanatory editorial, entitled “Slaves and Slaughterhouses” (yes, really), and organized a panel to discuss the demonstration that included one (“1”) black woman, one (“1”) Indian woman, and two (“2”) white males.

Given the extraordinary sensitivity I had shown, I was shocked (shocked!) when no one attended the panel, the comments section of the editorial lit up with anger, and black student groups rejected my invitation to have a reasoned, dispassionate discussion and instead sent an e-mail to their membership denouncing what we were doing—and me, personally—as racist. I fought back, articulating as logically as I could why this really wasn’t offensive (something I have done subsequently, as well). I claimed that if only students recognized their own prejudices, they really wouldn’t mind the comparisons. “I’m not racist, but you are speciesist.” Numerous friends of color didn’t talk to me for months afterward.

There are various hackneyed lessons that I learned—eventually—from the experience. They are banal and should not have required denigrating hundreds of my peers to arrive at them. I realized that, as a white person, it’s really not my place to “debate” whether or not something is racist. I also discovered that gestures of conciliation for racist actions don’t make those actions less racist. Writing about this experience is hard because there’s always an element of cleansing one’s guilt for past actions, when in fact those actions should remain raw so as better to shape one’s own behavior in the future. These are lessons I’m still learning, but I don’t expect anyone to be “understanding” in the meantime when I fuck up again.

The first draft of this post contained a list of college-era racial misdeeds—half the jokes I participated in while in the band come to mind—and mitigating factors—I took classes on race! And I helped organize events around incarceration and immigrants’ rights! But sometimes our actions really shouldn’t be judged in context. One of the most mind-blowing things I’ve ever read on race is Sam Lucas’ Theorizing Discrimination in an Era of Contested Prejudice. He notes that most black people experience pervasive racial discrimination, but most white people claim not to be racist. As it turns out, these two claims are not mutually exclusive. Even if we (generously) assume that only 5% of cops racially profile, or 5% of teachers think black students are inferior, the chance that a black individual will encounter a racist cop or teacher in their lifetime is extremely high. And, really, you only need one cop to stop you for your skin color to think the system is pretty fucking fucked.
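(To make the arithmetic explicit, with illustrative numbers of my own rather than Lucas’: if a person has, say, 50 encounters with cops and teachers over a lifetime, and each carries an independent 5% chance of involving one of the racists, the odds of dodging them all are 0.95^50, or about 8%. Roughly nine out of ten people will hit at least one.)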

The same applies to racist actions, not just racist people. You can do only one racist thing in four years (I’m sure I did more, but for the sake of argument…) and still make a substantial contribution to a campus climate of oppression. In the end, what I’m trying to get at is that I, too, was Princeton, maybe too much so, because I, too, was part of the problem.

Free

Should a book on freegans—that is to say, people who try to live for “free” in the present through appropriating capitalism’s waste, while trying to build a future in which the things people need are provided for “free” through a gift economy—be free?

This is a purely academic question. My “book” on freegans—I’m going to call it that, even though at this point it’s just a really, really long Word or PDF document, for which this blog post is a shameless plug—is already free. Even were it to be picked up by a real live academic publisher, I still have no doubt that it would quickly be scanned and shared online, and I would make no effort to stop it.

Despite the fact that reality has gotten ahead of philosophy, I still feel like I increasingly need to think through my position on the question of “free.” I feel it both in general—with advocates for open access at my own university suggesting that publishing in pay-for-access journals is just dumb—and personally—as a number of voices have told me they assume that I would never try to sell a book on freegans. I’m thus starting to wonder about what it means that—as someone who expects his life’s work to consist mostly of reformatted Word documents—everything I produce is ultimately going to be free.

*          *         *

I should start by saying that the arguments for making academic products free to the public are, I think, particularly strong. We still have well-heeled institutions (universities and federal research councils) that are willing to pay (some of) us to produce knowledge and to contribute to journals as editors and reviewers. The open access advocates’ strongest argument (ironically made at the same time as we hear slogans like “information just wants to be free”) is that we’ve already paid for research with our tax dollars, so we shouldn’t have to pay for it again. I’m excited about experiments like Sociological Science, the new open-access sociological journal, not so much because I’m sure their model is the wave of the future (author fees for graduate students still are scary-high) but because I believe that experimentation is the only way to find out.

But this is emphatically not the position that most producers of cultural goods—musicians, artists, or authors—are in. The other week, I read a New York Times editorial by Jeremy Rifkin, who rosily declared that “The inherent dynamism of competitive markets is bringing costs so far down that many goods and services are becoming nearly free, abundant, and no longer subject to market forces.” The “marginal cost revolution” (about which he is selling a book) has a fairly simple source. There is now a Napster for virtually anything that you can copy on a computer, and because copying a file doesn’t cost money, books, movies, and music can now be “free.”

I suppose I was most annoyed by Rifkin’s editorial because it conflated the “rise of free” with the “rise of anti-capitalism.” When I’ve been told that I really ought to make my book or anything else I write “free,” it’s usually couched in the assumption that “free” and “capitalism” are opposed to one another. But there is nothing inherently anti-capitalist about getting something for free. In fact, the “free” labor of the worker—that is, time spent producing things of value for which the worker is not commensurately paid—is at the root of all profits in a capitalist system.

So long as the things we need to survive—and I’m not talking about books on freegans here, although I do think my book is valuable, but food and housing and all that—are commodified and must be purchased, being told the things you produce are “free” is just another way of saying you are being exploited. And, unlike academics, the majority of cultural producers have no public provisioning to fall back on; for most of them, discovering that their products have “zero marginal cost” is not exactly a happy revelation.

And, of course, even as an academic, “free” sounds increasingly scary. When legislators see that students can now access Massive Open Online Courses for “free” (at least for the moment), it sounds like a great argument for further defunding public education. And when graduate students are expected to add more students to their sections without an increase in pay—an experience virtually any GSI at Berkeley can recount—they’re working for “free.” And I can’t help but think that the logical consequence of telling us that the books we write will be “free” is that eventually universities will feel they no longer have any obligation to pay us to produce them.

*          *          *

Admittedly, this is all a bit of a straw-person argument. For most of the activists I know—and, especially, the freegans—“free” has a very different meaning. It has nothing to do with price or with the “marginal costs” of production. As I came to understand it, “free” meant that some things are too valuable to have a price—whether necessities like food and shelter or public goods like transportation, the arts, or knowledge. Sure, there were always dumpster divers who thought that wasted food was “free,” but the wiser freegans I knew always recognized that these things had a cost—in human labor or natural resources—that was real. “Free” was, in effect, a way of recognizing that all things have a cost, albeit one that is often poorly captured by “price.”

I’m not against “free.” I’ve read enough anthropology to know that gift economies in which goods and services are shared freely are not a utopia, but a part of the human historical experience and an honest possibility for the future. It’s more an issue of timing, or, you might say, a collective action problem. I’m reluctant to say it’s fine for someone to have free access to everything I produce until I have free access to what others produce. It makes very little sense for some types of things to be “free” while others are commodified. And, frankly, I’m far more concerned about “freeing” things that do have a marginal cost, like food or shelter. I don’t want to sound like those old commercials that said, “You wouldn’t steal a car—Piracy is not a victimless crime”; just that I’d like to be able to steal dinner along with my DVDs.

I didn’t find my book in a dumpster. It’s taken time and money and effort and love. Writing it has involved a great deal of lost opportunities and missed chances. It’s been made possible by the generosity of a host of people and institutions too numerous to name. But don’t worry, I’m not some dirty capitalist or luddite who has yet to get on the digital freedom bus. My book is “free.”