The Add/Drop Dance

The course I’m TA-ing this fall is required for undergraduates majoring in sociology. It is thus heinously oversubscribed. For the first three weeks of class, we circulated a sign-in sheet in lecture. Actually, we circulated several. Some students signed every single one—one person literally signed in four times in one day and then came up to me after class to make sure I saw she was there.

The students are right to be scared. At Berkeley, if you miss one section or one lecture in the first three weeks, you’re automatically dropped from the course. It’s the raw arithmetic of the neo-liberal university. Even after our weeks of culling the herd, however, there still aren’t enough chairs: our class was placed in a lecture hall with fewer seats than students, so to take people’s attendance, I had to scramble around among students sitting on stairs, against the back wall, and out in the hallway.

I tried to shield my students as best I could. I fudged a few records and I quickly caved to the student pleading for me to expand the size of my section to accommodate him (a brilliant solution to underfunding, by which I teach for free). Now, halfway into the semester, my classes are set, and, to be honest, I haven’t been this infatuated since high school. I adore my students. Teaching has given me a new sense of the value of graduate school and, for the first time in a long time, the confidence in the future that comes from having a “calling” in life.

In my usually unsuccessful efforts to make class “relevant,” on the 50th anniversary of the Free Speech Movement—an event marked with an administration-sponsored rally attended mostly by people older than my parents—I tried to obliquely raise the school’s activist history in section. I asked what going to Berkeley “meant” to them and to those in their family (we were talking about Weber’s analysis of the meaning of social action). I expected at least one of 41 students to say something about “radicalism,” “hippies,” or “protests.” Not one did. Instead, they talked about Berkeley being “elite,” “selective,” and “prestigious.”

I should have known. Getting into Berkeley is, of course, an achievement, one that for many of my students signals that they’ve caught one of the last few rungs on a ladder of upward mobility that our ruling class is rapidly pulling up behind it. I see the rankings; I see that my students have made it to the nation’s “Number One Public University,” and that this is, by all accounts, a privileged institution. I know that, whatever my complaints about the institutional context at Berkeley, I would have to multiply them by two before talking about another UC school; by ten before remarking on conditions at Cal State; by one hundred before even mentioning the community colleges from which 40% of my students come.

It’s just hard to square the crammed classrooms, overworked TAs, and jockeying for spots in required courses with my experiences at the “Number One Private University.” My students ask me about Princeton; I tell them they’re just as smart as the students at Princeton, and it’s true. But it’s also true that the Ivy League considers itself above admitting community college transfers, people working three jobs to support their siblings, ex-convicts, and students in their 40s. I also tell them that Berkeley is harder, and I’m not lying. We had a hearty laugh at the fact that 35% A’s was too low for Princeton students. Nonetheless, I realize how, in so doing, I am reinforcing the double burden that every student—graduate or undergraduate—professor, and staff person at Berkeley faces: that of maintaining a top-tier institution with third-tier funding.

I read today that, through tax deductions, Princeton gets a $54,000-a-year subsidy per head from the federal government. Here, state funding languishes at $7,500 per person, and spending per pupil has fallen 25% since 1990 to $15,000 total. But what does Princeton’s money buy you that you can’t already get at Berkeley, one of the finest universities in the world? It buys you an institution that respects you enough to have clean bathrooms. It buys you courses where the papers haven’t been replaced with exams to save graduate student labor. It gets you a TA who doesn’t have to hold his nose and pass over grammatical mistakes and basic composition errors because his contract limits his hours and his professor says it’s not his job to fix California’s broken high school system.

At an event for the FSM anniversary, Professor Wendy Brown said something striking about the comparative “apathy” of today’s youth. In the ‘60s, students had to fight for free speech, but they could take a free public education for granted. Today, students have speech, but they don’t use it, because they’re too busy fighting to get an education. It’s just too bad that this “fight” takes place in the back of a lecture hall, as students vie to get a goddamned chair.

On the Appropriate Role for Assault Rifles in a Civil Campus Community

In the “national conversation” that we’re largely not having about militarized policing, I have nothing important to contribute. For what it’s worth, I forced my undergraduate Sociological Theory class to apply different theorists’ analyses to recent events in Ferguson. I’m vaguely aware of ongoing police surveillance, disruption, and violence in communities of color, and I’m lucky enough to have enough radicals on my news feed to know that we don’t know how many black people the police execute each year. But for me, in my citadel of privilege, it’s all background noise.

I did, however, see a University of California Police Department officer the other day walking from her car – parked on the street outside my office – into the underground, mostly hidden campus police station. She was carrying an assault rifle and a shotgun.

As it turns out, there’s more where that came from. That same day, the Daily Californian reported that UC Berkeley’s PD actually has 14 assault rifles, which it got from one of those federal programs that give away military toys the same way the Engineering Department occasionally sends the social sciences some old computers. I checked the campus news for that day, to see if there was any plausible reason for the PD to be breaking out its heavy weaponry. According to its spokesmen, the rifles are necessary because its 9mm pistols “won’t defeat the body armor.” I’m not sure whether it was the Dean of the School of Education announcing her resignation or some students starting a campus version of BitCoin that created the need on this particular day.

This would all be sort of comical – I mean, comical relative to the not-at-all comical uses of military weaponry in Ferguson – if the UCPD hadn’t actually killed a student. I don’t know if any of my Princeton friends – accustomed to unarmed “P-safe” officers whose job it is to tell you to turn the music down at your illegal dorm-room party full of under-age drinkers – caught that, so I’ll repeat it: UCPD killed a student. By all accounts, it was suicide by police: a “troubled” (yes, it would seem so, seeing as he had multiple previous suicide attempts) student brandished a gun in the business school and they killed him. His name was Christopher Nathen Elliot Travis.

The incident has stuck with me, but it’s disappeared from institutional memory. The Daily Californian makes no mention of it beyond one week after the event. The UCPD annual report from that year does not see the incident as meriting a reference, although in the third paragraph they do state that their big event of the year was that they “hosted a very successful scenario on our campus that simulated the hostile takeover of an animal research lab, complete with escaped primates challenging the teams.” The crime statistics state that no homicides happened on Berkeley’s campus that year. The “independent” campus police review board also conducted no investigation into what happened, presumably because Christopher did not file a complaint in a timely manner.*

Perhaps the date of that unmemorable killing – November 9th, 2011 – sticks in my mind because it’s the night I was arrested by UCPD. I actually still see, on a regular basis, the officer who broke my ribs, booked me, and then lied to the police review board about it, telling them I was “cocky” because I asked for my rights. Police still freak me out. But this is small change. As a _____ (insert list of synonyms for “privilege”), I don’t get beaten, arrested, or shot by police at random. But when I look at my sections for the course I am teaching, I realize that most of those adjectives do not apply to them. 80% of my students are non-white. My black students come from a community in which five unarmed men have been killed in the last month by police; my Latino students from neighborhoods where the police are the means for tearing families apart; many of my foreign students from countries where the police are the enforcement arm of authoritarian states.

“Trigger warnings” have been a buzzword of public discussion of academia in the last year. The justification for “trigger warnings” is that our students have suffered forms of trauma that might be resurrected through exposure to sensitive material. I have mixed feelings about whether course content should be subject to trigger warnings; but while we’re on the subject, perhaps we should also consider the trauma of our students who have watched their families, neighbors, and people who look like them get deported, beaten, frisked, imprisoned, arrested for kissing their white partner, or shot. I don’t even approach the level of having suffered “trauma,” but I for one would like to know when and where representatives of the only campus group that has killed a student will be, so that I can – for my own safety and psychological well-being – stay the fuck away.

Like I said, I have nothing to say on this. I have no way to comprehend what it’s like to be constantly victimized by the police, because it isn’t part of my history. But I do speculate that, for at least some of our students, a “civil” campus climate might start with an absence of assault rifles and, while we’re at it, any institution that thinks it needs assault rifles to be part of campus life.

*Of course, we could say that students waving guns is exactly why we need police: then again, this is just ceding the terrain to idiocy, since we could also just say – as many other advanced countries have – that people shouldn’t have guns, and watch our homicide and suicide rates drop in tandem.

Grade Inflation: Maybe Unfair, Probably Just

Before I left school last fall, I graded one set of students’ papers in my role as a graduate instructor at UC Berkeley. It was a basic paper assigned in an introductory sociology course, so I assumed that a competent, complete answer deserved an “A.” When I submitted my grades and sample papers for the professor to check, she demanded that I re-grade every single one. A’s, she insisted, are for excellent work that goes above and beyond the norm.

Four years at the finest undergraduate institution in the country, and I had no sense of the difference between exceptional work and simply complying with instructions.

I learned yesterday that Princeton will most likely be ending its experiment in “grade deflation.” Most of the endless discussion that began before I set foot on campus has centered on claims that the specific way grade deflation was implemented—namely, a 35%-A target for each department, with a more lenient 55%-A standard for junior and senior independent work—was not “fair.” Maybe it isn’t: the stories about exams with A-’s erased and replaced with B+’s certainly give that impression. “Unfair,” though, is the term you use when you have a sense that you are not getting the advantages of others (e.g., students at Harvard or Yale) but have no deeper principle to back it up.

Since “fair” seems like an awfully subjective standard, and the faculty committee recommending an end to grade deflation put quite a bit of stock in such perceptions, I will offer my own. I’m reasonably sure that with a small bit of introspection, most of us—myself included—would admit that we received A’s for courses at Princeton where we did not exactly give it our all. I was shocked at the consistency with which I could get A’s by simply doing what I would have assumed, prior to coming to campus, would be the minimum—that is to say, doing the reading, starting my papers more than a night before they were due, seeming vaguely interested in precept, and actually going to lecture. Yes, I was a sociology major—but, then again, sociologists were more “deflated” than Woody Woo majors and had lower grades to begin with.

Most Princeton students, apparently, would not agree with me. According to the grade deflation committee’s survey of students, 80% of Princeton students believe that they have at least “occasionally” had a grade “deflated,” and 40% think it has happened frequently. This must be a joke. The committee’s data suggest that the actual decline in grades due to the deflation policy was modest to non-existent. It’s mathematically possible but barely plausible to think that, during a period when average GPAs went up .05 points, 80% of Princeton students at some point received B+’s for A-quality work.

Let me offer an alternative explanation: grade deflation is a good excuse. It’s a good excuse for students, of course, to explain why they are no longer effortlessly succeeding like they did in high school. More importantly, though, grade deflation was an excuse for professors, who could hold their highly entitled students to some kind of standard, while preserving their teaching evaluations through displacing blame onto a third party (usually Dean Malkiel).

What this last point gets at is that there’s much more at stake in grading than “fairness” within the university. Grade inflation is one aspect, although probably not a driving force, behind the ongoing transformation of American higher education. A recent experiment with grade deflation at Wellesley found that underperforming departments with underfunded students could compensate by pumping up their grades. Worse, grade inflation appeared to be a tool to mask racial disparities—that is to say, Wellesley dealt with concerns about its racial achievement gap by just offering artificially high grades to everyone. This is the Faustian bargain of modern higher education: professors, under the pressure of an increasingly competitive job market and rising non-teaching obligations, can reduce the quality of instruction by sating students with A’s and leaving them plenty of time for the real business of university life, which is to say, anything but learning.

Grade deflation is not just a matter of students’ feelings or fairness. It is an issue of justice – that is to say, the role of universities in either reinforcing or challenging structural inequalities. For one thing, as researchers like Annette Lareau have consistently shown, upper middle class students come to schools like Princeton not just advantaged in their academic skills, but also advantaged with extra-academic skills, particularly with respect to relating to authority and accessing services. Let me make this more concrete: we have every reason to believe that rich white kids are more likely to bitch about their B+ and get it raised to an A-. Working class kids are more likely to just take it, because that’s what we train working class kids to do—take what’s given to them.

Grade inflation not only worsens stratification within universities, but between them. Debates about grade deflation at Princeton nearly always contrast Princetonians’ GPAs to those of our “competitor institutions”—that is to say, the laughably high grades given out at Harvard and Yale. But Princeton students are not just “competing” with other Ivy Leaguers for Rhodes Scholarships and spots at U Penn Medical School. They are “competing” with other college graduates in the much broader universe of graduate school admissions and the labor market.

Most of Princetonians’ “competitors” come from public universities with lower grades. Although grades at public and private institutions were once comparable, and both have inflated grades significantly since the 1960s, private schools have done it more. This gap emerged precisely at the time that the position of expensive private colleges were threatened by well-funded, and cheaper, public ones. As one Dartmouth professors explained it, “we began systematically to inflate grades, so that our graduates would have more A’s to wave around.” It worked: admissions officers at graduate institutions systematically favor students who come from grade-inflated schools, even when candidates are otherwise equal. Although flagship public universities have subsequently followed suit, even after controlling for “talent level,” grades at private institutions are .1 to .2 points higher. The structural conditions of the modern public university–minimal face time with professors, huge classes, heavier reliance on testing over papers, pressures to weed out students universities can no longer afford to teach, less construction of students as paying private “consumers” who can be “dissatisfied”—makes bargaining for grades more difficult.

Of course, many Princeton students predictably insist that they produce better work than students at other institutions where grades are lower. But I find this utterly unimpressive. Princeton students have access to resources and instruction way beyond those of the vast majority of American college students. Shouldn’t our grades reflect what we, as individuals, make of the very real advantages that Princeton offers us, rather than, say, rewarding us for having those advantages in the first place?

Waste Not, Want Not?

A small child, having eaten the tastier offerings on his plate, picks unenthusiastically at his vegetables. An exasperated parent tells him that he should eat his food because there are starving people in China.* The child points out that there is no way anyone can transport his broccoli to China, and thus his decision is not really related to world hunger.

Just last week, the UN Food and Agriculture Organization released a report stating that “Latin America and the Caribbean Could Eradicate Hunger with Amount of Food Lost and Wasted.” Usually I don’t bother writing blog posts to pick holes in an argument that a truculent four-year-old could identify. Yet because commentators persist in not just seeing a connection between food waste and hunger, but asserting that in addressing one we could address the other, I feel the need to extend the pre-schooler’s logic a bit.

The argument that we could address hunger by directly redistributing wasted food crumples with a whiff of logic and data. For starters, what gets thrown out is not what people need: in the U.S., nearly fifty percent of discarded calories are added sweeteners and fats. The model of food banks which the FAO trumpets for Latin America has been developed to its zenith in the U.S.—and yet hunger has actually grown since the explosion of private charity in the 1980s. The recent National Geographic feature on hunger inadvertently offers a pretty damning portrait:

By whatever name, the number of people going hungry has grown dramatically in the U.S., increasing to 48 million by 2012—a fivefold jump since the late 1960s, including an increase of 57 percent since the late 1990s. Privately run programs like food pantries and soup kitchens have mushroomed too. In 1980 there were a few hundred emergency food programs across the country; today there are 50,000…One in six reports running out of food at least once a year. In many European countries, by contrast, the number is closer to one in 20.

Food banks are a terrible way to address hunger because, as sociologist Janet Poppendieck documents, the food they offer is often insufficient, culturally inappropriate, nutritionally inadequate, unreliable, and heavily stigmatized. Flooding food banks with the subsidized corn-and-sugar-based “edible food-like substances” will not change this.

The more sophisticated commentators—like Tristram Stuart—accept that food waste does not directly snatch food from the mouths of the hungry, but claim that it still indirectly causes food insecurity by raising global prices. This, at least, squares with the basics of economic research on hunger and famine: that poor people do not go hungry for lack of food but for lack of money to buy food. One in six Americans is not going hungry because they walk into a grocery store and find the shelves unstocked; it’s their pockets that are empty. Hypothetically, if all the food currently going to waste were instead put on supermarket shelves, the supply would be so huge (since the world produces 4,600 kcal/person/day) that prices would plummet, and the poor could eat. Huzzah!

Of course, basic micro-economics also tells us that if the price plummets, so does production. It is a common trope that food waste happens because food is too cheap; yet, in truth, the overproduction behind food waste—and the overproduction that would underpin any redistributive scheme—actually depends on the artificially high price of food. If producers, distributors, and retailers could no longer pass the cost of waste onto consumers by inflating the price of what they sell, they would simply produce less. Adam Przeworski plays this thought experiment out and convincingly shows that there is no scenario under which we could feed everyone through a free market mechanism, and that feeding everyone would invariably undermine the free market.

Thrifty non-wasting practices, eating your leftovers, faith in God, volunteerism and charity, and unbridled free markets do not feed people. Adult discussions should start from the premise that there are two basic ways to address hunger. One is to increase the purchasing power of the poor to buy commodified food. We already do this, to an extent, with food stamps, but do so by reinforcing an unjust private food system (and subsidizing retailers like Wal-Mart, which pay their workers so little they qualify for SNAP). The alternative is to de-commodify food—that is, create a right to food not dependent on individuals’ capacity to pay or participation in the labor market. This has been tried in socialist countries and, more recently, in India. History suggests that it may help feed people, but at the cost of inefficiencies and the loss of the abundance, excessive choice, and convenience that a capitalist food system gives (some of) us.

“Food waste” is a powerful symbol of the dysfunction of our food system, and the coexistence of hunger and waste is as visceral a reminder as any of the insanity of free-market capitalism. But as a kind of “slack” which we could use to eradicate hunger, minimize our ecological footprint, and address socioeconomic inequality? Well, sometimes waste really is just garbage.

* I don’t know why it was always China for me. China ranks 42nd in food security. Better to say “Democratic Republic of the Congo,” or the post-industrial neighborhood by your suburb.

Should the Revolution Have a Vegan Option?

French people don’t give a shit about lifestyle politics.

Okay, it’s a generalization, but—despite my short residence and limited language skills—there’s some truth to it. I saw it on May Day, among the anarchists selling Coca-Cola products and candy bars, and I saw it at a “zero waste” conference where the attendees couldn’t manage to get their paper plates into the recycle bin. But mostly, I see it every time I go to an activist event with food—and, being France, there’s always food—and am confronted by the wholesale lack of anything vegetarian. Even the incredibly low bar of having something that is not pork, in a country with a sizeable Muslim population, is rarely reached.

As someone coming from the Bay—the land of gluten-free, of reusable shopping bags, of farmers’ market parking lots crammed with Priuses—this has been a bit of a jolt. But being in France has challenged me to re-evaluate my politics: in the face of climate change, of billions of animals slaughtered, of global labor exploitation, does the way I live actually matter? I’m pretty sure the chain-smoking, sweatshop-buying French communists I know would say “no.” In fact, in my (usually disastrous) forays into the French language, I’ve learned that talking about “la politique” makes no sense without referencing political parties or the state. In French, “lifestyle politics” is a bit like talking about “non-political politics.”

Much like universal healthcare, what is taken for granted in France is up for debate in the U.S. In fact, hating on lifestyle politics is totally huge on the left right now.* There’s the old Derrick Jensen article, “Forget Shorter Showers,” and the more recent “Stop Worrying About Your Carbon Footprint” that I’ve been sent three times. And—thankfully—sociologists have finally added their ever-weighty opinions to the matter. Exhibit one is Samantha MacBride’s (fantastic) Recycling Reconsidered, which pretty much dismembers the idea that recycling has any impact beyond making us feel good. Her conclusion that we need to “relegate notions of personal commitment and responsibility…to the backburner” pretty much sums up the post-lifestyle zeitgeist. It’s not just that buying local is useless—it’s that it actively detracts from useful things, be it direct action or harassing our Congressperson.

Okay, I get it: the old premise behind my veganism—that each of us could save 95 animals a year, starting today, thanks to the magical power of supply and demand—is bunk. Regardless of what the economists tell us, the world is not an aggregate of individual consumptive choices. But I still want to be vegan, and I’d like a reason a bit more utilitarian than just saying it’s the right thing to do.

Fortunately, even France managed to furnish a justification. I was at an anti-waste event the other week, sitting with a group of dumpster divers on the margins (scoffing at the respectable people in suits and their PowerPoint presentations and, you know, real political program for change), and one of the caterers on his break sat next to us. “Do you see how many beer bottles they’re leaving half-full?” he observed. “And they’re telling me I need to waste less?” Whatever we think of lifestyle politics, we have to acknowledge that, even as political actors, we are increasingly judged on personal criteria. We live in an age of heightened scrutiny, and our consistency matters—not because being consistent changes the world in itself, but because inconsistency is an easy excuse to discredit us.

There are other reasons, too, to keep taking shorter showers. We need to acknowledge the profound disempowerment that most people—even privileged people—feel today, and recognize that the one area where people do feel they have some efficacy is in their consumptive choices. If we are serious about the movement-building maxim of “starting where people are at,” we need to acknowledge that most people approach problems in their lives by purchasing something. What’s more, the glorying in the irrelevance of personal choices leaves me wondering how many activists actually want to live in the world they claim they’re trying to create through direct action and political organizing. Because guess what: you really are going to have to take shorter showers and eat less meat and buy less shit in our future utopia, even if you don’t see any of these as tactics to get us there.

Having a vegan option isn’t going to precipitate a revolution. But, to do a great injustice to Emma Goldman, it isn’t a revolution if I can’t have something ethical to eat.

- – – – -

* I mean this, of course, referring to the tiny leftist echo chambers in which I exist. It’s kind of like how my punk friends and I thought the Dropkick Murphys were a “huge” band in 2003 because more than two people at my high school had heard of them.

Freudian Shifts

Did you know that going on extremely long vacations and then forgetting about them used to be a socially legitimate—if not exactly socially acceptable—way to go crazy?

Maybe. It’s certainly not an original discovery of mine: it has come from reading Ian Hacking’s brilliant Mad Travelers, which explores the cultural niche, formed from anxiety about vagrancy and romantic celebration of tourism, within which “fugues”—long, aimless, unconscious journeys—flourished in 19th century France. My own version of madness (or maybe it’s just procrastination) has kept me from (re)embarking on any original research of my own. In the meantime, I’m content with methodically consuming the social-scientific literature on mental illness and vaguely imagining a future contribution to it in dissertation form.

In truth, I’ve never been as uncomfortable with social science as I am now. Perhaps that’s because I’ve previously always studied questions that, while important (“Will capitalism survive?” “Will we do anything about climate change?”) did not exactly bear on my day-to-day life. Reading about the “social construction” of mental illness, on the other hand, impinges on my own ongoing interpretation and processing of the last ten years of my life.

Prior to reading Hacking, I worked through Ethan Watters’ Crazy Like Us, which explores the globalization of American psychiatry through processes that range from clueless (telling post-Tsunami Sri Lankans that they really must be suffering from PTSD, American-style) to sinister (convincing Japanese people that melancholy was a disease and that only U.S. pharmaceuticals could cure it). His most interesting vignette describes a more inadvertent sort of mental colonization. Anorexia virtually didn’t exist in Hong Kong—despite the long-running penetration of Western advertising and ideals of body type—until a 14-year-old collapsed and died in public. Despite exhibiting virtually none of the symptoms that usually come alongside not-eating, such as obsessions with thinness or fear of fatness, the press widely publicized it as a case of “anorexia.” Almost overnight, an epidemic of anorexia emerged—not of people claiming to be anorexic, but people who weren’t before and suddenly were.

The take-away lesson is that every culture opens up certain avenues for expressing distress and shuts down others. This notion worries me a bit, because it fits too easily into the don’t-talk-about-suicide-because-it-gives-people-the-idea-to-kill-themselves narrative (which, frustratingly, seems to have some sociological data to support it). More proximately, I am uncomfortable with this because it challenges the assumption—to which I cling rather dearly—that my own depression is biology, pure and simple, and that it was medication that pulled me out of it. Radical, Foucauldian critiques of psychiatry and “pharmaceuticalization” as forms of social control fit neatly in with my political worldview, but less easily with the fact that I believe—and, in a way, need to believe—that I have been saved by Big Pharma and Western medicine.

This week, I’ve been plodding through Freud. I had read Civilization and Its Discontents during my brief flirtation with anarcho-primitivism in college, but I doubt I would ever have seen reading hundreds of pages about phallic symbols and infantile sexuality as a good use of time had I not taken a year off. Most of what Freud said apparently is wrong—psycho-analysis has been shown to be no more effective than anti-depressants, which is to say, apparently not very effective—and yet I’m finding, just as the sociology of mental health tells me they should, that his theories about the manifestations of mental illness have a way of making themselves true.

When I was really depressed, I didn’t dream at all; sleep was a form of blissful oblivion, a well-earned respite prior to the mornings, which were inevitably the worst (the fact that I had previously read that depressed people don’t dream and feel worst in the mornings is, perhaps, not coincidental). As my waking life has improved, my nocturnal existence has deteriorated: I have nightmares almost every night. I usually don’t remember much from them, except that they are—surprise!—about being depressed again.

And then came this week, when I read Freud’s Interpretation of Dreams. All of a sudden, my dreams are full of symbols—of hints of repression and sublimation and transference, a horde of thoughts that day-to-day existence as a sane person requires I keep at bay. And the weird thing is, I’m convinced that it’s not just that I’m noticing these symbols now: I really think that my dreams were fairly meaningless, and have now become meaningful, just as I have read that they should. And so I am reminded that studying oneself is a never-ending mindfuck, and that maybe it’d be more straightforward to crack capitalism than to crack my own brain.

 

Going back was the best of times, going back was the worst of times

Perhaps because the novelty—by which I mean an alcohol-accentuated tincture of horror and awe—has worn off, I’m not coming away from my fifth reunion with the same crazed list of stories as I had after, say, my Freshman year. There were no drunken alumni saving me from arrest at the hands of Mohawk-profiling P-safe officers; no rambling stories from Bill Fortenbaugh ’58 about the hookers we could expect at his 70th birthday party; no thieving of giant inflatable monkeys from the 35th (I’m still unclear about how that one happened).

Still, I think I “did” reunions pretty well. I went through the P-Rade with the band no less than three times and felt like I played my heart out despite dancing too energetically to read the music for songs I had never played before. Heavily inebriated, I ran into my thesis adviser on Poe Field. I managed a temporary coup d’état and convinced the percussion section to start “Children of Sanchez” for the umpteenth time. I swam in the fountain, got a 4:00 a.m. “Eggplant Parm without the Parm” from Hoagie Haven, and stayed up for a reunions sunrise (a first!). And my antics in the band office led one undergraduate officer—perhaps not realizing how much I would treasure the comment—to say that I really was the “hot mess” of band lore.

I list stories and antics and happenings because I always hope that, by adding them up, they will sum to three days of consistent and straightforward happiness. And, for most people, it seems like they do: my Facebook feed has been dominated for days with comments about the “best damn place of all” and the sheer joy of revisiting our alma mater. I imagine there’s a certain amount of posturing in that, but I more or less believe the sentiments are genuine. I wish I shared them, though.

Somewhere between the moments of blasting away on trumpet and catching up with my best friend on the deck of Terrace, there were what seemed like interminable periods of wandering around alone at the 5th, avoiding eye contact and fearing conversation. I hadn’t initially expected to spend the entire weekend with the band—not even most band alums do that—but then I realized that the alternative was walking around campus by myself, not sure if I did or didn’t want anyone to see me. It’s not that I’m not incredibly fortunate to have great friends from my class: only that interacting with them, with the attendant sense of “losing” them again as soon as the weekend was over, was hard for me to bear.

Depression is, in so many ways, all about struggling with your past. For some, it’s past trauma. For me, it’s an idealized sense of past happiness that I alternate between desperately wanting to relive—not in the “telling stories with old friends” sense, more the “build a time machine” sense—and wanting to wipe from my mind. When I walk around Princeton, I’m not sad because I see the room where I used to cut myself, the health center where I had to intern myself Freshman year, or the street where my roommate had to pull me away from oncoming traffic. No: I’m sad because I’m constantly thinking about the sense of wonder and meaning and community that I had there and yet never really managed to appreciate and which, at Berkeley, seems so impossibly out of reach.

Being me, I told myself this was my last reunion. Not in the sense that it’ll actually be my last, but the last where I feel like I can actually have conversations with undergraduates, play with the band, or dance drunkenly until 4 a.m. It also feels like my last because I’ve chosen to make coming back a logistical absurdity, whether I’m in France or California or England or anywhere else. I feel jealous of the people who can maintain a connection to Princeton after they graduate, and I frequently fantasize about coming back for a road trip or two each football season, but I’ve realized that I burn my bridges with the past every two years because I probably couldn’t get by any other way.

For me, at least, there’s wisdom that comes from the experience, and not just angst, which makes writing about it on my 27th birthday seem less pathetic and more edifying. When I first started to recover, I followed a pretty rigidly Benthamite pleasure-maximizing strategy, avoiding anything that might make me feel bad. Now that I know that I can break down a bit without falling off the deep end, though, I am realizing that depression can be part of the normal flow of experience—that it’s okay to go back and laugh and dance like an idiot and play trumpet and bask in the warmth of good friends and, yes, cry a little bit.