Hello, folks! Just a quick housekeeping announcement: For those of you who have read my site before, you may notice that it has a new name. I blogged at Life’s a Lap for many years, but recently changed the name to Noisy Gong. All posts, old and new, will be here (and you may have noticed that the old url redirects here as well). Thanks for reading!
Whenever we talk to one another, we use words. And for those words to communicate what we intend, they have to have a more-or-less agreed upon meaning. If I say “dog” and you picture a furry creature with four legs that barks, then we can move forward with our conversation. But if instead you picture a two-footed, winged creature covered in feathers, our conversation will quickly break down. To use a language is to participate in a community of meaning.
Of course, this doesn’t mean that words each have some complete, narrow definition. The word “dog” covers a nearly infinite number of possible creatures, from wolves to chihuahuas, and is also used figuratively to refer to anything from pained feet (“my dogs are really barking!”) to untrustworthy men (“he’s such a dog!”). Even so, the word can’t just mean anything, because in that case it would actually mean nothing at all. The boundaries that contain a word’s possible meanings are the very structure upon which our meaningful speech and writing are based.
This means that an ambiguous word is not very helpful to us. Indeed, most of us know immediately that vagueness is the enemy of clear communication. And yet, to the extent that we recognize vagueness, it isn’t much of a threat. If, for example, I say “so then I told him to leave me alone”, and we aren’t sure who “him” refers to, we know to ask, and thereby clarify the meaning of the sentence. Having recognized this vagueness, we can resolve it, and clarify our communication. So it is unrecognized vagueness that is the true enemy of clear communication. If two people are using the same word to mean different things–and neither of them realizes it–then no real communication will happen. Indeed, both people will simply be speaking past each other, rather than to each other.
This kind of hidden vagueness afflicts more of our communication than I think we’d like to admit, but today I’d like to just focus on how this vagueness afflicts one word in particular: Democracy. This is a word whose meaning we need to hunt down because it is both ubiquitous and powerful. It’s a keystone in much of our political, social, and cultural life. And yet I think it often means very different things to different people.
Although I am sure there are many different ways of understanding democracy, for now I’d like to focus on the two meanings that I think are most common, and most formative to our political culture.
The first meaning is something like this: democracy means a system of government in which the government interferes with individual life as little as possible (“The best government is that government which governs least.”) In this understanding, democracy is about removing the barriers that impede individuals from living autonomously. Most notably, advocates of this kind of democracy tend to focus on ensuring that individuals can make decisions about their private property with as little interference as possible. Indeed, we might call this vision of democracy, which is historically linked to small-“r” republicanism, “libertarian democracy” (though of course this adjective introduces its own vagueness). This approach understands democracy as simply releasing the power of property-owners to maximize their own welfare.
It’s important to see that while advocates for this understanding of democracy definitely want to limit the control the government has over them–i.e. kings are seen as very bad–they are at the same time generally very interested in maintaining their control over other people. Profiting off of private property always involves efficiently using the labor of other people. Much of the freedom that advocates of libertarian democracy want is the freedom to employ labor as cheaply as possible. This is how one gains the economic means to live prosperously. This is why, seemingly paradoxically, this vision of democracy frequently comes hand-in-hand with slavery. This relationship is evident both in ancient examples of this mode of democracy (e.g. Athens and Republican Rome) and, of course, in democracy as it was envisioned by the founders of the United States.
In short, libertarian democracy seeks to limit the control the government has over property owners in order to give those property owners maximum ability to exercise control over others–that is, labor. This vision of democracy is probably the more dominant in American culture; the fetish for constantly minimizing taxes, even among working-class people, speaks to the power of this vision. As John Steinbeck (maybe) said, “…the poor see themselves not as an exploited proletariat but as temporarily embarrassed millionaires.” We all seem to want to limit government’s control over us, so that we can use our skills, power, and connections to exploit each other. Because so many of us understand democracy in this way, we have a hard time imagining what else it might mean. But in truth, there is at least one other major tradition of understanding what it means to believe in democracy.
The libertarian vision of democracy is focused on releasing the individual (or at least some individuals) from any kind of control, restriction, or oversight from the state. The logic of this vision of democracy is perhaps best glimpsed in the work of Ayn Rand, whose political philosophy amounts to a fetishization of the individual’s will. Upon a moment of reflection, though, this might seem odd. Doesn’t the very term “democracy” come from two Greek words, demos, meaning “people”, and the root kratia, meaning “rule” or “power”? If so, then at first glance, democracy seems as if it should be focused not on the arbitrary autonomy of any individual, but rather on the will of the people more or less as a whole.
Indeed, this understanding of democracy is not absent from American political life. It is perhaps most clearly seen in two examples: first, whenever we talk about the importance of majority rule, and second, in the fact that the motto of the United States originally was the Latin phrase E pluribus unum: “Out of the many, one.” These elements of our political thought point to this different, second conception of democracy. Instead of understanding democracy negatively as a set of limitations on government, meant to free individuals (property owners) from any interference, this vision of democracy understands the democratic impulse as the desire to create a prosperous, harmonious, and just community. Understood this way, an individual may actually believe that it is his or her belief in democracy that should lead them to accept restrictions or discipline by the community and government. This is at the heart of the celebration of majority rule: if someone feels the need to uphold a law, value, or goal that they do not personally hold, but which the community has decided upon according to a democratic process, then this community-focused vision of democracy is in action.
We might call this second understanding of democracy “social democracy”, though like the term “libertarian democracy”, this term comes with its own vagueness, as well as a set of assumptions and biases that may cloud, rather than clarify, our meaning. This vision of democracy may not always be mutually exclusive with its libertarian counterpart, but there will often be tension between them. To cite just one example, the libertarian understanding of democracy will likely lead one to reject the idea of single-payer healthcare as a violation of democratic principles, because it will involve taxing property owners to provide healthcare for others. However, someone who asserts a social democratic understanding will draw the exact opposite conclusion: since government guarantees for healthcare are not only hugely popular, but also help to ensure a prosperous, harmonious, and just community, support for such a measure will be seen as necessary for anyone committed to democratic principles.
What’s crucial to see here is that our public discussions and debates are not enriched by arguments in which each side simply asserts “democracy” at each other. Since each side actually means something rather different when they utter this word, no real communication is happening in this case. It is only once we explore what this word means for each of the groups using it that we can see the real divide. Once we can see this, we are armed against overly simplistic debates in which two sides simply denounce each other as insufficiently democratic. Exploring the details of what each of us means by the word “democracy” allows us to get at the fundamental values that are really motivating us. For any real political discussion to happen, this is essential.
Contained within discussions and debates about democracy is a deeper conflict between conflicting priorities: individual autonomy, on the one hand, and the general welfare, on the other. The point is not that one of these is wholly good and the other wholly bad, but that each of them is good in some way and to some extent. Crafting a political system is about weighing the relative value of each, and admitting that we may have to make trade-offs in balancing which value we are pursuing. Seeing this allows us to perceive the difficult complexity of our political and economic life, and allows us to bring to the surface the real conflicts, while avoiding long-winded debates about the purely superficial.
In a recent article published in The Baffler, Tom Whyman suggests that we should not be so opposed to the “post-truth” era that so many insist is dawning upon us in the age of Trump and Brexit. Indeed, Whyman insists that what many consider the indubitable truth is really nothing more than a set of claims that benefits a small empowered group at the expense of the majority of humanity (that is, an ideology). And in pointing out that what is presented as truth is often nothing more than an attempt to deceive exploited people so that they cannot even admit that they are exploited–much less to actively resist that exploitation–I think that Whyman offers a valuable reflection.
However, the way that he discusses the term “reality” raises a number of concerns for me, and points to a serious problem that I think has infected a lot of contemporary discourse. Perhaps the most troubling passage comes in the sixth paragraph, as Whyman discusses the work of Herbert Marcuse:
For Marcuse, “reality” is constructed by means of the Freudian reality principle, through which the infant psyche learns to delay gratification in response to the fact of scarcity. This process forges the ego from a portion of the id, and as the infant develops it leads in turn to the formation of the superego, in the first instance through the child’s dependence on its parents. Over time, the superego absorbs “a number of societal and cultural influences,” causing it to “coagulate” into “the representative of established morality.” The superego ends up enforcing the demands of what Marcuse calls the “performance principle,” which is his term for the “prevailing historical form” of the reality principle. In short: the superego, one’s “conscience,” acts to enforce prevailing social norms.
On the one hand, this strikes me as a wonderfully concise synopsis of Marcuse’s central point (not being a scholar of Marcuse, I can’t vouch for its accuracy–but as someone interested in social theory, it strikes me as insightful and useful). But the way in which Whyman uses the term “reality” here immediately caused me consternation (I should be clear that I do not know whether this terminological vagueness is present in Marcuse or whether this is Whyman’s own addition.)
Whyman presents two options for understanding reality: the first is the more common idea, that reality describes that set of existing circumstances which have their existence or being independent from any mind and which constrain human thought and action. Whyman questions this conception by claiming that what is often presented as reality is really little more than slick propaganda:
Imagine if it wasn’t really “true” that your landlord owned your flat, and you could stay indefinitely without paying rent. Imagine if it wasn’t “true” that your boss was paying you to fulfil any particular duties at work, and you could spend your time there playfully doing whatever. Imagine if the laws of physics didn’t bind you, and you could simply flap your arms and fly to the stars.
The first “alternate reality” he presents is really a critique of the concept of property–Whyman is suggesting, I think, that we could create a different set of social relations, a different way to decide how to distribute scarce resources. Such a claim need not question the idea of reality in general, but simply suggests that the building blocks of the real can–and should–be rearranged to meet human needs.
The second alternative above, however, seems to me to move in a different direction. Here, Whyman seems to be moving from the revolutionary towards the utopian. And in the final suggestion, he moves towards science fiction. And it is this juxtaposition that is concerning. That Whyman seems to think that to question reality in the first sense is no different from questioning it in the second or third suggests a seriously deficient conceptual analysis on his part. It seems that, fundamentally, he is working with a binary, discrete understanding of the term “reality”–either reality is just what those in power say it is, or it’s nothing at all. This warped and overly simplistic way of thinking, which strikes me as a sort of metastasis of the law of the excluded middle, renders Whyman’s piece, which begins with such a worthwhile impetus, deeply misleading.
For Whyman, the options before us are stark and irreducible: we can either accept the status quo, or commit to a quixotic project of simply fantasizing our way out of difficulty. What is perhaps most perplexing about this suggestion is how thoroughly un-Marxist it is. Whyman suggests that Marcuse offers the perfect synthesis of Marx and Freud, but for Marx, the economic base was, and would always be, the reality which defined the political and cultural options that humanity could truly act on (the possible “superstructures”). By de-coupling Marx from this realism, we get an odd creature, a sort of inverse, positive-thinking ersatz Marxism that strikes me as simply an opium of the people for the 21st century: imagine what you want and ignore your material circumstances.
What is necessary here is not to marshal better arguments for one of the two sides that Whyman presents, but rather to realize that the very structure of argumentation that he offers is mistaken: there are more than two options on the table. We can both affirm that there is a reality which constrains us and yet also affirm that this reality is flexible enough to yield a more human and liberative society. If we begin by accepting the framework that Whyman offers, however, such a possibility is foreclosed upon before we can even consider it.
Most of all, what is needed is sound conceptual analysis–sustained reflection on the terms we use–and then conceptual synthesis–recognition of the way in which our understanding of any given concept shapes the way we understand concepts related to it. By simply employing our terms without reflecting on them, critiquing them, and developing them–and this is what I think Whyman does in this piece–we understand little and achieve nothing. It’s far too easy to allow a dichotomous mode of thinking to colonize our imagination. Whyman seems to engage in a simplistic, knee-jerk reasoning: the inverse of my opponent’s position must be true. But in fact, the inverse of my opponent’s position is awfully similar to my opponent’s position, just turned inside-out; in seeking freedom from oppression and exploitation, we actually reproduce its form even if we negate its content. What is needed is something genuinely different.
I hope to have shown above the error in Whyman’s mode of reasoning; wanting to explore the ways in which public discussions of truth and reality often offer only propaganda, he short-circuits the full discussion before it can even begin. Though he offers the beginning of a cogent critique, that critique never develops, since what he offers in place of what he opposes is little more than its negative-image. What would greatly enrich his argument is simply a less-vague sense of the word “reality”. Whyman employs this term without adjective or qualification, and this leads to a problem for the reader: what, exactly, does Whyman mean by this word?
On the one hand, it seems that at times he means by “reality” something more like “perceived reality”; that is, he seems to be pointing out that how things actually are and how they may seem to any particular observer can be quite different. On its face, I think this is hardly even controversial. A more stringent and perhaps more controversial–but still, I think, very sound–claim would go slightly further: since the reality discussed by any individual or community is always reality as perceived by that individual or community, the reality we talk about can never be reality-as-such. This will ruffle some feathers, no doubt, but it’s a position well-attested by a range of philosophers: not only Immanuel Kant, C.S. Peirce, Edmund Husserl, and Emmanuel Levinas–but also a deeper lineage of critical thought that stretches back to the Stoics.
Understood in this way, reality–the “real” reality–is always, ultimately, beyond our grasp to fully determine. But, importantly, this is not the same thing as saying that reality is somehow unreal or utterly absent. It is important to be able to critique our perceptions of reality–and even to go as far as to realize the radical implications of this–without allowing this critique to collapse into a naive anti-realism. Unfortunately, the history of philosophy from the late-19th century on shows that there is a lineage of thought that seems to make precisely this error, confusing the unavailability of any total certainty about reality with the conclusion that reality must simply be absent, false, and meaningless in any sense.
Thus, Whyman seems to counter what we might call a naive perceptual-realism–“what I see just is real”–with the aforementioned naive anti-realism–“since what I see isn’t necessarily the real, there must be no real”. Presented this way, I hope it is not hard to see how the latter position is really just the inverse of the former; both take perception itself as the unquestioned starting-point. But the impact of postmodern thought should be to question perception, not reality as such. Whyman ultimately fails to do this, as far as I can see, and thus falls into a trap that seems to have befuddled many other thinkers (Nietzsche, Sartre, and Baudrillard all come to mind as probable examples, though of course this claim is not without controversy).
But real and valuable critique of the ways in which the perceived reality of the powerful is used to oppress others can only come about if we get comfortable occupying the awkward middle place between these two naivetes. We must be able to both recognize that our perception fails to meet reality as it is, and yet also admit that there is some reality that nonetheless constrains us, even if determining its full details remains beyond our ability (for a technical approach to appreciating this, the first third of Kant’s Critique of Pure Reason is of immense, if at times opaque, value). Living in this space will begin to show us new ways of thinking about our world and ourselves, and will also begin to reveal new options for how to organize our common life. Anything less is simply to repeat the old ways while calling them new.
“Freedom” is a word beloved by Americans, both left and right, liberal and conservative. No one in this country would ever explain their own political philosophy by saying, “basically, I’m against freedom.” Even those who wish to control others always present that control as a mode of liberation. Everyone argues that they (and generally, they alone) are struggling against a sea of nefarious opponents to deliver true freedom to the world. But if that’s the case, why are we still struggling? If everyone agrees that freedom is good–indeed, the good–why haven’t we achieved it? If everyone is for freedom, then no one can be against it–so why is it always receding off into the distance?
And furthermore, isn’t this a rather strange situation, that everyone would agree–in word, if rarely in deed–that freedom is the proper goal of all human political and economic activity? How is it that Republicans, Democrats, Greens, Libertarians, Marxists, and even Fascists all alike present their programs as struggles for freedom?
Perhaps this is only branding? That is: perhaps only one of these groups is really fighting for true freedom, but the other groups, having seen how popular it is, chose to parrot this in their PR? Even so, such a universal respect for something that so often seems controversial is still hard to explain. Everyone loves freedom, and everyone presents their opponents as the enemies of freedom. What’s going on here?
Considering how many different political groups all champion freedom, it isn’t surprising to find that they each understand the concept somewhat differently. What is surprising is how much continuity there nonetheless is between these various understandings of human freedom. Such a curious situation demands further attention, yet our enthusiasm for freedom has tended to result in less, not more, intellectual scrutiny towards the concept: when everyone agrees about something, it’s not likely to get discussed much. How often do we ramble on and on about how important breathing is?
The fact that these contradictions about freedom simultaneously sit right at the apex of our political culture and yet are simultaneously almost never explored suggests that ideology is at work here. Though in common English, the word “ideology” is generally synonymous with “a system of political and economic ideas”, in certain corners of the social sciences, the word has taken on a more technical meaning. In this sense, an ideology is an existent social system–that is to say, it’s not just a set of ideas, but is actually the social structure that truly pertains in the present–that actively seeks to obscure itself. Ideologies are social systems that maintain their dominance, at least in part, by hiding from plain view.
This may seem odd, but an example can flesh this out. Perhaps the most obvious and oft-repeated one is the claim by defenders of laissez-faire economics that unfettered capitalism is the natural method of allocating scarce resources. Note that word, “natural”. Using this word makes this particular politico-economic system seem to be the given state of affairs–as if no one chose it, as if no one in particular benefits from it, and as if no alternatives are really possible. Presenting the current social structure as “natural” is an effective rhetorical tactic. Anyone who argues against such a structure can easily be denounced as uneducated, unrealistic, or immature. Once a given social system is presented–and received by the public–as “natural”, it becomes much harder to challenge. After all, how many political movements oppose gravity? If a system can present itself as inevitable as gravity, it will be nearly impossible to displace.
This is how ideologies function. They press certain contingent social structures onto populations, and then cover their own tracks, convincing a majority of the people living under them that they are natural, irreversible, absolute. And, of course, it’s not only defenders of capitalism that are guilty of this maneuver. Most Marxists argue that only they have a political program developed from an objective understanding of the science of human history; likewise, many religious institutions try to claim that only their view of spirituality reflects human, natural, and divine realities as they truly are.
But what does any of this have to do with the ubiquity–and simultaneous vagueness–of the word “freedom” in western political discourse?
Broadly speaking, especially in the West, “freedom” always has two aspects: it is freedom of the individual, and it is negative freedom. To say that we westerners celebrate “negative” freedom is just to say that we understand freedom as freedom from other people. Freedom of religion means that others can neither prevent nor require my religious practice. Likewise, freedom of speech means that the state may not prevent me from speaking my mind. And this leads to the other aspect: such freedom is always of the individual: it is the individual who can be free, it is the individual who strives to be free. Individuals strive to be free of the state, of natural events, and of other individuals.
One can characterize this understanding of freedom as ideological because it forecloses on other possible understandings of freedom without ever alluding to the fact that such alternate views of freedom are even possible. During the Cold War, however, Eastern Bloc states made a point of arguing for a “positive” conception of freedom: freedom to certain things, rather than only freedom from certain people or institutions: freedom to work, freedom to health care, freedom to meaningful social interaction, etc.
This critique of a purely negative conception of freedom is therefore not unheard of, even if it is rare in the west and, indeed, utterly absent in any mainstream political discourse. But the second dimension of the ideology of freedom–that freedom is always freedom of the individual–generally receives less attention. Again, “orthodox” Marxist theory has generally critiqued this assumption as well, though perhaps less consistently, and certainly less successfully. By and large, especially since the fall of the Soviet Union, most western Marxists have attempted to cast themselves in language that is more friendly to liberal conceptions of freedom (that is, the one we have been discussing above). In fact, this “softer” Marxism runs back into the pre-World War II days, when some intellectual Marxists attempted to present a more humanistic approach to Marxist theory (e.g. Walter Benjamin).
So even the primary pole of opposition to liberal capitalist hegemony has had a hard time sustaining the idea of freedom outside the confines of the individual. And the fact that this dimension of the ideology of freedom has been harder to name and counter makes a lot of sense–opposing a purely negative conception of freedom is easy to do because modern people understand the need for things like work, medical care, and education, and so the idea that freedom could be “for” as much as “from” is easy to grasp, even if it ends up having little actual political traction. But individualism is a much harder nut to crack. The very way in which most modern people understand themselves is through the lens of individuality. We see ourselves, separate people, as the subjects and agents of existence. This extends well beyond the realm of politics. In our romantic lives, in our spiritual lives, in our day-to-day activities, western culture is, through and through, a culture of individual experience and identity. The watchword of the 21st century is, I think, “authenticity”. Authenticity, not to one’s region, or ethnicity, or history, or religion–but to self.
Political projects are understood to be good or bad to the extent that they maximize the potential for individuals to act authentically. But is this the only way to characterize freedom? Is it possible to have a conception of freedom that is social, rather than individual? Of course, the idea of freedom for certain groups is not new–nationalists constantly decry the restrictions on their nations–but this attitude towards freedom is still fundamentally anti-social; that is to say: zero sum. One nation’s freedom necessarily means the loss of rights, property, or power on the part of some other nation.
A truly social understanding of freedom would seek to create social institutions that free people for one another, not just from one another. Such an understanding of freedom would be much harder to articulate than a positive conception of freedom alongside the negative, because it would require a completely new mode of subjectivity–we would have to know ourselves, and each other, in a new and different way, because the very way in which we understand self and other today already has inscribed in it the zero sum competition of individual against individual. The possibility of freedom with one another has already been foreclosed upon by the reality of our social relations. Only able to witness, and imagine, freedom from one another, we reproduce these social relations in our constant struggle to achieve more freedom for ourselves at the expense of others. We can’t imagine being anyone other than who we are–even if who we are now is profoundly unfree.
What exactly this kind of social, rather than individual, freedom could mean is not yet clear–and, I think, at this time, cannot be. And this is precisely because any change in the understanding of freedom (since this concept is so essential to the very way we understand ourselves, our identities, and the societies in which we live) would result in a completely different way of thinking. At this stage, I think it is only possible to discuss the limits of the current structure of our subjectivity, to continue to whittle away at its foundations. The answer is still well over the horizon–but we can ask the question today.
An example will be illustrative. When it comes to discussions of racial justice, many on the progressive Left are fond of saying that white supremacy is bad for white people as well as for people of color, and that therefore the struggle for racial justice is something that everyone should be able to get behind. Whatever one thinks about the factuality of this claim, it’s clear that the goal of this kind of rhetoric is to produce and maintain a sense of solidarity–that much-beloved (and oft-overused) term among leftists. To the extent, the thinking goes, that we can get white people to believe that anti-racist agitation, legislation, and direct action are good for them as well as for their non-white neighbors, we reduce the obstacles to achieving racial justice.
So far as it goes, of course, this makes sense. But there’s a problem: what if many white people realize that anti-racism won’t always benefit them, or that it will benefit them in some ways while harming them in others? Or that it will benefit them in the long-term, but not in the short-term? The problem is that this claim about the universal benefits of racial justice stumbles over the gritty details of our actual social existence. It would be ridiculous to deny that at least some white people benefit some of the time in concrete ways from white supremacy. Indeed, that’s the whole point of deploying the term “white privilege”: white supremacy gives white people real and desired advantages. So which is it? Is white supremacy ultimately a social structure that gives real, material advantage to white people? Or is it an obstacle to the welfare of all people–including white people–and therefore something that we can easily develop solidarity in resistance to?
Of course, I have oversimplified the reality of white supremacy as an existent social structure. The fact of the matter is that some white people benefit far more than others, and that white people in general both benefit in some ways and pay in others. It’s not possible to easily quantify the cost/benefit impact of white supremacy on white people in general or even on specific white individuals. This is especially true for white working-class people, for whom white supremacy provides both benefits–more likelihood of being hired, generally higher wages, much less chance of violence from police, etc.–and real costs, since a working class divided by race will generate–and very clearly has generated–lower wages, fewer benefits, and less occupational security for all working people as a whole. And of course, this is the perverse genius of racism’s appeal to white workers in the US: it both acts to discipline and impoverish them while simultaneously drafting them to uphold, through political violence against their fellow workers (of color), the very system that limits their political possibilities.
Considering the complex nature of the situation makes it clear that, whatever we would like to believe, simply stating that racism hurts white people and therefore white people should be eager to combat it is imprecise at best, and disingenuous at worst. A more honest call to racial justice would take a different course, especially if one were speaking to middle-class or upper-class whites, whose benefiting from white supremacy is almost completely unalloyed by class costs: “white supremacy helps you, but you should resist it anyway.”
But there is an obvious question that they would raise here: “why should I?” And here our survey of the concept of freedom above is essential. To the extent that we understand the only proper goal of a socio-political system to be increasing the autonomy of individuals to act in their own interest–so long as this is what we mean by “freedom”–it’s clearly nonsensical to call on (especially middle- and upper-class) white people to resist white supremacy. They are clearly, and materially, benefiting from the structures of racism. And it must be noted that while poor and working-class whites may actually benefit more clearly from an end to white supremacy in some ways, many of them nonetheless identify so strongly with the accepted notion of freedom that they will respond to calls for racial justice as if they were solidly middle-class: the American “poor see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires“. If politics is nothing more than the competition of individuals to maximize their own autonomy and access to property, why would any white individual choose to dismantle such an effective tool as white supremacy?
It is this question, I think, that can function as a crowbar at the site of the contradictions that must be explored here. Most Americans seem to want both to understand themselves as individual political actors, cherishing their freedom to live their lives unfettered by others, and to see themselves as moral beings who act in ethically consistent ways. But these two visions of human life are mutually exclusive. To the extent that one believes individual humans to have an absolute right to individual freedom, one cannot maintain the idea that humans might have an ethical responsibility to care for one another. One could, of course, choose, as an individual, to try to live up to some set of moral standards for a more or less arbitrary reason, but such an ethics would have no social, political, or philosophical force behind it. It would be a lifestyle choice, not a call to justice.
So, for progressives, leftists, and radicals, which is it? Do we understand ourselves primarily as individuals seeking unfettered freedom of action? If so, it’s hard to see how we can avoid endorsing a more-or-less libertarian view of government action, even if we adopt socially liberal attitudes towards sexuality, drug use, and entertainment along with our laissez-faire economics. Or do we find ourselves committed to a set of ethical claims about our responsibility to care for each other? In this case, we have a strong intellectual basis for arguing for a more-or-less socialist mode of government, in which individuals sacrifice some degree of arbitrary autonomy in order to create a more just and more equitable society.
If we choose this latter option, though, we will need to think long and hard about what we mean when we say we are fighting for freedom, since the meaning that word generally carries in most of its contemporary uses will, I hope to have shown above, no longer be consistent with our political vision. This is not to say that Freedom and Justice are somehow mutually exclusive in an absolute sense, but rather that this particular understanding of freedom–which is the dominant and default one–is not consistent with our understanding of, and commitments to, social and economic justice. I think there can be a fruitful and mutually-reinforcing intersection of these two ideas as they inform our political and social vision of the future–but I don’t think we’ve arrived at that intersection yet. Instead, we often find ourselves trying to talk out of both sides of our mouths, as discussed above in reference to leftist discourse around white supremacy.
If we honestly believe that combating white supremacy will materially harm at least some white people (and indeed, all white people in at least some ways for at least some period of time), we should be honest about this and then still call for whites to fight for justice. But this will involve developing a political discourse that sees freedom as one good among many others, rather than as the absolute and only political good. The difficulty is that, as I have suggested throughout this essay, American political discourse has, most of the time, functioned according to a freedom-first or indeed freedom-only paradigm. All of this is to say that if we want to organize people around social and economic justice, we cannot simply try to insert our content into the form of political discourse that currently exists. If I may be allowed a short Biblical reference: we cannot put new wine into old wineskins. Our politics is not just a variation of liberal democracy. We are not proposing some tweaks to and tinkering with capitalism. We are calling for a radically different mode of social, political, and economic organization. We are calling not just for some new political content, but for wholly new forms of political life, for a completely different way for individuals to relate to one another. We cannot pretend that such a radical vision can be communicated with the political terms and assumptions of the very system we find so problematic–and yet, generally speaking, this is what we do.
This means that struggling against capitalism will mean imagining not only a different way of working, voting, and allocating resources, but a different way of thinking and indeed of existing as social creatures. Above all else, we need a completely new discourse, a new set of fundamental political terms to build our discussions on. The trouble is that the left in the US seems, more often than not, to simply try to radicalize the terms and assumptions of centrist liberalism, as if one could accept the need for socialism simply by reading Paul Krugman and then multiplying his position by ten. But this just isn’t the case. The very basis for what constitutes good and just governance is completely different for liberals than for socialists.
Part of what this means is that we need to be as focused on “theory” as on “action” (though dividing these two things as if they are not mutually interdependent is itself, I think, a faulty mode of thinking). Imagination must be seen as a critical political tool. We have to be able to imagine different ways of living together before we can be expected to work towards them; we cannot arrive at our destination if we have no idea where we are going. Much of what passes for “radical” thought today is, I think, little more than metastasized liberalism. Calls for “social justice” too often simply mask attempts to gain leverage within the structures of capitalist decision-making, rather than attempts to dismantle this system. We have to recognize the limits not only of the outcomes of the systems we are struggling against, but also the conceptual and linguistic foundations upon which those systems are built.
As I admitted above, I will not pretend to have any idea of exactly what structures of thought can replace those which we are struggling to break free from. Many leftists will attempt to build upon the work of Marxist thought; religious progressives and radicals may prefer to build on their own spiritual traditions for clues on how to build new modes of subjectivity. Anarchists and syndicalists will doubtless offer their own critiques (though the essential dimension of individual freedom for this lineage of radical thought must, I think, be admitted and addressed). I don’t have the answer, but I am convinced that we must incessantly, loudly, and seriously ask these questions about the very foundations of our political ethics if we want to have any idea of how to move forward.
Giles Fraser recently published a short opinion piece at the Guardian arguing that the problem with religiously-motivated terrorism is not that such terrorists–like the man who drove a truck into crowds on Bastille Day this past summer in Nice, France–are too religious, but that in fact they are not religious enough. Fraser goes on to argue an important theological point:
It’s a very basic point. The truth of God’s existence does not depend on me. It does not depend on me filling my church with believers at midnight mass. Nor does it depend on me (or anyone else) winning or losing arguments about God’s existence on Twitter. God is not like a political party that lives or dies on its support or lack of it.
Fraser is reiterating a fundamental theological doctrine central to the Abrahamic faiths: that of God’s utter sovereignty. God creates but is not created. God upholds, but is upheld by nothing except God’s own self. God defines without being defined. Fraser’s argument is simple: those of us who profess religious faith should be “more extreme” in our total reliance on God–and this should lead to less terrorism and less religious coercion rather than more. The more we depend on God, he argues, the less we will try to act as God’s guardians or agents. The more secure we are in our faith in God, a faith based on God’s solidity and not our own confidence or energy, the less anxious we will feel, the less need we will have to assert our beliefs on others.
To some extent, this strikes me as a good argument. Certainly, I will always applaud any public declaration of this kind of theology. Asserting the super-ontic, as it were, primacy and security of God over and above the material world or human thought and activity is something we need more of, and it’s refreshing to find this kind of discourse in the Guardian, which is not known as a place one goes for metaphysical subtlety (this is of course not a critique, as the Guardian is a newspaper generally focused on current events).
And yet, I have to say I have a problem with Fraser’s argument. While it may be the case that we believers in God need not defend God’s being or honor in public, and that we need to trust God more and our own actions less, I worry that, taken on its face, his argument could lead to a sort of religious quietism: trusting in the goodness of God while the world burns.
But this kind of extreme, to borrow Fraser’s own diction, understanding of God’s sovereignty and power is, in fact, un-Scriptural. It is certainly true that the Bible–both the Hebrew Bible and the much shorter Christian New Testament–frequently acclaims God’s ineffability, power, and utter sovereignty, yet both texts also make it clear that faith must always mean action. It’s true that God doesn’t need us in order to be Real, in order to be God. But! God does call us to action, to serve a broken world, to heal wounded people, to speak truth in a time of falsehood. God may not need us, but God’s world does.
Perhaps the clearest expression of this is in the famous passage of the goats and the sheep in Matthew 25:31-46. I quote it here at length and encourage you to read it, even if it is familiar to you:
‘When the Son of Man comes in his glory, and all the angels with him, then he will sit on the throne of his glory. All the nations will be gathered before him, and he will separate people one from another as a shepherd separates the sheep from the goats, and he will put the sheep at his right hand and the goats at the left. Then the king will say to those at his right hand, “Come, you that are blessed by my Father, inherit the kingdom prepared for you from the foundation of the world; for I was hungry and you gave me food, I was thirsty and you gave me something to drink, I was a stranger and you welcomed me, I was naked and you gave me clothing, I was sick and you took care of me, I was in prison and you visited me.” Then the righteous will answer him, “Lord, when was it that we saw you hungry and gave you food, or thirsty and gave you something to drink? And when was it that we saw you a stranger and welcomed you, or naked and gave you clothing? And when was it that we saw you sick or in prison and visited you?” And the king will answer them, “Truly I tell you, just as you did it to one of the least of these who are members of my family, you did it to me.” Then he will say to those at his left hand, “You that are accursed, depart from me into the eternal fire prepared for the devil and his angels; for I was hungry and you gave me no food, I was thirsty and you gave me nothing to drink, I was a stranger and you did not welcome me, naked and you did not give me clothing, sick and in prison and you did not visit me.” Then they also will answer, “Lord, when was it that we saw you hungry or thirsty or a stranger or naked or sick or in prison, and did not take care of you?” Then he will answer them, “Truly I tell you, just as you did not do it to one of the least of these, you did not do it to me.” And these will go away into eternal punishment, but the righteous into eternal life.’
It is easy to miss the central thrust of this passage by dwelling on the implicit threat it contains, or by snickering over the comparison of Christ’s followers to “sheep”. But note the main point Jesus is making: those who care for those in need have already entered ‘the Kingdom’; they are already doing the work of building the just and peaceful reign of God in the world. Meanwhile, those who profess faith while refusing to live that faith are proving themselves to be obstacles to God’s work, God’s plan for a creation imbued with justice and love.
That is to say: “extreme faith”, as Fraser calls us to have, should not lead us to disengage from politics, social action, or advocacy for what we hold to be true or right. This point can be summarized even more succinctly by John 14:15–“If you love me, keep my commandments.” One who professes faith in a sovereign God but refuses to endeavor to live a renewed life of love in light of that faith does not really have faith at all. Or, as St. James put it, much to Martin Luther’s later chagrin: “Faith without works is dead” (James 2:17).
Thus, I worry that Fraser has oversimplified what it would mean to live an “extreme” faith. I agree wholly with him that those who kill, exploit, enslave, or disregard others in the name of God are indeed not nearly religious enough. But I disagree with his conclusion that this means that religious people ought to retreat from acting on their faith. It’s just that we must be very clear about what kind of action God calls from us. Let’s let Jesus’s words guide us again:
‘Beware of false prophets, who come to you in sheep’s clothing but inwardly are ravenous wolves. You will know them by their fruits. Are grapes gathered from thorns, or figs from thistles? In the same way, every good tree bears good fruit, but the bad tree bears bad fruit. A good tree cannot bear bad fruit, nor can a bad tree bear good fruit. Every tree that does not bear good fruit is cut down and thrown into the fire. Thus you will know them by their fruits.’ (Matt. 7:15-19)
Those who are truly, and “extremely”, religious, will be people whose fruits are acts of love, kindness, compassion, social and economic justice. This means refusing to use force and violence in the name of God, to be sure, but it does not mean retreating from all religiously-inspired activity. To do so would be to abdicate our responsibility to build the just and loving society God calls the human community to be.
Critics of Christianity and Christian apologists alike tend to approach Christian truth-claims as a set of content which either is, or is not, to be accepted, according to some previously determined–though generally unspoken–set of criteria. Public discourse occurs within various language communities, with sets of rules for determining whether a given argument is received as successful or not. Though individuals, smaller sub-communities, and sub-cultures frequently have their own sets of rules for making some judgments, there are nonetheless cultural logics which are effectively hegemonic, which provide a basic set of axioms and logics that most people, most of the time, employ when making or evaluating public arguments.
The actual set of such meta-rubrics, as we might provisionally call them, changes over time. For a long period of late antiquity and the early medieval period in western Europe, neo-Platonism provided the basic structure for public reasoning, as witnessed in the works of writers like Augustine or John Scotus Eriugena. In the middle period of the medieval age, Aristotelian logic and metaphysics became ascendant in the west, especially through the work of Thomas Aquinas, who had the benefit of Arabic translations of works lost to the Latin West. (And it is of course worth noting how much overlap there is between Platonic and Aristotelian modes of reasoning.) At the end of the medieval period, both of these threads–the Platonic, still represented by the work of many Franciscans, like Bonaventure, and the Aristotelian, generally employed by Aquinas’s fellow Dominicans–faced a crisis of confidence which led to new attempts at “first philosophy”, most notably in the work of Rene Descartes and John Locke. These and other figures of the 17th century ushered in “modern” thought, which has itself both evolved and been challenged by the panoply of critiques covered under the umbrella label of “post-modernism”. Nonetheless, some version of modernism still provides, in modified form, the basic set of criteria which most people employ in public discourse.
What exactly qualifies as the basic set of modernism’s truth-criteria is, of course, up for debate, but some basic assumptions like the following are generally seen as uncontroversial: the homogeneity of space and time, the limitation of the real to the perceivable or conceivable, the sufficiency of human reasoning to draw all valid inferences, and the individual thinker as the final arbiter of truth-claims. Each of these has come under some fire from both traditionalists (e.g. Thomists) and postmodernists (e.g. Foucault or Baudrillard), but by and large modernism’s basic structure of reasoning remains in force throughout most public discourse today–not only in the West (wherever exactly that is) but, increasingly, throughout the whole world.
Thus, not only the critics of Christian truth-claims but most apologists engage in their rhetorical struggles on this ground. Christian claims are understood as particular content–a set of pronouncements, each of which must be evaluated according to this over-arching meta-logic. While, in one sense, this is unavoidable and is ultimately necessary, I think there is reason to see such an approach to arguing about Christian truth-claims as highly problematic. This is because I think it is a mistake to see Christianity–or indeed any serious political or religious system of thought–as simply a collection of claims to be evaluated by whatever set of logics a given thinker happens to value and employ. Instead, such systems are, well, systematic. Christianity is not just a collection of claims (“there is only one God”, “Jesus is God incarnate”, “Jesus was raised from the dead”, etc.) but rather a complete system of thought–a set of axioms and logics that underpins those truth claims. That is to say: the claims, the content, arrive after one engages in reasoning according to the axioms and logics of the system. If one attempts to make sense of Christian claims apart from Christian axioms and logics, then it is unlikely one will find any of them convincing; indeed, one may not even be able to make sense of some of them in such a context.
Now, this is not to say that making judgments about Christian truth-claims outside of Christian axioms and logics is “wrong”; indeed, this is the whole point: to say that a given argument is wrong, or its conclusion false, or its method invalid, is to make a judgment–and as suggested above, whenever a judgment is made, one should ask: according to what criteria have you made this judgment? Judging, for example, Christian truth-claims according to certain modern logics–especially according to the assumptions of positivism–will yield one set of conclusions, while judging them according to other structures of thought will yield quite different ones. Likewise, judging the act-of-judging Christian truth-claims according to, say, a certain set of traditional Christian axioms and logics will yield one set of conclusions, while judging this act-of-judging according to modern logics will likely yield another. That’s a huge part of the difficulty in even discussing this issue. When we reason about the criteria we employ to make judgments, we are employing criteria to make judgments–we are making judgments about how to make judgments. The chicken and the egg both arrive on the horizon, but we never reach either.
However, the difficulty in seeing the complexity of this situation only gets worse when we recognize that, actually, few humans employ only one totally consistent system of criteria. Most people employ at least two, and frequently they blend them in unpredictable ways. Thus, many Christians employ one set of axioms and logics when thinking about “religious” issues, but then, at their jobs, employ very different sets. Likewise, many non-religious people employ different sets of reasonings when thinking about scientific questions than they do when thinking about ethical or moral ones. Frequently, these systems of thought are incompatible, to the point of being mutually exclusive. (This is actually not the case, I think, between science and religiosity broadly conceived; but consider the gap between evolutionary logic and the political ethics that most people hold dear, and a more glaring inconsistency makes itself clear. But this is a topic for a separate post.)
So each of us individually, and groups of us as communities, find ourselves attempting to adjudicate various truth-claims, many of which rest on differing assumptions, according to a blended set of axioms and logics, which are often actually rather obscure–that is to say, we often reason without being conscious of the rules according to which we reason. If and when we attempt to reflect on our own processes of reasoning, we find an intellectual minefield, in which we have to employ the very criteria we wish to observe and study in our studies and observations. Reasoning about reasoning ends up resembling the unfortunate Ouroboros.
Of course, to some extent, this is just repeating some of the critiques leveled at modern logic over the past two centuries by thinkers often grouped together as post-modern (despite that term being largely indeterminate–though again, the difficulty of making sense of this term can’t be dwelled on here). But one should not allow any association with post-modernity–often a term of abuse employed by lazy thought that doesn’t want to truly engage critique–to obscure the real difficulty here. What I hope I have shown above is that the serious, self-conscious evaluation of any truth-claim always comes with an immense amount of baggage. Truth-claims do not sit, pristine and discrete, to be evaluated by some kind of Archimedean Reason. They always exist within a context and system of reasoning and axioms which themselves must be explored if we are to truly understand the original claims. But this is extremely difficult, the work of a lifetime.
Above, we’ve run through an extremely truncated summary of the formal dimension of the difficulty I want to discuss here: we have talked about reasoning in the abstract, outlining how any act of judgment is always an act of applying some particular set of criteria to a given question, and observed that this leaves questions about which criteria to employ wide open. I also claimed that most of us today, most of the time, employ some version of what I loosely called “modern” modes of reasoning to questions, after having suggested that such criteria may not always be appropriate to every question. Now I’d like to apply this formal argument to a particular claim.
The collision of Christian truth-claims and modern criteria of judgment is evident in some of the most basic claims of the faith. The idea that Jesus died but was somehow risen from death and then appeared to his disciples is an excellent example. Modern reasoning, in evaluating this claim, employs some of the basic assumptions listed above. Space and time are assumed to be homogeneous; therefore, any claim about what happened to Jesus must comply with the laws of physics as we now understand them. Likewise, the mechanics of this purported act of resurrection should be understandable to human reasoning. Any appeal to “mystery” is seen as only an attempt to dodge accountability, for all of reality is assumed to be either perceivable or conceivable by human thought. Finally, each thinker him- or herself assumes final authority and autonomy in reaching his or her own conclusion about the matter. Appeals to authority are likely to fall on deaf ears.
What kind of response can a Christian offer? One could attempt to argue against each of the above assumptions philosophically, point-by-point. For the sake of keeping this post somewhat brief, I will instead employ Scripture itself to model some of the logic that I think informs the Christian claim, to highlight the fact that in the claim about Christ’s resurrection we do not simply have a particular content, a truth-claim, which should be evaluated by whatever reason any given thinker brings to the table, but rather a truth-claim that sits within a broader context, a system not only of other truth-claims, but of axioms and logics which anchor those claims. To assert the resurrection of Jesus is to assert a specific way of thinking about reality. Christianity is a way of thinking, not just a collection of content.
Perhaps the clearest way of pointing to this general system of thought is to appeal to the very opening of the Hebrew Bible. In Genesis chapter one, the cosmos in general and humanity in particular are pronounced “good”; indeed, we humans are said to be made “in the image of God”, the imago Dei. This is not a reference to our bodily form, but rather to our intellectual, moral, and/or spiritual nature and capacities. The upshot of the first creation story, the very opening of Jewish and Christian Scripture, is that reality has not only a descriptive content, but a moral or valuative one as well: it is good.
Yet the second chapter introduces a wrinkle in this optimistic opening. Something goes wrong, terribly wrong, with the good cosmos. The Accuser, metaphorically imaged by the snake, tempts humanity into transgressing the boundaries that support our very being, and by so doing, the good cosmos is cast into chaos. (The philosophical question here of the problem of evil more generally is one worth exploring, but which I can’t pursue here.) So we have a tension in the text, only a few pages in: in its essence, the cosmos is good, yet in its existence here and now, it is far from good. It is “fallen” from its true state.
These two stories present themselves as collections of truth-claims about the way in which our world came into being, but they must be seen as providing a philosophical architecture to Jewish–and therefore Christian–reasoning. To claim that the world as it is now is not how it is in truth is to announce an axiom of Jewish and Christian reasoning: Truth cannot be judged only according to how things appear to be for us now. Truth, Truth with a capital “T”, is, to some extent, other than how things currently appear to be. Current existence is a fallen version of Truth.
Once these stories are understood as presenting, through allegory and metaphor, a philosophical system, it should not be hard to see how this touches on our previous discussion of formal systems of reasoning and of the particular question we tackled above: the Christian claim of the resurrection of Jesus. For the axiom just outlined in Jewish and Christian reasoning is in direct conflict with some of the axioms of modern reasoning. Whereas we said that modernism (again, assuming a simple and homogeneous meaning to this thorny term) assumes the homogeneity of space and time, Genesis argues that how things are now is markedly different from how things are essentially. Likewise, by arguing that humanity is itself fallen, Genesis argues that human epistemic capacities–our abilities to perceive, to conceive, and to reason–may themselves be limited, may fall short of being able to fully grasp the Real.
Apart from providing these formal challenges to some of the axioms of modern reasoning, these Scriptural axioms also provide a grounding for understanding claims about Jesus’s resurrection. If in the resurrection, this one part of the fallen cosmos, Jesus’s body, is restored to its essential character as good, as unfallen, then we might expect that it would not “play by the rules”, as it were. Indeed, the fact that Jesus’s resurrection does not play by the rules of being in its mode of existence now is essential to the message. For if one understands the world in its current existence as less than it should be, then only an event that transcends the limits of being as it currently is constituted, the limits that confine being to suffering and dissolution, could possibly be seen as truly manifesting the restoration of being in its essential Truth.
This application of Scripture as a source of axioms and logics shows, I hope, that Christianity–and again, other religious, political, and philosophical systems as well–cannot be understood as simply a set of truth-claims, a disparate content. Instead, the faith must be seen as a system of axioms and logics, together with the conclusions which then appear as particular truth-claims.
Now, this is not to say that any of what I have said above “proves” Christian claims. Indeed, my whole point here has been to assert that thinking of proof as some kind of absolute and unquestionable conclusion is inherently invalid. Once we recognize that all judgments are made according to sets of accepted criteria, then we can see that any act of “proving” an argument will only prove that argument according to the criteria applied. Proving a point is simply the concluding of an argument according to the axioms and logics accepted at the outset; so long as we ignore that different thinkers can accept different sets of axioms and logics, we will miss the fact that there is no automatic set of criteria that any reasoner must accept. (Here my “postmodern” proclivities are on full display, though this term is even less useful than “modernism”–again, there is much to be said on this score which I cannot pursue here.)
Thus, it is crucial to see that what has been laid out above does not “disprove” modern modes of reasoning. It simply offers an example of an act of reasoning according to different criteria. Now, if one wanted to ask whether there was a process for determining which criteria to accept, that’s a wholly different–and far thornier–question: we would be seeking a criterion for determining criteria, a meta-criterion. For the time being, I would only like to point out that even if one accepts the Scriptural criterion of reasoning outlined above–that the existential is not necessarily the finally True–this does not in any way require one to reject modern criteria of reasoning for other questions. That is to say, there is no necessary conflict here between “religion” and “science”. To the extent that one is seeking to understand being as it exists here and now, then modern modes of reasoning seem to be the best tool for the job, considering how much of natural phenomena they have explained and how effectively they allow us to develop new technology. However, if one has questions that move beyond simply how things are, and instead inquire into why things should be that way, or why things should be at all, or as to the value of being as such, then one may find that such modes of reasoning are less useful.
That is to say: it seems clear to me that there are properly scientific questions, which we ought to pursue with scientific (i.e. one specific set of “modern”) modes of reasoning. There may be other questions, however, for which scientific reasoning is not applicable. It is only the assumption, common in the modern period, that all judgments must be adjudicated by scientific modes of reasoning that is being critiqued here. Refuting this position, however, does not necessitate holding that scientific reasoning is never proper for any question. Instead, the refutation of scientific modes of reasoning as hegemonic should lead us to conclude that this mode of reasoning is appropriate for some (indeed, many) questions–but not for all.
It is worth pointing out that this conclusion is relevant to all human thinkers, whether one considers oneself “religious” or not. As suggested above, moral and ethical judgments also fall into the range of questions for which scientific reasoning is not appropriate–though, considering the length of this post already, I will not tackle that claim here.
The upshot of all of this, for an understanding of Christian thought, is this: Christianity is not just content; it is also form. Accepting Christianity does not just mean accepting a jumble of truth-claims, but rather a whole system of reasoning. It is not just new wine, but also a new wineskin appropriate for holding that wine. This has far-ranging implications, I think, for Christians struggling to be “in the world, yet not of the world” (John 17:14-19). We should neither accept that all of our truth-claims can be meaningfully adjudicated by accepted modes of public reasoning nor insist that all questions must be adjudicated by specifically Christian modes of reasoning. Indeed, there may be many questions for which Christian modes of reasoning–which focus on the essential, the eternal, and the moral–are not appropriate.
We can only walk this fine line, balancing differing systems of reasoning for different kinds of questions, however, if we are aware of the often-unspoken assumptions which guide human thought. I hope that in the above, I have contributed to our efforts to not only think well, but also to think self-consciously.
Apart from healing and feeding people as he traveled through Galilee and Judea, Jesus also spent a lot of time teaching. Sometimes this meant interpreting the Hebrew Scriptures–explaining or reinterpreting the Law, for example, or quoting and applying passages from the Prophetic literature. But frequently, when people asked him a direct question, they asked about one specific thing: eternal life. “Good Teacher, what must I do to inherit eternal life?” is a frequent refrain (Luke 18; compare John 3, etc.). Jesus spends a good deal of time, then, explaining what humans should do to attain this state of eternal life. But eternal life itself remains more or less unexplained. When Jesus does reference it, he almost always refers not to the state of one human being existing eternally, but to what he calls the “Kingdom of Heaven” or the “Kingdom of God”. And when he does start to explain and define this, he invariably speaks not directly, but in parables: “the Kingdom of God is like a mustard seed…”
Considering how highly both Jesus’s followers and Jesus himself seemed to regard eternal life, it’s a bit curious that the term remains so nebulous, so undefined. His followers are exhorted to seek eternal life, to live lives of love that can lead to it, but what it is exactly is never really offered. This has left a gap in Christian thought into which a wide range of ideas has entered. All sorts of concepts and definitions have been put forward to explain what eternal life will be like. Some are sophisticated and deeply grounded in philosophy, such as the ideas around the beatific vision, explored especially in Roman Catholic Thomist thought. Others are more folkloric and popular, such as the trope about playing ping-pong with grandma in heaven. The trouble is that none of these ideas seem to have a firm foundation in Scripture; all seem to owe more to secular and even pagan ideas, cultures, and values than to anything identifiably Christian (the beatific vision easily brings to mind neo-Platonism, while many lay Christians’ conception of heaven looks more like the pleasures of the Elysian fields of Greek polytheism than anything in Scripture).
That Jesus is himself quiet on the details of eternal life is itself something worth considering. As mentioned above, when he does say anything about it, it’s always indirect, constructing analogies through parables. Here, in this space, I’d like to offer one way of making sense of this unwillingness of Christ to say more, when, on other topics, he seemed quite happy to be explicit, as, for example, in his ethical instructions around wealth or caring for those in need (see e.g. Matthew 25).
What reason might there be for Jesus, if he came to reveal the Truth to humankind, to be so silent on what seems to interest us humans most of all? If achieving eternal life is so important, shouldn’t we be told more about it? It has been common for Christian authors in the past to respond to such questions by an appeal to the importance of faith: if we knew the truth fully and directly, such an argument goes, we would not choose it for the “right” reason: what use would faith be, if we simply knew what was at stake? But surely this is a churlish argument at best, and morally outrageous at worst. If God, manifest in Jesus, wished to convey to humanity the importance of living righteously, wouldn’t God be willing to use any tool, any information, to convince us to mend our ways? Isn’t faith, ultimately, to be commended only because it is necessary here and now, only because of our incomplete knowledge and deficient faculties? As Paul says: “For now we see in a mirror, dimly, but then we will see face to face. Now I know only in part; then I will know fully, even as I have been fully known” (1 Cor 13:12).
Faith is necessary now, Paul seems to be saying, not because living in faith is somehow better than living in knowledge or wisdom, but rather because there is something about the way we human beings are now that makes knowledge impossible. In other words, Paul is making a point that is–to employ a perhaps over-used and often-abused term–rather postmodern. He is making it clear that knowing everything about the world–in this case, what it would mean to inherit eternal life–is not just a matter of cataloging sensory experience and then organizing it rationally. Such an attitude towards knowing assumes that human beings can basically know everything there is to know, can come to have knowledge about any and all modes of being, if only we pay close enough attention and organize our conclusions rationally and systematically.
Paul is pointing to the possibility that there may be limits to what kind of information, generally, and what modes of being, more specifically, a given kind of knowing being might be able to access, process, or make sense of. The human way of sensing and knowing, that is, may have limits. This is a point that Immanuel Kant would later make more formally in his Critique of Pure Reason, where it becomes a central plank of his critical philosophy, itself providing the springboard to what comes to be known as postmodern thought: that is, philosophy that questions the assumptions of modernism, the mode of thought launched (to oversimplify intellectual history drastically) by Descartes and Locke.
Well, I’ve clearly gotten well ahead of myself, and have meandered far beyond the boundaries of Scripture. But I think it’s necessary to make these connections, so that we can see what Paul is really up to. What can, at first, look like a somewhat sloppy, semi-mystical phrase turns out to be, upon closer inspection, a serious epistemic point. And Paul is not alone. Scripture frequently points to the possibility, indeed the likelihood, of formal limits on human ways of knowing–that is, it frequently offers a critical epistemology, or indeed a critique of overly confident epistemologies.
So: what if one of the states of being that human knowledge is unable to make sense of is the state of being called “eternal life”? That would make sense in at least two ways. First, exegetically, the fact that Jesus refrains from any kind of clear-cut discussion of eternal life suddenly makes a lot of sense. Secondly, it also may, somewhat paradoxically, tell us something about eternal life, even in the moment we announce our necessary ignorance of it.
This latter point is actually contained within the quote from 1 Corinthians above. Paul says that we see darkly now, that we have limited knowledge now–but he also says that, “then”, that is, once eternal life is present or has been achieved, we will see “as face to face”. He seems to be suggesting that the epistemic limits he points to in the first clause will themselves be transcended in the second. So how does this tell us something about what it might mean to attain eternal life? It seems that Paul is telling us there will be a transformation, from the kind of knowing being we are now–one with serious limits to our knowing–to a kind of being who will know differently, and indeed, better, perhaps even perfectly.
This idea of transformation is not limited to this one passage. Paul will himself say, later in 1 Corinthians: “we will not all die, but we all will be changed” (1 Cor 15:51). Likewise, Jesus in the third chapter of the Gospel of John says that we must be “born again” or “born from above” in order to enter the Kingdom. Whatever this may mean, it certainly suggests a serious transformation of our way of being.
Now, some may see such a move as a way of shutting down the question: by saying that eternal life will involve a transformation of some kind, from the kind of knower we are now to a different kind of knower–that is, that an ontic change in our mode of being will effect an epistemic change in our mode of knowing–it may seem like we are just kicking the can down the road, avoiding hard questions. I’d like to conclude by providing two examples which may show, formally, the logic of this move. Pointing to analogies does not provide an airtight argument, but it may allow us to understand a previously-made argument with greater clarity and sophistication.
First off, we can quote Paul again, who speaks about the change between being a child and being an adult, in the sentence which precedes our original quote from 1 Corinthians 13: “When I was a child, I spoke like a child, I thought like a child, I reasoned like a child; when I became an adult, I put an end to childish ways.” Imagine trying to explain certain adult experiences to a young child: sexual attraction, or the stresses of the workplace, or the responsibility of paying bills. We can, of course, use language to present these experiences. But there is no way to really understand what it’s like to feel sexual attraction, occupational stress, or the burden of bills, until one actually undergoes those experiences. One has to be the sort of being who goes through those experiences to really understand any linguistic expression about them.
Another, more radical example is that of the caterpillar and the butterfly. We talk about the process through which the former becomes the latter as one of transition or growth, but in many ways, the process actually involves the death of the caterpillar and the birth of something totally different. Yes, they have the same DNA, but the two beings are constituted completely differently. The body shape, the legs, the mouth, the digestive systems, even the eyes and other sensory equipment of each are completely different. The caterpillar builds a cocoon which becomes a chrysalis–and within this, the caterpillar is effectively dissolved, and its matter reorganized into a totally new mode of life: the butterfly.
Now, presumably, neither the caterpillar nor the butterfly has what we humans would call self-consciousness. But imagine that they did. For the caterpillar, the chrysalis is really a death. Its consciousness would end as its brain and sensory organs are dissolved. However, it could be the case that the butterfly, upon its birth from the chrysalis weeks later, might look back upon the caterpillar’s existence as an earlier stage of its own life–just as I do, in fact, look upon my life as a 5-year old as an earlier stage of my own identity, even though the life I live and the consciousness I now have would be totally unrecognizable to that 5-year old version of myself–who has, in a real if figurative sense, died.
We Christians may be eager to imagine eternal life as our current identities, or at least some best version of them, living on for eternity. But I don’t think that’s what Jesus was referring to. He was calling for us all to endeavor to be changed–to be re-created into the true image of God, that which we were meant to be but which we fail to attain in this fallen life. Imagine that a prophet-caterpillar came to a colony of caterpillars and promised them renewed life in the chrysalis. Imagine that they all rejoiced in the thought that they would enter the chrysalis and then live as caterpillars for eternity. Of course, that’s not what the chrysalis is, or what the chrysalis does. It will transform them. Looking back from their butterfly-future, they may identify with their past selves, their caterpillar-selves. But first, they must be transformed into something radically different.