What’s wrong with individualism?

My fridge is an inanimate object. It doesn’t desire anything. It doesn’t have any preferences or interests. It is entirely non-sentient. If I slowly and sadistically hammered a nine-inch nail right through the side of my fridge, or even nailed it to a cross, it wouldn’t matter morally at all. It wouldn’t feel a thing.

Mind you, if you hammered a nail into my fridge, it would matter morally, because it’s my fridge, and I don’t want you to do that. I’d prefer it if you didn’t do that, and it would harm me if you did. But the harm would be done to me as a sentient individual rather than to the fridge which feels nothing, cares for nothing, and deserves nothing.

See the difference? On the one side, something that isn’t sentient and doesn’t deserve moral respect. On the other side, something that is sentient and so does deserve moral respect.

What about my society or my community? – Groupings of individuals are composed of individuals who are sentient and so do deserve moral respect, but the groupings themselves are as non-sentient as my fridge. As with a triangular arrangement of non-triangular dots (like this ∴), the parts and the whole can have different properties: here, the parts are sentient while the whole is not. The mistake of assuming the whole must have a property that the parts have is called the “fallacy of composition”. (The converse error of assuming the parts must have a property of the whole is called the “fallacy of division”.) The important thing to see is that each is a mistake.

‘Individualist’ is an obvious word for someone who thinks sentient individuals deserve moral respect but thinks inanimate, non-sentient objects like fridges or society do not deserve moral respect. Please note that an individualist so understood would not normally be someone who “lacks compassion”. Why not? – Such an individualist would normally think that since individuals count, individuals have responsibilities to look after each other and to respect each other’s interests. “People matter” – in fact only individual people matter. This sort of individualist wouldn’t normally have compassion for inanimate, non-sentient objects such as fridges or society, because why should they?

But evidently, the word ‘individualist’ is also used in a pejorative sense to mean someone who “lacks compassion” like that. Presumably, this second sort of individualist thinks society should be run along the lines of “every man for himself”, with each individual protecting his or her own interests and not caring about other individuals. Philosophers often distinguish between the two sorts of individualism by labeling the first “liberalism” and the second “rugged individualism”. I hope you can see why traditionally, liberalism was associated with the left wing of politics rather than the right wing, and why the word ‘liberal’ is sometimes used in a sloppy way to mean “left wing”.

When Margaret Thatcher’s critics berate her for “not caring about society”, what do they mean? – Usually they mean that she didn’t do enough to protect the interests of weak individuals from the selfish greed of strong individuals. That strikes me as a perfectly legitimate criticism.

But some of her critics seem to mean that she was wrong to care about individual people instead of caring about an inanimate, non-sentient object called “society”. It strikes me as inhumane to care about non-sentient “collectives” of people such as nations or races instead of sentient individual people. That way lies fascistic nonsense about “destiny” and collective culpability. So I think this second sort of criticism is illegitimate and conceptually confused.

I don’t expect non-specialists to be familiar with technical philosophical terms, but I do hope that people of average intelligence can grasp the difference just discussed, and not get carried away by a rather everyday sort of ambiguity. (Such seems to have been the fate of current Irish president Michael D Higgins.)

In the hope of bringing a little bit more “harmony” where there was “discord”, let’s use language clearly!

René Descartes, used car salesman

Consider some differences between a car salesman and a car mechanic. The car salesman’s job is essentially social: he uses rhetorical skills of persuasion. He wants to sell you a car as an entire package. The car mechanic’s job is different: it involves engineering rather than persuasion. She works on much more limited, specific parts of your car. She rolls up her sleeves and works under the hood, where salesmen pushing a “positive driver experience” won’t go. Although there is some overlap between the two types of job, for the most part the techniques used by one are inappropriate for the other.

There is a similar division of labor in epistemology. Philosophers such as Descartes call all knowledge into question at the same time, the eventual purpose being to provide reassurance. Doubt is cast on the “entire package” until we can be persuaded that we are indeed entitled to say we know what we thought we knew all along. In his Meditations, Descartes in effect steals our car, then sells it back to us in his capacity as a used car salesman. Contrast that with the approach of philosophers such as Quine, whose aim is to explain how we acquire knowledge in this or that area of human life.

Internalism and externalism

There is a crucial difference in the methods involved. Descartes is an internalist – in providing reassurance, he can only allow himself to appeal to what is already “internally available” to him as justified belief. Quine is an externalist – in providing explanations, he is free to use any information that may usefully serve that purpose, such as scientific hypotheses about how sense organs work, philosophical theories about language and mind, and so on.

This isn’t an effete distinction of merely philosophical interest. Internalism can give science a mistaken self-image, and in so doing make for bad science. For example, Kant thought scientific inquiry revealed what had to be the case given the phenomena we observe. The assumption is that to find what we seek, we need not – in fact we cannot – stray outside the “internal” realm of the “given”. But in fact, science reveals what might be the case rather than what must be the case. In other words, it provides sufficient rather than necessary conditions for observable phenomena. These sufficient conditions are the product of conjecture – of guessing about the external world instead of building on what is internal or “given”.

As another example of the baleful influence of internalism, consider Francis Bacon’s inductivist model of science, which supposes that scientific laws are inductive generalizations from repeat observations. The structure is assumed to be that of an edifice resting on foundations: the observational “basis” implies the extrapolated generalizations. Internalism tells us that the best inductive generalizations are those supported by the strongest internal reasons – usually the ones that are extrapolated from the largest number of observations. Externalism needn’t reject induction altogether, but instead of focusing on their internal basis, it tells us that the best inductive generalizations are those that are as a matter of external fact the most reliable – i.e. the ones that are underwritten by genuine lawlike connections in the real world, even if we aren’t aware of them. So, internalism puts a premium on observing many white swans to conclude that all swans are white. Externalism instead asks whether there really is a lawlike connection between color and genus-membership. (There isn’t.)

Kant and Bacon were natural philosophers of earlier times, but the corruption of internalism continues in our own day. Consider the way inductivist sciences such as psychology and climate science assume that theories (or models) are “based on observational data”. This assumption is so deep-seated that repeatable testing has all but been abandoned. In these nether regions of science, observations are not made in order to confirm or falsify the predictions made by hypotheses, as happens in legitimate branches of science. Instead, the role of observation has been reversed. Rather than test theory, “data” are sought which supposedly imply it. Where no honest “data” are available, dishonest ones are conjured up in the form of “proxies”. The theories (or models) so arrived at are typically supported by statistical extrapolation, which is an application of induction. That should not be accepted as any substitute for testing.

Social versus naturalized epistemology

I’m going to be a bit idiosyncratic now, and call the internalist project of providing reassurance or persuasion “social” epistemology. Although one can (like Descartes) call one’s own knowledge into question so that only one person is involved, the methodological pattern is essentially rhetorical, and is modeled on the social interaction that occurs when one person tries to persuade another person of something called into question. When we try to persuade one another that a belief is true, we can only appeal to what we agree upon already before proceeding “onwards and upwards”. Even though the beliefs are “empirical” – a posteriori, uncertain, etc. – the business of persuasion has the same “foundational” structure as mathematics. That is, some claims are taken for granted like axioms, and used as a “basis” to imply some further claims like theorems.

I’ll follow Quine’s terminology in calling the externalist project of explaining knowledge “naturalized” epistemology.

The techniques of car salesmanship are inappropriate for car mechanics, and similarly, the methods of social epistemology are inappropriate for naturalized epistemology. Yet the habitual practices of social epistemology spill over into epistemology in general. Perhaps more worryingly, they also infect branches of science (or would-be science) that make foundationalist assumptions about “justification”.


When we try to persuade another person of something, we try to “implant” a new belief in the other person’s belief system. Usually, we already believe what we are trying to persuade the other person of – in other words, it’s already a “node” in our own belief system, which is bound to differ in many details from their belief system.

The practice of implanting a belief usually involves finding out where our respective belief systems do not differ, so that the potential new belief can be incorporated into their system by becoming “anchored” to the same nodes in their system as the ones it is already anchored to in our own system.

This practice gives a central role to arguments, ideally arguments that are deductively valid. Such arguments serve two distinct purposes. The first is to make premises explicit, in other words to bring out into the open what in our own case is sufficient reason to continue believing what we already believe (and hope will serve as fertile ground for sowing a new belief in the other person). The second purpose is to show what is in fact implied by those premises, in other words to show that they make a compelling reason for him to adopt the new belief.

In seeking fertile/common ground like that, the operative word is ground. We are looking for some shared beliefs that can be taken as “given”. This is a contextual matter of social epistemology, because it crucially depends on who is trying to persuade whom (as well as of what). The grounds so discovered might well be temporary. They are not “certain”, but they are taken for granted for the time being by the relevant parties. They are arrived at via a sort of “game” whose rules forbid appeal to anything other than the shared beliefs of those involved. These beliefs are treated as being “beyond question” in the discursive context, but I repeat: they are not certain in any absolute sense.

Our social habit of seeking foundations for belief should not extend beyond those discursive contexts – of persuasion, of calling the beliefs of others into question, that sort of thing. But alas, the appeal to “foundations” for belief has become a model for the analysis of knowledge in general. As I said above, it can steer scientific inquiry in the wrong direction as well.

Knowledge is justified true belief?

Consider the analysis of knowledge. Epistemology’s traditional three conditions on knowledge include the demand that any putative item of knowledge be “justified”. Justification is generally understood as synonymous with being grounded – in the sense of being implied by more secure foundations. The concept of knowledge itself is corrupted when what should be merely a guiding assumption of the habitual practice of social epistemology is granted the status of a criterion.

Social epistemology does have a place in persuasion, where one person’s beliefs are called into question by another person, and so on. But outside of those contexts, all beliefs are in a sense “justified” simply by virtue of being beliefs in the first place. As genuine beliefs, they have to belong to a sufficiently rich belief system. And that usually means standing or falling with several other beliefs as new circumstances arise. In other words, any belief is part of a network in which it implies and is implied by other beliefs. In that sense, they are all “grounded”. Wittgenstein observed that many of our beliefs do not stand in need of justification. Perhaps none need justification because all of them are already justified. (Which of course is not to say that all of them are true.)

It is quite legitimate to ask what knowledge is, or whether we actually have knowledge in this or that particular area of inquiry. But if the aim is not the social “calling into question” or persuasion described above, the answers will usually not appeal to foundations at all. Knowledge consists of true beliefs sustained by reliable processes rather than true beliefs which are “justified” as social pressures might contextually demand. Those reliable processes might be causal, but they need not be. They connect beliefs with states of affairs, which are neither true nor false. The greater part of any such process is “external” and not likely to be known or even knowable to the knower. For example, dogs know when they are about to be taken for a walk, but they do not know that they know it, nor how they know it, and they are wholly unable to give an account of either. All that matters is that a lawlike connection exists between their actually being about to be taken for a walk, and their believing it. Furthermore, whatever “justification” they have is minimal – it hardly extends beyond their belief belonging to a belief system. Yet this does not militate against their having knowledge.

Philosophers since Plato have drawn a distinction between rhetoric and logic. Rhetoric might be characterized as the “art” of persuasion, whereas logic is the “science” of truth. It seems to me that talk of epistemological foundations – of “justification”, “data”, “grounding”, etc. – should be understood as a concern of rhetoric rather than as part of a naturalistic account of how we come to know truths.

What is a paradigm?

Autobiography (skip to “Meaning is use” if you prefer)

Two books made a huge impression on me as a young man. Both literally kept me awake at night, although for very different reasons. The first was The Selfish Gene by Richard Dawkins, which I read as a first-year engineering student. I had been in love with physical sciences and biology since I was a child, and by the age of eighteen, I was already a pub bore on Darwin. But I was unhappy with standard high school biology’s explanation of altruism: supposedly, it was “for the good of the species”. I knew that couldn’t be right, but I couldn’t quite put my finger on what was wrong with it until I read The Selfish Gene. It was thrilling to be able to explain altruism properly, as well as so much else. It was fun to speculate about the earliest “replicators” and the origins of life on Earth (and probably the origins of life elsewhere). It was dizzying to imagine vast new intellectual projects, such as the biological treatment of human behavior, or the evolutionary treatment of ideas as distinct from living organisms.

The sheer exhilaration of all that explanatory power and speculative fecundity turned my love of science into something like a marriage. I officially “believed in science”. I fancied myself to be an intellectually tough-minded yet unusually enlightened character, who was dismissive of anything that seemed “unscientific”. (And since then I have met many who seem to have cultivated the same smug self-image.)

It may sound odd, but during the course of this marriage I began to lose interest in specializing in any particular branch of science, and began to think more about “how it all hangs together”, how disparate sciences can work with each other to reveal the true structure of reality. I wasn’t familiar with the terminology at the time, but I was becoming a dedicated “scientific realist” and a “reductionist” in the sense of constantly seeking and expecting to find smooth “meshing” between theories of different branches of science.

I won’t flatter myself by saying it was my intellectual development that led me out of engineering. Punk rock, alcohol and late-onset misspent youth had more to do with it. Somehow or other my tortuous journey continued through pure mathematics, and unemployment, to philosophy.

As a philosophy student a few years later, I found another book that literally kept me awake at night: The Structure of Scientific Revolutions by Thomas Kuhn.

Years before, Dawkins had kept me awake with excited delight. But now Kuhn kept me awake with the nauseating symptoms of a disease. Kuhn said that science was not the pinnacle of rationality and constructive human cooperation I had assumed – it was more like a “darkling plain / swept with confused alarms of struggle and flight / where ignorant armies clash by night.” Kuhn gave me the queasy feeling that my “marriage” had been a complete sham. I welcomed this truth that had taken so long to reveal itself, of course, but felt sickened at the length, breadth and depth of the apparent deception that had existed before.

But enough of this autobiographical detour. I took it because I think I know first-hand why so many scientifically-minded people – including Richard Dawkins – dismiss Kuhn’s central idea of “paradigms” as pretentious nonsense. Inasmuch as they understand it, it makes them feel sick. And many do not understand it, nor even make an effort to do so, because they regard the history of science as an arty-farty humanities subject that cannot have any relevance to “hard science”.

They’re making a mistake. I suspect that like me, they’re intuitively attracted to scientific realism, but they assume the way to do justice to that intuition is to dismiss what seems like an empty threat because it comes from “outside science”.

So I’ll try to explain what a paradigm is, and why paradigm shifts are real and important intellectual events, without appealing to any sort of history. I think it’s helpful to think of the concept of a paradigm as having two components: the first is expressed by the slogan “meaning is use”; the second is holism.

“Meaning is use”

We’re all well aware that the meanings of words depend on the way they’re used, even when they’re used wrongly. For example, some people use the word ‘disinterested’ to mean uninterested or bored. They defy linguistic authorities when they do that, and from a wider perspective than that of their own linguistic group it confuses things to do so. But it confuses things precisely because within their group they do successfully mean “bored”, and they manage to do so because “meaning is use”. If they want to bring their usage into line with mainstream English speakers, the dictionary definition might “correct” them. But if they don’t, the dictionary definition is impotent. It doesn’t shape the meaning of this word, or any other word. Dictionaries merely describe how words are already used.

The main insight expressed by the slogan “meaning is use” is that definitions play a far smaller role – if any – in determining (establishing, fixing, etc.) meaning than had hitherto been supposed. Definitions obviously play no role at all in the rudimentary forms of language seen in animal communication. And that is where human language evolved, and indeed where it developed into science and great art long before the invention of dictionaries. Definitions play a far smaller role in science than those who take mathematics as a model for the empirical sciences seem to assume. Only in mathematics do definitions play an “official” role when new terms are introduced, and even there they do so by strictly prescribing use.

Unlike mathematics, empirical science involves competition between rival theories. And rival theories use terms differently, even terms that are shared, like ‘mass’ in Newtonian and relativistic physics. Because the terms are used differently, they mean different things. For example, Newtonian physics assumes time “ticks by” at a regular rate regardless of reference frame, that spatial distance is absolute, that mass is an intrinsic feature of objects (so in effect it’s a measure of “how much matter” they’re made of) and so on. All of these concepts change – along with the meanings of the terms used to refer to them – in the transition from one theory to the other.

So far I’ve used the word ‘theory’ rather than ‘paradigm’, and ‘transition’ rather than ‘shift’ out of respect for incredulous readers. Why not continue to do so? – Well, we might understand a theory as a collection of hypotheses that are used together and linked by logic or whatever. But when we use hypotheses together like that, in practice we assume more than they explicitly state. In other words, something larger than a theory so understood is involved. Each of Newton’s laws is a hypothesis, but his three laws of motion and his law of universal gravitation together do not exhaust what is normally understood by “Newtonian physics”. There is a much larger, fuzzier set of further assumptions and habitual practices that go into the “use” of those hypotheses, such as the assumptions about time and space above. It is that larger entity (than theory understood as a collection of hypotheses) that interests us when we think about the meaning of the terms and the shape of the concepts involved. Since practice is largely guided by what is considered best practice, and best practices are exemplified by notable successes, the word ‘paradigm’ is an obvious contender.

Many of those who are hostile to talk of “paradigm shifts” ask why no definitions of the word ‘paradigm’ seem to be available. I think that misses the point slightly, as the very idea of a paradigm depends on a rejection of definitions, at least as determiners of meaning. If meaning is fixed by use rather than definition, there is no prima facie reason to expect an explicit definition of any word to be even possible, let alone easy to concoct.

The expectation that definitions can settle such matters also misses the insidious nature of differences in meaning. When two people use the same word differently, they may not even notice that there is a difference. They may seem to agree when in fact they differ sharply. For example, two people might seem to agree that on the question of press freedom, “Hugh Grant is disinterested.” But one thinks he is commendably unbiased, while the other thinks he is a bored dilettante whose real motive is to promote his own celebrity status.

The idea of a paradigm isn’t complicated, and it will only seem difficult if it’s unfamiliar. It’s easy enough to imagine something analogous to paradigms in animal behavior and communication. Suppose some garden birds make a distinctive noise whenever a predator such as a cat appears on the lawn. When they make that noise, all of the birds who hear it take flight and stay off the grass for a few minutes. But after a while a smaller group of birds notices that making that noise also makes for an easy meal, because it clears the bird feeders, which are replenished every day by a human. They make the noise whenever a human appears on the lawn. So the same noise has a different meaning when different groups of birds use it – it can mean “there’s a cat in the garden!” or “food’s up!” This is a rudimentary case of the same noise having different meanings because in practice it is used in different ways, and the uses differ because they are guided by different exemplars of success, namely escaping from predators and getting an easy meal.

Differences in the meanings of terms entail that rival theories usually contradict each other in a rather stealthy, oblique way. They are in pragmatic tension rather than open competition. This makes it harder to assess them. Which brings me to the second component of the concept of a paradigm.


Holism

Talk of “holistic medicine” and the like detracts from holism’s roots in “hard science”. The idea was first explained by Pierre Duhem in connection with testing in physics. In order to test a hypothesis, it must imply some observable consequence which actual observation can confirm or fail to confirm. But no scientific hypothesis implies any observable consequence on its own. It can only do so in conjunction with several other hypotheses and assumptions. Thus when an actual observation agrees with prediction, it corroborates all of the hypotheses and assumptions that were used to make the prediction. And when an observation is unfavorable, any and all of those hypotheses and assumptions can be called into question. Which one (or which combination of them) is identified as “the culprit” is not a straightforward matter. It depends on how attractive they seem – yes, seem – when compared with each other. This is a complicated, subjective, and indeed an aesthetic matter of taste. The hope that it isn’t a matter of taste is epistemologically naïve, and factually mistaken.

Holism entails that the falsification of a hypothesis is never as conclusive as the simplest versions of Popper’s philosophy of science might suggest. If someone wants to hold on to a favored hypothesis in the face of unfavorable observations, he is free to do so. He can even make a habit of it, as long as he is prepared to make up new ad hoc hypotheses indefinitely in order to protect his favored hypothesis.

This is where paradigms enter the picture once more. A hypothesis that looks unattractive enough to reject from one perspective can look attractive enough to retain from another perspective. These rival perspectives can be identified as paradigms, because what looks attractive to each of us depends on what we already regard as notable successes. Thanks to holism, a sort of “doxastic inertia” sets in. Those who look at things (especially unfavorable observations) from one perspective have little reason to change their perspective. Tradition – or if you prefer, getting stuck in a rut of habitual thinking – is an inevitable part of science. What we like to think of as theory has much in common with mere ideology.

I have not mentioned history yet, and I don’t intend to look at historical evidence that scientists of the past were often prone to getting stuck in a rut. But I will point out that no serious study of science can overlook the history of science, because thanks to holism, theory choice can’t be made without it. The test of a hypothesis is like a sheepdog trial: we may be interested in the performance of one particular animal (i.e. the dog) but the judgement’s reliability depends on the past performance of several other animals (i.e. the sheep being herded, which may be more or less compliant).

Scientific realism

I hope I have convinced you that the concept of a paradigm is more than just pretentious nonsense. It makes a hash of naïve ideas about meaning, and it blurs the distinction between the contexts of discovery and justification. It may seem as though scientific realism can’t survive the suggestion that science is a free-for-all between reactionary factions.

Naïve ideas about scientific method and the “march of progress” are indeed threatened by the idea of paradigms, but I remain a committed scientific realist. In fact I think realism can only be defended by embracing the idea of paradigms. I see science as a social process whose understanding of success is importantly independent of appeals to authority or consensus. Instead it demands compelling explanations and publicly available, repeatable test results. By actually meeting these demands, the best sciences are nudged uncertainly in the direction of truth. Loosely, we might say that science is “more a subject for blogging than peer review”. If we do not acknowledge the partisanship and deep conceptual divisions that exist in this social process we call science, orthodoxy-protecting mechanisms such as peer review will pervert the enterprise.

The most important factor in that social process is the decision-making of scientists in accepting one theory and rejecting another. For the most part these decisions should be “rational” in the sense that scientists actually choose the better theory rather than maintain a blind factionalism.

It seems to me that the greatest scientists are generally aware of the scale of the conceptual changes that have to be made for that complicated social process to deliver the goods through rational decision-making. Some of them have personally considered how things look from more than one perspective, working with alternative concepts, alternative meanings and alternative views of what to count as evidence. For example, Einstein grasped how Newton understood space, time and mass even as he was developing his own alternative understandings.

So it seems to me to be perfectly acceptable – and definitely meaningful – to sometimes suggest that what’s needed is a “paradigm shift”. When a science has failed to deliver the goods in terms of explanation, prediction, conceptual fecundity, technical by-products, etc., it’s OK to suggest we go “back to the drawing board” and make some much larger conceptual changes. These changes may well be so radical that they are hard to understand from the traditional perspective.

For example, it seems to me that psychology as a science has been a failure. Its understanding of belief and desire seems to treat them as emotional attitudes rather than as mental representations that play an essential causal and explanatory role in behavior. Required: a paradigm shift away from the habits of methodological behaviorism and the broadly positivist assumptions that have done the potential science of psychology such disservice. I’m proposing radical, life-threatening, paradigm-shifting surgery here.


Paradigm shifts in philosophy

Whatever about paradigm shifts in science, equivocation as a product of conceptual differences is routine in philosophy. For example, some (such as Descartes) understand mental states as essentially conscious experiences, while others (such as Daniel Dennett) understand them as functional states – i.e. as states that causally direct agents in action. A paradigm shift is required to move from one to the other.

Or again, some philosophers (such as Kant) understand moral right and wrong in terms of the motivation of agents, while others (such as Peter Singer) understand them in terms of the consequences of action. Again, two paradigms are involved here, complete with the change in meanings of many terms common to both.

In political philosophy, some (like Hobbes) understand freedom as the ability to get what you want thanks to an absence of external obstacles. Others (like Rousseau) understand freedom as being empowered through wanting the right things. These two concepts are similar enough for both to warrant the name ‘freedom’, but there’s a gulf of incomprehension between them. Something like a paradigm shift is needed to understand “how the other side thinks”.

In my own view, the greatest paradigm shift of all is needed at the very heart of philosophy, in epistemology. The main concern of traditional epistemology is to “refute the skeptic” (i.e. the radical, Cartesian skeptic) by appealing to “internal” foundations to show that we do in fact have “justified” true beliefs about the “outside world”. The concern of “naturalized epistemology” is quite different: it assumes that all animals routinely have knowledge of many aspects of the world we all live in. The aim is not to show that any of our beliefs are “justified” but to show how some beliefs are sustained by reliable processes that connect them and their (usually “external”) subject matter. In Quine’s terminology, naturalized epistemology addresses “conceptual” questions rather than “doctrinal” questions. There is no longer any appeal to foundations, except within limited discursive contexts in which agents call each other’s beliefs into question. But that is a peripheral matter of social epistemology.