Erotic love (as opposed to parental love, for example) is intimately bound up with male parental investment. Why?
Species with high male parental investment use a reproductive “strategy” in which bringing offspring to viable adulthood normally depends on support from both parents. Why “normally”? – A single parent might get lucky and manage it in times of abundance, but even then, the resulting adult will have to compete against other adults who have enjoyed the attentions of both parents. That will usually be a disadvantage. We know it must usually be a disadvantage, because if it were not, the alternative one-parent strategy would spread throughout the population and become the norm. A male who is absent for the rearing of offspring can wander off and father other offspring. A female who does not need a male partner to rear offspring can choose from a wider variety of males, some of which will be of higher quality than others. This is what does happen with many animals such as grazing ruminants, and it happens because – given the specific needs of their young – that is a more efficient way of producing viable adults.
Please note that although different species use different reproductive strategies, it is wrong to suppose that there are smooth gradations between them. The most successful strategy will always spread throughout the population and become established as the norm. Some species simply lay eggs and leave the young to fend for themselves. In other species, mothers play a special role as parent. In monogamous species, both parents play a role like the role of mothers. Their parental investment will be roughly equal, because their “biological interest” in reproducing is the same. Which strategy becomes established as the norm is determined by which is the most efficient method of producing viable adults in the next generation. But there can be little or no crossing over between these different strategies.
In species whose males do not wander off, sticking around to help provision the offspring isn’t just an added luxury for the female. It’s a matter of life and death for the offspring, and thus a matter of reproductive success or failure for both male and female. Since proliferation of genes in future generations is evolution’s “prime directive”, it’s a matter whose importance equals that of life and death for everyone involved. The male isn’t simply doing the female a favor – he’s using her to reproduce, just as she’s using him to reproduce.
When provisioning is a matter of life and death like that, a male who misspends his provisioning powers on another male’s offspring is in effect throwing away his ability to reproduce. And a female whose male squanders his provisioning powers on another female’s offspring in effect has her ability to reproduce stolen. Given evolution’s “prime directive”, these possibilities are bad news for one or other of them. Furthermore, the strategy of sharing provisioning between male and female opens up such possibilities. Monogamy and betrayal are two sides of the same biological coin.
So in species where male parental investment is high, something new enters the picture: potential parents of each sex set “terms and conditions” for each other in a partnership whose “purpose” is to bring offspring to viable adulthood. Each demands guarantees of fidelity from their partner (at the same time as being rather more relaxed about their own fidelity).
Whatever we choose to call this set of attachments and demands, it is close to the everyday folk psychological concept of love. It isn’t a selfless or sexless ideal, or an experienced “feeling”, but a real attachment between two members of a pair which serves a vital biological function. It involves possessiveness, jealousy, and the ever-present possibility of betrayal. In real life there are many variations on the theme of two parents exclusively attached to each other, of course, but I would argue that most of them involve some degree of betrayal of one sex by the other, even if those involved bite the bullet and observe the social decorum of calling it something more polite.
You might think this is a rather bleak view of love that “lowers humans to the level of animals”. But I would urge you instead to think of it as raising some animals (such as birds) to the level of humans.
Prime numbers are those that are divisible only by themselves and by the number one. Now I dislike arithmetic, and my heart sinks whenever I hear the word ‘divisible’, because it suggests boring activities such as counting or doing “long division sums”.
But I enjoy working with text, and trying out clever things with “find” and “change to” in applications such as InDesign. So it was a real pleasure to learn recently that GREP can be used to find prime numbers. (GREP is InDesign’s implementation of “regular expressions” for matching text.) Grasping how it can do that also helps to throw light on the concept of a prime number. And it does so in an intuitive and simple way that does not involve doing arithmetic.
Imagine an old-fashioned pavement made out of a fixed number of rectangular paving slabs laid side by side. Imagine a child walking from one end of the pavement to the other. By “avoiding the cracks” between them, the child can always reach the last slab, whatever their number, by simply stepping from one slab to the next. But by jumping over alternate slabs (i.e. every second slab), the child might not be able to land on the last slab – it depends whether there is an even number of them. Likewise, by taking big leaps of three slabs at a time, our child will only be able to land on the last slab if their total number is a multiple of three. And so on.
I hope you can see how this might continue into larger and larger numbers. So imagine this going on, with our child taking larger and larger steps, possibly with the help of a pair of stilts. For example, if the pavement is 15 slabs in length, the fifteenth slab can be reached by taking five big leaps of three slabs each, or three even longer stilted strides of five slabs each.
Now here’s the really important thing about prime numbers: the last slab of a prime number of such slabs can only be reached by taking a single step over all of the slabs that precede it.
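Readers who don’t share my allergy to arithmetic can pin the pavement picture down in a few lines of code. A minimal sketch in Python (the function name is mine, and it quietly uses the division the essay avoids): a leap size k lands exactly on the last of n slabs just when k divides n.

```python
# The pavement picture in code: a child leaping k slabs at a time lands
# exactly on slab n just when k divides n. A prime pavement admits only
# single steps (k = 1) or one giant leap over everything (k = n).
def leap_sizes(n: int) -> list[int]:
    return [k for k in range(1, n + 1) if n % k == 0]

print(leap_sizes(15))  # [1, 3, 5, 15] -- threes and fives both work
print(leap_sizes(13))  # [1, 13]       -- a prime pavement
```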
GREP can be used to find prime numbers, because a simple GREP expression can match non-prime numbers. It manages to do that by mimicking the behavior of a child stepping over multiple paving slabs as just described.
Let’s build up a GREP expression slowly to see how this works. By analogy with reaching all the way to the final slab of a pavement, we want our GREP expression to match an entire series of letters. Let’s choose any letter at random, such as capital M.
We should start off with the simplest of GREP expressions (for clarity, they have this dark red color): the single letter
M will match any single instance of the letter M. If I have a long series of Ms (like this: MMMMMMMMM) the GREP expression
M+ will match the whole series at once. That’s because the plus sign asks it to match one or more Ms, and by default GREP is “greedy” – it will match as much as it can, in this case the whole series. We can change that default behavior by adding a question mark. In isolation,
M+? will match the same as
M on its own.
What we want to build is a GREP expression that will mimic the behavior of a child jumping over whole paving slabs (plural) rather than simply stepping over the cracks between them one by one. The expression
MM+? works for that purpose, because it will match two or more instances of the letter M, at the same time as matching as little as possible thanks to the
? at its end. This gets really useful when combined with parentheses to make a “unit”, which can then be re-used via the backreference
\1 to match whatever that unit matches (the number one is used here because it’s the “first unit” in the entire expression).
Bearing in mind what I have just said about the default “greediness” of GREP and the way it can be overridden with a question mark
?, consider the following expression:
(MM+?)\1+
This expression is nearly what we’re looking for, as it matches as much as can be matched by repeatedly re-using its smallest constituent parts, where the parts in question are anything bigger than single letters. To illustrate, consider this series of six Ms: MMMMMM.
MM+? in parentheses matches the first two Ms in MMMMMM. It won’t match all of them because the
? tells it to match as little as possible, and it won’t match just one, because it must match at least two. So now
\1 matches a pair of Ms. So
\1+ matches as many pairs of Ms as it can, to try and match all six Ms. As it happens, just two further pairs are needed.
This is analogous to a child reaching the final slab of a pavement of six slabs by jumping over the first two slabs in one go, then repeating the same feat twice. Reaching the last of any even number of slabs involves the same procedure, repeating the initial jump as many times as may be necessary.
But now suppose we use the same expression to try to match nine Ms. Just repeating matching pairs won’t work this time, because nine isn’t a multiple of two. This is where GREP does something clever. It “backtracks” as soon as it has to give up on its initial attempt to match the whole series by repeating a matching pair. Next, it tries matching
MM+? to three Ms instead of two. This is what it must do, if you think about it, since it is trying to match as little as possible with the part of the expression in parentheses, yet as much as possible with the entire expression. The default “greediness” of GREP remains the “prime directive”, and it might be able to match more by trying repeated triples rather than repeated pairs of matched letters. And in the case of nine letters, it turns out happily again, with
\1+ matching two further triples.
This is analogous to a child reaching a ninth slab by leaping over three in one go at first, then repeating it two more times.
I hope it’s obvious how this continues. GREP will keep trying out larger and larger initial matches as long as it fails to match the entire series by repeating its initial match. With non-prime numbers of letters, it will eventually succeed. But with prime numbers, it will never arrive at an initial match whose repetition succeeds in matching the entire series. So prime numbers are those that GREP can’t match when searching in series of the same character (such as the letter M).
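Python’s re engine backtracks the same way for this pattern, so the whole progression can be watched outside InDesign. A minimal sketch, with the letter M as before:

```python
import re

# (MM+?)\1+ : a reluctant initial match of two-or-more Ms, then one or more
# repeats of exactly that match. The captured group shows which "leap size"
# the engine settled on, after backtracking past the sizes that failed.
pattern = re.compile(r"(MM+?)\1+")
for n in (6, 7, 9):
    m = pattern.fullmatch("M" * n)
    print(n, "Ms:", f"initial match of {len(m.group(1))}" if m else "no match")
# 6 Ms: initial match of 2
# 7 Ms: no match
# 9 Ms: initial match of 3
```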
There are a couple of loose ends to tie up. GREP needs to recognize the start and the end of such a series. We might tell it only to look within entire paragraphs, in which case we should put
^ at the start of the expression and
$ at the end (this is a standard GREP convention). Or we might use spaces between series to mark them off from each other, and look for any character except spaces instead of the letter M. Using standard GREP code for “positive lookbehind”
(?<= ), “positive lookahead”
(?= ), and “anything but”
[^ ] set to spaces, it ends up like this:
(?<= )([^ ][^ ]+?)\1+(?= )
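This final expression drops into Python’s re module unchanged (a sketch, not InDesign itself; runs of Ms separated and flanked by spaces stand in for paragraphs):

```python
import re

# Only a complete run of non-space characters, flanked by spaces, can match,
# and only when some repeated initial chunk tiles the whole run.
composite_run = re.compile(r"(?<= )([^ ][^ ]+?)\1+(?= )")

text = " " + " ".join("M" * n for n in range(2, 30)) + " "
composites = {m.end() - m.start() for m in composite_run.finditer(text)}
print([n for n in range(2, 30) if n not in composites])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29] -- the runs GREP could NOT match
```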
I have tested several scripts for generating and testing quite large prime numbers, and GREP works remarkably efficiently when put to this unintended purpose. In doing so, I have acquired a more intuitive grasp of what prime numbers are, and why they are part of nature. For example, 13-year cicadas and 17-year cicadas only have to compete against each other every 13 × 17 = 221 years, when they emerge in the same year. It is no accident that evolution stumbles upon prime numbers in this sort of situation.
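The cicada arithmetic generalizes: two cycles collide at the least common multiple of their periods, and coprime (in particular, prime) periods push that collision as far away as possible. A two-line check:

```python
from math import lcm

print(lcm(12, 16))  # 48  -- 12- and 16-year cycles would clash every 48 years
print(lcm(13, 17))  # 221 -- prime cycles clash only every 13 x 17 = 221 years
```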
I can see why we might call primes the “building blocks” of the counting numbers. Best of all, I haven’t had to do any arithmetic. I hate arithmetic!
I often think that those who say we face “climate change catastrophe” mustn’t really understand the most basic tenet of evolutionary theory: that life involves a struggle for existence.
Consider, for example, what the Sunday Times television guide says about tonight’s wildlife documentary on BBC2, The Polar Bear Family and Me: “polar bears are the world’s largest carnivores, but global warming is making it more and more difficult for them to find food”.
In fact, individual polar bears have always found it difficult to find food. Whenever less food was available, their numbers fell, as more of them succumbed to various causes of death. Most such causes have always been related to food shortage: diseases of malnutrition, exhaustion through having to travel long distances to find food, attacks by other hungry polar bears, even killing at the hands of human beings they wouldn’t have approached if they hadn’t been so hungry.
Whenever more food was available, their numbers rose – up to the point at which food was difficult to find again. That brings us right back to the situation described in the previous paragraph. Polar bear numbers are not decided by ancient “polar bear wisdom” with which they thoughtfully control their own numbers, nor is there a “delicate balance of nature” in the Arctic that perfectly suits polar bears. The issue is always settled the hard way – by food shortages and by death.
As Arctic ice melts, polar bear numbers may be rising or falling – and no one seems to know with much confidence which. Polar bears are good swimmers, and they get most of their food in the form of other swimming animals such as seals. It might be that more open water has the effect of increasing the availability of food – a situation that sustains larger numbers of polar bears. Or it might be that more open water allows more polar bear competitors into their “turf”, which help to use up the food supply. Or that less ice means fewer air-holes where seals can be caught. These are situations that sustain smaller numbers of polar bears. But fewer bears means fewer competitors for each individual bear, which makes finding food slightly less difficult. Which reverses things a bit. Via many swings and roundabouts of fortune, a sort of balance is struck. It isn’t a balance that arises through design, or anything like it. It’s a balance that results from the “chips falling where they may”.
Whichever way the chips may fall, the difficulty of finding food remains roughly the same. The degree of difficulty is always approximately a matter of life and death.
I’m not sure why so many climate alarmists seem to be unaware of this situation, which exists pretty much everywhere in nature. It might be that their area of specialization has nothing to do with evolution, which makes them no better qualified than any other layperson to guess the effects of a changing climate on life. Or it might be that the insight Darwin credited Malthus for bringing to his attention has been largely forgotten in today’s attitudes to “ecosystems”. These attitudes assume that there is something akin to design in nature, and that suffering gets much worse when a supposed “way things were meant to be” is disrupted.
Wherever there’s life, there’s a struggle for existence. Whether sea levels rise or fall, whether ice caps retreat or advance, whether the climate warms or cools, whether the earth is beset by floods or droughts, even if everything stays exactly the same – living things have to battle against each other and their environment as a matter of life and death. There’s a lot of suffering in all that strife, and changes are just as likely to bring a little relief from that vast tapestry of suffering as to make it a little worse.
As a simple example of so-called “supervenience”, consider a container of gas at a given temperature. There are infinitely many possible molecular states for any given temperature, and statistically they are bound to differ. The one respect in which they will not differ is in their mean kinetic energy. It sounds strange to say that the property of the gas being at that temperature “supervenes” on the property of its molecules being in this or that state. I would call it downright misleading inasmuch as it suggests that phenomenological thermodynamics describes a different “realm” from that described by statistical mechanics.
In fact, phenomenological thermodynamics and statistical mechanics are just different theories, one of which reduces the other. The fact that they are actually inconsistent with one another is a stark reminder of the difference between them. Yet this remains a classic case of successful inter-theoretic reduction. Statistical mechanics is capable of mimicking phenomenological thermodynamics well enough to recreate Boyle’s Law and other laws of thermodynamics in statistical form. A part of statistical mechanics has the same taxonomy as phenomenological thermodynamics – a taxonomy represented by the tick marks on a thermometer. Rather than saying one property “supervenes” on another property – as if there were “levels of reality” – we should say that the taxonomic classes of two theories are identical. The smoothness of the inter-theoretic reduction between the theories entitles us to make such identity claims.
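For the simplest case – an ideal monatomic gas – the identity being claimed can be written down exactly:

```latex
% Temperature and mean molecular kinetic energy pick out the same classes:
\[
\langle E_{\mathrm{kin}} \rangle \;=\; \tfrac{3}{2}\, k_B\, T
\]
```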
I use this example because it is not particularly mysterious. When we start talking about the “supervenience” of the mental on the physical, the (traditional, dualist) suggestion that there are two different “realms” is often overpowering.
When I was a young engineering student, I was very impressed by techniques that seemed to deliver something from nothing – or from surprisingly little. For example, one of Kepler’s insights was that orbiting planets “sweep out equal areas in equal times”. Newton later proved that this follows from nothing more than gravity acting along the line joining the centres of planet and Sun. The force’s magnitude needn’t be that of an “inverse square” law; it might even be repulsive instead of attractive – all that matters is that it act radially.
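In modern notation, Newton’s point is a two-line argument: a radial force exerts no torque about the Sun, so angular momentum – and with it the rate at which area is swept out – cannot change. A sketch:

```latex
% A radial force exerts no torque, so angular momentum L is conserved,
% and the areal velocity dA/dt = |L|/2m is constant for ANY radial force.
\[
\mathbf{F} \parallel \mathbf{r}
\;\Rightarrow\;
\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F} = \mathbf{0}
\;\Rightarrow\;
\frac{dA}{dt} = \frac{\lvert \mathbf{r} \times \mathbf{v} \rvert}{2}
             = \frac{\lvert \mathbf{L} \rvert}{2m} = \text{const.}
\]
```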
As another example, consider “dimensional analysis”. We might suppose that the period of a simple pendulum depends on variables such as its length, its mass, or acceleration due to gravity. But by simply noting that the period must be measured in units of time (rather than of mass or length, say), we can show that it must be proportional to the square root of the pendulum’s length divided by the acceleration due to gravity (the constant g). And it cannot depend on the mass of the bob. Here again, a modest assumption about a constraint yields a surprisingly powerful result.
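The bookkeeping is short enough to show. Assume the period depends only on the length l, the mass m and the gravitational acceleration g, and match units on both sides:

```latex
% Try T = k * l^a * m^b * g^c and equate dimensions
% ([T] = s, [l] = m, [m] = kg, [g] = m/s^2):
%   kg :  b = 0              (nothing else can cancel a mass)
%   m  :  a + c = 0
%   s  : -2c = 1   =>   c = -1/2,  a = 1/2
\[
T \;=\; k \sqrt{\frac{l}{g}}\,, \qquad \text{independent of the mass of the bob.}
\]
```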
Something rather like this can also happen in philosophy. Take the modest assumption that science posits entities that cannot be observed directly – such as electrons, viruses, force fields and dinosaurs. This constrains scientific methods in surprisingly powerful ways. For a start, it immediately puts the two traditional patterns of reasoning into the back seat. Scientific method cannot be much like mathematics, whose methods of proof are exclusively deductive. Nor can it be much like Francis Bacon imagined it to be in the early seventeenth century, as the rigorous application of induction.
In a valid deductive argument, the conclusion cannot contain any nontrivial term that does not already appear in the premises. So any entity purportedly denoted in such a conclusion must already be denoted in the premises. Where do these premises come from? – Ultimately, they themselves cannot be the product of deduction alone.
Induction too can only deliver more general, extended versions of claims already made. So where do these original, less general claims come from? – Typically they are about things that can be observed directly. Those claims don’t even purport to describe things that can’t be observed directly. The few inductions that start off by purporting to describe such things cannot themselves be the deliverances of induction.
The problem with both deduction and induction is essentially the same: each starts off with some claims that are already accepted, but which describe nothing beyond what can be observed directly. Each then makes a sort of “jump” to a “new” claim, but nothing “new” enough to contain anything of the sort we’re interested in in science, namely a description of something that cannot be observed directly. Genuinely scientific “jumps” to what cannot be observed directly must be of a different sort from anything made in deduction or induction.
So science must – I repeat, must – be a matter of guesswork. Of course it must also be more than guesswork if it is to produce anything worth believing. That essential extra bit is testing.
There is a variety of takes on guesswork, each with its own fancy name. Some call it the “method of hypothesis”. Others call it “abduction”. Some like the sound of “inference to the best explanation”. And there are other words like these. But most of them are vague, and all are misleading inasmuch as they suggest that step-by-step reasoning is involved (analogous to traditional deduction and induction) instead of honest, common-or-garden, risky guesswork.
Most of those fancy words are inspired by discomfort or even hatred – hatred of uncertainty, of risk, of depending on luck, of gambling, of being unable to do anything remotely like accountancy. So people who hate those things tend to resist or even to cover up the fact that science is essentially guesswork.
One “radical” attempt to disguise the fact that science starts off with guesswork is to pretend that science does not in fact posit entities that cannot be observed directly. Its proponents say instead that all our talk of electrons, viruses, force fields and dinosaurs is a mere instrument to “organize experience”. Scientific theories are neither true nor false, they say, but consist of mere “models” which we use to predict how the world as we experience it will unfold.
It’s a long, old debate – over the issue of scientific realism – and I’d be happy to join it with anyone who’s willing to take me on. In the meantime, please note that if we understand science in an instrumental way, it cannot challenge our religious or philosophical beliefs, or even earlier scientific beliefs. In fact it is powerless to do anything interesting at all.
There is a sort of trade-off in these opposed attitudes to science. The scientific realist accepts that scientific knowledge is very risky in that it cannot avoid guesswork, yet it presents a real challenge to other beliefs because it purports to be literally true. The scientific instrumentalist, on the other hand, sees scientific knowledge as more like accountancy – it is secure, but buys its security at the cost of being very shallow. It cannot present any real challenge to other beliefs because it can’t contradict them.
Here’s an example of a lawlike claim: ‘all emeralds are green’. This claim is much like a scientific law, because the predicate ‘is an emerald’ and the predicate ‘is green’ are practically “made for each other”. They’re ideally suited to their linguistic “marriage”, because what makes a beryl count as an emerald is the very thing that makes it green. So you can’t have an emerald that isn’t green.
Although most scientific laws are written using mathematical symbols – such as Newton’s ‘F = ma’ – those symbols capture intimate connections between the real things they stand for, much as words do in the emerald example above. Those connections are generally simple, and consist of such facts as the containment of one set by another (as above), or direct cause-effect links (as in ‘what goes up must come down’), or suchlike. Speculating, we might well wonder whether our very sense of simplicity itself is shaped by our innate ability to sniff out lawlike connections. In any case, these intimate connections give laws a distinct “flavour of necessity” – laws can seem almost empty like tautologies, or almost trivial like definitions.
An important feature of laws is that they support “counterfactual conditionals”: although I’m not actually holding anything in my hand, if I were holding an emerald in my hand, then it would be green. This is why laws are useful in prediction: you can predict that something will be green, just from knowing it’s an emerald.
Now here’s an example of a claim that is not lawlike (in fact it’s not even true): ‘all swans are white’. There is practically no correlation between an animal’s colour and the genus it belongs to, or even the species it belongs to. Many groups have subgroups whose most noticeable distinguishing feature is their colour – so the predicates ‘is a swan’ and ‘is white’ are not at all suited to “marriage” in a law.
Although ‘all swans are white’ may be superficially (grammatically, etc.) similar to ‘all emeralds are green’, it cannot be used to make reliable predictions. If I were to keep a swan in my own private lake, you wouldn’t be able to reliably guess whether it would be black or white.
Sometimes, people talk about “black swans” as if they were occasional anomalies whose possibility everyone should be forewarned and forearmed about. But really, that is not nearly deep enough or sceptical enough. The real problem is not that exceptions occasionally turn up, but that not enough thought is given to whether laws are involved at all when we try to predict things.
Such laws might be statistical – as long as they’re genuine laws which describe real linkages, and which therefore support counterfactual conditionals. Prediction cannot be based on a mere “statistical snapshot” of the way things accidentally happen to be. For example, in the long run, repeated throws of a pair of dice will result in doubles about one sixth of the time. Even if we don’t actually throw the dice repeatedly, we know that if we were to do so, that proportion would be approached ever more closely. Or again, in a large enough sample of mammals, the sexes will be represented roughly equally. Even if we don’t actually take a head count, we know that if we were to take a big enough head count, we would find roughly equal numbers of males and females. These proportions are not accidental: they’re the products of careful manufacture (shaping, balancing, etc.) of dice, and of evolutionary biology, respectively. Either of these statistical proportions could take part in a statistical law.
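The dice proportion is easy to check either way: by counting (6 of the 36 equally likely outcomes are doubles, and 6/36 = 1/6), or by the quick simulation below.

```python
import random

# 6 of the 36 equally likely outcomes of two fair dice are doubles: 6/36 = 1/6.
random.seed(0)  # fixed seed so the run is repeatable
throws = 100_000
doubles = sum(random.randint(1, 6) == random.randint(1, 6) for _ in range(throws))
print(doubles / throws)  # close to 1/6
```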
But with many statistical phenomena, the numerical proportions we measure are no better than merely accidental. If we extrapolate from the latter for purposes of prediction, our predictions will be unreliable. For example, suppose about one sixth of Australians drive Ford cars. There is nothing to suggest that that proportion is anything but an uninteresting coincidence. In a decade’s time, they may drive entirely different brands of cars, in entirely different proportions. Or again, the human population has been rising because food is getting cheaper, but the wealthier people become, the fewer children they tend to have. So although there has been an overall upward trend, there is no reason to think any sort of law is involved in the rise of the human population. The current rate of population rise is no basis for any reliable predictions about how big the human population will be at any time in the future.
Now for my main complaint: many people don’t bother to ask whether any sort of law is involved in apparent trends such as population rise. They just extrapolate from the current “data”, and expect nature to “continue uniformly the same” (as Hume put it) in the relevant respects, as if a law could describe the process. Often, we have very good reasons to think the process isn’t remotely lawlike – in other words, we have good reasons to think that no law could describe it. Laws are bits of human language, and human language can describe some things but not others.
The reliability of any prediction depends on an essential linkage between what we know already and what we’re predicting. There might be a simple “constant conjunction” between them (to use Hume’s terminology again). Or there might be some other non-causal connection that underwrites a lawlike connection, such as exists in quantum entanglement. But these lawlike connections are not optional – they’re a requirement of prediction. The ever-present question in our minds should therefore be: Is there or isn’t there a lawlike connection between what we’ve observed already, and what we’re trying to predict?
I think that question isn’t asked often enough. And when questions aren’t asked, answers tend to be merely assumed. The assumed answer to the present question is in effect that there is always a lawlike connection of the required sort, because the physical world is assumed to be mechanical and regular simply by virtue of being physical. The naïve Newtonian intuition is that it’s “like clockwork”. Without even asking the question above, we tend to assume that all we have to do is follow the standard pattern of extrapolation from already-observed cases, and the physical world will oblige. Its unfolding patterns may not be obvious at first, the idea goes, but they must be there, waiting to be revealed beneath the apparent confusion.
I think that assumption is profoundly mistaken – so badly mistaken that it’s worth a brief look at the philosophical ideas behind it.
We belong to a tradition that takes the mind to be “spiritual” rather than “material” – it doesn’t interact with material things in the usual way in which matter interacts with other matter. So we think of the mind instead as a centre of consciousness or an engine of experience, in a sense “cut off” from the physical world outside the mind, because it “deals in experience” rather than with material objects. According to this view, whatever the mind knows about matter is made possible because its experiential inputs from the outside world provide “justification” for its beliefs, and if the beliefs are actually true, they count as items of knowledge. This standard analysis of knowledge takes “justification” to be “internal” to the mind. The vague idea is that I cannot accept anything except “what is available to me” within the confines of the “theatre of my own experience”, because otherwise I would have to “step outside of my own skin”. In the supposedly isolated state “inside my own skin”, with only internal cues available to me as “justification”, the best any mind can do is follow the standard pattern of extrapolation from observed cases – in other words, treat white swans in the same way as green emeralds.
Of course most people who belong to this tradition dropped the idea that the mind is “spiritual” long ago. The trouble is, most of us retain its associated epistemological baggage – such as that knowledge consists of true beliefs suitably “justified” by simple “basic beliefs” about experience, as just described. This idea is still so all-pervading, it even finds its way into popular ideas about science: our theories or computer models are analogous to beliefs, so it is widely supposed that they require an analogous “justification” of being supported by “data” – the public counterpart of “basic beliefs” about experience.
Like many philosophical errors, this one is so deep-seated that any alternative can seem unthinkable to those in its grip. How could it possibly be otherwise than that theory is supported by “data”? – Happily, the answer is given in mainstream philosophy of science: observations test theory rather than imply theory. Hypotheses yield predictions which observations either confirm or do not confirm. If a prediction is confirmed, the hypothesis is corroborated by the observation – a very different matter from its being implied by the observation.
But scientists pay little attention to philosophers nowadays. Many imagine that they don’t have to study any philosophy. The tragic result is that they do their own, newly cobbled-together, half-baked sort of philosophy. In a few branches of science (pseudo-science, if we’re honest) internalism of the sort described above has become a will-o’-the-wisp that guides methodology.
For example, consider the application of computer modelling to irregular natural phenomena that look confusingly “ravelled” to the human eye. The hope is that the magic powers of computer modelling can summon forth order from chaos and “unravel” them.
I think that hope is forlorn. Take something as simple as a compound pendulum. A compound pendulum’s individual parts – of which there are only two – behave in a lawlike way, but the whole does not. There is no ideal “marriage” of predicates (of the sort I began with) that link its earlier and later positions. Like so many things, the whole does not have a crucial feature that its parts do have. The mistake of thinking it does is called the fallacy of composition, and it is a common error. (For example, many suppose that if genes are “selfish”, the entire organism must be too.)
A compound pendulum is chaotic in the sense that its position depends in a critical way on initial conditions. Predicting its future position or behaviour from its past position or behaviour is a practical impossibility.
Now of course, it’s easy to simulate a compound pendulum in a computer, because it’s such a simple system. But it’s impossible to get such a simulation to model an actual compound pendulum, because both are chaotic. Their respective behaviours are bound to diverge. Far from “unravelling” the chaos, the simulation multiplies it by simply adding chaos of its own, if anything increasing the inevitability of a mismatch between it and any actual compound pendulum. The simulation may exemplify or illustrate by mimicry the chaotic behaviour of compound pendulums in general, but it’s incapable of modelling any individual pendulum.
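The sensitivity claim is easy to exhibit numerically. Here is a minimal sketch (equal unit masses and arm lengths assumed, and a crude Euler integration – enough for a demonstration, not a serious simulation): two copies of a two-segment pendulum whose starting angles differ by a billionth of a radian soon part company.

```python
import math

G, DT = 9.81, 0.001  # gravity; integration time step (seconds)

def accel(t1, t2, w1, w2):
    """Angular accelerations of a two-arm pendulum with m1 = m2 = l1 = l2 = 1."""
    d = t1 - t2
    den = 3.0 - math.cos(2.0 * d)
    a1 = (-3.0 * G * math.sin(t1) - G * math.sin(t1 - 2.0 * t2)
          - 2.0 * math.sin(d) * (w2 * w2 + w1 * w1 * math.cos(d))) / den
    a2 = (2.0 * math.sin(d)
          * (2.0 * w1 * w1 + 2.0 * G * math.cos(t1) + w2 * w2 * math.cos(d))) / den
    return a1, a2

def step(s):
    t1, t2, w1, w2 = s
    a1, a2 = accel(t1, t2, w1, w2)  # crude Euler step
    return (t1 + w1 * DT, t2 + w2 * DT, w1 + a1 * DT, w2 + a2 * DT)

a = (2.0, 2.0, 0.0, 0.0)         # both arms swung far from rest
b = (2.0 + 1e-9, 2.0, 0.0, 0.0)  # identical, save one billionth of a radian
for i in range(1, 20_001):
    a, b = step(a), step(b)
    if i % 5_000 == 0:
        print(f"t = {i * DT:4.0f} s   gap = {abs(a[0] - b[0]):.2e} rad")
```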
In my opinion, the attempt to model the Earth’s climate using computer simulations is many orders of magnitude more misguided than the attempt to model a warehouse full of compound pendulums. That attempt is inspired by the “traditional” hope that the climate is made of physical stuff, and so “there must be predictable order hidden beneath the apparent disorder”. Well, there may be order in the form of lawlike behaviour on the part of individual molecules, but we have no reason to expect lawlike behaviour on the part of the inconceivably many component “parts” (including causal influences) that together constitute the climate.
I’m not a crank: I think we have good reasons to accept the greenhouse effect. In other words, we have good reasons to think that there is a lawlike connection between the concentration of greenhouse gases in the atmosphere and global temperatures. But a quick inspection of the best graphs we have reveals that at every temporal scale, from one year to several millennia, global temperatures go up and down in a non-monotonic way. Any such graph is confusingly “ravelled” to the human eye in pretty much the same way as a compound pendulum “flies around the place like a madman”. So the lawlike connection, even in this simplest of causal cases, must be extremely tenuous, or buried beneath mountains of extraneous noise. There is no obvious pattern to see here, nor any reason to think there is a “deeper” pattern that computer models could salvage from the disorder.
I don’t know anyone who thinks gangsters should be allowed to run protection rackets. Even the purest libertarians committed to a wholly unregulated market would baulk at that. Yet we can imagine other practices of “exchange” that would amount to much the same thing. Suppose one person is dying of thirst, and the other controls the water supply. Then the latter can charge an “extortionate” amount for it.
Although I don’t think this situation differs much from a protection racket, I imagine many libertarians would say this second situation should be allowed, because it is in a sort of unstable equilibrium. It’s just a matter of time before another water-seller comes along and undercuts the original water-seller’s extortionate price, or so they would argue (I think).
That might happen if there were many potential water-sellers, and if the water supply were not controlled by a few of them, and if there were many potential water-buyers, and if water-buyers were prepared to buy unlimited amounts of water if it were cheap enough.
That’s quite a lot of ifs. Buts: water-buyers cannot drink or carry more than a small amount at a time, and water-sellers know it. Water-buyers must and will fork out the cash for the water they need, and water-sellers know it. Unless water-sellers are not interested in making money, they’ll cooperate with each other and make a lot of it rather than try to undercut each other’s prices. Why would they undercut each other if they can make more money by cooperating?
I think there is only a difference of degree between this extreme situation and milder versions that we all see happening around us. There are all sorts of things that people must buy, but will only buy limited amounts of. For example, you must travel to and from work. But most people want to travel as little as possible. Even if you love travelling, you only have time to do so much of it.
Or again, unless you’re a collector, you only want one car, but you might need that car quite badly. Even if you’re a “workaholic”, you cannot have more than a couple of jobs, and if you only have one job, you need it desperately – almost as desperately as someone dying of thirst needs water.
I don’t know anything about economics, but it’s just common sense that if the supply of these things can be controlled, any self-interested parties who can control the supply will do so and raise the price rather than compete with each other.
We are a cooperative species. To put it another way, we are a price-fixing species; a species that runs cartels.
I am not a libertarian, but like many libertarians I see striking similarities between an unregulated market and “nature”. Living things thrive in nature, probably better than anything an environmentalist can organize by second-guessing nature. But the living things of value to us – as individual humans – tend to thrive much better when they are managed by farmers. When fertile land is left untended, weeds grow rather than food crops.
I’d guess many libertarians quite like the parallels between unregulated markets and nature. Maybe they have misunderstood what goes on in nature. Here’s Darwin:
What a book a devil’s chaplain might write on the clumsy, wasteful, blundering low and horridly cruel works of nature!
And here’s JS Mill on the same theme:
In sober truth, nearly all the things which men are hanged or imprisoned for doing to one another are nature’s every-day performances. Killing, the most criminal act recognised by human laws, Nature does once to every being that lives; and, in a large proportion of cases, after protracted tortures such as only the greatest monsters whom we read of ever purposely inflicted on their living fellow creatures. If, by an arbitrary reservation, we refuse to account anything murder but what abridges a certain term supposed to be allotted to human life, nature also does this to all but a small percentage of lives, and does it in all the modes, violent or insidious, in which the worst human beings take the lives of one another. Nature impales men, breaks them as if on the wheel, casts them to be devoured by wild beasts, burns them to death, crushes them with stones like the first Christian martyr, starves them with hunger, freezes them with cold, poisons them by the quick or slow venom of her exhalations, and has hundreds of other hideous deaths in reserve, such as the ingenious cruelty of a Nabis or a Domitian never surpassed. All this Nature does with the most supercilious disregard both of mercy and of justice, emptying her shafts upon the best and noblest indifferently with the meanest and worst; upon those who are engaged in the highest and worthiest enterprises, and often as the direct consequence of the noblest acts; and it might almost be imagined as a punishment for them. She mows down those on whose existence hangs the well-being of a whole people, perhaps the prospect of the human race for generations to come, with as little compunction as those whose death is a relief to themselves, or a blessing to those under their noxious influence. Such are Nature’s dealings with life. Even when she does not intend to kill she inflicts the same tortures in apparent wantonness. In the clumsy provision which she has made for that perpetual renewal of animal life, rendered necessary by the prompt termination she puts to it in every individual instance, no human being ever comes into the world but another human being is literally stretched on the rack for hours or days, not unfrequently issuing in death. Next to taking life (equal to it according to a high authority) is taking the means by which we live; and Nature does this too on the largest scale and with the most callous indifference. A single hurricane destroys the hopes of a season; a flight of locusts, or an inundation, desolates a district; a trifling chemical change in an edible root starves a million of people. The waves of the sea, like banditti, seize and appropriate the wealth of the rich and the little all of the poor with the same accompaniments of stripping, wounding, and killing as their human antitypes. Everything, in short, which the worst men commit either against life or property is perpetrated on a larger scale by natural agents.
Fifty years have passed since the publication of one of the most important books of the twentieth century: Thomas Kuhn’s The Structure of Scientific Revolutions. This book is vital for our understanding of science in ways that are too numerous to list exhaustively. Here are a few random thoughts half a century on.
First, Kuhn showed us that the “Whig history” told by science textbooks is wrong. In fact it’s downright dishonest. Typically, science textbooks barely touch on the history of science, but when they do, the story nearly always goes that we are blessed by being currently in possession of the truth. The past has been a series of cumulative steps leading our forebears towards this glorious present.
The reality is always much messier and less “monotonic” than that.
Second, Kuhn showed us that science is a social process in which committed partisans vie for supremacy. The real worry here is not that scientists are not the saints they are often painted as being, but rather that the decisions they make when they choose one theory rather than another are not rational decisions.
If we do not have good reasons to think theory change is mostly rational, we do not have good reasons to think current theories are even approximately true.
Third, Kuhn showed us that communication between partisans of alternative paradigms is at least problematic, and may even be impossible. (Kuhn used the word ‘paradigm’ for a central theory combined with its “penumbra” of guiding ideas – techniques, unwritten assumptions, and above all notable successes that work as examples of “how to do it right”.)
Kuhn put our understanding of science through a harrowing trial. I remember lying awake at night the first time I read it, half excited and half fearful that everything I had taken for granted about science was wrong. Personally, I think science – real science, not pseudo-science – survives this trial. But it’s a surprisingly near-run thing.
Communication is problematic between partisans of alternative paradigms because the words they use have different meanings. For example, in Newtonian mechanics the word ‘mass’ refers to an intrinsic property of an object, but in the newer relativistic alternative the same word refers to a quantity that depends on the reference frame.
These problems are troubling in science, but they are more obvious in humanities subjects such as philosophy. Students are often anxious that their teacher will penalize them if the teacher doesn’t agree with the ideas and opinions expressed in their written work. But they have little grounds for worry if there is disagreement. Disagreement is a sign that teacher and students are at least working within the same paradigm. And most third-level teachers are scrupulously careful to avoid penalizing students for expressing opinions they disagree with. An apparent lack of understanding is the real liability.
A much more treacherous situation arises when a student is an original thinker, and is writing within an entirely different paradigm from that of the teacher, so that they use words differently. In this situation, the teacher is liable to think the student has simply missed the point, or is changing the subject. This can look like a lack of understanding rather than the embracing of a new or different understanding.
I have never been original enough for that situation to arise in my own case. All the same, I think I have seen the potential for such equivocation in baffled expressions on the faces of peers and colleagues in a few areas. (And even teachers – yeah, I’m talking about you DD, when you saw my copy of EO Wilson’s Sociobiology!)
For example, ethics is divided between those who make moral judgments with reference to the consequences of action, and those who make moral judgments with reference to the motivation of agents. The word ‘right’ changes its meaning across this division, much as ‘mass’ did between Newton and Einstein. Culpability does not even enter into moral deliberation in the former, but it is the central concern of the latter. So each side tends to regard the other as “not thinking morally at all”. In a discussion of ethics, one side seems to the other to be simply “changing the subject”.
Science gets over this sort of failure of communication through observations and testing, but there are no such tests for moral theories. Much depends on what is in fashion, on what the most – or most influential – people think is a worthwhile way of thinking.
Utilitarianism was taken seriously in the nineteenth century, but the tide turned against it. It is widely thought to have been discredited. Thus the few who don’t think so are liable to be treated as stubborn or even ignorant people who “haven’t heard the news”.
Another area in which a gulf yawns between alternative paradigms is epistemology. The traditional project of epistemology was to worry about “justification” and to try to “refute the sceptic” (by which is meant the radical or Cartesian sceptic who feels he has no reason to think he is perceiving the “outside world” at all). WVO Quine’s “naturalized epistemology” rejects the Cartesian dichotomy between “inner” and “outer”, and the traditional concern with internal “justification”, asking instead about the external reliability of the processes that give rise to beliefs. To the traditional epistemologist, he has simply “changed the subject”. He is considered a “naïve realist” rather than someone who sees, like Donald Davidson, that we have “unmediated touch with the familiar objects whose antics make our sentences and opinions true or false”.
I spent a few years as the lone “scientific realist” among the graduate students at a US university with a strong tradition of “continental” philosophy. I was considered naïve, having perversely turned my back on all the stuff they assumed had been known since Kant’s day about noumena being forever beyond our ken and all that. (Despite our apparent ability to refer to “them” using the word ‘noumena’!)
The one thing we did have in common was that we all worshipped Kuhn, for various reasons.
Then one day, Thomas Kuhn Himself came to town. It was a meeting of the American Association (or something like that) for the Philosophy of Science (or something like that), around 1990. As a graduate student, I was dutifully writing names on badges. Someone said “here comes Thomas Kuhn!” as the great man approached the desk, and quietly announced his name with modesty and grace, despite the fawning multitudes loudly welcoming him to the Chicago Hilton. With trembling hands I wrote his name on the badge, desperately anxious that I might not be spelling it right.
Later, I got away from the desk and started to drink the (free) beer. After half a dozen (or something like that) cold ones I approached him, in the main ballroom (or something like that). At this stage, he was surrounded by the graduate students of at least three universities in the Chicago area. But eventually I saw an opening, and asked him: “Professor Kuhn, if you had to answer Yes or No to the question whether you are now a scientific realist, what would your answer be?”
Reader, he said Yes.
(And then he qualified his answer with some other stuff, but you wouldn’t be interested in that. Detailed, boring kind of stuff.)
I’m often frustrated by the poor quality of discussion of the problem of overpopulation (if indeed it is a problem). It seems to me that almost all participants in the discussion have missed one of the most important insights of evolutionary theory, an insight attributable to Malthus.
The population of any species in any closed habitat would rise geometrically, but it cannot, because it always hits a “ceiling”. This ceiling is mostly set by supply. I mean to construe “supply” in the most general terms – usually what matters is the availability of such necessities for living as food, water, and light. But it can include more, such as the availability of ornaments used by bower birds in sexual selection.
Where weeds can grow, weeds do grow. The weed population expands, and only stops expanding when overcrowding prevents further expansion. Where bower birds can build ornate bowers, bower birds do build ornate bowers. The bower bird population expands, and only stops expanding when the quality of the poorest-quality bowers is too low for their builders to have realistic hopes of getting chosen by female bower birds, and hence to reproduce. In both cases, the population bounces along the ceiling like a helium balloon that has slipped out of a child’s grasp. The ceiling is set by supply, although what needs to be supplied differs sharply from one species to the next.
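The balloon-at-the-ceiling picture can be caricatured with a discrete logistic model (my illustration, not anything from Malthus; the growth rate and ceiling are invented numbers). Growth is near-geometric at first, then the population overshoots the ceiling and bounces around it:

```python
# Discrete logistic toy: geometric growth that is choked off by crowding.
K = 1000.0   # the "ceiling": what the habitat's supply can sustain
r = 2.5      # per-generation growth rate; high enough to overshoot and bounce
pop = 2.0
for gen in range(25):
    pop += r * pop * (1.0 - pop / K)
    print(gen, round(pop))  # climbs fast, then oscillates about the ceiling
```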
Of course there is an attrition rate: some members of the population are picked off by predators. But that rate is set by the population of the predators, which in turn is set by their food supply – in other words, by the replacement rate of the population they prey upon, which is exactly where we started.
How much of the supply each individual consumes decides how many individuals there can be. For example, a given field that can sustain a population of 100 rabbits might only be able to sustain 10 sheep, or 50 rabbits and one fox.
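In code, the field example is just a division (the numbers are from the example above, in invented units of grass; the fox is left out, since it consumes rabbits rather than grass directly):

```python
# Ceiling = supply / per-head consumption.
daily_grass = 1000
per_head = {"rabbit": 10, "sheep": 100}

for animal, need in per_head.items():
    print(f"{animal}: ceiling of {daily_grass // need}")
# rabbit: ceiling of 100
# sheep: ceiling of 10
```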
So two components determine the number of individuals: the supply and the rate of consumption. (Perhaps I mean “demand” here, but I know nothing about economics, and I don’t want to suggest that I am talking about anything other than biology.)
The human population rose dramatically in recent centuries, not because humans decided they wanted to have more children, or because they became more sexually promiscuous, or because many generations have passed since The Great Flood, or even much because advances in sanitation and medicine lowered the attrition rate. It was mostly because food became easier to procure, thanks to cheaper energy and advances in agricultural technology.
So although there may be a human overpopulation problem, an increase in the population is a sign of good things happening, or at least of good things having happened. Although there may be trouble ahead, the trouble will not be that the expanding population finally “hits a wall” of the Earth’s “carrying capacity”. That “wall” is better understood as a ceiling, and the population has always been already at that ceiling. It is hardly ever acknowledged that the normal condition of the Earth is to be at “carrying capacity”, and that at all times some places enjoy a surplus while others suffer a famine, with the same statistical inevitability as floods and droughts. The trouble is not that we hit a wall or a ceiling but that the ceiling might start to get lower. This could happen if energy to produce food became significantly more expensive. The reality of a lowering ceiling is famine.
There are two obvious ways to lower the population, if indeed that is a good thing to do. The first is to artificially lower the ceiling by limiting the supply. The second is to increase the consumption of each individual, so that the same habitat can sustain fewer individuals. Suppose we are again considering grazing animals in a given field of grass. In effect, the first solution is to lower the number of rabbits by having less grass. The second solution is to lower the number of grazers by turning the rabbits into sheep.
In the case of rabbits and sheep, the “supply” is of grass. In the case of humans, the “supply” is of more costly and abstract items, things more like the ornaments of bower birds. All humans need an education, for example, although giving a child a good education generally entails having fewer children. In increasingly affluent countries, humans get increasingly ambitious about their need for houses, and cars, and expensive clothes, and foreign holidays, and memberships to golf clubs. Although we may disapprove of the levels of consumption here, we should remind ourselves that such levels are a good way of keeping the population down. People who have high expectations for their children have fewer of them, and invest more in each. This explains why as societies become more affluent, life becomes less cheap, and the birth rate generally drops.