Tuesday, 20 November 2012

Skepticism about skepticism: Is there really a problem of causality?

My philosophy lecturer had a rant* this morning, in which she asserted that no real progress had been made in the area of causality since Hume. On the whole this was a refreshingly true thing to say. Yet in one respect important progress has been made that she would refuse to recognise, and in another, some of the ‘progress’ to which later philosophers have aspired is completely misguided.

Hume’s problem of causation takes a similar form to the problem of induction: Causes are not things we can perceive, just as we could not perceive a principle of induction, so we can’t have knowledge of them from experience—yet since causes are things in the world, we can’t have a priori knowledge of them either. In very brief outline: For an event to be the cause of another, there has to be a special kind of relation between them, a ‘necessary connexion’ which specifies that the one will always be followed by the other. This relation is unknowable by experience, says Hume, because all we actually perceive is regular succession—certain events continually following certain others (a billiard ball always moving when hit, say). We can’t perceive the relation, so our positing of it is merely psychological, and irrational.
Has no progress been made since Hume formulated this problem? Popper pointed out in Objective Knowledge that the problem of causation, while similar in form and even taken by Kant to be the same thing as the problem of induction, is directly refuted by Hume’s ‘negative’ solution to the latter. It is irrational to posit causes only if we assume that knowledge is revealed to us by experience—the ‘bucket theory of knowledge’. In the rest of the aforementioned book, Popper sets out an alternative theory of knowledge that does not require the illogical form of inference that Hume himself rejected. According to this alternative, there is no problem in conjecturing a causal relation, for whatever reason, and seeing if it stands up to criticism (empirical or otherwise).

There is no causality—there is only explanation

So that’s an extremely important respect in which progress has been made. Meanwhile, contemporary philosophers have come up with various theories of the nature of causality. Broadly these can be either mechanistic or difference-making, and either monistic or pluralistic. All of them—including, I suspect, the pluralistic accounts—commit the same basic error: thinking that there is even a problem to be solved here. They assume that there is a thing in the world called causality or a ‘law of causality’ that universally relates events—or, in the case of pluralistic accounts, that there are a few such ‘laws’ for different types of events. Both assumptions are wrong, because causes are simply parts of our explanations** of certain phenomena. Clearly we have different kinds of explanations in different fields. Causes in economics are not the same thing as causes in physics, just as explanations in economics draw on different knowledge from those in physics. The attempt to find a relation common to both fields, or even to define one relation that applies automatically in one and a different kind of relation in the other, is not an interesting problem to solve, because it divorces causes from their explanatory roles. And sometimes we don’t even need causes: they are inapplicable at the level of microscopic physics, for example. F. Russo and J. Williamson recently put forward an ‘epistemic theory of causality’, which says, perhaps in agreement with me, that the distinction between mechanistic and difference-making causality is a false dichotomy. There are different kinds of evidence of causality, they say, and it is improper to try to infer causal laws beyond that evidence: “Causality is just the result of this epistemology.” I hope to criticise this alternative in a later post. For now, there is no law of causality: There is only a concept of causality that functions as an explanatory tool.

 *Viz a planned departure from the syllabus to opine in an honest way that is generally more interesting than the rest of the lecture.
** By ‘explanation’ I simply mean accounts of what exists, what it does, and how and why.

Wednesday, 31 October 2012

Part 2: Monarchy for anarcho-capitalists

Since my last post on this, another argument has been brought to my attention. We have, in the United Kingdom, an apolitical head of state. (Some people say that Elizabeth II is just an exception to the general tendency of monarchs to interfere in politics. But she is not so much an exception as a continuation of the development of an apolitical Crown: Most monarchs before her likewise interfered less than any of their predecessors had. It's not impossible that this will change after Prince Charles succeeds to the throne, but it's not very likely. Elizabeth II has established new standards of behaviour for monarchs. Violations of these standards are considered a betrayal of trust, and cause greater outcry than could be provoked by any politician who betrays his promises.) Moreover, the monarchy has evolved to be independent of other parts of government. A distinction is thus set up between the state and the government.

When the government has been entirely privatised, the concept of the state of the United Kingdom could still serve an important purpose. The first anarcho-capitalist society will be surrounded by nation states (we can all agree that a worldwide transition to anarchy is comparatively implausible). There will need to be a clear political boundary between it and those other states, much as we need established boundaries between states now for international law to function. The border of the ancap 'state' couldn't be completely open, because this would allow, in principle, for people to flood in from outside, be lawless, and overwhelm and destroy the ancap system. People would have to recognise the legitimacy of this border. So the monarchy would serve the same purpose in an ancap society as it does now: namely, to maintain traditions of legitimacy and to be a symbol of the sovereignty of the UK.

Wednesday, 24 October 2012

Why libertarians should support the monarchy (part 1)

As the Diamond Jubilee year draws to an end, examples mount up in support of Andrew Marr's description, made earlier this year in the wake of the intense Jubilee programme, of the monarchy, with the Queen at its head, as our "department of friendliness." Marr's description is admittedly cringy. But is it wrong?

 What the monarchy is not
First I’ll look at some arguments inspired by this blog post, and show that there is no libertarian case against the monarchy in Britain.
That the monarchy is a cost to the taxpayer is a common misconception. While the real amount spent on the monarchy is somewhat debatable if we take into account the cost of security and so on, the revenues from the Crown Estate more than make up for it. As of last year, 85% of this revenue goes to the government, and the other 15% goes to the Queen, to whom the land actually belongs. So in actual fact, for the last 250 years the monarch has effectively had to give money to the taxpayer (in return for an allowance), not the other way around.
A libertarian might still object to the Crown Estate because it is not strictly private property—it’s a statutory corporation, accountable to Parliament, which the Queen is not allowed to sell. So the land is effectively nationalised. If it were available to the private market the land might generate more wealth than it does in its current 'nationalised' state. For a start, this is a somewhat dubious counterfactual, as the Crown Estate has actually been doing rather well: profits from this land have risen in the last year (see Crown Estate website link above). But as we shall see, even conceding this point, it is still not a good reason to oppose the monarchy.
Nor can we object to it from the perspective of property rights. George III voluntarily gave up the income from the Crown lands, and this agreement has been voluntarily continued by successive monarchs. There are no identifiable private owners who are being forced to give up their property to the state (as is the case for actual nationalised industries).
And even if there were such a grievance, this would still only be an objection to how the monarchy is set up financially, not to the institution itself—which could perfectly well exist under some other arrangement, as it eventually will, as it continues to evolve. I similarly object to how the military and the NHS are funded; I’m not opposed to armies and hospitals. I merely hope and expect that as these institutions are improved, they will gradually be privatised. Given the comparative success of the Crown Estate, and given its voluntary nature, there is less reason to abolish the monarchy on financial grounds, not more. Why do anti-monarchist libertarians commit this double standard? I suspect it is part of a deeper error, which I will treat presently.
It is not the job of the government to make money, so it has no business accepting the revenues of the Crown lands.*
This argument contains the methodological assumption that to decide whether a policy is right, one first chooses a set of principles and then derives policy from them. So, a libertarian who takes this approach might conclude that the state’s proper functions are to run the police and the military and nothing else, and oppose the existence of all other government institutions on those grounds. It is true that as these things are improved, they will eventually be privatised (if you don’t agree, accept it here for the sake of argument). But it is profoundly untrue that we should decide whether to support a policy by checking it against a principle, and oppose it if it’s not allowed by the principle. This is dogmatic: It has to assume that the principle is infallible, as it is being treated as the ultimate authority on the rightness or wrongness of a given policy. Treating it as an authority makes it difficult or impossible for the principle itself to be improved.
This utopian methodology fails to take into account that it is harder to create knowledge than to destroy it. This means that less knowledge will be embodied in our institutions if they are completely revamped every time the principles of government change, abolishing everything that doesn’t fit the utopian criterion. Even without one of these Ultimate Libertarian (/insert other political persuasion here) principles, one might wish to abolish anything that seems to exist without an explicit reason of some kind. But this is also dogmatic: We don’t know the value of many of our traditions. It’s wrong to assume that a failure to name their value entails that they are worthless. Countless rules of grammar don’t appear to have explicit motivations, but if we started revising the English language on those grounds we would end up with a language not dissimilar to Newspeak.
The better approach—and the one on which England bases its success as a civilisation—is to retain a given tradition until and unless a problem is found with it (an actual problem—not merely a failure to live up to utopian criteria or articulated justification). Tradition is existing knowledge. Much of this knowledge is not explicit; we don’t consciously know the reasons behind many rules and ceremonies. It is for this reason that people who call for the monarchy to be abolished, rather than altered in certain ways, are so woefully mistaken. It would be easy to wipe it away, along with other government institutions, and replace it with a government derived from libertarian principles, but there is danger in that ease.

* Quoted from this post.

Saturday, 25 August 2012

Free will: Misconceptions answered

Having established the problems with reductionism, and having established a need to explain the 'world' of abstract ideas on a higher-than-deterministic level, let's turn to some of the more common determinist and compatibilist claims about free will.
An action is only free if it was possible to do otherwise. Our thoughts consist of activity in the brain, which is a purely physical phenomenon. Physical activity is either random or deterministic – so our thoughts are either physically inevitable (determined) or controlled by random factors. Neither of these allow for free will.
For a start, this isn't very helpful in itself. Like solipsism, it merely proclaims the non-existence of something, not because it's a bad explanation, but just because of an apparent logical difficulty. It denounces a host of experiences as illusory, closing off the possibility of finding better explanations for them. It's not a good approach.

It's also a reductionist argument. It assumes that people's choices must be explained in terms of low-level interactions, as if this form of explanation were more 'fundamental'.  Even if we did know all the (inconceivably many) physical events that led to a certain thought, they wouldn't be a good explanation of why the person had that thought. If I think to myself, 'I need to write a blog post', the reason I had that thought could be explained as, 'Because X neurons fired off in Y and Z locations' – but this only tells us what physically prompted my thought.  It was already clear that neurons needed to signal in order for me to have had a thought. So it tells us nothing we didn't know already.

Fine, some stuff is emergent. But still, every effect has a cause. Even if my thoughts and actions are caused by emergent ideas and circumstances, according to causality they're still unavoidable. So for every decision I make, I couldn't have done otherwise.
People are creative. Ideas change, technology progresses, all the time people form new theories in the light of criticism. It doesn't make sense to talk about creativity in terms of sufficient causes, because if a new idea could be inferred from something that went before, that thing would contain the idea – and so it wouldn't be new at all. The future is unavoidable only on a lower level of explanation with respect to that of knowledge-creators. On the abstract level of creative thought, causal determinism isn't applicable. It is true that there is always a single answer to questions about what *physical* state something will be in in the future. Physically it is determined. But on the emergent level of knowledge creation the future has to be open.
But you don't know what your thoughts are going to be before you think them. And you can't be in control of something if you can't predict what it's going to do, surely. So how can you be in control of your thoughts?
Again, if you could predict your own thoughts, those predictions would be the thoughts themselves. There's no real reason to think you can't be in control of a thought as you're having it, though it didn't occur to you to have it beforehand. We pursue certain trains of thought purposefully: we can freely intend to come to a conclusion without knowing in advance what the conclusion will be. This doesn't imply a lack of control; it just implies creativity.
People are free in the sense that they have evitability: They can alter their courses of action according to projected consequences. Even if at the micro-level we are behaving deterministically, we're still free in the sense that matters.
This view of free will would mean that a simple computer program could have a little bit of that which makes humans free. It is true that computers can be programmed to follow abstract rules, such that their activities couldn't be explained in purely physical terms. But human freedom is on a higher level of emergence than that, because of creativity. If we were 'free' in the same way as that computer program, just to a higher degree, our actions would be predictable in principle and hence determined. So there's more to it than having purpose and 'evitability'.

Tuesday, 7 August 2012

Free will emerges

Free will and notes on The Beginning of Infinity, Chapter 5
Arguments against free will tend to make the reductionist assumption that, because there isn’t a physical explanation for human freedom, it must be an illusion. Broadly, reductionism is the idea that the world can only be explained as the sum of interactions of its fundamental parts. For example, the high-level properties of a substance (such as boiling point or state at room temperature) can be predicted from low-level atomic interactions. Lots of these sorts of ‘reductive’ theories are true and useful; they have reach. Newtonian mechanics was a ‘reduction’ of Kepler’s laws of planetary motion and Galileo’s theories of motion. But reductionism as a methodology is mistaken. Science doesn't progress by analysing high-level phenomena into low-level phenomena. There are lots of high-level phenomena that, while they ultimately consist of low-level phenomena, have patterns and laws that are present on the higher level but not, so far as we know, on the lower level.

Douglas Hofstadter gave the following as an example of the inadequacy of reductionist explanations for emergent phenomena. Imagine a set-up of millions of dominoes, placed closely together (so that they can knock each other over in the usual way) in a complex pattern of rows. The dominoes are spring-loaded and, if knocked over, will pop back up after a set time. If a row is knocked over it can be interpreted as a binary ‘1’, and if not knocked over, a binary ‘0’. The set-up is sufficiently rich and complex that it can perform computations, and in this instance it is set up to tell you whether the number keyed in (by placing a row of that number of dominoes at a specified position) is a prime. One domino in the set-up is the output domino: If it is knocked over at the end of the computation, it means a divisor was found and hence the input number is not a prime. If it stays standing, the input number is a prime.

Now, if the output domino stays standing after one of these operations, an observer may single it out and ask: “Why did that domino never fall?” The reductionist explanation would be: “Because the domino behind it never fell, because the domino behind that domino never fell, because…” – and so on, or: “Because none of its neighbours ever fell, because none of their neighbours ever fell,” and so on.  This answer is true, but it is not an explanation: It merely states the already obvious fact that no domino will fall unless one of its neighbours falls. To explain why the domino didn’t fall, we have to make reference to the non-physical concept of primality and to the dominoes’ emergent capacity to say whether a given number is a prime.
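The abstract computation the domino network performs is, in essence, trial division. Here is a minimal sketch (Python is my choice purely for illustration; the dominoes implement the same logic mechanically):

```python
def is_prime(n):
    """Trial division: True if n has no divisor other than 1 and itself.

    This is the abstract computation the domino network performs:
    the output domino 'falls' exactly when a divisor is found.
    """
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # a divisor exists: the output domino falls
        d += 1
    return True  # no divisor found: the output domino stays standing

print(is_prime(641))  # → True: the output domino stays standing
```

The explanatory point survives the translation: to say why `is_prime(641)` returns `True`, one refers to primality, not to the particular sequence of machine-level operations that happened to execute.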
So reductionist 'explanations' are inadequate for some emergent phenomena. Even if the series of physical events described by the reductionist account did actually happen, such emergent patterns have to be explained on their own terms. If human freedom is emergent, it doesn’t conflict with physical determinism -- it just has nothing to do with it.

There is a reductionist idea that the mind cannot affect the physical world, on the grounds that only physical events can cause other physical events. But the idea of a ‘cause’ is abstract; at the purely physical level, cause and effect are interchangeable. The laws of motion can retrodict as well as predict. So a cause is just an explanation we infer for why something happened, and there is no reason to think that physical explanations are the only explanations we have.

For more, see David Deutsch, The Beginning of Infinity, Chapter 5.

Popper’s three Worlds

To describe the emergence of human knowledge, Popper posits three worlds -- World 1, the world of physical events; World 2, that of mental states or subjective experiences; World 3, that of products of the human mind: theories, scientific (or otherwise intellectual) problems, the information (if not the physical stuff) in books and libraries. It used to be popular to deny the existence of World 1 and to claim that only subjective experience exists. Now it’s more fashionable to claim that only the physical world exists and to deny the existence of experience. The once-popular denial of the reality of World 1 was refuted by Dr Johnson on the grounds that it ‘kicks back’. If the physical world is an illusion, and that illusion behaves exactly as though it were real, then it needs to be understood and explained in exactly the same way as would a real physical world. It kicks back, and so it might as well be considered real.

To explain the physical (World 1) presence of man-made objects -- say, skyscrapers, computers, nuclear reactors -- we have to refer to the theories people formed about how to produce them. These World 3  theories could also be mistaken. A miscalculation might lead to the World 1 event of a bridge collapsing. World 3 has a logical structure that exists independently of humans. For example, though natural numbers are a human invention, facts about them can be discovered, such as the existence of primes.

The ‘human’ dimension of World 3 also kicks back. We make judgements about people’s beliefs, values, habits, etc., and explain their actions as responses, based on those things, to a given set of circumstances. I might be convinced that my boss is going to sack an incompetent colleague, but then discover that, because my boss is the forgiving type, the colleague has instead been put on a training programme. Using the laws of physics to explain all of these things reductively would be absurd. It would be as much as to say that they don’t exist, even though they kick back. If someone thinks that 4+4=9, he will be kicked back by the laws of mathematics. Similarly (and quite rightly), people who deny the existence of free will are kicked back by the contradiction inherent in proposing such arguments as things that people ought to favour (to choose) over other theories.

For more on the three Worlds, see Karl Popper, The Open Universe, Addendum 1: Indeterminism is Not Enough.

Saturday, 9 June 2012

Free will and creativity

So I was at a philosophy conference last weekend where a handful of undergraduates presented papers on various topics. One paper replied to some recent criticisms of Frankfurt-cases as counterexamples to the claim that, for a person to be morally responsible for an action, it must have been possible for him to do otherwise. This claim, generally accepted to this day, is known as the Principle of Alternate Possibilities. Frankfurt's argument is essentially that, even if sufficient conditions are met to ensure that a person acts in a certain way, they do not necessarily explain why the person acted in that way. Only the true explanation, which is to be found in the person's will, can tell us whether he acted freely. Frankfurt's main thought experiment runs thus:

Jones has decided to kill Smith. Black also wants Smith to be killed, so he monitors Jones' mental state and will know if Jones is going to change his mind. If he does change his mind, Black has the power to make him kill Smith anyway. But Black doesn't want to get involved if he doesn't have to, so if Jones does go through with it, Black won't intervene. In this case, it seems intuitively true that Jones would be morally responsible for his decision to kill Smith, even though he couldn't have done otherwise.

The counterexample suggests that the real reason someone might be said to be morally responsible for an action is its explanation in terms of their will – their own inner reason for having so acted. Jones didn't kill Smith because Black determined him to – he did so because of his own decision. (There is the obvious objection that, if causal determinism is true, Jones's very thought process couldn't have gone any other way, so he could hardly be blamed for that either. I'll return to this shortly.)

Now, a recent criticism of Frankfurt-cases as counterexamples to the Principle disputes Frankfurt's concept of action. For something to count as a genuine action by an agent, the agent must have had the ability to refrain from performing it at the time of the action. An 'action' is thus defined as an intervention into the course of nature that the agent need not have brought about. By this standard, Jones didn't even perform an action, let alone bear moral responsibility for it. But, as Moran points out, formalising this argument reveals that it's just a brute assertion of incompatibilism: No actual case is made for the claim that free will occurs only if the agent has the possibility to refrain from their chosen course of action. It fails to show that there is any connection between an event's depending on the agent's will and the agent's ability to refrain. So this mere definition of agency isn't a very successful criticism in itself.

Moran is critical of this and also of a second argument which says that, in the Frankfurt case, Jones does not have the right sort of control over his actions. Different sorts of control have been identified for the purposes of this debate, and Moran argues that a kind of hypothetical or 'conditional' control should suffice for agency. For example, Sally is driving a car and turns left. She was 'metaphysically determined' to do this, and in that sense she couldn't have done otherwise. However, it is still the case that if a certain set of events occurred, Sally would not have turned left. The ability to respond appropriately to events is part of her 'intrinsic properties': In principle she can refrain from turning left, and this seems to be the relevant sense in which she 'can' do that, even if she is destined to respond to events appropriately and is not 'able' to respond in any other way.

But even inanimate objects have this sort of conditional 'power' to some extent. If it rains, the grass can grow; but in the event it hasn't rained, so the grass doesn't grow. So clearly the idea that conditional powers constitute free will needs to be further explained. The most Moran says about this is that humans are "highly responsive to their environment," and that

we are the kind of creatures who are psycho-physiologically sophisticated in such a way that if any of a range of possible events occur, we are capable of the appropriate responses.

But a computer could be programmed to 'respond appropriately' to a range of possible events – so this isn't enough for free will. It also seems wrong to imply that inanimate objects, in their sheer obedience to the laws of physics, have a certain (if tiny) amount of the quality that supposedly makes human freedom. But free will isn't a matter of degree. This was brought up in the question period as someone asked where to draw the line between animals, etc., that aren't responsive enough to have free will and those which are. The reply was that more work needed to be done before a clear distinction could be made. 
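To make that first point concrete, here is a deliberately trivial sketch (the events and responses are invented purely for illustration) of a program that 'responds appropriately' to a range of possible events, yet clearly has nothing like free will:

```python
# A trivial 'responder': it maps each of a range of possible events to an
# appropriate response. The events and responses are invented examples.
RESPONSES = {
    "rain": "open umbrella",
    "green light": "drive on",
    "obstacle ahead": "swerve",
}

def respond(event):
    # Return the appropriate response if the event is recognised;
    # otherwise do nothing.
    return RESPONSES.get(event, "do nothing")

print(respond("rain"))  # open umbrella
```

Mere appropriate responsiveness, then, is mechanically cheap; whatever distinguishes human freedom must lie elsewhere.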

In the second section of the paper it is argued that the future may be 'subjectively open' even if it's objectively determined. When we talk of a person considering 'options', we might only mean that those options are 'epistemically' rather than objectively possible. This looks like a restatement of Frankfurt's argument. But if something is only 'subjectively' the case it might just as well be an illusion, so this doesn't suffice either. It was brought up in the question period that our deliberations about the subjectively open future are themselves determined, so these Frankfurt-type arguments beg the question against incompatibilism. In any case, none of the arguments so far have shown that humans have intrinsic properties that result in free will.

Perhaps it has something to do with creativity. When we talk of people deliberating over various options, we refer more than anything else to the creative act of conjecturing and criticising various options and thereby also conceiving new options that didn't previously exist. Creative processes are inherently unpredictable, which is problematic for the requirements of hard determinism. Also, if I haven't misinterpreted it, the view of free will as creativity is embodied in Karl Popper's 'two-stage model' – mirroring his epistemology of conjecture and refutation. In a later post I'll look at this model and criticisms of it.

NB: I have referred to the sources below but if anyone wants citation for a particular sentence I'd be happy to give it.

Paper from the conference -- 'Agency, Frankfurt-Cases and the Compatibility of Determinism with Free Will and Moral Responsibility' in the BJUP
Alvarez, 'Actions, Thought-Experiments and the "Principle of Alternative Possibilities"'
Frankfurt, 'Alternate Possibilities and Moral Responsibility'
Summary of Popper's two-stage model