Tuesday, 10 February 2015

Immigration and the collapse of society: a reply to Michael Huemer

Michael Huemer’s article ‘Is There a Right to Immigrate?’ tries to show that restricting immigration violates the prima facie right to be free from harmful coercion, a right which is not overridden by the points usually raised in defence of restriction. The approach of using a thought experiment based on certain widespread intuitions (rather than taking a philosophical theory or ideological orientation to be true and deriving policies from it) is very sensible, and makes the whole article more relevant and persuasive.

I think the article succeeds as an argument for economic immigration, and it may even justify an open-borders policy for the US. The problem lies in the claim that its arguments "apply equally well to other countries." On the contrary, most of its responses to objections to unlimited immigration become naive when applied to other developed countries, such as Britain and elsewhere in western Europe.

A related problem is that the article ostensibly aims to defend immigration as such, yet its central thought experiment refers strictly to two parties who want to trade with each other (starving Marvin and an unspecified shop owner) but are prevented from doing so by a third party. What hasn't been addressed is the large number of people who would come purely to receive certain benefits offered in the developed world. For example, many of those countries have socialised health care, whose legitimacy depends on the ability and willingness to treat everyone, rich or poor, without charging them. A flood of critically ill immigrants would destroy that institution (or, more likely, the government under which open borders were introduced would be voted out, and restrictions would be reintroduced).

The argument for preventing that kind of immigration is not the same as the argument that "because of a policy one has voluntarily adopted, if one did not coerce one’s victim in this way, one would instead confer a benefit on the person that one does not wish to confer." It is not the same because it does not infringe the freedom of association to stop people from entering a country purely in order to, say, gather outside its hospitals, creating an imperative to treat them (by that institution's own ethical and social standards) and thereby destroying an institution set up to provide (relatively) high-quality health care to anyone who needs it.

(If one has some absolutist conception of rights, one might say it impinges on the prospective immigrants' freedom of movement. But the argument in the article is set up around not assuming any such view, so that doesn't matter.)

This sort of concern is not confined to the existence of government welfare; that is just a factor which would attract non-economic immigrants. The general problem is this. In a country like Britain – which I will use as a counterexample from here on – institutions work only by virtue of the shared knowledge of their members. It is a special kind of society, in which people have learnt to deal with each other and manage conflicts without violence. Because Britain does not currently have sufficient mechanisms (be they policies or informal traditions) for getting immigrants to assimilate, unrestricted immigration would threaten the existence of the social norms which allow this kind of society to exist. For example: today in Britain, it is outside the realm of ordinary experience to come home and find someone ransacking the house who, when confronted, pleads that he needs money to save his child’s life. That is the case partly because society is arranged in such a way that almost no one ever gets into such a desperate situation (an arrangement which would no longer work if swamped). But it is also because the vast majority of people respect the law, and would find legal, peaceful ways to manage desperate situations. Law enforcement in Britain relies on that fact. It could not cope with vast numbers of people who were not law-abiding in the British sense, many of whom would prefer prison to their tyrannical or war-torn countries of origin.

Huemer dismisses the 'cultural change' objection with a thought experiment about a country becoming Buddhist from the inside. But the objection has force once it is acknowledged that change might be imposed on a society from the outside, and that such change carries a threat of violence, in the sense that members of the incoming culture do not recognise the non-violent institutions which make the society of their new place of residence what it is. Again, this is not an issue for Huemer's example country, partly because the US is so large (it would take a far greater number of immigrants to have any of the above effects) and, more importantly, because it has an ethos which strongly encourages immigrants to 'become American'. The same has become far less true of Britain over the last few decades, where that sort of attitude would nowadays be considered churlish.

Huemer addresses the fear of general societal collapse in section 3.5, where he disputes that any such thing would happen, apparently because foreigners, no matter where they come from or what their circumstances, would rather stay with their families in their own country (to which they are proud to belong) than move to a richer one. This is just factually false. To point out that Americans rarely move between states is to make a false comparison. Every American state has a tolerable standard of living. People are not dying of hunger in America. So of course their incentive to move is comparatively weak. The central example of the paper is based on the fact that many people who want to immigrate will otherwise die. That – and far lesser plights than that – can override any preference to stay with one's family or in one's hometown.

This is especially true of an area of concern already raised: immigrants who come expressly to receive health care. They will die if they do not come. So the author's dismissal of the claim that a country with open borders would be overloaded with immigrants (on the basis that they would rather stay with their families) is too hasty.

It is strange that Huemer attaches significance to the fact that only 13 million people have made some effort toward moving to the US, and then admits that many others presumably make no effort, being aware of the draconian restrictions. He does not acknowledge that the vast majority of people who would rather live in the US are preemptively put off from making any such effort, which renders the 13 million figure meaningless. He does concede that the 'immigrant flood' worry means it might be better to open borders gradually, adding an extra million to the cap each year. Again, this may work for the US. It would not work for most developed countries.

Wednesday, 30 July 2014

Four myths about antisemitism

#1 It's a form of racism
The word 'antisemitism' might never have entered the world’s vocabulary. It might have remained a piece of pseudo-scientific jargon, exclusive to the thought of eugenicists and other writers on race in the second half of the 19th century, whose central theories have long since been refuted and abandoned. During the same period, the situation of European Jews seemed to be improving. The Age of Enlightenment had generated universalistic ideas about human beings, which led to the expectation that institutions treat people equally, regardless of religion or social status. In the spirit of social integration and the Enlightenment values of rationalism, humanism and universalism, successive European states emancipated their Jews. Britain opened membership of Parliament to Jews in 1858. In the 1860s and 1870s, swaths of Europe granted various forms of emancipation to their Jewish minorities. By 1919, Spain was the only European country in which Jews did not formally have full civil equality. Thus it has been pointed out1 that at the time, it was reasonable to expect these modernising processes to continue, since the parochial, theological basis of antisemitism had fallen out of favour, especially in northern Europe.
[Image: an antisemitic drawing on a 13th-century English tax roll]

But the trend did not continue, and a second generation of eugenicists made this word famous. Yet even Nazi racism did not treat Jews in the same way as any other 'inferior' race. It was only the Jews – not Gypsies, not Slavs – who were blamed for the loss of the First World War, for the economic crises, for Britain's opposition to German expansionism. The intensity of the Nazi obsession with Jews, and their invocation of traditional antisemitic canards, cannot be explained by their racism.

And for most of the history of antisemitism, it was not a question of race at all. Until the 19th century, the low status of the Jews was chiefly justified by their social separation and their adherence to what was held to be a degraded religion. Today such hostility is still widely described, using the originally racial term, as antisemitic. And indeed, a Christian antisemite in the Middle Ages commits the same basic immorality as a modern, racist antisemite; medieval accusations of ritual slaughter carried out by Jews are no different from blood libel in the 21st century. Pagan blood libel from the Hellenistic period, too, is described as antisemitic, although its equivalence to Christian and modern-era antisemitism is more controversial – which brings us to the second popular myth.

#2 It's Christianity’s fault
The historian Jacob Katz argued that modern antisemitism (beginning in the 19th century) is a continuation of the historical rejection of Judaism by Christianity, dating from Late Antiquity.2 Modern antisemites used the same arguments, stereotypes and generalising attacks on the Jewish character that Christians once used. The Jews' socioeconomic separation was rooted in the Christian era. And even in the modern era, people considered Christianity the 'superior' religion in 'historical perspective'. Jews gained more prominent positions in society after emancipation, and were seen by antisemites as former 'pariahs', now encroaching on the Gentile population.3 Indeed, what many modern antisemites wanted was simply to reinstate the position of Jews in pre-emancipatory times.4

Accordingly, Katz argues that modern antisemitism does not resemble its pre-Christian, ancient equivalents closely enough for them to be the same thing. Christianity added new, specifically Christian accusations to ancient Jew-hating ideas. The old charges were combined with deicide and religious guilt. Katz thinks this means that antisemitism is a legacy of the theological conflict with Christians, rather than of earlier times.5 There was something exceptional in the linking of antisemitism with the tenets of a new religion – and as this religion spread throughout the world, so did hostility towards Jews.

As an argument for why antisemitism became so widespread, Katz's account is plausible – although it could equally be the case that Christianity was a successful religion precisely because it seemed to contain ideas that already appealed to people, such as antisemitism. In any case, none of this shows that Christianity is the cause of antisemitism. In particular, it doesn't account for antisemitism being included in the Gospels (e.g. John 8:44, Matthew 27:24-5). If we want to explain why an idea is present in one generation, it’s not enough just to say that it was passed on from the previous generation. Not all ideas are retained over time. Some are kept and some are discarded. To explain the cause of antisemitism, one needs to explain the enduring appeal of the antisemitic mindset.

#3 It's multi-causal
Many people have tried to explain its appeal in terms of discrete historical episodes, so that there is no one explanation for the existence of antisemitism, and instead the Jewish people have just been profoundly unlucky. Pogroms in Russia were spurred on by economic hardship; the propagation of the Christian faith brought with it a tradition of enmity toward Jews; Jews in medieval Europe were forced into the moneylending and tax-collecting professions, prompting resentment from borrowers and scorn from Catholics. Explanations like these are often cited for particular cases of persecution or discrimination. It is of course true that each of these phenomena came about in the way it did by virtue of a particular set of circumstances. But as a general explanation, this patchwork raises the question: why the Jews?

More importantly, these explanations in terms of direct, local causes fail to explain the long-term persistence of distinctive patterns of persecution and of the antisemitic canards used to justify them. One of these is blood libel: false accusations of atrocities committed by Jews, almost always involving blood. The most prominent examples come from the Middle Ages, when Jews were widely accused of poisoning wells, spreading the plague, desecrating the Host, and killing defenceless Christians in order to use their blood to cure ailments or for Jewish rituals, or out of spite.6 But the first known instances of blood libel predated this by over a thousand years. Apion, a writer from the time of the Roman Empire, claimed that Jews had an annual tradition of kidnapping a Greek foreigner, fattening him up, and offering him as a sacrifice. The Greek historian Damocritus is also thought to have claimed that Jews practised ritual slaughter of foreigners.7 The blood accusation was revived by the Nazis, and today it still persists in the Arab world and closer to home.

Another example is the dual loyalty accusation. It was not made only during the Dreyfus Affair in France, and not only in the Stalin-era Soviet Union. In the second century C.E., the poet Juvenal claimed that the Jews were "accustomed to despise Roman laws".8 Similarly, Apion accused the Jews of hating Greeks. Further back, in the Bible itself, Haman advised King Ahasuerus:

There is a certain people scattered abroad and dispersed among the people in all the provinces of thy kingdom; and their laws are diverse from all people; neither keep they the king's laws: therefore it is not for the king's profit to suffer them. (Esther 3:8)

Note that despite its distance in time, this accusation follows with uncanny accuracy the same pattern as those made during the Dreyfus Affair and through to the Nazi movement: Claim that Jews are disloyal, and thereby legitimise their degradation, abuse or murder.

More recently, anti-Zionists (many of whom are themselves Jews) have used it against Jews in the Zionist movement. It also cropped up during the Iraq War, when the war's supporters in the US government were said to be acting only in Israel's interests – a claim which was of course nonsense. Another prominent example is the book The Israel Lobby by John Mearsheimer and Stephen Walt, which claims that there is a coalition of people and organisations, most of them Jewish and all of them Zionist, actively working to influence U.S. foreign policy so that it serves the interests of Israel rather than those of the United States.

Blood libel and the dual loyalty accusation are quintessentially antisemitic, in the following respect. Consumption of blood is strictly forbidden under kashrut (kosher laws). This makes the accusation that (for instance) Jews killed Christians in order to use their blood to make matzah for Passover wildly unrealistic; the idea is utterly repugnant to any Jew. Used as an accusation to justify their persecution, it is designed to hurt. Similarly, the dual loyalty canard arose, along with the growth of political antisemitism, at a time when Jews were making every effort to assimilate. So this accusation was not just wildly untrue; it negated the very essence of contemporary Jewish culture and spurned the Jews’ effort to participate in mainstream society (which itself was partly an attempt to cure antisemitism).

#4 It's 'hatred of the other'
Antisemitism has been explained as just one example of tension between in-groups and out-groups. Since diasporic Jews have always lived as minorities, they have always been the victims of this pattern of social psychology, serving as a convenient scapegoat for their host society’s problems. Criticising this view, Samuel Ettinger pointed out that this sort of explanation wrongly emphasises the “existence of a real difference between Jews and their surroundings.”9 If Jews were hated because they were different, it follows that antisemitism should have decreased when most Jews in 19th-century western Europe abandoned strict religious observance and began not only to assimilate and identify nationally with the countries in which they found themselves, but to devote their lives to excelling in the local culture – Heinrich Heine and Felix Mendelssohn are obvious examples. This week, a story came out about Hessy Taft, a German Jew who was chosen, possibly by Goebbels himself, as the ideal Aryan baby for the cover of a Nazi magazine. The spectre of the Jewish stereotype existed entirely in the antisemites' minds. But that did not matter. And so antisemitism did not decrease in the 20th century – it massively increased. Hitler feared Jews all the more because of their ability to ‘camouflage’ themselves in the host society; it simply wasn’t important, from a social perspective, whether they fitted in or not.

It is also not accurate to describe antisemitism as a form of ‘hatred’. Hatred is an emotional reaction. But one can believe conspiracy theories or other accusations which justify hurting Jews without personally hating them. Richard Wagner had no problem with befriending or working with Jews, yet he still believed that Jews were by nature incapable of producing good art, and he was fixated on this theory to the point of writing a book about it. It was clearly an antisemitic theory, since it justified excluding Jews from the art profession, and it denigrated the already prominent contribution of Jews to German art and culture.

If none of these four things is the unifying cause of antisemitism, what is? To begin with, antisemitism is an ancient psychological disorder, a kind of wrong thinking about morality, which compels people to legitimise the hurting of Jews for being Jews. I do not know the underlying explanation.




1. E.g. Jacob Katz, From Prejudice to Destruction: Antisemitism, 1700-1933, p. 7.
2. Ibid., p. 319.
3. Ibid., p. 320.
4. Ibid., p. 321.
5. Ibid., p. 323.
6. Efron et al. (2009), The Jews: A History, p. 152.
7. Menahem Stern, Greek and Latin Authors on Jews and Judaism, p. 530.
8. Jacob R. Marcus (1946), "Antisemitism in the Hellenistic-Roman World," in Koppel S. Pinson, ed., Essays on Antisemitism, p. 76.
9. Samuel Ettinger (1976), “The Origins of Modern Anti-Semitism,” in Yisrael Gutman and Livia Rothkirchen, eds., The Catastrophe of European Jewry, p. 18.

Friday, 3 January 2014

Who should rule? A bogus question and a bogus answer

Recently I read a paper which dealt with various problems involved in measuring and preserving representation of the electorate under proportional and plurality systems. A recommended read, if only because it gives a fresh take on some old problems, and at several points the authors point out interesting possibilities for further research. It is perhaps the only research piece I have read which is willing to entertain the idea that proportionality is not among the primary criteria of soundness for an electoral system. I reject this criterion entirely.

However, instead of proportionality, the authors posed "representation of the median voter's preferences" as their criterion, supposing it to be the goal of plurality systems. By their description of the majoritarian vision of politics, the point of elections is to allow 'citizens' to make a clear decision about who governs 'them'. Working on this assumption, they tried to test the Downsian theory of plurality systems, which says that the two main parties will converge toward the preferences of the median voter.

But this supposed expression of preferences isn't the point of elections at all. For a start, any study in this field should acknowledge that there is no rational, self-consistent way to express the preferences of a diverse group – as Arrow's theorem showed. Therefore there can be no ideal in this matter, because every possible form of representation will contain some paradox or other. But Arrow's theorem isn't the only trouble with this sort of approach. There is no foolproof way to choose a ruler – not only in democracies, but in every kind of system – because all rulers, being people, are fallible. The reader may object: "Yes, there is no perfect way, but surely there are better and worse ways to choose a ruler."

No. Accept for the sake of argument that any ruler, or method of choosing one, is prone to error. Therefore, if it is taken to be the 'right' method, and is established dogmatically and without any redeeming institutional features, its inherent errors – whatever they happen to be – become entrenched. For this reason, Popper suggested in The Open Society and Its Enemies that the question we ask about politics ought not to be "Who should rule?" but rather "How can we limit the damage rulers do?" In other words, what matters is not how the ruler is chosen per se – there is nothing *inherently* better about rule of the many versus rule of the few – but whether the system contains mechanisms for its own improvement. Such improvement is greatly helped by a procedure for carrying out changes of power without violence. That is the real virtue of democracy.
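To make the earlier Arrow-style point concrete, here is a small sketch in Python – my own illustration, not anything from the paper under discussion or from Arrow's proof itself – of the classic Condorcet cycle: three voters, each with a perfectly consistent ranking of three candidates, whose pairwise majority preferences nonetheless form a cycle, so there is no self-consistent 'group favourite'.

```python
from itertools import permutations

# Three voters, each with a transitive ranking of candidates A, B and C.
ballots = [
    ["A", "B", "C"],   # voter 1: A > B > C
    ["B", "C", "A"],   # voter 2: B > C > A
    ["C", "A", "B"],   # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if more voters rank x above y than y above x."""
    votes_for_x = sum(b.index(x) < b.index(y) for b in ballots)
    return votes_for_x > len(ballots) - votes_for_x

for x, y in permutations("ABC", 2):
    if majority_prefers(x, y):
        print(f"a majority prefers {x} to {y}")

# Prints: A beats B, B beats C, and C beats A. The 'group preference' is a
# cycle, even though every individual voter's ranking is perfectly rational.
```

Any rule that nonetheless picks a winner from such a profile has to break the cycle somewhere – one face of the paradox that, as noted above, every possible form of representation will contain.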

Moving on from this basic objection, the next question is: how exactly do we measure the "median voter's preferences"? The authors go with the left-right scale, using it to compare the policy positions of citizens with the policy positions of the parties that are supposed to represent them. They make an ostensibly reasonable defence of this tool: it is true that 'left' and 'right' are good summaries of the political views of most people in democracies. Nevertheless, I don't think that makes it viable for this study. It merely compares how right- or left-wing the policy positions of the voters are with how right- or left-wing those of the representatives are, taking no notice of the relative importance of different policies to the voters (which could easily cause a left-leaning person to vote for a more right-wing party, for example).

An even worse difficulty, not acknowledged as such by the authors, is that the polarisation of left and right varies wildly between countries. The same units are used to measure it, yet the difference between, say, 1 and 2 on the left-right scale might be considerably greater for France than for the United States. This is not merely a lack of 'rigour': the same units are being used to represent wildly different values (which themselves may not be measurable in the first place). It effectively makes the study meaningless.

Saturday, 23 November 2013

Moral freedom and the misanthropic bias in Randian ethics

In an interview, Ayn Rand was asked: "What is the meaning of life?" She said the question wasn't answerable in universal terms, and that it would be wrong to tell other people what the meaning or purpose of their lives is. Her answer captures what I think is one of the best things about Rand's ethics: her understanding of freedom, her acceptance of the diversity of people's decisions, but within a clearly objective moral framework. I will argue that taking seriously the idea of freedom in ethics is not compatible with 'egoism' as Rand saw it, and that the Randian preoccupation with selfishness is either part of a 'negative' ethics that only refutes others and lacks positive proposals of its own, or a mistaken ethics that undervalues human beings as such.


Liberty, not selfishness
Rand attacked conventional morality for its altruism. But the problem isn't so much altruism as authoritarianism in ethics.

The former is the general idea that it's good to help people. Rand is said to have used the word in a strictly Comtean sense, on which helping others is the ultimate purpose of man's life. I don't see that calling something the 'purpose' of life is much different (in its practical implications) from the general idea that it's good to do that thing. It might alter our conception of why certain actions are good. But in any case, most people today who say they support altruism don't mean this. They just mean it's good to help people.

The other, authoritarianism, refers to all the pronouncements that person x should give up thing y to help person z – for example, a parent ordering a child to share his toys. This is not, though some people might think it is, the right way to approach or encourage beneficence. A beneficent act is intended to help someone, but it doesn't necessarily succeed – that depends on the knowledge of the people involved. For this reason, people should focus their efforts on helping those who are close to them: family, friends, colleagues, neighbours. Another implication is that it's hard to help other people. It's hard both to create resources and to know how to use them in a way that genuinely helps others. A rich man could send his money to Africa, have it fall into the wrong hands, and see it completely wasted.

Because the success of altruism is so dependent on knowledge, it is wrong to think of any particular act as a moral obligation – a more universal concept that should be reserved for negative obligations. If I tell someone, "You should do x beneficent act," I am making multiple presumptions about his state of knowledge and the beneficiary's. If they're wrong, the action I suppose to be an obligation might not even be praiseworthy. So particular altruistic acts can't have the character of (universal) moral obligations, any more than reading my favourite book is a moral obligation for everyone. Nor can I order people to do these things, as this, again, is an arrogant presumption about their knowledge and the best use of their time. It is also parochial: it assumes that the value of certain knowledge can only be obtained in a particular place that I happen to know about. Lots of parents make this mistake when they decide that all kids should learn to play an instrument or learn a language, and so pressure their children into doing these things. The same goes for those who think that all students should have part-time jobs, or that all citizens should serve in the army.

Parochial ethics – which favours some kinds of knowledge over others and makes presumptions about what knowledge others possess – is the proper target for criticism. Rand has an important point to make about moral freedom, but its importance isn't limited to the question of egoism versus altruism and shouldn't be framed as such.

Why we praise altruism
In fact, the claim that everything a person does must be in his rational self-interest, which implies that he must be the beneficiary of all of his actions, at some points threatens the very coherence of Rand's ethics. In The Virtue of Selfishness she gives a counterintuitive example of a selfish action: a man's decision to save the life of a woman he loves. The man is the beneficiary of this action, Rand argues, because life would not be worth living to him without that woman. Another example given is of someone risking his life to try to escape from prison.

Realistically, neither of these cases would involve the agent saying life is not worth living – that it has no value – without the thing the agent is trying to save. If this were the case, the first man would commit suicide if the woman died before he could save her – as would the second, if he found no way to escape from prison and gave up trying. Yet no rational person would kill himself under these circumstances. Hence life is still worth living; it's just that one can decide sometimes that a certain thing is worth more than continuing to live. If values are objective, it follows that my values don't have to refer to the sphere of my own life (otherwise it's not clear in what sense values are objective at all), and therefore I don't have to be the beneficiary of all of my actions. I also have preferences about what happens after my own death, and it's right that I do.

Altruistic acts are thought to be especially praiseworthy when they have that same characteristic, looking beyond the needs of the moment – as when we help others without expecting reciprocation, perhaps without the satisfaction of seeing the result, and perhaps in lieu of fulfilling some more immediate wish. We praise these actions more than 'selfish' ones because they require a degree of moral sophistication that selfish actions don't require. Of course there is also the idea that if an action is sacrificial, that necessarily makes it more praiseworthy. My guess is that sometimes these actions aren't so much sacrificial as altruistic in the above-mentioned sense, and sometimes, as a sentimental ideal, sacrifice leads people to make bad decisions.

Disregarding the latter cases, it's right that these actions receive more praise than most selfish actions. The point of morality is to help us make better decisions than the ones we initially want to make, and this very often involves acting against prima facie self-interest. One might object that it's not easy to know the best way to achieve specific goals, and that in this sense it's not easy to be a 'rational egoist' – but that objection confuses contingent, practical matters with general moral truths. And as for prima facie self-interest, I don't need morality to tell me to follow it any more than I need morality to tell me to eat when I'm hungry.

It's easy to be selfish. On the other hand, it's not always easy to empathise with others, to be kind in the face of conflict, to maintain an altruistic concern beyond the desires of the immediate moment. And it is still less easy when the other person is a stranger. But I think we have to maintain these standards in some way, if we are to take seriously the inherent value of people. A moral code that doesn't explain the virtue of altruism is incomplete, and a moral code that goes out of its way to attack it is misanthropic.


Edit: Here is a response to this post which I haven't yet got around to replying to.

Tuesday, 30 July 2013

Methodological musings, cultural clashes, and something in the air at Mises University 2013

When I arrived in Auburn, Alabama for Mises University 2013 I was not expecting much more than the usual mixed bag of conference attendees, standard libertarian spiels and a bit of exacting academic work thrown in. My roommate was a pleasant, quietly self-assured girl in a libertarian T-shirt. In the hour before the evening meal, all over the rooms of the Mises Institute there were knots of libertarians having animated discussions about economics. This was promising enough. 

I had come to the Institute after having read about Hayek's view of the method of the social sciences. He recognised that they cannot proceed by deriving theories from observations – that we can only interpret social phenomena in the light of pre-existing theories. Part of this clearly comes from the old debate between positivists and 'praxeologists'. But the Austrian rejection of inductivism was also endorsed in The Poverty of Historicism by Karl Popper, who argued that the natural sciences, too, do not make progress via induction, but rather by creative conjecture and refutation as attempts to solve particular problems.

So this was interesting. There was a school of thought, the Austrian School of economics, which not only had libertarianism as its conclusion, but a critical-rationalist-influenced methodology as its foundation. From my perspective this was a convergence of two pillars of rational thought. It didn't come as a surprise, then, that the opening speaker argued that it is the only school of economics which takes seriously the dignity and freedom of economic actors – of people. He also assured us that the work of Austrian economists is descriptive; prescriptive, political arguments are strictly separate.

The lectures that followed were a fascinating introduction to the science. One might doubt whether economics really is a 'science' if it doesn't make testable predictions. But, as I discovered over the course of the week, the central ideas of Austrian economics come from assuming the truth of certain axioms, taking them seriously and using them to interpret economic phenomena. As Steve Horwitz put it, on this approach, "[r]endering human action intelligible means telling better stories about what happened and why." It has a degree of sophistication which arguably brings it closer to the status of a real 'science' – or at least makes it more deserving of the epistemological prestige of that title – than more empirics-focused mainstream economics, where naive interpretations of data are widespread. (For example, statistical significance has been prized as a mark of scientific status, even though it may have no bearing on the economic significance of a given dataset.)
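As a toy illustration of that last point – my own sketch with invented numbers, not an example used at the course – here is how a trivially small effect can come out 'statistically significant' once the sample is large enough:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)

# Hypothetical data: a policy that raises annual income by about $2 on
# average, with a standard deviation of $1000, measured for ten million people.
n = 10_000_000
effect = rng.normal(loc=2.0, scale=1000.0, size=n)

mean = effect.mean()
se = effect.std(ddof=1) / sqrt(n)      # standard error of the mean
z = mean / se                          # z-statistic for H0: mean effect = 0
p = erfc(abs(z) / sqrt(2))             # two-sided p-value under normality

print(f"mean effect = ${mean:.2f}, z = {z:.1f}, p = {p:.2g}")
# The p-value comes out far below 0.05, so the effect is 'statistically
# significant' – yet a $2 change in annual income is economically negligible.
```

Nothing in the p-value itself says whether the effect matters economically; that judgement requires interpreting the magnitude, which is the worry alluded to in the parenthesis above.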

The course exceeded my expectations in all kinds of small ways. The atmosphere was like that of an actual university, because there were readings to do, lectures to attend and academics to question. But there was something more. The Institute was welcoming, comfortable, and from morning to evening there was an interesting discussion to be overheard in every room and hallway. Mingling in the social hours, I would repeatedly come face to face with an ordinary-looking student, brace myself for small talk or fruitless debate, and then quickly discover that they were in fact very well read, thoughtful and knowledgeable. This happened with practically everyone. Fortunately there were also places to be alone, and places to play chess, places to smoke and drink. The speakers were obviously passionate about their research, but the difference from my normal university life was that they were able to communicate their passion and generate an atmosphere of excitement.

Despite the first speaker's assurances, the political side of the course cast a surreal tinge on my experience of it. Judge Napolitano spoke on the growth of the Commerce Clause and how it was leading to the erosion of freedoms enshrined in the Constitution. He ended by declaring that a certain proportion of us would die in prisons, and some might die in public squares, while fighting for our principles. This got a standing ovation. But it unnerved me. For a start, I wasn't sure what had just happened. While I could imagine some of the more militant attendees being arrested for civil disobedience, it's not at all plausible that this would spiral into a situation where libertarian activists get life sentences (let alone get shot in the street). This kind of unjustified alarmism, I thought, implies that the main evil to society is the state itself, rather than the overwhelmingly statist milieu. Yet the latter is the only thing that enables the former to exist. Criticism of things like excessive government surveillance is not restricted to libertarians – it's mainstream. What counts as 'excessive' is also being subjected to public debate. A government that doesn't respond to its conclusions will not survive an election.*

Discussing the speech with other students, I said it was a bad idea to cause such alarm that people are ready to defend their lives. Someone asked: "Why?" I wondered why I had said it. Why shouldn't people be ready to defend their lives? Isn't this the eternal vigilance which that Founding Father was talking about? 

I didn't think so. Holding politicians to account is one thing, but if the attitude is one of preparing for revolution (if the story of another attendee is correct, Napolitano did prophesy a revolution), time and attention are directed away from education, away from improvement, and toward antagonism. A fellow European diplomatically suggested that my perturbation was down to a cultural difference between British and American libertarians. And since coming home, it has occurred to me that it may not be necessary to take these alarmist-type claims literally. They are true in the sense that they are expressions of a devotion to freedom. I cannot express myself in that way, nor understand expressions of that kind, because my idea of liberty is one of ancient freedoms that have grown up alongside a government and a monarchy which were themselves subject to the rule of law that secured those freedoms. This is simply a difference between the American and the British mindset – and one may not be objectively better than the other.

I went in as a conservative, libertarian to the extent of finding plausible the arguments in favour of private law. I left more skeptical of my own cautious traditionalism. Watch this space for further arguments – for what, I don't yet know. But I do know that every week of my life should have the intellectual intensity of that week at the Mises Institute. I'm glad to have had the chance to attend.



* I'm aware that this is just a naïve statement of the argument and doesn't address public choice theory, etc. It is not meant here as a rebuttal so much as a description of what I was thinking at the time.

Friday, 21 June 2013

Ontological argument for the existence of morality – Thoughts from a lecture on Philippa Foot

I went to a lecture at the LSE recently on the life and work of Philippa Foot. It was a very nice event, describing the life of a thoughtful daughter from an aristocratic family and how its events – including those surrounding the Second World War – led her to the philosophical problems with which she was chiefly concerned.

The main topics under discussion were moral relativism and the question, "Why be moral?" Professor Sarah Broadie gave an outline of how Foot's answer developed: Foot eventually arrived at the notion that there are certain universal and basic human needs, whose fulfillment leads to flourishing and long-term survival. This view of morality as fulfillment of intrinsic needs or goals, or needs which arise as part of the logic of the situation, still seems easy to pick apart. One need only find examples of actions that are intuitively morally impermissible and yet cohere with a long-term survival strategy – for instance, a medical procedure that causes extreme and unnecessary pain, but also wipes the patient's memory of it completely, without fail, after the operation has finished.

Perhaps those are just so many details to be ironed out. I had a more skeptical concern. Even if there are these intrinsic needs, what is good about fulfilling them? Why is it good to survive and flourish in the first place? The speakers' answer was essentially that Foot did not address this sort of question in her work; her interest was in finding "where the trouble is" – where our moral intuitions conflict and why. (Question and discussion here from 68:00.)

One way to respond to the question could be this. Ethics is an intellectual pursuit, a field of knowledge, whose theories can be better or worse – judged by various criteria, some imposed by particular normative theories and some, such as non-contradiction, which we all accept. Since morality is about what we should do, by definition we should act in accordance with the best available theory.

I think this is unlikely to satisfy the skeptic. He could concede that one moral theory is better than a rival theory that is riddled with contradictions. But this says no more about the objectivity of ethics than about the existence of fairies at the bottom of the garden – about which there could equally well be better and worse theories. It is a sort of ontological argument for the existence of morality.

Is skepticism self-defeating?
The conjecture our skeptic disputes is that it is good to survive and flourish. If he simply says, "That's rubbish," and ends the discussion, that is of course his prerogative. But if he wants to continue to argue about it, the view that it is not good to survive and flourish quickly becomes untenable, for the following reason. If we took this view seriously, we would stop trying to survive and flourish. We might die out as a species, or at any rate there would be fewer of us, and we would be less productive. Yet criticism and improvement of the theory that it is not good depends upon people being around and disposed to work on it. So the skeptic, taking his view seriously, destroys the means by which his own opinion might be proved wrong. Inherently, this view is incompatible with truth-seeking. The question that then arises is: "Why be truth-seeking?"

It seems to me that certain assumptions are made by the nature of the topic and the conversation itself. Firstly, one cannot get away with saying "I take no position on whether or not it's good to survive, flourish or be truth-seeking," because it is either good or not, and one's actions necessarily assume and express a position on it. Regardless of what the skeptic says or is aware of believing, not striving to survive, etc., implies that he does not think it is good. Secondly, by taking up a view and advocating it as true, the skeptic takes for granted that he and others should hold true theories – that we should be truth-seeking. Yet we've determined that his answer to the moral question is not truth-seeking. For this reason, any claim that it is not good to survive and flourish is ultimately self-contradictory.

Unlike the 'ontological' argument, this is not a completely a priori proof of objective morality, although I suspect it has some related problem. Perhaps it tries to be an argument without being an explanation. But I can't find any specific fault with it.

Tuesday, 18 June 2013

Everyday prediction of the growth of knowledge (or is it?)


I received the following response to my last post:

When deciding to research a topic, a person may decide that he expects to learn more about a particular subject by using his computer than by reading a book.

But as you point out, it is a contradiction to perfectly know which path to new knowledge is best, because one would have to know all the possible states of knowledge we could possibly have (ahead of time!) to decide perfectly which path we should take. But if we had the knowledge ahead of time, then we wouldn't need to develop it. Instantaneous (timeless) knowledge growth would seem to violate a law of physics.

But when we decide to research a topic using a computer rather than a book, we conjecture a path to new knowledge that we expect to be more efficient. This knowledge is not assumed to be infallible (an assumption which would lead to the above contradictions). One could call a person's ability to guess which path to new knowledge is faster his relative "depth of knowledge". Like other forms of knowledge, knowledge about how to create knowledge is conjectural.

People do form expectations (make predictions) about which type and amount of knowledge will be forthcoming utilizing different techniques, with better accuracy of their predictions depending on their depth of knowledge.

Unless depth of knowledge is a myth, people can form expectations about what will help them learn a particular type of knowledge in a way that is better or faster.  So people can form reasonable expectations about what they will learn.  Can't they?

The argument seems to be that we can predict the effectiveness of different ways of creating knowledge, and that we can do this more or less accurately depending on how good our ideas about it are – suggesting that there is an objective form of predictive knowledge about the growth of knowledge itself. One thing to notice about this form of knowledge is that it only allows us to predict at a kind of meta-level. It doesn't say what I will learn in the future – only that (say) the computer will be more helpful than a book.

But more importantly, this argument seems to have assumed that the case against predicting the growth of knowledge is only a case against infallible prediction. By this account, fallible prediction is possible: I could say the computer would be helpful in my research, and while I might be wrong, this would still be a better conjecture than, say, that watching The Simpsons (or any other unrelated activity) would be helpful.

There is a crucial difference being overlooked here, and it is not about fallibility. A prediction of how a chemical will react on being exposed to air might be wrong. But this would be because some aspect of the theory that produced the prediction was wrong, or because a measurement (or a background assumption) was wrong. Similarly, the truth of the explanatory theory that the computer is a better source for my research project depends on aspects of the computer and the nature of the research. Yet the predictive claim that I will create more new knowledge if I use the computer is a different thing entirely. I could be right about the former, non-predictive theory about the usefulness of computers, and still be wrong about the latter: I might still learn less by using the computer, for any number of other reasons. For example, it might lead me to get a Facebook account or a video game and spend the whole day procrastinating. Or I might find a wealth of information about another topic and change my whole direction of research. Or I might commit to my original topic and complete the project, with the result that it only impedes the growth of knowledge, because the finished publication perpetuates a violent political ideology.

The problem with predicting the growth of my own knowledge is that it depends not only on objective, unchanging factors, but also on whether I will change my mind. And this is not something I can predict without having the relevant knowledge already. It is true that the more I know about why the computer will be useful and what I will find there, the less my prediction that I will learn more depends on whether I will change my mind. But again, to the extent that I have detailed expectations about what I will find on the computer, this is not a prediction of the growth of knowledge so much as a theory about what information is available on computers.