Tuesday 30 July 2013

Methodological musings, cultural clashes, and something in the air at Mises University 2013

When I arrived in Auburn, Alabama, for Mises University 2013, I was not expecting much more than the usual mixed bag of conference attendees, standard libertarian spiels, and a bit of exacting academic work thrown in. My roommate was a pleasant, quietly self-assured girl in a libertarian T-shirt. In the hour before the evening meal, all over the rooms of the Mises Institute there were knots of libertarians having animated discussions about economics. This was promising enough.

I had come to the Institute after having read about Hayek's view of the method of the social sciences. He recognised that they cannot proceed by deriving theories from observations – that we can only interpret social phenomena in the light of pre-existing theories. Part of this clearly comes from the old debate between positivists and 'praxeologists'. But the Austrian rejection of inductivism was also endorsed in The Poverty of Historicism by Karl Popper, who argued that the natural sciences, too, do not make progress via induction, but rather by creative conjecture and refutation as attempts to solve particular problems.

So this was interesting. There was a school of thought, the Austrian School of economics, which not only had libertarianism as its conclusion, but a critical-rationalist-influenced methodology as its foundation. From my perspective this was a convergence of two pillars of rational thought. It didn't come as a surprise, then, that the opening speaker argued that it is the only school of economics which takes seriously the dignity and freedom of economic actors – of people. He also assured us that the work of Austrian economists is descriptive; prescriptive, political arguments are strictly separate.

The lectures that followed were a fascinating introduction to the science. There is room for doubt about whether economics really is a 'science' if it doesn't make testable predictions. But, as I discovered over the course of the week, the central ideas of Austrian economics come from assuming the truth of certain axioms, taking them seriously, and using them to interpret economic phenomena. As Steve Horwitz put it, "[r]endering human action intelligible means telling better stories about what happened and why." This approach has a degree of sophistication which arguably brings it closer to the status of a real 'science' – or at least makes it more deserving of the epistemological prestige of that title – than more empirics-focused mainstream economics, where naive interpretations of data are widespread. (For example, statistical significance has been prized as a mark of scientific status, even though it's not clear that it has any bearing on the economic significance of a given dataset.)
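
To make that parenthetical concrete, here is a minimal sketch of the statistical point (my own toy example, not anything presented at the Institute; the numbers and the use of numpy and scipy's two-sample t-test are assumptions for illustration). With a large enough sample, an effect of one hundredth of a standard deviation comes out as "highly significant" while remaining economically trivial.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000            # a very large sample in each group
true_effect = 0.01       # 1% of a standard deviation: economically negligible

control = rng.normal(0.0, 1.0, size=n)
treated = rng.normal(true_effect, 1.0, size=n)

res = stats.ttest_ind(treated, control)
print(f"estimated effect: {treated.mean() - control.mean():.4f}")  # roughly 0.01
print(f"p-value: {res.pvalue:.1e}")                                # far below 0.05

The p-value certifies only that the tiny difference is unlikely to be sampling noise; it says nothing about whether a difference of that size matters for any economic question.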

The course exceeded my expectations in all kinds of small ways. The atmosphere was like that of an actual university, because there were readings to do, lectures to attend and academics to question. But there was something more. The Institute was welcoming and comfortable, and from morning to evening there was an interesting discussion to be overheard in every room and hallway. Mingling in the social hours, I would repeatedly come face to face with an ordinary-looking student, brace myself for small talk or fruitless debate, and then quickly discover that they were in fact very well read, thoughtful and knowledgeable. This happened with practically everyone. Fortunately there were also places to be alone, places to play chess, and places to smoke and drink. The speakers were obviously passionate about their research, but the difference from my normal university life was that they were able to communicate their passion and generate an atmosphere of excitement.

Despite the first speaker's assurances, the political side of the course cast a surreal tinge over my experience of it. Judge Napolitano spoke on the expansion of the Commerce Clause and how it was leading to the erosion of freedoms enshrined in the Constitution. He ended by declaring that a certain proportion of us would die in prisons, and some might die in public squares, while fighting for our principles. This got a standing ovation. But it unnerved me. For a start, I wasn't sure what had just happened. While I could imagine some of the more militant attendees being arrested for civil disobedience, it's not at all plausible that this would spiral into a situation where libertarian activists get life sentences (let alone are shot in the street). This kind of unjustified alarmism, I thought, implies that the main evil to society is the state itself, rather than the overwhelmingly statist milieu. Yet the latter is the only thing that enables the former to exist. Criticism of things like excessive government surveillance is not restricted to libertarians – it's mainstream. What counts as 'excessive' is itself the subject of public debate. A government that doesn't respond to its conclusions will not survive an election.*

Discussing the speech with other students, I said it was a bad idea to cause such alarm that people are ready to defend their lives. Someone asked: "Why?" I wondered why I had said it. Why shouldn't people be ready to defend their lives? Isn't this the eternal vigilance which that Founding Father was talking about? 

I didn't think so. Holding politicians to account is one thing, but if the attitude is one of preparing for revolution (if another attendee's account is correct, Napolitano did prophesy a revolution), time and attention are directed away from education, away from improvement, and toward antagonism. A fellow European diplomatically suggested that my perturbation was down to a cultural difference between British and American libertarians. And since coming home, it has occurred to me that it may not be necessary to take these alarmist claims literally. They are true in the sense that they are expressions of a devotion to freedom. I cannot express myself in that way, nor understand expressions of that kind, because my idea of liberty is one of ancient freedoms that have grown up alongside a government and a monarchy which were themselves subject to the rule of law that secured those freedoms. This is simply a difference between the American and the British mindset – and one may not be objectively better than the other.

I went in as a conservative, libertarian to the extent of finding plausible the arguments in favour of private law. I left more skeptical of my own cautious traditionalism. Watch this space for further arguments – for what, I don't yet know. But I do know that every week of my life should have the intellectual intensity of that week at the Mises Institute. I'm glad to have had the chance to attend.



* I'm aware that this is just a naïve statement of the argument and doesn't address public choice theory, etc. It is not meant here as a rebuttal so much as a description of what I was thinking at the time.

Friday 21 June 2013

Ontological argument for the existence of morality – Thoughts from a lecture on Philippa Foot

I went to a lecture at the LSE recently on the life and work of Philippa Foot. It was a very nice event, describing the life of a thoughtful daughter from an aristocratic family and how its events – including those surrounding the Second World War – led her to the philosophical problems with which she was chiefly concerned.

The main topics under discussion were moral relativism and the question, "Why be moral?" Professor Sarah Broadie gave an outline of how Foot's answer developed: Foot eventually arrived at the notion that there are certain universal and basic human needs, whose fulfillment leads to flourishing and long-term survival. This view of morality as the fulfillment of intrinsic needs or goals, or of needs which arise as part of the logic of the situation, still seems easy to pick apart. One need only find examples of actions that are intuitively morally impermissible and yet cohere with a long-term survival strategy – for instance, a medical procedure that causes extreme and unnecessary pain, but also wipes the patient's memory of it completely, without fail, after the operation has finished.

Perhaps those are just so many details to be ironed out. I had a more skeptical concern. Even if there are these intrinsic needs, what is good about fulfilling them? Why is it good to survive and flourish in the first place? The speakers' answer was essentially that Foot did not address this sort of question in her work; her interest was in finding "where the trouble is" – where our moral intuitions conflict and why. (Question and discussion here from 68:00.)

One way to respond to the question could be this. Ethics is an intellectual pursuit, a field of knowledge, whose theories can be better or worse (according to various criteria: some normative theories impose a very specific set of criteria, while some criteria, such as non-contradiction, we all agree on). Since morality is about what we should do, by definition we should act in accordance with the best available theory.

I think this is unlikely to satisfy the skeptic. He could concede that one moral theory is better than a rival theory that is riddled with contradictions. But this says no more about the objectivity of ethics than about the existence of fairies at the bottom of the garden – about which there could equally well be better and worse theories. It is a sort of ontological argument for the existence of morality.

Is skepticism self-defeating?
The conjecture our skeptic disputes is that it is good to survive and flourish. If he simply says, "That's rubbish," and ends the discussion, that is of course his prerogative. But if he wants to continue to argue about it, the view that it is not good to survive and flourish quickly becomes untenable, for the following reason. Taking this view seriously, we would stop trying to survive and flourish. We might die out as a species, or at any rate there would be fewer of us, and we would be less productive. Yet criticism and improvement of the theory that it is not good depends upon people being around and disposed to work on it. So the skeptic, taking his view seriously, destroys the means by which his own opinion might be proved wrong. Inherently, this view is incompatible with truth-seeking. The question that then arises is: "Why be truth-seeking?"

It seems to me that certain assumptions are made by the nature of the topic and the conversation itself. Firstly, one cannot get away with saying "I take no position on whether or not it's good to survive, flourish or be truth-seeking," because it is either good or not, and one's actions necessarily assume and express a position on it. Regardless of what the skeptic says or is aware of believing, not striving to survive, etc., implies that he does not think it is good. Secondly, by taking up a view and advocating it as true, the skeptic takes for granted that he and others should hold true theories – that we should be truth-seeking. Yet we've determined that his answer to the moral question is not truth-seeking. For this reason, any claim that it is not good to survive and flourish is ultimately self-contradictory.

Unlike the 'ontological' argument, this is not a completely a priori proof of objective morality, although I suspect it has some related problem. Perhaps it tries to be an argument without being an explanation. But I can't find any specific fault with it.

Tuesday 18 June 2013

Everyday prediction of the growth of knowledge (or is it?)


I received the following response to my last post:

When deciding to research a topic, a person may decide that he expects to learn more about a particular subject by using his computer than by reading a book.

But as you point out, it is a contradiction to perfectly know which path to new knowledge is best, because one would have to know all the possible states of knowledge we could possibly have (ahead of time!) to decide perfectly which path we should take. But if we had the knowledge ahead of time, then we wouldn't need to develop it. Instantaneous (timeless) knowledge growth would seem to violate a law of physics.

But when we decide to research a topic using a computer rather than a book, we conjecture a path to new knowledge that we expect to be more efficient. This knowledge is not assumed to be infallible – it is that assumption which leads to the above contradictions. One could call a person's ability to guess which path to new knowledge is faster his relative "depth of knowledge". Like other forms of knowledge, knowledge about how to create knowledge is conjectural.

People do form expectations (make predictions) about which type and amount of knowledge will be forthcoming from different techniques, with the accuracy of their predictions depending on their depth of knowledge.

Unless depth of knowledge is a myth, people can form expectations about what will help them learn a particular type of knowledge in a way that is better or faster.  So people can form reasonable expectations about what they will learn.  Can't they?

The argument seems to be that we can predict the effectiveness of different ways of creating knowledge, and that we can do this more or less accurately depending on how good our ideas about it are – suggesting that there is an objective form of predictive knowledge about the growth of knowledge itself. One thing to notice about this form of knowledge is that it only allows us to predict at a kind of meta-level. It doesn't say what I will learn in the future – only that (say) the computer will be more helpful than a book.

But more importantly, this argument seems to have assumed that the case against predicting the growth of knowledge is only a case against infallible prediction. By this account, fallible prediction is possible: I could say the computer would be helpful in my research, and while I might be wrong, this would still be a better conjecture than, say, that watching The Simpsons (or any other unrelated activity) would be helpful.

There is a crucial difference being overlooked here, and it is not about fallibility. A prediction of how a chemical will react on being exposed to air might be wrong. But this would be because some aspect of the theory that produced the prediction was wrong, or because a measurement (or a background assumption) was wrong. Similarly, the truth of the explanatory theory that the computer is a better source for my research project depends on aspects of the computer and the nature of the research. Yet the predictive claim that I will create more new knowledge if I use the computer is a different thing entirely. I could be right about the former, non-predictive theory about the usefulness of computers, and still be wrong about the latter: I might still learn less by using the computer, for any number of other reasons. For example, it might lead me to get a Facebook account or a video game and spend the whole day procrastinating. Or I might find a wealth of information about another topic and change my whole direction of research. Or I might commit to my original topic and complete the project, with the result that it only impedes the growth of knowledge, because the finished publication perpetuates a violent political ideology.

The problem with predicting the growth of my own knowledge is that it depends not only on objective, unchanging factors, but also on whether I will change my mind. And this is not something I can predict without having the relevant knowledge already. It is true that the more I know about why the computer will be useful and what I will find there, the less my prediction that I will learn more depends on whether I will change my mind. But again, to the extent that I have detailed expectations about what I will find on the computer, this is not a prediction of the growth of knowledge so much as a theory about what information is available on computers.

Saturday 16 March 2013

On predicting the growth of theoretical knowledge

This can mean predicting either (a) that certain theories will be accepted as well tested, or (b) that we will derive new explanations or predictions from other theories.

In case (a) – predicting which theories others will accept – the forecast clearly needs to be kept secret from the people in question lest it influence them now (what Popper calls the 'Oedipus effect'). If we keep the prediction secret, either it becomes a prediction from without – and therefore is not self-prediction – or it still requires us to predict that we will keep our results secret. Yet whether we decide to keep our results secret will depend on future circumstances, which in turn depend on the growth of our own knowledge – so to assume we can predict our own adherence to secrecy begs the question of whether self-prediction is possible.

Could we predict the creation of new theories without understanding what we predict? For example, we might describe the shapes of some letters that will be written down, and predict their historical consequences. Arguably this isn't possible either, because if we can predict those shapes, we can write them down now, and if we can write them down now, there is no reason they shouldn't have consequences now. 


Popper argues that we similarly cannot predict the future acceptance of a new theory in the light of new evidence or argument. If that information is available now, the new theory is also available now. If we're talking about evidence that is not presently available, then the fact that we can predict its existence using our current theories makes any new theory superfluous.

He also proves that it is impossible even for a Laplacean demon to predict its own future state, as this would involve both describing the initial conditions and describing the future state. To describe its own state 1 hour from now, it would have to describe (1) the initial conditions (required for any prediction task) and (2) all its actions up to that point. (1) takes some amount of time, and (2) cannot be completed before the actions themselves have been performed – so the demon would never be able to finish the prediction before the future state comes to pass.
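
A rough way to set out that timing constraint (my own schematic restatement of the point above, not Popper's full argument): write $t_0$ for the moment the demon begins, $T$ for the time whose state it wants to predict, $s(t_0)$ for its state at $t_0$, $A_{[t_0,T]}$ for its actions over that interval, and $d(\cdot)$ for the time needed to produce a complete description. Then the prediction cannot be finished before

$$ t_{\text{finished}} \;\ge\; t_0 \;+\; \underbrace{d\big(s(t_0)\big)}_{\text{(1), }>\,0} \;+\; \underbrace{d\big(A_{[t_0,T]}\big)}_{\text{(2)}} , $$

and since a description of the demon's actions over $[t_0, T]$ cannot be completed before those actions – including the completion of the prediction itself – have been performed, $t_{\text{finished}} \ge T$. The self-'prediction' arrives no earlier than the state it is meant to predict.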

As for predicting the growth of knowledge in someone else's mind (prediction from without): to do this reliably you would need to know everything that might be relevant to their decision – in other words, you would have to possess a significant chunk of all their knowledge. You would have to be superhuman.

So, predicting the growth of knowledge is either self-defeating or practically impossible. (For more, see Karl Popper, The Open Universe, Chapter III, particularly section 21.) Some implications are:

- Free will. If the growth of knowledge could be predicted in principle, so could our decisions, which would make them deterministic. In fact, knowledge creation is unpredictable, and free will is a better explanation for this than 'randomness'.

- When you try to teach or persuade someone of something, the result is unpredictable. If you could predict what ideas the person would have as a result of your teaching, it would not be new knowledge. Yet we know from the theory of conjecture and refutation that it is new: people only learn by creative conjecture and criticism. I'm not sure if this particular argument holds. It would be interesting to hear others' views on it. But there are also the other problems with predicting the growth of knowledge in other people's minds – the Oedipus effect and the sheer complexity of the knowledge that would be required by such a predictor.