Friday 21 June 2013

Ontological argument for the existence of morality – Thoughts from a lecture on Philippa Foot

I went to a lecture at the LSE recently on the life and work of Philippa Foot. It was a very nice event, describing the life of a thoughtful daughter from an aristocratic family and how its events – including those surrounding the Second World War – led her to the philosophical problems with which she was chiefly concerned.

The main topics under discussion were moral relativism and the question, "Why be moral?" Professor Sarah Broadie gave an outline of how Foot's answer developed: Foot eventually arrived at the notion that there are certain universal and basic human needs, whose fulfillment leads to flourishing and long-term survival. This view of morality – as the fulfillment of intrinsic needs or goals, or of needs which arise as part of the logic of the situation – still seems easy to pick apart. One need only find examples of actions that are intuitively morally impermissible and yet cohere with a long-term survival strategy – for instance, a medical procedure that causes extreme and unnecessary pain, but also wipes the patient's memory of it completely, without fail, after the operation has finished.

Perhaps those are just so many details to be ironed out. I had a more skeptical concern. Even if there are these intrinsic needs, what is good about fulfilling them? Why is it good to survive and flourish in the first place? The speakers' answer was essentially that Foot did not address this sort of question in her work; her interest was in finding "where the trouble is" – where our moral intuitions conflict and why. (Question and discussion here from 68:00.)

One way to respond to the question could be this. Ethics is an intellectual pursuit, a field of knowledge, whose theories can be better or worse as judged by various criteria – some imposed by particular normative theories, others, such as non-contradiction, that we all agree on. Since morality is about what we should do, by definition we should act in accordance with the best available theory.

I think this is unlikely to satisfy the skeptic. He could concede that one moral theory is better than a rival theory that is riddled with contradictions. But this says no more about the objectivity of ethics than about the existence of fairies at the bottom of the garden – about which there could equally well be better and worse theories. It is a sort of ontological argument for the existence of morality.

Is skepticism self-defeating?
The conjecture our skeptic disputes is that it is good to survive and flourish. If he simply says, "That's rubbish," and ends the discussion, that is of course his prerogative. But if he wants to continue to argue about it, the view that it is not good to survive and flourish quickly becomes untenable, for the following reason. If we took this view seriously, we would stop trying to survive and flourish. We might die out as a species, or at any rate there would be fewer of us, and we would be less productive. Yet criticism and improvement of the theory depend upon people being around and disposed to work on it. So the skeptic, in taking his view seriously, destroys the means by which his own opinion might be proved wrong. Inherently, the view is incompatible with truth-seeking. The question that then arises is: "Why be truth-seeking?"

It seems to me that the nature of the topic and of the conversation itself commits the skeptic to certain assumptions. Firstly, one cannot get away with saying, "I take no position on whether or not it's good to survive, flourish or be truth-seeking," because it is either good or not, and one's actions necessarily assume and express a position on it. Regardless of what the skeptic says or is aware of believing, not striving to survive, etc., implies that he does not think it is good. Secondly, by taking up a view and advocating it as true, the skeptic takes for granted that he and others should hold true theories – that we should be truth-seeking. Yet, as we have seen, his answer to the moral question is incompatible with truth-seeking. For this reason, any claim that it is not good to survive and flourish is ultimately self-contradictory.

Unlike the 'ontological' argument, this is not a completely a priori proof of objective morality, although I suspect it has some related problem. Perhaps it tries to be an argument without being an explanation. But I can't find any specific fault with it.

Tuesday 18 June 2013

Everyday prediction of the growth of knowledge (or is it?)

I received the following response to my last post:

When deciding to research a topic, a person may decide that he expects to learn more about a particular subject by using his computer than by reading a book.

But as you point out, it is a contradiction to perfectly know which path to new knowledge is best, because one would have to know all the possible states of knowledge we could possibly have (ahead of time!) to decide perfectly which path we should take. But if we had the knowledge ahead of time, then we wouldn't need to develop it. Instantaneous (timeless) knowledge growth would seem to violate a law of physics.

But when we decide to research a topic using a computer rather than a book, we conjecture a path to new knowledge that we expect to be more efficient. This knowledge is not assumed to be infallible – an assumption which would lead to the above contradictions. One could call a person's ability to guess which path to new knowledge is faster his relative "depth of knowledge". Like other forms of knowledge, knowledge about how to create knowledge is conjectural.

People do form expectations (make predictions) about what type and amount of knowledge will be forthcoming from different techniques, and the accuracy of those predictions depends on their depth of knowledge.

Unless depth of knowledge is a myth, people can form expectations about what will help them learn a particular type of knowledge in a way that is better or faster.  So people can form reasonable expectations about what they will learn.  Can't they?

The argument seems to be that we can predict the effectiveness of different ways of creating knowledge, and that we can do this more or less accurately depending on how good our ideas about it are – suggesting that there is an objective form of predictive knowledge about the growth of knowledge itself. One thing to notice about this form of knowledge is that it only allows us to predict at a kind of meta-level. It doesn't say what I will learn in the future – only that (say) the computer will be more helpful than a book.

But more importantly, this argument seems to assume that the case against predicting the growth of knowledge is only a case against infallible prediction. On this account, fallible prediction is possible: I could say the computer would be helpful in my research, and while I might be wrong, this would still be a better conjecture than, say, that watching The Simpsons (or any other unrelated activity) would be helpful.

There is a crucial difference being overlooked here, and it is not about fallibility. A prediction of how a chemical will react on being exposed to air might be wrong. But this would be because some aspect of the theory that produced the prediction was wrong, or because a measurement (or a background assumption) was wrong. Similarly, the truth of the explanatory theory that the computer is a better source for my research project depends on aspects of the computer and the nature of the research. Yet the predictive claim that I will create more new knowledge if I use the computer is a different thing entirely. I could be right about the former, non-predictive theory about the usefulness of computers, and still be wrong about the latter: I might still learn less by using the computer, for any number of other reasons. For example, it might lead me to get a Facebook account or a video game and spend the whole day procrastinating. Or I might find a wealth of information about another topic and change my whole direction of research. Or I might commit to my original topic and complete the project, with the result that it only impedes the growth of knowledge, because the finished publication perpetuates a violent political ideology.

The problem with predicting the growth of my own knowledge is that it depends not only on objective, unchanging factors, but also on whether I will change my mind. And this is not something I can predict without having the relevant knowledge already. It is true that the more I know about why the computer will be useful and what I will find there, the less my prediction that I will learn more depends on whether I will change my mind. But again, to the extent that I have detailed expectations about what I will find on the computer, this is not a prediction of the growth of knowledge so much as a theory about what information is available on computers.