Photography Media Journal
ISSN 1918-8153


Deciding to Deliberate
Tomasz Neugebauer

first posted 1998.
updated: August, 2001


Human freedom of will is a significant aspect of our intelligence that is inseparable from the human body. The mind is not "what brains do," as Marvin Minsky suggests (*see notes); the mind is what live human bodies do. We constantly make decisions about whether or not to deliberate about a choice, and deliberating before the decision to deliberate begs the question. Turing machines simply deliberate before every decision. We could never fully model human intelligence because we cannot separate it from the rest of the infinitely complex processes that constantly occur within our bodies, such as the decisions to deliberate or not. We can only model some of our thought processes.

Let me begin by differentiating reality from possibility. By other possible worlds, I mean our conscious power to imagine and deduce outcomes, possibilities, and consequences. For example, I am sitting here in front of a computer typing, but I could be out having a beer with my friends. Thus, there is another possible world that is just the same as this one, except that in it I am not sitting in front of the computer but drinking beer with my friends. This other world is relatively closer to reality than the world in which I was not born at all, or in which I am someone completely different, doing different activities from what I am doing in reality. Possible worlds exist only in the imagination, for some period of time. The possible worlds that I have the power to turn into reality through my own decisions are closer to reality.

Reality & Imagination: Possible Worlds


The possible worlds in this view do not cross paths (i.e., they are parallel), and it is impossible to cross over to other worlds. However, I can imagine a world in which I am having a cup of tea, and another world in which I am not, and then turn the imaginary world of me having tea into reality, simply by deciding to make myself some tea. We need to combine the concept of possible worlds (fig. 1) with the following picture of free will and decision making:

Free Will


We need crossroads between reality and possibility. Are some decisions 'reversible'?

At the crossroad, we exercise our freedom of will and decide to pursue one course of action. With that decision, one possible world becomes reality. What led up to the decision is a deliberation process sharply restricted by time constraints. Deliberation can be completely paralyzing if it does not lead to a decision in proper time, because at some point deliberation itself becomes one of the options: it is too late to see the movie or go to the café, and we have decided to deliberate all night instead of doing anything else. Deliberation is always only an option. One could live a life without deliberating at all, at least consciously, and only act. Thus we can act one way, act another, or deliberate. Opportunity is bald in the back, and so we must grab onto it before it passes us. Figure 4 is what we get when we combine free will with the notion of possible worlds.

The question arises at this point as to the 'reversibility' of decisions. Can two different choices result in the exact same world?

fig 4

There is an intuition behind the view that decisions can be divided into significant and insignificant ones. The insignificant decisions are 'reversible' in that they can lead to exactly the same place as some other option. Say that I am faced with the decision of going to the theatre or going to a café. This could be the first crossroad in our diagram (as I already noted, sitting there and deliberating is always also an option). What our diagram implies is that at some later point in time, such as two hours later, the two roads meet again. Imagine that the café is actually right next to the theatre. I decide to go to the café, sit there for exactly as long as the theatre performance, and then come back the exact same way as if I had gone to the theatre. You can imagine, only with much difficulty, running into yourself right outside the theatre and merging into one person again on your way home. The difficulty lies in the fact that even if my café experience lasted exactly as long as the theatre performance, my decision not to go to the theatre had a permanent and significant effect on reality. I could not merge into one outside the theatre because, for the last two hours, life was different for me as a result of my decision. Furthermore, my life will be permanently different as a result of this decision. I will never really know what life would have been like had I gone to the theatre. I will not have sat next to the same people, and I will not have had the same thoughts and memories. Our personalities and lives are a result of our decisions and experiences. For this reason, our choices are permanent and irreversible, as well as absolutely significant to the people that we become.

All decisions are significant, even those made without deliberation. This diagram shows an illusory distinction between significant and insignificant decisions.


Like it or not, by raising your hand you are changing reality permanently. Even if decisions could be divided into significant and insignificant ones, our uncertainty about how things would have turned out, had we acted differently, makes it impossible to tell a significant decision from an insignificant one.

The decisions that we make without deliberation are significant. An example of such a significant decision is the implicit decision about whether or not to deliberate. A Turing machine can only model a strict deliberation leading to a decision; it cannot model the decision to deliberate.



This picture is familiar to computer scientists in the field of Artificial Intelligence. In fact, this sort of tree of possibilities can be searched with minimax in combination with alpha-beta cut-offs, or with iterative deepening depth-first search, algorithms that AI programmers use to search for a good chess move, for example. AI has the power to send the computer on a search for solutions that human beings cannot achieve by themselves. AI algorithms perform an analysis of a tree of possibilities like that of fig. 6. The computer needs a scoring function with which to evaluate each situation, and using this scoring function it searches deeper and deeper into the tree to find a move that will lead to the most favourable position. It is important to note that this tree becomes enormous just a few moves down, which is why no computer has yet been able to solve the game of chess. In the course of the search, the computer uses alpha-beta cut-offs, for example, to eliminate entire branches of possibility without searching them. We can also program the computer to learn from previous games and thus eliminate moves that have led to unfavourable consequences in the past. This is the kind of mechanical thinking that it is programmed to perform.
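To make this mechanical deliberation concrete, here is a minimal sketch of minimax with alpha-beta cut-offs over a toy game tree. The tree, the node names, and the scoring function are invented for illustration; a real chess program would generate positions and evaluate them with a far more elaborate scoring function.

```python
# A minimal sketch of minimax with alpha-beta cut-offs.
# The toy tree and scores below are illustrative assumptions,
# not taken from any actual chess engine.

def alphabeta(node, depth, alpha, beta, maximizing, children, score):
    """Return the best achievable score from `node`, skipping (cutting off)
    branches that cannot affect the final decision."""
    kids = children(node)
    if depth == 0 or not kids:
        return score(node)  # leaf: apply the scoring function
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, score))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # beta cut-off: the opponent will avoid this branch
        return best
    else:
        best = float("inf")
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                       True, children, score))
            beta = min(beta, best)
            if alpha >= beta:
                break  # alpha cut-off
        return best

# Toy tree of possibilities: each position maps to its successors.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
values = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 lambda n: tree.get(n, []), lambda n: values.get(n, 0))
print(best)  # 3: after seeing b1 = 2, the search prunes b2 entirely
```

Note that the leaf `b2` (score 9) is never examined: once the minimizing opponent can hold branch `b` to a score of 2, which is worse for the maximizer than the 3 already guaranteed in branch `a`, the rest of that branch of possibility is eliminated without being searched.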

Consider another mechanical invention, the automobile. Human beings can only travel so fast given their bodies, and this is why they invented the automobile to travel with. The car drives; it is designed and built to travel along, according to the driver's wishes. The point is that the car was designed to do something that human beings also do, albeit very differently, and that is to travel through space. The car does this successfully, and we generally have no problem accepting this.

Human beings also have brains that they use for higher-level thinking. By higher-level I mean in opposition to the lower-level thinking in our brains that controls our motor functions and synchronizes all the body parts required to move in some way. It is this sort of higher-level thinking that we associate with consciousness, free will, and the soul. It is also this sort of thinking that we associate with the ability to play chess well and to decide which move to make. While we accepted so easily our ability to create machines that accomplish what is needed to travel through space, we seem to struggle to accept that we also create machines that perform higher-level thinking, such as is involved in playing a good game of chess or being a good doctor. Meanwhile, we continue to create computer programs and machines that successfully accomplish the equivalent of just those higher-level thought processes.

A computer plays chess in a mechanical way, listing the possible moves at each step, considering each possible countermove, and so on. Although this is not necessarily how an expert human player plays, it does capture at least one strategy that a human player can use, and it takes this strategy much further than a human ever could. Human beings create the machines, and so it is disturbing that our creations could surpass their creators in at least some trivial way. This does happen. A car can accelerate faster than any human being. The programmers of Deep Blue created a program that destroys them in a game of chess; you need not be a good chess player to create a killer chess-playing program. The car will not accelerate, nor will Deep Blue play chess well, unless we, its creators, supply it with that goal or objective. When and if we do give machines that objective, it is senseless to deny that they accomplish it when they do. Just as the car travels through space, the chess-playing program thinks about what move to make before it makes it.

This thinking does not change the machine into a human being. It is obvious that a human being, while thinking about what move to make during a game of chess, remains a human being during that time, just as it is obvious that a computer remains a computer while thinking about what move to make. These two creatures are completely different. A human being has a physiology; he might be hungry, or have an itch in his left ear, while deciding what move to make, whereas the computer experiences nothing remotely similar. A human being can make the decision to play or not without deliberation and its reasons, whereas a computer has no choice but to deliberate before every move.


Marvin Minsky, in his Society of Mind, argues that minds are simply "what brains do" and proposes that it is the interaction among many autonomous functional agents that gives rise to intelligent behaviour. The human mind is thus to be understood as a collection of agents in the human brain, interacting and producing coherent behaviour. Minsky tries to dispel our illusion of unity about ourselves: there is no single me-mind; there is only a single brain within a single body, and the world of conscious thought is actually an enormous mass of interacting feelings, desires, emotions, thoughts, and sensations. The majority of the functioning of our brain is hidden beyond our consciousness. However, while we live out our lives as objects of natural selection, we do have the ability to learn and become more successful at life.

Minsky holds that, 'According to the modern scientific view, there is simply no room at all for freedom of the human will. Everything that happens in our universe is either completely determined by what has already happened in the past or else depends, in part, on random chance. Everything, including that which happens in our brains, depends on these and only on these: A set of fixed, deterministic laws. A purely random set of accidents.' (Society of Mind, 306).

Minsky considers freedom of will to be a convenient myth. He says, 'No matter that the physical world provides no room for freedom of will: that concept is essential to our models of the mental realm. Too much of our psychology is based on it for us to ever give it up. We are virtually forced to maintain that belief, even though we know it is false - except of course, when we are inspired to find the flaws in all our beliefs, whatever may be the consequence to cheerfulness and mental peace' (Society of Mind, 307).

Machines and computers have their goals and objectives given, or programmed into them, by their creators. Is this not something that makes human beings fundamentally different from machines? Strong AI supporters such as Marvin Minsky argue that it does not: we too have our goals and objectives programmed into us, by evolution and genes, through our parents and the people we love and admire. The notion of freedom is tied to responsibility; we need to think that we are free in order to be responsible for our actions. Too many of our social structures and ideas about morality are deeply ingrained, and useful in making the world go around, for us to reject this idea. Some people believe that we can be understood as though we were simply complex machines, and that concepts such as free will and consciousness really run along the lines of spirit and soul: they are extremely pragmatic illusions.

AI takes aspects of human reasoning and functioning that we understand, and models them with the use of machines. There is a large intellectual leap from this practice to the view that human beings are nothing but complicated problem-solving machines that can be modelled with Turing machines.

The debate about the difference between Turing machines and human beings, as well as the difference between arriving at something computationally and human understanding, is extremely rich and diversified. Minsky seems to present us with a view that has been summarized by Penrose as view A, that 'All thinking is computation; in particular, feelings of conscious awareness are evoked merely by carrying out appropriate computations' (Penrose, The Large, The Small, and the Human Mind, p. 101), whereas Penrose himself is an advocate of view C, which states that 'Appropriate physical action of the brain evokes awareness, but this physical action cannot even be properly simulated computationally' (Penrose, The Large, The Small, and the Human Mind, p. 101).
