By Paul Almond, 30 December 2003
'Nobody can be exactly like me. Sometimes even I have trouble doing it.'
- Tallulah Bankhead (1903-1968)
This article discusses some scenarios involving mind uploading [1, 2] and 'free will' that I find interesting. The article does not resolve any philosophical problems; however I suggest that it adds an interesting perspective to the debate about human actions and 'free will'.
Imagine you find yourself in the situation that will now be described. You are told the following and are convinced that it is true. You are also convinced of the honesty of the speaker and his/her ability to do as he/she claims:
'What you think of as "you" is actually dead. You are now a computer program made from a very accurate scan of your brain. It models the thought processes that your brain used to perform and is running in a virtual reality. None of what you are experiencing is actually real.
'The computer system that is modelling this environment and your behaviour is totally deterministic. For the duration of this exercise it will have no interaction with the real world. This means that what happens now, in this simulation, is purely determined by its programming, which, of course, includes the simulation of your brain. I myself am merely software running on this computer.
'You are not the only version of yourself. We actually made two versions. Let us call them Version 1 and Version 2. Both versions were made from exactly the same brain scan data and started off being identical to each other. They have also been put in identical virtual reality simulations in separate computers.
'The computer systems that 'run' Version 1 and Version 2 of you are the same and are deterministic, with no external input, so they will behave identically: anything that happens in the computer running one version will also happen in the computer running the other.
'You are probably wondering whether you are Version 1 or Version 2. Unfortunately, I cannot tell you. The computer systems that run Version 1 and Version 2 are identical, which means that each cannot contain specific information about what version it is: to tell you this would make the versions different from each other. Even I cannot know this, because it would make the two versions of me different from each other, and so would make the computers different.
'We are going to play a game. In a few seconds I will ask you if you want to say, "Yes." I will give you a certain, limited amount of time to reply. An external assessor will then examine what went on in both computer systems. If the other version of you said, "Yes," then the assessor will give you a fantastic reward. You will have some choice about what the reward is, but options include delightful experiences in your virtual reality world or even a return to biological existence.
'The other version of you is having a conversation just like this, of course, and whether or not you choose to say, "Yes," will determine whether or not he/she gets his/her reward.'
What should you do?
This is hardly a trick question.
The other version of you is reacting to the conversation in the same way and if you say, 'Yes,' he/she is also saying, 'Yes.' If you fail to say, 'Yes,' then the other version of you will also fail to say, 'Yes.' Your best course of action is, therefore, quite clear: you should say, 'Yes.' If you fail to do this the other version of you will also fail to do it and you will not collect your reward.
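The reasoning here rests on a simple computational fact: a deterministic program started from a given initial state, and receiving no external input, always produces the same output, so two copies of it cannot disagree. Here is a minimal sketch of this in Python (the brain-scan data and the decision procedure below are invented stand-ins for illustration, not a claim about how such a simulation would really work):

    import hashlib

    def simulated_decision(brain_scan, elapsed_steps):
        # Stand-in for an arbitrarily complicated but deterministic simulation:
        # the output depends only on the initial state (the scan data) and on
        # how long the simulation has been running.
        digest = hashlib.sha256(brain_scan + elapsed_steps.to_bytes(4, "big")).digest()
        return "Yes" if digest[0] % 2 == 0 else "No"

    scan = b"identical brain scan data"           # same initial state for both versions
    version_1 = simulated_decision(scan, 1000)    # Version 1's answer
    version_2 = simulated_decision(scan, 1000)    # Version 2's answer
    assert version_1 == version_2                 # same state, same behaviour, every time

Whatever the two versions end up saying, they say it together; that is all the strategy above relies on.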
Deciding what to do is trivial. It is more interesting to think about why we should choose to act in that way and how it would actually feel to be in this situation.
Would your motive for saying, 'Yes,' be altruistic? What you say will not really affect whether or not you get rewarded in the conventional sense, as that is being decided on a computer somewhere else, but perhaps you feel some regard for your other version. This is not about altruism, so, to prevent any complications from being caused by it, we will use a second scenario in which actions based purely on altruism are not rewarded.
In this scenario you are in the same situation as in Scenario 1, with one difference. While the other version of you is making his/her decision the internal state of his/her simulated brain is monitored to determine the motives for his/her choice. If the only motive is altruism, you will not receive your reward. Of course, you are also being monitored and if you base your decision purely on altruism then the other version of you will not receive a reward.
If the other version of you has altruism as his/her only motive, then so will you, because the two computer systems are the same. You may be concerned that you are unable to put all thoughts of altruism from your mind, so that you cannot win the reward, even if you want to win it for yourself, because you still feel some regard for the other version of you. You need have no such concerns: as long as the other version of you says, 'Yes,' and your motivation is not purely altruistic then you will receive your reward. In fact, if you are worried that any altruistic tendencies that you might have will stop you from receiving your reward then you can be sure that you will have no problem: the fact that you feel such a concern at all indicates that your motivation is not purely altruistic.
This issue of altruism is not important in this article, and will not be mentioned further. Instead, it will be assumed that no actions taken in these scenarios involve altruism.
What should you do?
Nothing has changed here. You will not be rewarded for altruism. If you do not say, 'Yes,' then the other version of you will not say, 'Yes,' and you will not receive a reward. You should say, 'Yes,' for your own benefit. We do not have the complications of altruism now, though.
This is similar to Scenario 2, except that you are now told that a very large number of versions of you have been made - a few million, maybe. You are told that you are one of these. You are also told that each version of you has a number and that whether or not you get your reward depends on whether or not version number 9,345 of you says, 'Yes.' As a matter of fact, whether or not any of you get a reward is dependent on the actions of version 9,345.
What should you do?
There are no surprises here. You may, of course, happen to be Version 9,345. If you are you should clearly say, 'Yes.' It is more likely that you are not Version 9,345, but you should still say, 'Yes.' If you do not say, 'Yes', then version 9,345 will not have said, 'Yes.' There is no causal link between what you do and what Version 9,345 does - unless you happen to be Version 9,345 - but the only way to win your reward is to act as if there is such a causal link.
This is the same as Scenario 3 with one difference. Version 9,345 is run in the year 2070 CE and all the other versions are run in the year 2080 CE. You do not know what the year is during the game because you do not know if you are Version 9,345 or one of the other versions and the computers all have to start in the same state. The program runs are still identical.
What should you do?
Clearly, you should still say, 'Yes.' Even if you are not Version 9,345, if you fail to say, 'Yes,' it follows that when Version 9,345 played the game, back in the year 2070 CE, he/she did not say, 'Yes,' either and you will not gain your reward.
Little has changed here, except that your lack of any real control over the outcome, as many people conventionally think of 'control', is a bit more obvious. An external observer, watching you play the game in the year 2080 CE, could do so with full knowledge of what your answer was going to be if he/she had watched Version 9,345 playing the game in 2070 CE.
There is one slightly strange aspect to this new scenario. When you make your decision you have to behave as if you are actually Version 9,345 - which is very unlikely - or as if your decision actually affects the decision making process of Version 9,345, despite the fact that Version 9,345 has already made its decision. There is no causal connection between what you do in the year 2080 CE and what Version 9,345 did in the year 2070 CE, but you need to conduct your affairs as if there is such a connection.
A consideration of what we have so far
'There is no mind absolute or free will, but the mind is determined for willing this or that by a cause which is determined in its turn by another cause, and this one again by another, and so on to infinity.'
- Spinoza (1673)
When I state that there is no causal connection between you and an identical version of you, and yet go on to suggest that situations could arise where you should act as if there is such a connection, it may appear that I am trying to sneakily slip some sort of causal connection or strange supernatural effect into the discussion. This is not the case and I am not suggesting that anything at all is going on here that could not be explained by our ideas of how computers work.
The whole problem here is caused by our perception of free will. When an uploaded mind running in a computer system is presented with a choice in these scenarios, it will be obvious to an outside observer that there really is no choice to make - that what is going to happen has already been decided by the initial state of the system.
It is tautological to say it, but any system has to do whatever that system does and we should not expect humans to be an exception. This certainly applies to computers, but the feeling that we can somehow 'step outside' what we are and exercise 'free will' is a powerful one. Douglas R. Hofstadter [3] discussed this idea in his book Gödel, Escher, Bach: An Eternal Golden Braid, in which he stated:
'It is still of great interest to ponder whether we humans ever can jump out of ourselves - or whether computer programs can jump out of themselves. Certainly it is possible for a program to modify itself - but such modifiability has to be inherent in the program to start with, so that cannot be counted as an example of "jumping out of the system".'
Hofstadter relates this to computer programs in particular, saying:
'No matter how a program twists and turns to get out of itself, it is still following the rules inherent in itself. It is no more possible for it to escape than it is for a human being to decide voluntarily not to obey the laws of physics.'
Some readers may disagree with the very idea of discussing what people should do in various scenarios, given such reasoning. It may seem futile to talk about what a human should do in one of these scenarios because, being tautological again, he/she is going to do what it is in the nature of him/her as a system to do; however, we routinely talk like this. There is not really any inconsistency: telling someone what he/she should do can be taken as an expression of what the ideal behaviour for him/her is, irrespective of what his/her actual behaviour will be. Also, giving information to someone about what he/she should do can affect his/her behaviour, because the person does not have to jump out of whatever formal system 'contains' him/her to respond to such information: the system can simply respond to it in whatever way its own nature disposes it to respond.
When we view the other version of you as simply being a system that is going to develop from an initial state and behave in a way that is dependent on what this initial state was, it may seem a mistake to try to act in the way that will produce the optimum results, when what is going to happen is already determined by the initial state of the other version of you! The behaviour of the other version of you may be determined purely by its initial state, but so is your behaviour in these scenarios. Any argument that acting to produce the desired behaviour in the other version of you is futile, on account of this dependency on its initial state, would also imply that acting to produce the optimum behaviour by yourself, in some game where there is only one version of you, would be equally futile. In such a game, however, you would have to act on this basis. Doing so would not really change anything: it would not let you 'jump out of yourself' and it would simply show that you had already started with an initial state that predisposed you to act in this way. This, however, is the apparent paradox that faces us because of how our perception of choice works.
There may be one way that we could deal with this issue, and that would be to use different language to describe choices. Conventionally, if I have just picked up a glass we would say that I chose to pick it up. This whole idea of 'choosing' can cause us cognitive difficulties. Maybe it would be better to consider my 'choice' to pick up the glass as really 'finding out' that I was predisposed to pick it up.
There is a lot of debate about the nature of 'self-awareness', what a system has to do to be self-aware and, indeed, whether or not such a concept is even meaningful. I suspect that a good way of thinking about self-awareness may be to regard it as being the capability of a system to get into the sort of philosophical mess that we have just been discussing!
We now return to a situation similar to the one in Scenario 2. You are still a computer program simulating a brain in a virtual reality and another, identical, version of this system exists. You will shortly be asked if you want to say, 'Yes.' You will receive your reward if, and only if, the other version of you says, 'Yes.'
One change is now introduced for this scenario. If you say, 'Yes,' you will immediately experience a small electrical shock or, rather, as you are in a virtual reality simulation, you will have an experience indistinguishable, to you, from an electrical shock; however, if the other version of you, in the same situation, says, 'Yes,' then you will receive your reward. Of course, if the other version of you says, 'Yes,' then he/she will receive an electrical shock as well, but he/she will also receive his/her reward on account of your saying, 'Yes.'
What should you do?
Not much has changed here. The other version of you must do whatever you do, as the computer software containing it starts from the same initial state and has no interaction with the outside world. If you say, 'Yes,' then the other version of you will say, 'Yes,' causing you to receive your reward, but you will receive the shock for saying, 'Yes.' This should be acceptable, provided that the reward is sufficient to make receiving the electrical shock worthwhile - and remember that you have some choice about what your reward should be: you should be able to think of something desirable enough.
'Someone had to make a high resolution, whole-brain Copy - and let it wake, and talk.
In 2024, John Vines, a Boston neurosurgeon, ran a fully conscious Copy of himself in a crude Virtual Reality. Taking slightly less than three hours of real time (pulse racing, hyper-ventilating, stress hormones elevated), the Copy's first words were: "This is like being buried alive. I've changed my mind. Get me out of here."
'His original obligingly shut him down - but then later repeated the demonstration several times, without variation, reasoning that it was impossible to cause additional distress by running exactly the same simulation more than once.'
- Greg Egan, Permutation City (1994)
Some readers will have accepted that if you happen to be a computer program, and you know that another version is running on a machine that starts in exactly the same state as the machine that is running you and receives no inputs from outside the machine - or receives the same inputs, which amounts to the same thing - and if you know that whether or not you win a game of some sort depends on the actions of the other version of you, then it makes sense to do what you want the other version of you to do. Other readers will have reached the same conclusion, but from a slightly different perspective. Such people would suggest that the two versions of you should not even be regarded as discrete entities, but that you should regard both computers simply as being responsible for your thinking with a high degree of redundancy. Near the start of this article I pointed out that the scenarios here demand that you cannot be told which of two machines you are running on, because telling you would make the machines different. This point of view, however, goes further and suggests that the question, 'Which machine am I running on?' is meaningless while the machines are identical.
It should be noted that this interpretation should not change what you do in this sort of situation. You should still say, 'Yes,' if you want to win. The only difference, if you take this position, is that you regard the situation as involving one person who says, 'Yes,' to win his reward and then, presumably, splits into two people after the game, if the systems are allowed to interact with the real world and diverge from each other due to receiving different inputs. The quotation above, from Greg Egan's novel Permutation City [4], appears to describe a fictional character, John Vines, taking this position: because he regards two instances of the same simulation as not being separate individuals having separate experiences, he appears to think that running a simulation of a suffering brain more than once causes no ethical problems.
It should be noted that taking this position would result in slightly different descriptions of these scenarios from the ones I have used so far. For example, in Scenario 3, I stated that it was very unlikely that you were Version 9,345. Taking this position, however, would imply that even asking whether or not you are Version 9,345 is irrelevant.
In all the scenarios that we have discussed so far we have presumed that you, and one or more other versions of you, are software that starts from the same initial state and receives no external inputs during this game. In this scenario we will make the issue slightly less clear-cut.
Scenario 6 is identical to Scenario 5, except in one detail. When both computers were set up, a change was made to the initial state of one of the machines. The change was extremely small: it could, for example, have involved slightly altering a single simulated brain cell, or slightly adjusting the colour of one spot on the wall of the simulated room in which all this occurs. This means that we now start with two versions of you, together with a simulated environment, that are almost, but not quite, identical.
As with Scenario 5 you are again given the option of saying, 'Yes.' If you say, 'Yes,' you will immediately receive a small electrical shock - or, rather, a simulated one - but if the other version of you, in the other virtual reality, says, 'Yes,' then you will receive your reward.
What should you do?
This situation is more complicated, and it is here that things start to get more interesting. There is now no guarantee that the other version of you will do exactly as you do. It starts in a slightly different initial state, and small changes to the initial state of a system can have dramatic effects on its development. As the other version of you waits for the scenario to be explained it will become less like you, as its simulation develops in a different way, and it will become more different still as it considers whether or not to say, 'Yes.' It is possible that you could say, 'Yes,' while this other version of you, because of this slight difference in its initial state, chooses not to say, 'Yes.' Alternatively, of course, you may choose not to say, 'Yes,' while the other version of you does choose to say, 'Yes.'
What is the correct course of action here?
You may decide that what the other version of you does has no connection with what you do. If this is so, then you should certainly not say, 'Yes.' Saying, 'Yes,' would cause you to receive an electrical shock, but whether or not you received your reward would depend on what the other version of you did, regardless of whether or not you said, 'Yes.' On this view, saying, 'Yes,' and receiving an electrical shock before waiting to see whether an almost identical copy of you said, 'Yes,' and so whether you will receive your reward, is no different from not saying, 'Yes,' and waiting in just the same way - except that in the second case you do not get the electrical shock.
When I discussed this scenario with other people, while preparing to write this article, someone told me that he would certainly not say, 'Yes,' in this situation and he explained it by saying something like, 'The other version of me can develop in its own way. It can do what it wants anyway, so what I do is irrelevant.'
Is this view correct, though?
The other version of you starts off being almost identical to you, so we would expect that, in the short term, it will be behaving in a manner which is almost identical to the way that you behave. Small initial variations in a system tend to have greater effects later, of course, so we can hardly expect this to last forever. However, provided that the game finishes without too much time elapsing, there should still be enough similarity between you and the other version of you to prevent this divergence from being a problem.
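This pattern of short-term similarity followed by long-term divergence is the typical behaviour of chaotic systems, and it can be illustrated with a toy example. The sketch below uses the logistic map in place of the two simulations; it is not a model of a brain, merely a demonstration that a difference of one part in a million stays negligible for a while and then grows until the two trajectories have nothing to do with each other:

    def trajectory(x0, steps, r=3.9):
        # Iterate the chaotic logistic map x -> r * x * (1 - x).
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    you = trajectory(0.400000, 40)     # 'your' simulation
    other = trajectory(0.400001, 40)   # the other version, perturbed by one part in a million

    for step in (0, 10, 20, 30, 40):
        print(step, abs(you[step] - other[step]))   # tiny at first, of order 1 by the end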
If you would have said, 'Yes,' in the previous scenarios in which the other versions of you had to act in exactly the same way as you then this suggests that you accept that it makes sense to act in the way that you want another system to act if you know that the other system has to do what you do. While the other system in this scenario does not have to do what you do, we should certainly expect a correlation between your actions and its actions, so is it correct to discount your own actions totally when considering what it is going to do? It could make sense to say, 'Yes,' in this situation and accept the electrical shock, knowing that the other copy of you is more likely to say, 'Yes,' if you do. In other words, the strategy is the same as that used in the scenarios where other versions of you have to do as you do and the fact that it does not have the same certainty of working does not make it invalid.
You should, of course, take the loss of the absolute guarantees offered by previous scenarios into account and you should base your decision of whether or not to say, 'Yes,' on the cost to you of doing it, the size of the reward that you stand to gain if the other version of you also says, 'Yes' and the probability that the other version of you will say, 'Yes'. The probability that the other version will be so cooperative depends on how different it was to start with and how long it has been left to become increasingly different. You should therefore base your decision on:
- How painful the electrical shock will be - that is to say, the shock that you will receive for saying, 'Yes.'
- The size of the reward that you will receive if the other version of you happens to say, 'Yes.'
- The extent of the difference between the states of you and your environment and the other version of you and his/her environment when this game started.
- The amount of time that has elapsed for any such variation to increase.
The last item on this list - the amount of time that has elapsed - is a little vague, so it deserves a bit more discussion. As more time passes, the two simulations will diverge to a greater extent. We should measure this time as starting when the two simulations are set up at the start of the game. Some time may then elapse before the rules of the game are explained to you, and still more time while you are awaiting the moment when you will be expected to say, 'Yes,' if that is your choice. After I say something like, 'It is time for you to choose. Say, "Yes," now if you want to do so,' how long will you have to make your mind up - a few seconds, a few minutes or maybe even longer? This period of time is important because while you are thinking about whether or not to say, 'Yes,' the other version of you becomes more different from you and less likely to do what you do. We should therefore include any 'thinking time' as part of the elapsed time that we allow for divergence: the more thinking time you use, the less likely it is that the other version is going to give the same answer that you will give.
This suggests one important strategy for this scenario. If you are given a certain amount of time to decide whether or not to say, 'Yes,' then, if you are going to say, 'Yes,' you should do so immediately before the other version of you has diverged so much as to make this action pointless.
I have previously mentioned that, although there is no causal connection between you and another version of you, there can be some sense in making your decisions as if there is such a causal connection. In this context, the motivation for saying, 'Yes,' early is because the capability for your actions to 'force' the other version of you into doing the same thing as you is strongest in the early stages and decreases as time passes and divergence occurs.
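One way of making this trade-off concrete is a simple expected-value calculation. In the sketch below, p_if_yes and p_if_no are your own estimates of the probability that the other version says, 'Yes,' given that you do or do not; the particular numbers, and the idea that the gap between them shrinks as the initial difference and the elapsed time grow, are illustrative assumptions rather than anything the scenario specifies:

    def should_say_yes(reward, shock_cost, p_if_yes, p_if_no):
        # Saying 'Yes' costs the shock for certain, but makes it more likely
        # that the other version also says 'Yes' and wins you the reward.
        ev_yes = p_if_yes * reward - shock_cost
        # Staying silent avoids the shock but leaves only the smaller chance
        # that the other version says 'Yes' anyway.
        ev_no = p_if_no * reward
        return ev_yes > ev_no

    # Early on, the two versions are still nearly identical, so the answers
    # are strongly correlated and the shock is worth paying.
    print(should_say_yes(reward=100.0, shock_cost=5.0, p_if_yes=0.95, p_if_no=0.10))  # True

    # After a long delay the correlation has decayed and it no longer is.
    print(should_say_yes(reward=100.0, shock_cost=5.0, p_if_yes=0.55, p_if_no=0.50))  # False

In other words, saying, 'Yes,' is worthwhile only while the correlation between your answers - the difference between p_if_yes and p_if_no - is large enough that the extra chance of the reward outweighs the certain shock, which is why answering quickly matters.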
What if you think that the uploaded versions of you are not conscious?
Some readers will have found all of this discussion pointless because they do not regard the human mind as something that could have a physical explanation. This article will not attempt to persuade such readers. Others will agree with Roger Penrose's views [5, 6] that the human brain exploits non-computable physics and could not be modelled by a computer. I shall regard the issue of non-computability and brains as beyond the scope of this article, other than saying that the next scenario to be considered, Scenario 7, may not be too problematic in a non-computable context.
A number of readers will accept the idea that the human mind could have a physical cause and that the brain's behaviour could be modelled by a computer, but will reject the idea that actual consciousness could be implemented in a computer. Proponents of the views of John Searle [7, 8], an American philosopher, may have this sort of opinion, although Searle does not explicitly state that human consciousness could never be realised in a computer. Searle thinks that computers could model human mental processes, but that no real consciousness would necessarily be associated with such modelling. According to Searle, we may very well be able to make a computer model of his brain that is able to act and talk as he does, but there would be no guarantee that there is any real conscious experience going on in it: when the machine said that it had just enjoyed a pleasant discussion we could not be sure that anyone had just enjoyed anything.
To a person who believes what Searle says, questions such as, 'What would you do if you suddenly found yourself to be a computer simulation of your brain, playing a game as follows….?' may appear to be meaningless, because the question may be perceived as asking what things look like from a point of view that does not exist, just as it would be rather pointless to ask me what my plans would be if I was to find out that I was a rock. Some people who subscribe to Searle's views - and not necessarily all of them - may hold the view that any question about what you would do if you were to find out that you are a computer program is just as meaningless.
I disagree with this idea. The question of what you would do if you found out that you were a computer program in some given situation is one that can reasonably be asked of an advocate of John Searle's position, with the qualification that the question would have to be translated to remove consciousness from the scenario. To illustrate this, here is an imaginary conversation between a 'questioner' and a 'Searle advocate'. I do not agree with the views of the Searle advocate, but I have tried to represent his/her possible position fairly.
Questioner: What would you do if you suddenly found out that you were a computer program and…
Searle Advocate: I have to stop you there. This is meaningless. I do not even know that I could find out that I was a computer program. That would be assuming that a computer program can have the consciousness that I have. I cannot prove it could not, but I do think it is a bit unlikely.
Questioner: You admit that you cannot be sure that your mind could never be implemented in a computer, so could you not answer my question on the assumption that it can?
Searle Advocate: I cannot prove that my mind could not be implemented in a chair, either. Do you want to ask me how I would act if I found out that I had become a chair as well? This is a pointless discussion.
Questioner: No, I agree that it would be silly for me to ask that. Let us try a different approach. Suppose that I was to perform some sort of scan of your brain and that this scan was extremely accurate. If I put the information from this scan into a computer and programmed this machine to run a simulation based on it, are you saying that the simulation would not behave in an intelligent way? Are you saying that your brain could not be simulated by a machine because the machine and its software would lack some sort of 'supernatural' component?
Searle Advocate: No, I am not saying any such thing. I accept that a model of my brain would replicate its externally observable behaviour, but there is no reason to think that it would replicate the actual consciousness. We can model a thunderstorm in a computer and the computer model exhibits the correct behaviour, but we know that nobody is actually going to get rained on by the computer model. It is the same for the computer model of my brain. I can accept that it may produce all the right behaviour, but it would merely be modelling what my brain would do in a given situation. There is no reason to presume that it would actually be conscious.
Questioner: So you accept that I could make a computer model of your brain and put it into one of the scenarios discussed in Almond's article and expect it to appear to be considering the situation?
Searle Advocate: I have no problem with this. You could run a computer model of my brain in a scenario in which I am told that I am actually computer software, but it would just be a computer model. There is no reason to think that anyone is really in the scenario and you cannot be sure that you can even put me into such a scenario. To do so you would have to implement my brain as a computer model and ensure that the physical system behind all this caused real mental states to emerge, rather than just modelling them. While I cannot prove that you cannot do this, it seems unlikely.
Questioner: Are you saying that, for this reason, asking what you would do in one of these scenarios is meaningless?
Searle Advocate: I might like to qualify my answer somewhat, but to keep things simple my answer is, 'Yes.'
Questioner: Could you tell me what you would do in a hypothetical scenario that I will give you?
Searle Advocate: Does this scenario involve more futile attempts at implementing my mind in a computer?
Questioner: No, it is merely a question about what you would do, as the biological person that you are, in a situation that many people have actually experienced.
Searle Advocate: I should be able to answer that. What is it then?
Questioner: You are walking across a street and you see a vehicle driving at high speed towards you. You are sure that if you stay where you are the vehicle will hit you. What would you do?
Searle Advocate: I would move out of the way. I prefer not to die.
Questioner: Now, let us suppose that you are actually a computer simulation, made from scanning the brain of the original version of 'you'. Your mind is exactly as it is now, but it so happens that it is running on a computer system. You do not know that this is happening, as you are interacting with a virtual reality which tricks you into thinking that you are interacting with the real world.
Searle Advocate: As in The Matrix?
Questioner: Yes, with one exception. In that film people were tricked by means of plugging computers into their brains. In this situation you do not even have a biological brain to be plugged into the computer. Your brain itself is being simulated by computer software and a virtual reality is being used to fool you into thinking that everything is normal. The original, biological version of you could be walking around now, getting on with his/her life, after the brain scan was performed, but you are not: you are running on a computer.
Searle Advocate: We have been through this before. It is a pointless question.
Questioner: Is it? You accept that a model of your brain could, at least in principle, be built in this way. Is it not reasonable to ask what behaviour the model of your brain would exhibit when interacting with a particular sort of virtual reality, even if you do not think that the model of your brain would actually be conscious? Can we not just look at this from the perspective of what the model of your brain, as a piece of computer software, would do?
Searle Advocate: I agree that we could ask such a question.
Questioner: In that case, if a scan was taken of your brain now and used to produce a computer model which was made to interact with a virtual reality, and this whole process was contrived so that no evidence of its artificial nature, or of the virtual nature of its reality, was available to the computer model, and we arranged for the virtual reality to mimic the inputs that a human brain would receive if the owner was just about to be hit by a car while crossing a road, what do you think the computer model of your brain would do?
Searle Advocate: I think that it would provide the sort of outputs that a human brain would provide if trying to get out of the way of a car.
Questioner: The computer model of your brain would be an incredibly complex piece of software. Are you an expert in computer science? Is it not presumptuous of you to anticipate its reactions like this?
Searle Advocate: Not really: this is a simulation of my brain, remember! I would try to get out of the way of a car if it seemed about to hit me. The computer model of my brain would have a lot of the physical characteristics of my brain, so I would expect it to behave similarly. I may not be absolutely sure that it would not try to throw itself in front of a car, but I would be surprised if it did. I think I am more stable than that and I would expect a computer model of my brain to be more stable than that, as a piece of computer software, even if nobody is actually in it.
Questioner: So you are saying that you think you can make a reasonable prediction about what a model of your brain would do by using your knowledge about what you would do?
Searle Advocate: Yes.
Questioner: Are you maintaining that, while you regard any question about what you would do in some situation in which you are computer software as being meaningless, such a question can be rephrased as merely a question about what you think a computer model of your brain would do, and that you could answer such a question by using your own knowledge of how you would react?
Searle Advocate: Yes, with one qualification: I cannot be sure that I could answer every such question. You might place the model of my brain in a situation which is so alien to anything of which I have experience that I am not sure what it would do. However, I do not have any objection in principle to what you are saying.
Questioner: So would it be reasonable to say that, while you may find it meaningless to be asked what you would do in those scenarios which involved finding out that you are being 'run' on a computer, the questions asked about these scenarios could be rephrased in such a way that they become questions about what you think a computer model of your own brain would do if it was in one of these scenarios?
Searle Advocate: Yes, I can accept that.
Questioner: So, you can accept that, subject to the need for some rephrasing, these scenarios can be meaningfully considered?
Searle Advocate: Yes, subject to rewording.
I would be interested to find out what advocates of the Searle position think of the above dialogue and how I have represented an advocate of their case. Clearly, the responses that I invented for the Searle advocate are not going to be responses that all such advocates will find acceptable. While I disagree with Searle's position, the purpose of this article is not to try to refute it and the above dialogue has been provided merely to show that, even if they are right, advocates of Searle's position should not automatically view any scenarios like the ones in this article as meaningless.
What the above dialogue did not explore is whether or not the possible existence of a model of his/her own brain creates any issues for advocates of the Searle position. For example, would it actually be possible to persuade such a model that it was a computer program? Unfortunately, such questions are beyond the scope of this article.
Searle's position, although it strongly suggests that computers cannot have consciousness - even if it makes no claim to prove this - does not automatically imply that the scenarios considered in this article cannot be explored. The reader may dispute this. If this is the case then, fortunately, the next scenario will not require any consideration of your brain being modelled by a computer. Here is Scenario 7, which is decidedly more biological.
Imagine you find yourself in the following situation:
You wake up lying on a couch in a small room. You remember experiencing an awful accident of some kind, but nothing after that.
A sign on the wall tells you that you are actually a clone of your original body. You actually died in the accident and, as an experiment, your body was placed in some sort of scanning machine and a computer file was made describing it in enough detail to allow it to be reconstructed.
The sign explains that your memories and personality are intact because they are features of the physical structure of your brain. The scanning machine captured all this data and it all went into the computer file describing you. The computer file contained enough information to allow a copy to be made, complete with the fine structure of the brain and all its memories.
You are the reconstructed copy. The sign tells you all this and then informs you that you are part of a slightly tasteless game.
You are not the only copy. The computer file describing your body was used to build two people. You are one of these two people.
The other version of you was awoken at the same time as you, or as close as possible to the same time. His/her experiences are as similar to yours as is practically possible. He/she has woken up in a room which is as similar as possible to the one in which you awoke and, of course, the other room has a similar sign on its wall.
The sign tells you that, as in the scenarios that we have previously discussed, you have to decide whether or not to say, 'Yes,' within some reasonable period of time. If you say, 'Yes,' then you will receive a small electrical shock, irrespective of what the other version of you happens to do. However, if the other version of you says, 'Yes,' then you will receive a reward.
If you found yourself in such a disturbing situation then obtaining such a reward may be the furthest thing from your mind, but let us assume that there is a reward that you want to receive. You may also doubt that the information provided to you by the sign is true, but to simplify things let us assume that you are persuaded, perhaps by other evidence, that the situation is as described by the sign.
What should you do?
This scenario is broadly similar to Scenario 6, in which it was suggested that you were a computer program with a copy that started in almost the same initial state as yourself, but diverged as time passed.
In this scenario you are a biological person with a conventional human brain, although it has been constructed from the information in a computer file. Another copy has been made from the same computer file. We can presume that any technological process that builds bodies and brains in this way is not going to be perfect, so you and the other version of you will not be absolutely identical. You have, however, been made as similar to each other as practically possible and there will be a lot of similarity between the brains of you and the other version of you and between your body and the body of the other version of you.
In the previous scenarios computer software was used to provide a virtual reality environment for you and the other version of you. The initial state of this software was part of the initial state of the computer simulating you so that in the last scenario, Scenario 6, the software providing the virtual reality for the other version may also have been slightly different, in its initial state, to the software providing your virtual reality. In this scenario a virtual reality is not being used: you are a real person in a real reality. The two rooms cannot be exactly the same though: there will be small details, beyond the control of the people who set this game up, in which the two rooms differ from each other. As an example, the movement of air in the room will not be identical, or the door may have been painted with slightly different movements of a paintbrush, causing a slight difference in its appearance.
What does all this mean? Your brain is very similar, but not identical, to the brain of the other version of you. Your immediate environment is very similar, but not identical to the environment in the immediate vicinity of the other version of you. This makes the situation equivalent to Scenario 6, even though it now occurs in reality, rather than inside a computer.
This means that you should view your actions as if they have a statistical effect on the actions of the other version of you and you should act on the basis that saying, 'Yes,' means that it is more likely that the other version of you will say, 'Yes,' causing you to receive your reward, and that not saying, 'Yes,' means that the other version of you is less likely to say, 'Yes.' There is no certainty, of course, so when making your decision you should consider the following:
- How painful the electrical shock will be - that is to say, the shock that you will receive for saying, 'Yes.'
- The size of the reward that you will receive if the other version of you happens to say, 'Yes.'
- The amount of difference that you think will exist between you and the other version of you, immediately after you have both been made by whatever technology does this.
- The amount of difference between the environment in which you find yourself and the environment in which the other version of you finds him/herself.
- The amount of time that has elapsed after you were both made for differences between the two of you to increase further.
'I'm not saying, "Yes." It would be a stupid thing to do!'
It is with this last scenario, Scenario 7, and the conclusion we reached about what to do in it, that this article makes its main point. My experience of discussing these scenarios with people leads me to expect that some readers will accept the conclusions reached with Scenario 7 as valid and, in fact, as so obvious as to be trivial and not to merit an article. I also expect, however, that some readers will reject the conclusions totally. The idea that situations could arise in which you should act in the way that you want another person to act, when there is no mechanism to allow your own behaviour to influence the behaviour of the other person, may seem, to some people, to be something that can arise only as a result of a cognitive illusion.
If someone is going to propose that the conclusion is wrong, he/she could say that while an external observer, studying a large number of games like this, may notice a statistical correlation between the behaviour of pairs of people, this does not help a person who is playing the game, because his/her behaviour cannot have a direct effect on the behaviour of the other person. In my opinion, this would be wrong: it is not necessary for any mechanism to exist which allows one person's behaviour to affect that of the other.
If you still insist that the conclusion is wrong I would make this point:
Let us suppose that you think that the conclusion is not only wrong, but obviously wrong, to the extent that you would find it extremely irrational to regard the conclusion as correct. If the price of saying, 'Yes,' is an electrical shock then you have no reason at all to say, 'Yes,' - it would be an irrational thing to do as you would not expect it to have anything at all to do with whether or not the other version of you also says, 'Yes,' and causes you to get the reward.
Let us now suppose that, immediately after you have read this article, you have an accident, after which some detailed brain scans are made, and you then find yourself in Scenario 7, knowing that another version of you also exists. You will clearly not say, 'Yes,' because it is so obvious that you have nothing to gain and that what you do has nothing to do with what the other version of you will do.
Let us consider the other version of you, however. This person has been created from the same brain scan data. He/she shares a common past with you and remembers reading this article and holding the opinions about it that you remember holding. If it is so obvious that there is no sense in saying, 'Yes,' then it would probably be expecting too much for this other version of you to say, 'Yes': after all, he/she has had exactly the same insights into the stupidity of saying, 'Yes.' The only way that he/she could say, 'Yes,' if you are right, would be if the slightly different experiences that he/she had during the few minutes after he/she awoke made him/her extremely stupid. You could say, right now, 'No, I will not change my mind later. It is stupid, but I am not speaking for any other versions of me,' but there is a problem here: both of the versions of you that later exist will remember saying this.
This means that if you hold such an opinion, the very fact that you hold it makes it very unlikely that you will gain any reward in this game.
Earlier in this article, I suggested that it makes sense to think of choices as involving 'finding out' what your brain - or whatever system happens to be responsible for your thinking - was predisposed to do in a particular situation. This would mean that in this situation, firmly deciding that it is always obviously pointless to say, 'Yes,' makes it very likely that, when the time comes to make a choice, you will 'find out' that you are not predisposed to saying, 'Yes,' - and so will the other version of you, to your cost.
Some readers may argue that this article is based on an assumption that the universe, and therefore the human brain, is deterministic, and that such an assumption is incorrect.
Due to quantum mechanics, physics, as it is currently and widely accepted, does have an indeterministic component; however, quantum mechanics is not without controversy and various models exist that could lead to a deterministic form of quantum mechanics. Arguments could also be made about the coherency of the position that the universe actually is indeterministic, rather than any indeterminacy being only a characteristic of what we can find out about the universe.
Even if indeterminacy is admitted to exist, this does not, in itself, substantially affect the scenarios and arguments used in this article. The first six scenarios in this article dealt with deterministic computer simulations of brains. If the universe, and the workings of the brain, were truly indeterministic then this would imply that at any moment in time a human brain had many possible future states, with no single one of these states being required as a result of its current state. There is no reason, however, why this could not be represented by a deterministic computer simulation, with the computer simply selecting one of these possible future states in an arbitrary, yet deterministic, way.
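That a selection which looks arbitrary can still be made deterministically is easy to illustrate: a pseudo-random number generator with a fixed seed picks among the 'possible' next states in a way that looks like chance but is exactly reproducible. A minimal sketch (the states and the selection rule here are invented for illustration):

    import random

    def pick_next_state(current_state, possible_states, seed):
        # The selection among the physically 'possible' successors is made by a
        # seeded pseudo-random generator: arbitrary-looking, yet every run of the
        # simulation with the same seed makes exactly the same choice.
        rng = random.Random(f"{seed}:{current_state}")
        return rng.choice(possible_states)

    successors = ["say yes", "stay silent", "hesitate"]
    run_a = pick_next_state("deliberating", successors, seed=42)
    run_b = pick_next_state("deliberating", successors, seed=42)
    assert run_a == run_b   # identical seeds, identical 'arbitrary' histories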
The effects of chaos, in which systems diverge considerably due to small variations in their initial states, are likely to be much more significant in any case and to mask any indeterminacy that may be inherent in the laws of physics.
Scenario 7 involved consideration of actual human brains in real environments. A small amount of difference between the initial states of the brains and the environments of the two versions was assumed for this scenario and it was assumed that they would develop in a deterministic way, yet diverge due to the difference in their initial states. Indeterminism would not substantially alter this. It would cause two versions of the same person, even if they started identically, to diverge, but divergence has been assumed anyway. Indeterminism would simply be another cause of the eventual divergence of the two versions that has already been accepted and there is no reason to presume that it would preclude substantial correlation between the behaviour of the two versions until divergence becomes significant.
A More Unpleasant Game
I mentioned earlier the idea, discussed by Hofstadter [3], that a program cannot jump out of itself. Even if a system appeared to be jumping out of the constraints of a formal system, there would be another formal system that described how it did this, and it would be this formal system that had really contained the program all along.
The issue of being constrained by a formal system in this way has had some relevance to this article, so we will finish with another example of a scenario to illustrate it. As with previous scenarios, this one also takes the form of a game; however, it is not part of the main argument. Here it is:
Your brain is being simulated by a computer, which also provides a virtual reality.
The virtual reality takes the form of a maze. This maze has one curious property: it has 180° rotational symmetry, so that one half of it is a rotated copy of the other half. For example, the top left hand corner is the same as the bottom right hand corner, and the top right hand corner is the same as the bottom left hand corner. If you were shown the entire maze and then blindfolded and disorientated before being led into it, then, because of this property of the maze, you could never work out whether you were on the left side or the right side.
You are standing in one corner of the maze. You are told that another version of you, who is identical to you, is standing in the corner of the maze which is diagonally opposite to you. His/her situation is exactly the same as yours, including what he/she sees when he/she looks at the maze, the position his/her simulated body is in and the structure of his/her brain: if the entire maze was rotated 180° then an external observer would not notice any difference in the maze or its contents.
You are carrying a gun, and so, of course, is the other version of you. A sign on the wall nearby informs you that if either of you is shot in this simulation then the software that is modelling him/her will be erased immediately. The sign further informs you that, unless one of you gets shot within the next fifteen minutes you will both be erased.
Someone clearly has a distasteful gladiatorial exhibition in mind and, of course, the other version of you is reading an identical sign on the other side of the maze.
The computer that is running this simulation is deterministic and shows no preference for either player.
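The symmetry described above is ordinary 180° rotational, or point, symmetry: the maze looks exactly the same after half a turn, so nothing you can see tells you which corner you started in. A small sketch checking this property on a toy grid (the maze itself is made up for illustration):

    def has_180_degree_symmetry(maze):
        # Rotating a grid by 180 degrees reverses the order of the rows and
        # reverses each row; the maze is symmetric if that leaves it unchanged.
        rotated = [row[::-1] for row in reversed(maze)]
        return rotated == maze

    maze = [
        "#.##",
        "#..#",
        "#..#",
        "##.#",
    ]
    print(has_180_degree_symmetry(maze))   # True: each corner mirrors the diagonally opposite one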
Leaving the ethics of even wanting to win such a game aside, you may go stalking your other self, but your other self has started in exactly the same situation as you. He/she will have exactly the same behaviour. When you go left you can expect your other self to go left. When you go right you can expect your other self to go right. When you fire a shot at your other self you can expect a similar shot to be fired back by him/her at the same time.
During the next fifteen minutes you may think about trying to 'jump out of yourself', but whatever you think while trying to do this is also a product of the computer simulation and any 'twists or turns' your simulated brain makes to try to do this will also be made by your opponent.
You are in a lot of trouble…
This article has argued that it is possible, at least in principle, for you to be in a situation in which the knowledge that someone very similar to you exists, and that this person's actions determine your own success in some sort of game, makes it rational to select courses of action which are of no direct value in themselves when performed by you, but which should still be performed by you because the similarity between you and this other person makes it more likely that he/she is performing the same actions.
While this article hardly contains a paradigm shift in philosophy or cognitive science, readers will be divided in their views of this. Some readers will regard the conclusion reached here as trivial, while others will think that it is wrong.
References
1. Strout, J. (2002). Mind Uploading Home Page. Retrieved June 22, 2003 from http://www.ibiblio.org/jstrout/uploading/MUHomePage.html
2. Mind Uploading Research Group. (2002). Retrieved June 22, 2003 from http://minduploading.org/
3. Hofstadter, D. R. (1980). Gödel, Escher, Bach: An Eternal Golden Braid. New York: Vintage. Chapter 15.
4. Egan, G. (1994). Permutation City. London: Millennium. (Fiction). Chapter 3.
5. Penrose, R. (1989). The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics. Oxford: Oxford University Press.
6. Penrose, R. (1994). Shadows of the Mind: A Search for the Missing Science of Consciousness. Oxford: Oxford University Press.
7. Searle, J. R. (1980). Minds, brains, and programs. The Behavioral and Brain Sciences, 3, 417-457.
8. Searle, J. R. (1997). The Mystery of Consciousness. New York: The New York Review of Books.