By Paul Almond, 6 July 2005.
This document will propose a way of increasing the security of computer systems. It will be suggested that there is too much complacency about threats to computer systems from programs that have been installed on them, possibly with the consent of the user, and that current methods of dealing with these threats are inadequate. A way of dealing with this kind of threat will be described.
The Power Given to Software
Most computer security measures relate to preventing unauthorised users from accessing computer systems and keeping unauthorised software off computers. There is a clear focus here on perimeter security - protecting the system at the point of entry.
If a program does manage to get onto a computer then it poses a number of threats, including:
- Global actions (e.g. reformatting a hard disk)
- Use of an unacceptably large amount of system resources (memory, backing storage, processor time)
- Alteration of files
- Deletion of files
- Creating other programs (e.g. copies of the original program)
- Difficulty in removal of the program from the system
- Functioning that deviates from the program's specifications
- Unauthorised internet access
- Theft of information
- Subversion of other programs
For all of these, except deviation from the program's specifications, which is hard to define in any case, similar sorts of actions may be performed legitimately by some programs; for example, a disk utilities program may contain routines to delete files anywhere on the system.
Once a program is on the system it has enormous power. The way that computers and operating systems are designed means that programs can do almost anything once on a system.
This applies both to programs that get onto the system without authorisation and programs that are installed with the user's permission. Programs that are installed with the user's permission can still pose a serious system threat because the user only has the word of the program's maker that it will do what it is supposed to do. Anything could be buried in the program's code and the program could violate the user's privacy or damage a computer system. A program could be dangerous as a result of malice or error by its programmer.
When you install programs onto a computer you are placing enormous trust in their creators. You are trusting them to have almost total power over your computer and access to data about your business operations or personal life. In the past people have often had to trust people who have been hired to do jobs, to some degree, but the degree of trust which is placed in software makers is unparalleled in human history. It is a degree of trust which would be alien to most people's way of thinking in other contexts.
Could the solution be to obtain software only from 'trusted sources'? While this may help it is not absolutely satisfactory. How do we decide what a trusted source is? When a new source of software appears how do we decide whether or not to trust it? What if an individual working to produce software for a trusted source has malicious intentions and puts some unwanted code into a program without the knowledge of his/her employers? What if an organisation, possibly supported by a government, were to spend years becoming a trusted source of software only to gain the degree of trust needed to mount a huge attack on computers throughout the world? The problem is made worse by the fact that many computers now contain software from many sources and Jacqui Lait MP has pointed out that the need to trust many suppliers has implications for government and defence stating:
'Software is written all over the world. Everyone uses sub-contractors. In Euro-fighter alone there are about 60 micro-processors. Does anyone know exactly where they were made and has anyone checked each one? Many organisations use sub-contractors for the maintenance of their computer systems. "Maintenance ports" are left open, providing an easy way into the system.'
Relying on the idea of software being from a trusted source is equivalent to accepting freelance, temporary staff into a company from some 'trusted' agency and then implicitly trusting each such person with access to everything and authority to do anything in that company. Few corporations would take such a risk. Why should it make any more sense to be cautious about people entering an organisation while placing implicit trust in software created by people outside it?
When considering computer security we should be aware of this important principle:
Any computer program acts, effectively, as an agent of the programmer who created it. Putting total trust in a computer program is equivalent to giving its programmer absolute power within an organisation and then trusting that he/she is not incompetent or malicious.
The Seriousness of the Threat
We are used to individuals causing all kinds of harm on the internet for their own amusement, and this in itself should cause enough concern to businesses, other organisations and individuals. The threat is not restricted to this: organised crime now mounts attacks on computers. The real danger could be greater still: there are concerns about the possibility of a 'Digital Pearl Harbour', in which terrorists or a hostile state would launch crippling attacks on computer networks to massively damage an enemy. The current standards of security in computers are poor, and some issues, one of which is the trust placed in software, must be addressed if we are not to have regrets later.
The Social Analogy
We can consider a computer system as analogous to a human organisation such as a company or government department. Each program in the computer is analogous to a person in the organisation. Why do human organisations tend to be less vulnerable than computer systems? Computer systems have many programs brought in from outside, each of which could pose a threat, just as human organisations have people brought in from outside who could pose a threat.
Human organisations may seem as vulnerable as computers, given examples of theft from within companies and extreme cases like the unauthorised actions of a single trader that triggered the collapse of Barings Bank. The occasional failure of real-world organisations, however, does not change the fact that they tend to be less vulnerable than computer systems, and examples of total security failure are rarer in real-world organisations than in computers: a bank being brought down by one man is rare enough for it to be regarded as newsworthy, while computers are brought down by ill-advisedly installed programs so often that individual cases are not usually a matter for media interest.
Human institutions have been admitting people from outside and protecting themselves from people who could have entered with malicious intent for millennia, even if their methods are often informal. As an example, let us consider an extreme situation as follows:
You are running some kind of research organisation during a war and a brilliant expert on 'the other side' defects and wishes to join you. You accept his offer and want to use him, but you do not want merely to interrogate him. He is so clever, and you are so short of clever people, that he could be very useful if he were working as one of the many staff in your institute's building. There is the obvious danger: what if he lied and his intentions are malicious? How do you let him work within your building and allow him enough freedom to do productive work for you without a great risk that he could do enormous damage?
This is the sort of problem that we face when we install a new program onto a computer.
We can make some progress by examining how social entities - real world organisations like companies - protect themselves and then using this social model to view computers as social organisations, so that we can use the same ideas to protect machines.
How do social organisations protect themselves?
For all practical purposes, programs are implicitly trusted in computers. This may not be the intention of computer users, but as users can hardly watch each command in programs' code being executed to make sure they are behaving reasonably, they are effectively placing such trust in programs.
Such implicit trust does not tend to occur in human organisations. If an employee of a company started to act maliciously then, unless he/she were clever enough to hide this, it would be noticed and the activity stopped. The detection of malicious actions may be the responsibility of co-workers or management. In some organisations, particularly large ones, some individuals may have the detection of malicious activity by other employees as their main task.
This detection is easier in real-world organisations because they are not the same as computers. In a computer most of the processing occurs below the level of human experience, in a 'black-box' way: it goes on behind the scenes, apart from what programs actually output. In contrast, actions by humans in real-world organisations occur, by definition, at the level of human experience and can therefore be more easily observed by humans.
Real-world organisations protect themselves by having humans detect undesirable acts which other humans attempt to perform, so that such acts can be prevented.
A critical reader could say that it is more likely that malicious actions would be detected after they were performed, rather than being stopped, so that the organisation could at best prevent the same individual from performing them repeatedly, but this need not concern us too much: whether the emphasis is on preventing actions being committed in the first place or on identifying those who have committed them is likely to depend on what resources the organisation allocates to security and, in any case, it is the idea of people in an organisation preventing acts by other people that is important rather than the details. I will be taking a somewhat idealised view of human organisations in this document.
This point may appear trivial: I may seem to be saying that we merely need programs in the system which detect malicious behaviour and stop it. Some readers may point out that if the solution were that easy it would have been done long ago and that the problem is that it is not that easy to detect malicious behaviour.
This could be a valid point, if this were my entire argument: in fact it is just the start. We do have to deal with the problem that unreasonable behaviour, for a person in a company or for a program in a computer, is hard to define in any general way. Altering the company's accounts may be considered dubious in some situations, but some people may have a reason to do it. Removing money from the company may be fraudulent when performed by one person, but a valid task when done by someone else.
If a large organisation simply outlawed certain behaviour and made the requirements for legitimate behaviour very restrictive then many valid tasks could not be performed: the security procedures would interfere with them. On the other hand, if an organisation made sure that the security requirements were lenient enough to allow the performance of all tasks that were necessary while banning all other tasks, many malicious tasks could be performed using the leniency built into the system for the valid tasks.
Despite this, human organisations still do not tend to suffer from the problem of having to place absolute trust in people. How do they manage this?
Different Rights for Different People
The way that human organisations tend to decide which actions should be allowed and which should be prohibited may seem to be common sense. If a receptionist were seen altering computer software then this would probably be regarded as a prohibited action, as would a cleaner leaving the building with bags of money or a bookkeeper taking the plans for a prototype product home with him/her.
It may look like common sense to us: we tend to have an intuitive idea of what people should or should not be doing in various situations. People have different rights to do various things depending on what their jobs are: if someone's job would not conceivably require them to perform a particular action then it seems wrong for them to perform it. We may make this judgement informally, but there is one sense in which, at least to some degree, it is formalised: an employee in a large organisation is likely to have a contract of employment which defines that employee's duties. If that employee is known to be doing a certain activity then the contract of employment can be consulted to determine whether or not the activity could possibly fall under the description of the employee's duties.
In reality, some judgement may be required. The contract of employment may define an employee's duties, but this may not give an absolute answer for every possible action that an employee could perform because contracts of employment tend to be stated more in terms of duties than permitted or prohibited actions. A contract of employment can be used, however, to help a human to judge what actions should be allowed or prohibited by an employee and some employees will have contracts of employment that do give detailed descriptions of allowed and prohibited actions. Taking a rather idealised view we can state:
In real-world organisations contracts of employment are used to distinguish between permitted and prohibited actions. A contract of employment defines a person's duties in a human organisation and what he/she is allowed to do to perform those duties: it gives a person privileges needed to perform his/her job. When an employee attempts to perform actions that are inconsistent with his/her contract of employment then this can be regarded as an attempted breach of security.
This avoids the problem of security restrictions preventing anyone from executing their duties or being so lenient that anyone can do anything:
A contract of employment matches privileges to duties, so that where the freedom awarded to an employee has to be very extensive for the duties that are to be performed they can be extensive and where they need not be extensive they can be restricted.
This raises the issue of how we can make sure that the privileges awarded to an employee match well with his/her duties. This job can be done by a human manager:
A human manager can construct a contract of employment and use his/her own knowledge of the employee's duties to determine appropriate privileges for that employee.
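The analogy can be made concrete: a contract of employment is essentially a machine-checkable list of privileges, with anything not explicitly granted being prohibited by default. Here is a minimal sketch in Python; all the names (Contract, is_permitted, the example privileges) are illustrative assumptions, not part of any existing system:

```python
from dataclasses import dataclass, field

@dataclass
class Contract:
    """A hypothetical 'contract of employment': the holder's duties
    and the set of actions he/she/it is permitted to perform."""
    holder: str
    duties: str
    privileges: set = field(default_factory=set)

    def is_permitted(self, action: str) -> bool:
        # Anything not explicitly granted is prohibited by default.
        return action in self.privileges

# A bookkeeper's contract grants accounting actions only.
bookkeeper = Contract(
    holder="bookkeeper",
    duties="maintain the company accounts",
    privileges={"read_ledger", "write_ledger"},
)

print(bookkeeper.is_permitted("write_ledger"))          # True
print(bookkeeper.is_permitted("take_prototype_plans"))  # False
```

The key design choice is default denial: the contract lists what is allowed, and everything else counts as an attempted breach of security.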
At this stage some readers will be thinking 'Really? You think things happen like that? You should see my company!' The point still stands that even if security in real-world organisations is not always perfect it tends to provide more security than exists in computers: we are taking an idealised view of it to get what we need.
Are Contracts of Employment Totally Effective?
I am not claiming that contracts of employment, used in this way, remove the need to place any trust in employees. It is possible, of course, that an employee's duties could require him/her to have rights that allow performance of some malicious activity. The idea is that the trust that has to be placed in each employee is reduced as much as possible. There are two intended outcomes:
- Malicious behaviour by people is prevented as often as possible.
- When malicious behaviour occurs it is limited in scope as much as possible and the consequences are less serious than they might have been without restrictions.
This second objective is important: while the ideal is prevention of malicious behaviour, total prevention is not needed to claim success and merely limiting the scope of a malicious employee's actions, so that their effects are trivial rather than serious, can be claimed as a success of the security procedures.
Prevention Rather Than Detection
A lot of human security measures are better at catching someone for an illicit action than preventing him/her from performing it in the first place, but there are ways in which this security can be made tighter.
One type of security is what I will call reactive security. This would involve someone watching what an employee is doing and consulting a copy of his/her contract of employment to decide whether or not any given action should be permitted. If an action is not permitted then it would hopefully be stopped, or at least the employee would be dealt with to prevent any further undesirable actions.
Another type of security is what I will call dependent security. This sort of security could be used to prevent someone from performing a malicious action if you were concerned that continuous monitoring might not allow you to stop them in time. With dependent security you actually make the person dependent on security staff to perform any but the most limited actions. Instead of being able to perform such actions directly he/she must ask the security staff to perform them on his/her behalf. This means that, no matter how fast the person is, he/she can never perform an action with which the security staff disagree: to stop an action being performed the security staff need not do anything except decide not to perform it on the person's behalf.
To see how dependent security could work let us return to our idea of an employee whose skills may be valuable, but in whom there is a distinct lack of trust: the 'turned' expert in the research institute that I mentioned near the start of this document. We want to use his/her skills in an organisation, and to allow him/her to work unobstructed, but we are worried that, if we have been fooled, our new employee could become treacherous at any instant. How would a dependent security system control such a potentially dangerous employee?
The solution may seem a little extreme for a social situation, but it could work. We could give the employee his/her own office, escort him/her to it at the start of the day, and escort him/her out of the office and off the premises at the end of the day. When he/she enters the office we could lock him/her in.
While in his/her office the employee can do what he/she wants provided that it stays in the office. He/she can read any written material that happens to be in there, write or type documents, file documents in his/her filing cabinet, retrieve documents from it and even destroy them. The scope of all of these actions is limited, however, because the employee is only allowed such freedom over his/her own office and his/her own documents.
From time to time the employee will need to interact with other parts of the organisation to effectively discharge his/her duties. When he/she wishes to do this he/she presses a button on his/her desk to summon a security officer. The relationship between the security staff and the employee is strange. They are both his/her servants (in the sense that they perform many actions on his/her behalf) and his/her masters (in that they can choose whether or not these actions are allowed).
When a member of the security staff is summoned he/she goes to a window in the office wall where he/she can use an intercom to talk to the employee. He/she finds out what the employee wants to be done and does it, provided that it does not go beyond the rights that the employee has been given and that are stated in his/her contract of employment.
As an example, if the employee wants to have a conversation with someone else in the building or outside it, the security officer uses a telephone placed near the window to make a phone call on the employee's behalf. The security officer tells the employee what the person on the other end of the line says and also relays to him/her what the employee says. If the security officer is unhappy, at any stage, he/she can choose not to make the call or not to relay any of the conversation to either party.
A similar procedure is followed for any other actions the employee may wish to perform that involve interacting with things that are outside his/her office, all the time maintaining dependency on the security staff. If the employee wants to look at a file stored elsewhere in the building then he/she can ask the security officer to go and get it and show it to him/her through the office window. If the employee wants to amend the information in a file stored outside his/her office then he/she can ask the security officer to make the amendments for him/her. If the security officer accepts that any amendments can be made to a file from outside the employee's office he/she could, of course, just pass it through a slot in the wall and pick it up later when the employee has finished with it.
We may need to search the employee when he/she enters the building and when he/she leaves it and, if we were concerned about theft of information, we might even require the employee to live in his/her office, so that no knowledge gained while in the building leaves the premises without approval.
The result of all this is that the employee's scope for committing a malicious action is very much reduced: to do so he/she would need to get a security officer to perform it on his/her behalf.
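The mediation at the heart of dependent security can be sketched in code. The point is that the restricted party never acts directly: it can only request actions, and refusing an action costs the officer nothing. This is a hypothetical sketch with invented names:

```python
class SecurityOfficer:
    """Mediator in a 'dependent security' scheme: the restricted party
    cannot act directly; every action must be requested from the
    officer, who performs it only if the contract allows it."""

    def __init__(self, permitted_actions):
        self.permitted = permitted_actions  # from the contract of employment

    def perform(self, action, do_it):
        if action not in self.permitted:
            return "refused"   # the officer simply declines to act
        return do_it()         # the officer carries out the action

officer = SecurityOfficer(permitted_actions={"fetch_file"})

print(officer.perform("fetch_file", lambda: "file contents"))  # file contents
print(officer.perform("leave_building", lambda: "gone"))       # refused
```

Note that the restricted party supplies only a request; the capability to act (the `do_it` callable here) is exercised by the officer, never handed over.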
Dependent security is the type that fits better with this article. The idea of dependent security like this in a real-world organisation may appear extreme, but that does not matter too much: if we are going to apply human security ideas to computers we may as well examine ideals of human security. In any case, I expect that something like this sort of dependent security has been used at times.
Employees from an Agency
We earlier introduced the idea of a contract of employment defining what an employee is allowed to do. A contract of employment may be created by the management of a company when a vacant position has to be filled, but this may require the management to be very knowledgeable about the job and it could also be time consuming for them. There is another way in which contracts of employment could be created, which we will now look at:
Let us imagine that a company wants to use contracts of employment to control the actions of its employees and, due to the rapidly changing nature of its work, often has temporary positions becoming vacant. The company's managers decide to use a recruitment agency to fill each temporary position as it becomes vacant, but they have a problem: creating a contract of employment for each temporary position can be difficult and time consuming and may sometimes require a deeper knowledge of the job than the managers actually possess. The managers decide to put the responsibility for creating contracts of employment onto the recruitment agency's contractors - the people who want to come and work at the company - rather than having them created within the company.
Having contracts of employment, which determine the privileges of employees within the company, created outside the company may seem to be a serious error, but it could actually be effective. This is how it could work:
The company has a temporary position to fill and tells a recruitment agency about it. The job description is given to the agency, which then contacts a number of possible employees who seem to meet the criteria and gives them the job description. Each of these prospective employees then writes his/her own contract of employment. This means that the person who wants the job is actually deciding what he/she will be allowed to do in the company if he/she gets it. The recruitment agency then sends the prospective employees to the company, where they are interviewed and the job is awarded to one of them. When the job is given to one of the prospective employees the contract of employment that he/she wrote becomes his/her contract of employment for the duration of the job and determines what he/she can and cannot do within the company.
At first glance, this may seem like an ideal opportunity for people to get into the company with malicious intent. Could a criminal not simply write him/herself a contract which allows him/her to do anything and then try to get the job?
It would not be that easy. Having the contract of employment prepared by the prospective employee does not mean that it escapes examination by the company's management. When each prospective employee is interviewed, the contract of employment that he/she has prepared, together with his/her justification for the privileges it grants, is examined by the company's management team. The team can weigh the privileges the candidate demands alongside all the other information about him/her, such as his/her experience and salary demands, in deciding whether or not to award him/her the job. A prospective employee could write him/herself a contract which allows him/her to do anything, but only at the expense of reducing his/her desirability.
It could be argued that the management team may have no idea what privileges are required for some jobs and could easily be misled by an employee with malicious intentions. As the management team interviews prospective employees, and examines contracts of employment, for more temporary positions, however, it gains experience of this issue and develops a sense of what privileges various jobs require. In addition, if a prospective employee's demands for privileges are excessive then this will become apparent when the management team interviews other candidates for the same position and examines their contracts. Legitimate candidates will offer contracts that demand the minimum level of privileges needed to do the job, to avoid appearing at a disadvantage compared with other candidates. The job interview process is, in part, a kind of bidding process in which there is an incentive to request fewer privileges than competitors. In practice, of course, there is a minimum level of privileges needed to do the job, towards which most legitimate candidates will tend, and other factors will also affect the decision about whom to hire.
An important principle is established here:
A contract of employment for an employee can be produced outside an organisation without security being compromised, provided that it is assessed before the employee enters the organisation. This assessment could involve comparison with the contracts of employment for other prospective employees.
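The 'bidding' comparison of externally produced contracts can be sketched very simply: given several candidates' self-written privilege demands, the assessor prefers, other things being equal, the smallest demand. A hypothetical Python sketch, with invented candidates and privileges:

```python
# Hypothetical candidates, each with the privilege set demanded
# by his/her self-written contract of employment.
candidates = {
    "alice": {"read_ledger", "write_ledger"},
    "bob":   {"read_ledger", "write_ledger", "open_safe", "admin_access"},
    "carol": {"read_ledger", "write_ledger", "open_safe"},
}

# All else being equal, the management team prefers the candidate
# whose contract demands the fewest privileges.
chosen = min(candidates, key=lambda name: len(candidates[name]))
print(chosen)  # alice
```

In reality the comparison would weigh experience, cost and so on as well, but the incentive structure is the same: excessive demands make a candidate visibly less attractive.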
A contract of employment may give an employee the right to perform a particular action whenever he/she wants, but there may be situations in which the action is so sensitive that this could be giving too much freedom. On the other hand, banning such actions totally may prevent an employee from doing his/her job.
One solution may be to make some of the privileges in the contract of employment conditional on permission from the company's management, so that the employee can perform the sensitive action, but he/she must ask the permission of a manager every time he/she wants to do it.
This is an example of a normal, unconditional privilege:
The employee may open the company safe.
If this were thought to be going too far then a version could be used that is conditional on permission:
The employee may open the company safe provided that he/she asks permission from a manager on each occasion.
It is assumed, in this document, that the conditions in contracts of employment cannot be broken. We could make this clear with more detail if we wished:
The employee may open the company safe. He/she must obtain permission from a manager and ask a security officer to unlock it for him/her. The security officer will only unlock the safe if a manager has given authorisation.
Contracts of employment could contain a mixture of unconditional privileges and privileges dependent on permission being granted. Many privileges would be unconditional on permission as the company's management would not want to be bothered by a request for authorisation of every action.
Applying the Social Model to Computers
We have more experience in making social organisations secure than we have for computers and it makes sense to apply a social model like this to computers.
In this sort of context we regard the applications programs in the computer as analogous to people in a social organisation: each is to be trusted to the minimum extent commensurate with execution of its duties.
Each program will have a contract of employment that is produced by the program's designers and comes with the program. That contract of employment will define the program's duties and indicate the privileges that the program needs to fulfil them. When a program is to be installed the human supervisor of the computer - which will typically mean the main user or owner of the machine for most homes and small businesses - will examine the program's contract of employment and decide whether or not it is acceptable. If he/she is considering a number of programs then he/she may reject one whose contract of employment awards extensive rights in favour of one with a more restrictive contract of employment. The human supervisor of the computer is equivalent to the management team in the social model that was discussed.
A security system in the computer will ensure that no program can perform actions that would require privileges outside the bounds of its contract of employment: the contract of employment will define what the program is allowed to do and will effectively control its behaviour.
When we discussed security in social systems we looked at the idea of a contract of employment giving privileges that are conditional on the permission of an organisation's management team. This idea will be used in the proposed system. A contract of employment for a computer program will be able to award some privileges that are conditional on the approval of the user on a case by case basis. Every time the program needs to exercise such a privilege it must seek permission from the user by means of a secure dialogue.
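The mixture of unconditional privileges, privileges conditional on a secure dialogue with the user, and outright prohibition could be checked by a routine along these lines. This is a hypothetical sketch; the contract format and all the names are assumptions, not an existing mechanism:

```python
def exercise_privilege(action, contract, ask_user):
    """Check an action against a hypothetical contract in which each
    privilege is marked 'unconditional' or 'conditional'. Conditional
    privileges trigger a secure dialogue, modelled here by ask_user."""
    mode = contract.get(action)
    if mode == "unconditional":
        return "performed"
    if mode == "conditional":
        return "performed" if ask_user(action) else "denied by user"
    return "prohibited"  # no privilege granted at all

contract = {"save_own_file": "unconditional", "send_email": "conditional"}

print(exercise_privilege("save_own_file", contract, lambda a: False))  # performed
print(exercise_privilege("send_email", contract, lambda a: True))      # performed
print(exercise_privilege("format_disk", contract, lambda a: True))     # prohibited
```

The user is only interrupted for conditional privileges; everything not mentioned in the contract falls through to prohibition by default.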
The Scope of Program Behaviour
The biggest security issues caused by programs are when they perform unwanted actions of wide scope. What do I mean by 'wide scope'? Some of the actions of a program could go beyond the program itself and we can consider the degree to which this happens for various actions.
As an example, if a program simply added two numbers together then the scope of this would be low: the program would merely be 'thinking' by itself. If the program were to access a file that was also accessed by other programs then this could be viewed as having wider scope: the program would now be starting to interact with things beyond itself. If a program were to interact with an external website or to send email then this could be taken to have still wider scope, as it would now be interacting with the world outside its computer system. It may not always be simple to define scope; for example, a program may alter a file that is read by another program, hugely affecting that program's behaviour when it later sends data over the internet.
It is not necessarily bad for a program to be acting with wide scope: for some programs it is necessary for them to be able to perform their tasks. The scope with which a program is acting, however, does have an effect on the seriousness of the consequences if the program should start to behave undesirably due to malicious intent or an error on the part of its creator.
Philosophy Behind the Method
The idea behind the method is to reduce the scope of each program as much as possible while still allowing it to function. This means that, if possible, a program will be limited to only storing and accessing a limited amount of its own private data and will not be able to interact with other programs or their data or directly affect things beyond the computer. Many programs, of course, will need to interact with the rest of the system with wider scope and the purpose of the contract of employment is to allow the user to make an informed decision in each case and to give program makers an incentive to reduce the scope with which their software can act as much as possible.
Many programs will need to perform actions with wide scope. In such cases the method is intended to at least limit the number and type of such actions so that only those really required for the program to fulfil its task can be performed.
Even when a program is accepted as having to perform actions with wide scope, it may be desirable that they are performed openly, with the consent of the user, and not behind the scenes. Much of what happens in computers goes on behind the scenes, largely because of the complexity of the systems involved, and this is why we have all kinds of security problems in the first place. This issue is dealt with by the secure dialogue, which can act as a check on wide-scope actions while still allowing them to be performed.
One security problem is the subversion of one program by another. This would be when a program causes another program to behave in an undesirable way to achieve some purpose that its own contract of employment prevents it from achieving directly. Subversion can take the following forms:
- Amending the program code of an application.
- Altering the data files associated with an application.
Subversion can be thought of as the 'hijacking' of a program, possibly by 'tricking' it into behaving maliciously by altering its data. When a security system such as the one I am proposing here is in use, subversion will become more of an issue. This is because contracts of employment will limit malicious behaviour: the last resort of a malicious programmer may be to create a program whose contract of employment, while not allowing the malicious behaviour that he/she wants, does allow it to alter the data of some other program for an apparently valid reason. The 'hijacked' program would be selected because its own contract of employment allows the malicious action that the programmer has in mind.
One example of subversion could be if a program lacked permission to send email, but another program had permission to send emails and used a file to store details of what was to be sent; for example, maybe it stores lists of contacts and sends email in batches each day. The program that lacked permission to send emails could potentially alter the data of the program that could send them and 'fool' it into thinking it had to send an email that had been produced by the first program. If the first program achieved this it would have circumvented its contract of employment.
While a program may attempt to interfere with another program to subvert it, it could also attempt such interference merely to disrupt its functioning in a much cruder way; for example, by deleting or corrupting the files of another program. While this, strictly speaking, is not subversion, it is the same sort of problem in that it involves programs interfering with each other and the solutions to it are the same. For this reason, I do not intend to make any real distinction between subversion and cruder sorts of interference in this article.
Another way in which a program could cause problems by accessing data that is not its own is by simply reading it without making any changes. This could allow a program to steal confidential data. As an example, a program which has permission to access the internet could examine a company's list of customers, maintained by its accountancy software, and send details of this list to a competitor. No alteration of information would take place, but valuable information would be stolen. As with the previous issue, while not strictly being subversion, this is also a problem of programs getting at data associated with other programs and the solutions are the same.
Protection against an application being subverted and creating widespread damage in the system is provided by the application's contract of employment. This means that, when assessing a contract of employment, the consequences of possible subversion of the program or of it causing subversion of other programs should be considered.
We have been discussing contracts of employment, and the responsibilities of programmers, in terms of limiting rights for programs, but any reputable programmer should not only want his/her program's contract to show that the program's freedom of action is as limited as possible, but also to show that the opportunities for another program to subvert it are limited.
This may seem slightly strange because it differs from the contract of employment simply imposing limitations on what a program can do. To provide security against subversion, a contract of employment should also limit the actions of other programs, to prevent subversion of the program to which the contract relates. This means that we are regarding the programmer as having the responsibilities of:
- limiting the actions of his/her program
- protecting his/her program and its data
To protect a program's code an application's contract should typically demand that its code is protected by the security system and that the security system disallows unauthorised modifications to its code by other applications. This is such a basic requirement that we may want the security system to warn a user when a program fails to demand this.
To protect a program's data, the program should, if possible, only access data in a private region, known as its working area. Such data cannot be accessed by other programs, so there is a higher standard of safety. For many programs this will not be possible, as they will need to access data which is accessed by other programs, but privacy should be demanded whenever possible. Even when it is not possible, the security system still gives some protection: each of the programs that could be doing the subverting should have its own contract that limits its potential to compromise other programs. No programmer should rely on this, however; he/she should ensure that his/her program has a contract of employment that demands as much privacy as possible.
It may seem strange to some readers that we would want a contract of employment to demand privacy for a program, when we may naturally expect it to limit a program. This makes sense in real-world situations, however: if we were employing a new member of staff we may want guarantees, not only that he/she will not do malicious things in other people's offices, but also that he/she will lock his/her own office in case anyone else should act maliciously. Though leaving one's own office open for any malicious person to wander in may not be a malicious act in itself, it is not prudent.
A contract of employment, then, can provide assurance in two main areas:
- When a contract of employment limits the scope of actions this is an assurance of ethics on the part of the program and its creators.
- When a contract of employment demands some privacy for a program and/or its data this is an assurance of prudence on the part of the program and its creators - a sign that the program's data is not going to be damaged by other programs and that other programs cannot alter the program's code, or its data, to subvert it for some malicious purpose. We want a guarantee, not only that the program is not a Trojan Horse, but that it cannot be turned into one.
Is it an ethical program? Is it a prudent program? These are the questions that a user should ask when examining a program's contract of employment.
The Importance of Private Data
This article places a lot of emphasis on having programs do as much as possible in private, only affecting anything else when there is a need for it. A big part of this is the idea of private data - data that only a particular program can access.
The idea of private data is that it places a responsibility onto programmers to do as much processing as possible with data that is purely associated with the program and does not relate to other programs or anything else on the system. The contract of employment chosen by the programmers of an application, which will limit a program's behaviour, can be employed by a user to assess how well the programmers have met this responsibility.
This is one use of the idea of private data: it allows increased trust in a program, as its ability to influence the rest of the computer system can often be reduced without compromising its behaviour. Privacy, therefore, does not serve only prudence, but also serves ethics.
The second use of private data is to protect a program's data from interference or theft by other programs as part of subversion attacks or other sorts of attacks or malfunctions. As has been said previously, this assurance of prudence, in the contract of employment, is part of the responsibility of a programmer.
Ownership of Data
An important idea in this document is the idea of ownership of data. When a program is accessing data we are interested in whether or not that access is authorised, and this has some dependency on which program owns the data: data accessed by a program is owned either by that program or by another program.
When a program creates a file we should presume that it has implicit permission to access it, though we may want a secure dialogue to occur each time any future access occurs. One reason is simple pragmatism: considering that the program has just created the file and presumably put data into it, it is already too late to worry about whether or not it should be accessing it.
The issue of ownership and right to access is, however, complex. While we may regard the program that made a file as having 'ownership', this may be simplistic: it may be that other programs should also be regarded as having ownership, that ownership should be transferred in some situations or that some paradigm other than file ownership would be more useful.
More consideration is needed on this matter. The purpose of this document is to propose a change in philosophy, rather than all the finalised details of a new system. This document will use ownership as an idea and will assume that the program that makes a file is the one that owns it to at least provide something that can be discussed.
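The working assumption above, that the program creating a file owns it, can be sketched as a simple registry kept by the security system. This is purely illustrative: the class and the program names ("mailer", "intruder") are hypothetical, and a real system would need the richer ownership paradigms just discussed.

```python
# Hypothetical sketch: the security system records which program created
# each file and treats that creator as the owner.

class OwnershipRegistry:
    def __init__(self):
        self._owner = {}  # pathname -> owning program

    def register_creation(self, program, pathname):
        # The creating program is presumed to own the file.
        self._owner[pathname] = program

    def may_access(self, program, pathname):
        # A program may freely access only files it owns; anything else
        # would need explicit permission in its contract of employment.
        return self._owner.get(pathname) == program

registry = OwnershipRegistry()
registry.register_creation("mailer", "c:/mail/outbox.dat")
```

Under this sketch, the email subversion example above is blocked at source: a program other than "mailer" asking for "c:/mail/outbox.dat" simply fails the ownership test.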
Comparison with Firewalls
Some users may see a similarity between the method proposed here and the way that firewalls currently work. As well as preventing intrusion into computers, firewalls are intended to prevent unauthorised access to the internet by programs that are already on the computer. A firewall typically maintains a list of programs that have permission to access the internet and if a program that is not on this list attempts internet access then the user is asked to give permission.
This kind of system is inadequate for the following reasons:
- It only deals with the implications of programs attempting to access the internet, which is a very narrow part of the overall threat to the system. Once on a computer, programs can do many undesirable things that do not need internet access.
- Being able to do many things that do not need internet access could even allow programs to gain internet access, despite the firewall's attempts to prevent it. A program could, for example, trick or 'subvert' another program which does have internet access into performing its undesirable operations for it, or it could attempt to damage the firewall system itself.
The control provided by firewalls is inadequate and the method proposed here is intended to provide an acceptable level of security. The deficiencies of firewalls are resolved in two ways as follows:
- Each program is treated in the same sort of way that current firewalls treat the entire computer. Instead of treating the entire computer as a system outside of which programs need special privileges to act, each program now becomes a separate system, with a barrier separating it from the rest of the computer or any other computers and special permissions being required to allow programs to act 'through' these barriers. This is equivalent to each program being guarded by its own firewall, though the permissions involved to make this work will be somewhat more complex than the allow internet access/do not allow internet access permissions in firewalls.
- The permissions for any program are set out in its contract of employment and are considered by the user at the time of selecting software or installation when his/her mind will be focussed on the issue of the software's permissions and not on some other task that he/she is doing. This is better than the user being interrupted to make very serious security decisions while in the middle of some other task.
Some readers may regard secure dialogues as being similar to the questions asked by firewalls of the form 'Program X is trying to access the internet. Do you want to allow it?' Both involve permission being asked relating to security matters while the user is performing some task, but that is where the similarity ends. Unlike the questions asked by firewalls, secure dialogues are used for operations for which some agreement has already been given at the time of installation and for which the contract of employment has specified that a secondary check is needed at the time of performing the action: firewall dialogues, which are more specific than this anyway, lack such a feature.
A Summary of What the System is Expected to Do
The security system is expected to:
- use contracts of employment to place limits on the scope with which programs can act, such as what data they can access, as a guarantee of ethics.
- use contracts of employment to allow programs to claim privacy for themselves and their data, as an assurance that the integrity of the program and its data is being preserved, as a guarantee of prudence.
- use secure dialogues to act as an extra check on some actions which a contract of employment may allow, conditional on user approval each time.
- prevent computer programs from acting in ways prohibited by their contracts of employment.
We will now examine all the items needed for the proposed method to work in more detail.
Components for the Method
Contract of Employment
Each program on a computer system has a 'contract of employment' which is installed at the same time as the program. The contract of employment defines the scope of the actions which the program needs to be able to perform within the system and also the level of privacy (freedom from intrusion) which the program requires to ensure that the program and its data are not compromised by other programs.
The contract of employment is made by the creators of the computer program, who decide what privileges their program's contract needs to request at the time of installation.
A contract of employment can be converted to a human-readable form for assessment by a human and each contract also includes a description of the program's purpose and a justification of the privileges which it demands for humans to examine. The contract of employment may also indicate the general type of the program and the security system (which will be discussed shortly) may alert the user if the contract of employment appears too extensive (for example, a screen saver requiring permission to perform a hard disk reformat) for the type of program.
When a program is about to be installed its contract of employment is presented to the computer's human user for assessment. The user only installs the program if the contract is acceptable.
This sort of contract of employment uses the same ideas as a human contract of employment. The creators of a contract of employment are free to request any privileges that they want, but whatever they request can be examined by a user before the program is installed. A user who is about to select a new program may examine a number of competing programs and their contracts of employment before making his/her decision. Reputable manufacturers will have an incentive to request the minimum privileges needed for their programs to work as failure to do this will make the program seem undesirable alongside other programs that can fulfil the same purpose with less extreme privileges being granted.
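As a rough illustration of the idea, a contract of employment could be a machine-readable manifest shipped with the program, which the security system renders into human-readable form at installation time. Everything here is an assumption for the sake of example: the field names, the permission vocabulary and the program "ExampleNotepad" are all hypothetical.

```python
# Illustrative sketch of a contract of employment as a manifest.
# All field names and permission values are assumptions, not part
# of any real system.

CONTRACT = {
    "program": "ExampleNotepad",
    "purpose": "Simple text editing of user-selected documents.",
    "permissions": {
        "working_area_max_bytes": 1_000_000,
        "open_save_files": "with_user_consent",  # i.e. via secure dialogue
        "internet_access": "denied",
        "global_operations": "denied",
    },
}

def human_readable(contract):
    """Render a contract so the user can assess it before installing."""
    lines = [f"Program: {contract['program']}",
             f"Purpose: {contract['purpose']}",
             "Requested permissions:"]
    for name, value in contract["permissions"].items():
        lines.append(f"  - {name}: {value}")
    return "\n".join(lines)
```

A user comparing competing programs could then compare these rendered contracts side by side, which is what gives makers the incentive to request minimal privileges.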
The Security System
A computer program, or set of programs, fulfils a security role on the system: this is the security system. When a program is about to be installed it is the security system that handles the process of converting its contract of employment to a human-understandable form and presenting it for examination by the user, as has been discussed. The user can then determine whether the contract of employment is reasonable or not, given the purpose of the program. If the user decides to install the program then the security system will ensure that the contract of employment which was examined by the user is associated with the installed program.
The security system also has the purpose of preventing any programs from breaking their contracts of employment. It could monitor program execution and any attempt by a computer program to break its contract of employment would not be permitted. Should this happen the user may be informed. In practice, a simple sort of reactive approach like this, in which the security system 'looks out for things', is probably not best suited to the way that computers work and dependent security is probably a better approach. This would mean that any program that needed to perform any operation with security implications would need to call the security system, which would check that the task was allowed by the program's contract of employment and perform it on the program's behalf only if it were permitted.
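The 'dependent security' approach described above can be sketched as follows: the program never touches the resource itself, but asks the security system, which checks the contract and performs the operation on the program's behalf only if it is permitted. The class, permission name and program names are hypothetical.

```python
# Minimal sketch of 'dependent security': every operation with security
# implications is performed by the security system on the program's
# behalf, after checking its contract of employment.

class ContractViolation(Exception):
    """Raised when a program attempts something its contract forbids."""

class SecuritySystem:
    def __init__(self, contracts):
        self._contracts = contracts  # program name -> set of permissions
        self._files = {}             # pathname -> data

    def write_file(self, program, pathname, data):
        if "write_files" not in self._contracts.get(program, set()):
            # The attempt is refused; the user may also be informed.
            raise ContractViolation(f"{program} may not write files")
        self._files[pathname] = data

sec = SecuritySystem({"editor": {"write_files"}, "game": set()})
sec.write_file("editor", "doc.txt", "hello")
```

The advantage over a purely reactive monitor is that there is nothing to 'look out for': a program with no permission has no code path to the resource at all.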
The security system provides facilities to allow the effects of program installation to be reversed to varying degrees. These include facilities to obtain lists of files currently on the system, facilities for uninstallation of programs and facilities for removal of files from the system. This is more likely to be practical when the scope of programs is limited.
The security system plays a role in the construction of new contracts of employment. Some programs, such as language compilers, will need to make new programs and issue them with contracts. The security system can provide services to programs like this to reduce the workload on the programmer.
Secure Dialogues
Secure dialogues are a way of ensuring that, when the user's permission is requested to perform an operation, the process is under the control of the security system. Standard dialogues are generated for this purpose by the security system and the user's replies are used to give permission to applications.
The system must be engineered in such a way that the following conditions are met:
- It is apparent to the user when a secure dialogue is occurring.
- The appearance of the secure dialogue on screen (or with whatever output is used) cannot be interfered with in such a way that the secure dialogue appears to be presenting different information.
- The input given to the secure dialogue by the user cannot be interfered with in such a way that the decisions given to the secure dialogue are different from those that the user intended.
- A secure dialogue can only be initiated by the security system. There is no problem if another application attempts to display a 'fake' secure dialogue, as this will not be a genuine secure dialogue and it will not be able to return information to the security system to grant permission for an action.
- The application which is using the secure dialogue to request permission for an operation must be clearly identified.
These requirements will form the core of the secure dialogue system. In conjunction with a contract of employment which the user can view they will give an assurance that programs are not damaging the system.
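The fourth requirement above - that a 'fake' secure dialogue cannot grant anything - can be illustrated with a small sketch in which only the security system can mint a valid permission token, because only it holds the signing secret. This is just one way the property might be realised; in practice it would more likely be enforced by the operating system, and all names here are hypothetical.

```python
# Sketch: permission grants are signed by the security system, so a fake
# dialogue drawn by a malicious application cannot produce a valid grant.

import hashlib
import hmac

class DialogueAuthority:
    def __init__(self, secret):
        self._secret = secret  # known only to the security system

    def grant(self, program, action):
        # Issued only after a genuine secure dialogue with the user.
        msg = f"{program}:{action}".encode()
        return hmac.new(self._secret, msg, hashlib.sha256).hexdigest()

    def verify(self, program, action, token):
        expected = self.grant(program, action)
        return hmac.compare_digest(expected, token)
```

A token granted to one program for one action verifies for that pair only; presenting it on behalf of another program fails.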
It is likely that secure dialogues would take three basic forms. The first is the obvious one: the secure dialogue would involve a request for permission to perform a specific action being made to the user (meaning it would most likely be displayed on the computer's screen) and the response would be a simple 'yes' or 'no'. I will call this the Boolean secure dialogue. 'Boolean' has been chosen as it refers to the branch of mathematical logic associated with yes/no decisions.
The second type of secure dialogue is a little more sophisticated and we will consider some issues before it is described.
A contract of employment should restrict a program's actions as much as is possible without compromising the program's functionality or ease of use. If we ignored ease of use this would mean that all privileges in a contract of employment should be conditional on user permission. The problem is that this would make a program almost impossible to use: it would constantly be asking for permission from the user before performing tasks. Clearly, permission should be asked on some occasions, but there is a way in which we can allow a program to ask permission from the user without causing undue inconvenience.
In existing computer systems programs often interact with users using the 'open file' and 'save file' dialogues. This is where the program needs to know which file to open for processing or what file name and directory to use to save some data. This is significant. Allowing a program to write data to files that consume storage capacity presents security issues. Files created by a program may also be visible to other programs (if its privileges allow it to create files of this type) and this could allow programs to interact with each other, creating a further security issue. When programs open existing files they could be opening files made by other programs (if permissions allow it) and reading or modifying data that they saved. The actions of saving and opening files present a number of security issues as they have potential for allowing programs to consume system resources and allowing programs to interact with each other by using files as an intermediary.
Combining these, this means:
- the operations of opening and saving files present security issues and would be ideal candidates for use of a secure dialogue in which permission is requested from the user before they are performed.
- the operations of opening and saving files are ones in which computer programs already typically engage in a dialogue with the user - the 'open file' and 'save file' dialogues - in which the user selects a file name and directory on the computer.
These could be combined in a second type of secure dialogue, which would replace the 'open file' and 'save file' dialogues. As well as fulfilling their existing purposes, it would also act as a secure dialogue, allowing the security system to ask permission, on behalf of a program, to save or open a file. I will call this second kind of secure dialogue a filename secure dialogue.
A filename secure dialogue will not look substantially different to the user than the usual 'open file' or 'save file' dialogues. The only differences are that it will be made clear that a secure dialogue is in operation and the program making the request will be identified. The security system will ensure that the dialogue is not being tampered with to allow one program to 'hijack' the secure dialogue of another.
The idea of a filename secure dialogue is that it allows a program to have its permission to open or save files made conditional on user approval in its contract of employment. Every time that the program needs to do this it has to present the filename secure dialogue to the user, in much the same way that programs currently ask for filenames, and this reassures the user that the program is not accessing files behind his/her back. If a program's file access is made conditional on user permission then the user knows that he/she is being consulted each time the program needs to access a file and that he/she will not find that the program has corrupted half the files on his/her computer or has been altering his/her sales ledgers.
It should be noted that a filename secure dialogue would not automatically have to be used by programs to request permission to handle files. It would only be needed if a program's permission to handle files were made conditional on user approval.
We can now get to the idea of what the third type of secure dialogue would be. This will encompass quite a lot. We can imagine various privileges for which the user's decision does not involve a simple yes/no or a decision on where to save a file and what name to give it or which file to open. A privilege could involve the user giving more information in his/her authorisation. As an example, a type of privilege could be conditional on the user stating how large a file can be or what information a program is allowed to send over the internet. Rather than bother with the detail of how more complex conditional privileges will work I will regard them collectively as the third type of conditional privilege, and I will call the associated dialogue the complex secure dialogue.
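The three forms could be summarised as a simple dispatch inside the security system, choosing which dialogue to present for a given permission request. The request fields used here ("needs_filename", "extra_detail") are invented for illustration only.

```python
# Sketch: choosing which of the three secure dialogue types a
# permission request needs. Field names are hypothetical.

BOOLEAN, FILENAME, COMPLEX = "boolean", "filename", "complex"

def dialogue_type_for(request):
    if request.get("needs_filename"):
        return FILENAME   # user picks a file name and directory
    if request.get("extra_detail"):
        return COMPLEX    # user supplies limits or other information
    return BOOLEAN        # plain yes/no decision
```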
Files
The definition of the term 'file' within the context of this system is slightly different from the normal concept of a file. A file, in this proposal, has the following properties.
- It is identified as a single unit, with a single pathname, by the security system in security dialogues. This means that if a program is given permission, by means of a secure dialogue, to create file x:/example/test, then this pathname may refer to a number of separate files at the level of the operating system, each with its own pathname, provided that the security system ensures that the files are always logically associated with the x:/example/test pathname. If a file is moved, so that its pathname changes, then the association between the operating system level files and the new filename must be maintained.
- It is identified as a single unit, in the way described above, by the security system, for the purpose of providing information about files. For example, if the security system is requested to list all files associated with an application, it will list the pathnames as described above.
- It is identified as a single unit, in the way described above, by the security system, for the purpose of performing operations on files. For example, if the security system is instructed to erase a file with a particular pathname then the association between that pathname and any relevant operating system level files will be used to erase all such files.
It will be apparent from this that the concept of a file, within this context, is a high level one, and that the term 'file' is very similar to the term 'document' used in the instructions for an application.
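The mapping between one high-level pathname and several operating-system level files might be kept in a table like the sketch below, so that renaming or erasing the logical file acts on the whole group. The class and pathnames are illustrative only.

```python
# Sketch of the high-level 'file' concept: one security-system pathname
# may correspond to several operating-system level files, and operations
# such as renaming or erasing act on the whole group.

class LogicalFiles:
    def __init__(self):
        self._parts = {}  # logical pathname -> list of OS-level files

    def associate(self, logical, os_files):
        self._parts[logical] = list(os_files)

    def rename(self, old, new):
        # Moving the file keeps the association with its parts intact.
        self._parts[new] = self._parts.pop(old)

    def erase(self, logical):
        # Erasing the logical file removes every associated OS-level file.
        return self._parts.pop(logical)

lf = LogicalFiles()
lf.associate("x:/example/test", ["x:/sys/0001.dat", "x:/sys/0002.idx"])
```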
When programs write information to files there could be security implications if these files are 'visible' to other programs and these files could also be used by an application to consume disk space. It should be noted that both of these security issues only arise if an application is allowed to make files that are visible to other applications or is allowed to use an unlimited amount of storage space for files and both of these depend on the permissions in its contract of employment.
The Working Area
Programs will often need to store data and the working area is intended to allow this without any security implications.
Each program has a working area and it is a part of the storage space set aside for the exclusive use of that program. Files in the working area are not 'visible' to other programs, with the exception of the security system. This means that other programs cannot read data in these files or make any changes to them.
A program's permissions could allow its working area to consume storage space without the security system imposing any limit, but it will be more usual for a program's contract of employment to impose an upper limit on the size of the program's working area to prevent a rogue program from compromising a computer by consuming unreasonable system resources.
Security implications exist when programs are able to interact with data that is not their own. The idea of the working area is that it provides each program with a space that is 'insulated' from the rest of the computer and serves as nothing more than a 'memory aid' for the program. The invisibility of the data in a working area to other programs serves three purposes:
- It stops another program from maliciously or accidentally damaging that data: a program's contract should not just restrict its own capability to do damage, but should also prevent its data from being used as part of a malicious act by another program.
- If the size of the working area is limited, it stops the program from consuming unlimited resources to store data in the working area.
- It encourages programmers to use private data, rather than data shared between programs, where possible, allowing increased trust that their programs are not intended to cause disruption.
As the security implications of a program using its own working area are low, a program's permissions would not normally require secure dialogues to be completed to save or open files in its own working area.
Some readers may be thinking that I do not understand how files need to be accessed in computers and that I have overlooked the fact that many files need to be shared between programs. This is not the case, however: I am aware of the amount of sharing of data that happens in computers and it is merely other data that would be stored in a working area. As an example, it is unlikely that a user's document would be saved in the working area of a word processing program or a piece of artwork saved in the working area of a graphics program: the working area is not really intended for documents like this but is for other 'behind the scenes' storage of information that a program may need.
In the context of security, the files in the working area might best not be regarded as real files in the sense that files outside program working areas are. Instead, it could make more sense to regard the working area as part of the program, which can change during program execution.
I should point out that when I describe the working area as 'part of the storage space' I do not mean this in any physical sense. The working area for a program will be more like a directory which, on most modern computers, could have its contents scattered all over the backing storage medium and located by means of various record tables in the filing system. Similarly, a working area would not actually be a physical section of a hard disk or some other storage medium, but would simply be a collection of files on the storage medium that the security system only makes available to the program that made them.
I should also point out that, just because I have said that files in a working area must be 'invisible' to other applications, it does not follow that files outside the working area are 'visible' to other applications merely by virtue of being outside the working area. The visibility of a program's other files depends on the permissions in its contract of employment.
The Types of Permission in the Contract of Employment
I have said that this document is not intended to propose the full, finalised details of a security system, yet I will describe how a possible system may work. This is simply so that an example is available which could provide something to consider. What follows now is a list of the sorts of permissions that a contract of employment could provide to a program or withhold from it:
1. Global Operations
- (a) Permission to perform any global operation, without user consent.
- (b) Permission to perform any global operation, with user consent.
- (c) Permission to perform specific global operations, without user consent.
These actions are extreme. Global operations are ones that affect the entire computer merely by virtue of being performed. Examples are reformatting hard disks or shutting the machine down. Few programs would be expected to have a contract of employment that allows such actions and their inclusion in a contract of employment should generally be treated with suspicion.
These permissions mention 'user consent'. This refers to whether or not a secure dialogue is required to perform the action. If a contract of employment gives permission for an action with no user consent then the action may be performed without asking the user. If the permission is only given subject to user consent then the action can be performed, but some sort of secure dialogue is required each time in which the user gives consent.
I should point out that the list that I am presenting here is somewhat simplified. As suggested in (c) there would probably not simply be a single permission to act globally, but separate permissions for a range of global acts, such as reformatting or shutting down, though a general permission allowing all global actions may also be available. This also applies to the other permissions stated in this list: in reality many more specific permissions may be needed where I have used a small number to put the idea across.
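To make these ideas more concrete, a contract of employment could be sketched as a data structure mapping named actions to consent levels. This is only an illustration: the names ConsentLevel and Contract, and the action strings, are invented for this sketch and are not a proposal for a real vocabulary of permissions.

```python
from enum import Enum

class ConsentLevel(Enum):
    DENIED = 0           # the action is not permitted at all
    WITH_CONSENT = 1     # permitted, but only via a secure dialogue each time
    WITHOUT_CONSENT = 2  # permitted silently

class Contract:
    """A contract of employment as a mapping from named actions to
    consent levels. Any action absent from the contract is denied."""
    def __init__(self, permissions):
        self.permissions = dict(permissions)

    def allows(self, action, user_consented=False):
        level = self.permissions.get(action, ConsentLevel.DENIED)
        if level is ConsentLevel.WITHOUT_CONSENT:
            return True
        if level is ConsentLevel.WITH_CONSENT:
            return user_consented  # the secure dialogue must have said yes
        return False

# A disk utility might hold one specific global permission, and even that
# only with user consent, reflecting the suspicion expressed above:
disk_tool = Contract({"reformat_disk": ConsentLevel.WITH_CONSENT})
```

Note that denial is the default: an action that the contract does not mention cannot be performed even with user consent, which matches the idea that a program's powers are exactly those listed in its contract.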
2. Use of System Resources
- Permissions relating to the maximum size of the application's working area. This does not include high level files outside the working area. However, within its own allocation of system resources a program can create such operating system level files as it wishes, provided that these are not allowed to be used to break its contract of employment.
- Permission to have any data files which are hidden from the user. If this permission is not requested then the user may use the security system to view a list of all the files currently on the system which have been created by the application. If the permission has been requested then the list will not show files which the application has chosen to hide.
- Permission to create data files, associated with this application, without the consent of the user.
- Permission to create data files, associated with this application, with the consent of the user.
- Permissions relating to the amount of system storage available for files outside the program's working area if a limit is placed on this.
- Permissions relating to the maximum size of a file outside the program's working area, or on the maximum size of file that can be stored without user consent.
These permissions relate to how a program is allowed to use the system's storage facilities for its own files.
It should be noted that just because a file is outside the working area, this does not automatically imply that it is 'visible' to other programs. This depends on the program's contract of employment.
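As a rough sketch of how the storage permissions above might be enforced, the limits could be checked before any file is stored. The field names here (working_area_limit and so on) are invented purely for illustration.

```python
class StorageLimits:
    """Storage-related limits taken from a contract of employment."""
    def __init__(self, working_area_limit, outside_limit, max_file_no_consent):
        self.working_area_limit = working_area_limit    # total bytes allowed in the working area
        self.outside_limit = outside_limit              # total bytes allowed outside it
        self.max_file_no_consent = max_file_no_consent  # largest file storable without consent

    def may_store(self, size, in_working_area, used, user_consented=False):
        """Decide whether a new file of `size` bytes may be stored, given
        `used` bytes already consumed in the relevant area."""
        limit = self.working_area_limit if in_working_area else self.outside_limit
        if used + size > limit:
            return False  # the hard quota is exceeded regardless of consent
        if not in_working_area and size > self.max_file_no_consent:
            return user_consented  # large files outside need a secure dialogue
        return True
```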
3. Alteration of the Program's Own Files
- (a) Permission to alter files, associated with this application, without the consent of the user.
- (b) Permission to alter files, associated with this application, with the consent of the user.
- (c) Permission to alter unprotected files, associated with this application, without the consent of the user.
- (d) Permission to alter unprotected files, associated with this application, with the consent of the user.
- (e) Permission to alter protected files, associated with this application, without the consent of the user.
- (f) Permission to alter protected files, associated with this application, with the consent of the user.
These permissions relate to how much freedom the program is being given to alter files that already exist on disk and that are 'owned' by it. The first permission (a) allows it to alter its own files without consent, and this may be quite common. The second permission (b) allows it to do this, but only when consent is given each time. Permissions (c), (d), (e) and (f) relate to 'protected' and 'unprotected files'. It may be useful to allow some files to be marked as 'protected' by the user so that special permission is needed to access them. This could relate to the facility to mark a file as 'Read Only' already present in the Windows™ operating system.
These permissions are not expected to be needed by the program to operate within its own working area.
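The protected/unprotected and consent distinctions above might combine as in the following sketch, in which a contract is reduced to a set of granted permission names; the names themselves are invented for this illustration.

```python
def may_alter_own_file(contract, file_is_protected, user_consented):
    """`contract` is a set of granted permission names (invented for this
    sketch). Protected files need stronger permission than unprotected ones."""
    if file_is_protected:
        if "alter_protected_silently" in contract:
            return True
        return "alter_protected_with_consent" in contract and user_consented
    if "alter_unprotected_silently" in contract:
        return True
    return "alter_unprotected_with_consent" in contract and user_consented

# A typical word processor: free to alter its own unprotected files, but a
# secure dialogue is required for files the user has marked as protected.
word_processor = {"alter_unprotected_silently", "alter_protected_with_consent"}
```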
4. Read-Only Access to Files Associated with Other Programs
- (a) Permission to read files, associated with other applications, without the consent of the user.
- (b) Permission to read files, associated with other applications, with the consent of the user.
These permissions relate to how much freedom the program is being given to read files that already exist on disk and that are 'owned' by other programs. The first permission (a) allows it to read the files of other programs without user consent. The second permission (b) allows it to do this, but only when consent is given each time.
It should be noted that a program can only make use of this permission if the program whose files are being read also consents to this: a program cannot force its way into another program's files if that program's contract demands that these files are private. These permissions will never allow a program to access the working area of another program, which must always be private.
5. Alteration of Files Associated with Other Programs
- (a) Permission to alter files, associated with other applications, without the consent of the user.
- (b) Permission to alter files, associated with other applications, with the consent of the user.
These permissions relate to how much freedom the program is being given to alter files that already exist on disk and that are 'owned' by other programs. The first permission (a) allows it to alter the files of other programs without user consent. The second permission (b) allows it to do this, but only when consent is given each time.
It should be noted that, as with the previous permissions, a program can only make use of this permission if the program whose files are being altered also consents to this: a program cannot force its way into another program's files if that program's contract demands that these files are private.
These permissions, as I have described them here, are probably simplistic. There will be situations in which we would not want simply to prohibit all access to the data of other programs, or to allow it for any data, even with user consent, but would rather specify which data can and cannot be accessed by a program. As an example, a contract of employment may ideally give a program permission to access nothing except word processor and spreadsheet files, or give it permission to access all files except those owned by the accountancy system.
The requirements may go further than this. We may want to specify that a program can only access some of the data owned by another program, and this suggests the concept of each program having a number of defined 'areas' for its data files, so that a program, for example, may have permission to access files owned by the accountancy system and in 'area 1' of that system, but not files in 'area 2'. These areas need not be like directories and need not define where data appears to be to the user: they could be an alternative grouping of files purely for security purposes, in addition to the grouping of whatever directories the files are in. The way that the areas of a program are set up could be defined in its contract of employment and each area could be identified with a meaningful name.
These issues would have some complications and would need more consideration.
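The idea of areas might be sketched as follows. All of the names here (Program, area_of, the paths and area labels) are invented for illustration, and the grouping is deliberately independent of any directory structure.

```python
class Program:
    """A program with 'areas': security-only groupings of its files,
    independent of any directory structure. Names are illustrative."""
    def __init__(self, name):
        self.name = name
        self.areas = {}  # area name -> set of file paths

    def assign(self, area, path):
        self.areas.setdefault(area, set()).add(path)

    def area_of(self, path):
        for area, files in self.areas.items():
            if path in files:
                return area
        return None  # not grouped into any area

def may_read(reader_grants, owner, path):
    """`reader_grants` holds the (owner name, area) pairs that the reader's
    contract of employment permits it to access."""
    return (owner.name, owner.area_of(path)) in reader_grants

accounts = Program("accountancy")
accounts.assign("area 1", "C:/data/invoices.dat")  # shareable ledger data
accounts.assign("area 2", "C:/data/payroll.dat")   # confidential payroll data

# An emailing program granted access only to 'area 1' of the accountancy system:
mailer_grants = {("accountancy", "area 1")}
```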
6. Deletion of the Program's Own Files
- Permission to delete files, associated with this application, without the consent of the user.
- Permission to delete files, associated with this application, with the consent of the user.
- Permission to delete unprotected files, associated with this application, without the consent of the user.
- Permission to delete unprotected files, associated with this application, with the consent of the user.
These permissions allow a program to delete files that it owns.
These permissions are not expected to be needed by the program to operate within its own working area.
7. Deletion of Files Associated with Other Programs
- Permission to delete files, associated with other applications, without the consent of the user.
- Permission to delete files, associated with other applications, with the consent of the user.
- Permission to delete unprotected files, associated with other applications, without the consent of the user.
- Permission to delete unprotected files, associated with other applications, with the consent of the user.
These permissions allow a program to delete files owned by other programs.
It should be noted that, as with the previous permissions, a program can only make use of these permissions if the program whose files are being deleted also consents to this: a program cannot force deletion of another program's files if that program's contract demands that these files are private.
8. Creating Other Programs
- Permission to create any other applications which must have the same contract of employment as the original program.
- Permission to create any other applications, each of which has a new contract of employment which must not have any permissions that the original program lacks.
- Permission to create any other applications which can have contracts of employment that can exceed that of the program making them.
Permissions like these are needed to allow programs such as compilers (tools that allow humans to make programs) to be written. A language compiler would probably need to allow users to make a program and provide it with a contract of employment, so compilers would need to be able to make a program and produce a contract of employment to be associated with it.
With compilers there is the possibility of this facility being used to make programs with extensive contracts of employment that could then be used to threaten the system. This makes compilation software a particular cause for concern.
One solution is never to allow contracts of employment made by programs to exceed the permissions of the program that made them. This could mean that the compiler's contract of employment would have to be extensive.
Another solution is to allow a compiler to make programs with extensive contracts of employment and demand that they are presented to the user for consideration at the time, possibly using the secure dialogue system, or to rely on the fact that the security system will present the contract to the user before any program can start acting anyway.
A range of solutions exists here and this would require more consideration.
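The first solution, under which a contract made by a program can never exceed the permissions of the program that made it, reduces to a subset test if a contract is modelled (purely for illustration) as a set of permission names:

```python
def may_issue(parent_contract, child_contract):
    """True if every permission in the child contract is also held by the
    parent, i.e. the child's contract does not exceed the parent's."""
    return child_contract <= parent_contract  # set subset test

# A compiler with a deliberately generous contract of its own (the
# permission names are invented for this sketch):
compiler = {"create_files", "alter_own_files", "create_programs"}
```

Under this rule the compiler can issue any contract drawn from its own permissions, but nothing more, which is exactly why its own contract would have to be extensive.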
9. Uninstallation of the Application
- Permission of the application to be exempt from an attempt to uninstall it by the security system.
- Permission of the application to make its data files exempt from an attempt to uninstall them with the application.
These are rather extreme privileges.
10. Internet Access
- Permission of the application to communicate over the internet without consent.
- Permission of the application to communicate over the internet with consent.
- Permission of the application to access websites without consent.
- Permission of the application to access websites with consent.
- Permission of the application to send email without consent.
- Permission of the application to send email with consent.
These permissions are very similar to those already given to programs by users of firewalls, except that the permissions are stated in a contract rather than being given at run-time. When consent is required it is, of course, given at run-time by means of a secure dialogue.
11. Access to Input and Output Devices
The existence of programs such as 'keyloggers'  suggests that we may want permissions to control access to input and output devices. The need to present secure dialogues and contracts of employment without interference also provides a strong case for controlling access to input and output devices.
As an example, a program's contract of employment could be displayed on the screen for a user to assess and another program could alter what is displayed on the screen to make the user think that he/she is agreeing to a very limited set of permissions.
An obvious solution to this is for programs not to have automatic and direct access to input and output devices. The security system should always oversee this and ensure that viewing of contracts of employment and secure dialogues, and answers relating to these given by the user, are not being interfered with by programs. The security system should also ensure that any permissions relating to access to input and output devices are also observed.
The idea of using permissions and the security system to restrict access to input and output devices makes a lot of sense anyway, in the context of the rest of the system. Anything being typed on the keyboard, for example, is presumably data intended for some program. The ideas of ethics and prudence mean that that data should only be available to the program that is supposed to receive it.
As an example, let us suppose that you are typing data into an accountancy program and that the accountancy program's contract of employment demands that files that it creates are protected from access by most other programs. This would mean that privacy is wanted for data which is managed by the accountancy program. This could be futile, however, if another program is able to read keystrokes as data is entered into the accountancy program because the data can already be stolen before it even gets into the accountancy program's files.
Access to some parts of the system, such as keyboard input, clearly needs to be controlled. A program should require permission in its contract to access various input and output devices. Even when its contract does grant permission it does not follow that that permission should be granted at any time. We may, for example, want only a single 'active' program to be able to access keyboard input and we may want it to be clear to the user which program is the active one.
Ideally, these permissions would not need a lot of programming in the security system for each individual input or output device. A general system should be available for controlling access to input and output devices, with options that can be set in the security system for each particular device. The modern tendency to treat devices as files may simplify this somewhat.
It is also worth noting that programs such as keyloggers could have legitimate uses, such as monitoring, and may be installed with the user's approval. Some facility may be needed in the security system to allow programs such as this to work with user approval. Whatever approach is used to resolve this, it must never be at the expense of a contract of employment, which must always remain inviolate. One solution would be for programs' contracts of employment to have permissions that declare whether or not various inputs and outputs are private. As an example, a program's contract may specify that its keyboard input is private, meaning that when the program is accepting keyboard input no other program can read that input at the same time, or it may specify that other programs can read this input. Treating devices as files, which is common now, could allow this issue to be dealt with using the system of permissions provided for file access.
These sorts of permissions are only provided to give an idea of what a security system may do. The reality would be more complex than this and other sorts of permissions would be needed. As an example, some provision would be needed for ensuring that individual programs do not consume too much processor time and that execution of programs can be halted on a user demand.
Privacy Rights in the Contract of Employment
As I mentioned previously, a contract of employment should assure the user that the program meets appropriate standards of ethics and prudence. The permissions that have just been discussed define what the program is allowed to do and relate to ethics. We will now give some consideration to prudence. This involves guaranteeing that a program is safe from interference by other programs and involves the program claiming 'privacy rights' to assure the user that it is exercising prudence.
Privacy rights are basically a form of 'employee rights' entitling an application to perform its duties without interference by other applications. Their purpose is to allow an application to protect itself from subversion and other sorts of interference by other programs. The following privacy rights seem relevant and may be stated by a program:
- Altering information in the application's reserved space, by other applications, is not allowed.
- Altering of any data files associated with this application, by another application, is not allowed.
A program's privacy rights are never overruled by the permissions in the contract of employment of another program. If a program has permission to access data owned by another program that has privacy rights that do not grant access then the data cannot be accessed. For a program to access another program's data its contract needs to give it permission to do the accessing and the program that owns the data must not have demanded privacy rights that prevent such access.
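The rule that privacy rights always override another program's permissions can be sketched as a two-sided check. The permission names and sets here are invented for illustration.

```python
def may_access(requester_permissions, owner_privacy_rights, action):
    """Access requires BOTH a permission in the requester's contract AND
    no conflicting privacy right demanded by the owner."""
    granted = action in requester_permissions
    blocked = action in owner_privacy_rights  # the owner demands privacy here
    return granted and not blocked

editor_perms = {"read_other_files", "alter_other_files"}
# The accountancy system allows its files to be read but never altered:
accounts_privacy = {"alter_other_files"}
```

The editor's contract grants alteration of other programs' files, but the accountancy system's privacy right blocks it anyway; its read permission, which nothing blocks, succeeds.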
When I discussed access permissions I mentioned that the treatment here is probably simplistic. The same applies to privacy rights. There will be situations in which we would not want simply to demand privacy from all programs, or allow access by any programs, even with user consent, but would rather specify which programs can access the data or which programs cannot.
I also discussed the possibility that we may want different files for the same program to be treated differently and raised the idea of areas, where an area is merely a grouping of files for security purposes. Each program could have a number of defined areas and other programs could have different permissions for accessing these areas. In the context of privacy rights this would simply mean that a program's contract of employment could demand different privacy rights for different areas. This would fit with the idea of the working area quite well: the working area would merely be a special area for which total privacy is demanded automatically.
As with permissions, there are issues here that would need further consideration.
An Example of a Secure Dialogue
A word processing program has permission to amend files associated with itself, but only after consent by the user. The user wishes to edit a letter. The program now needs to obtain permission, via the security system, to amend this file.
A standard 'open file' secure dialogue appears. The user is allowed to select the file which he/she wishes to modify. The file is then opened and the user amends the document.
During the amendment, a permission commentary message is displayed on the screen, unobtrusively, to the effect that a file amend is in progress.
When the user saves the document the relevant file is updated. Depending on how the program is written this could actually involve updating several operating system level files associated with the pathname of this document.
The amendment has now ended. The permission commentary message is removed.
Should a further alteration to this document now be required the program must use the secure dialogue method again to gain access.
It should be noted that the whole process, from the point of view of the user, differs little from using the standard 'open file' dialogue box to open the file. The only changes are:
- The program must use the dialogue to open the file. If a user has not requested opening of a file then it cannot be changed.
- The 'open file' dialogue has now been replaced by a secure dialogue serving the same purpose. The only difference, to the user, is that it will be clearly indicated as such and the user will be made aware that he/she is giving permission to open a file for amendment.
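The word processor example could be sketched as follows, with the security system mediating both the secure dialogue and every write. All names are invented, and `ask_user` stands in for a real secure dialogue presented by the security system.

```python
class SecuritySystem:
    """Mediates file amendment on behalf of programs."""
    def __init__(self, ask_user):
        self.ask_user = ask_user  # the secure dialogue, supplied by the system
        self.open_grants = set()  # files the user has approved for amendment

    def request_amend(self, program, path):
        # The secure 'open file' dialogue: the user either consents or refuses.
        if self.ask_user(f"{program}: open '{path}' for amendment?"):
            self.open_grants.add(path)  # the permission commentary would begin here
            return True
        return False

    def write(self, path, data, store):
        # Every write is checked against the consents currently in force.
        if path not in self.open_grants:
            raise PermissionError("no user consent to amend this file")
        store[path] = data

    def close(self, path):
        # Once closed, a further amendment needs a fresh secure dialogue.
        self.open_grants.discard(path)
```

A program that tried to rewrite the letter after the document was closed would be refused until the user consented again through another secure dialogue.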
Further Consideration of Subversion
The problem of data associated with the application being compromised in some way is difficult. As an example, a word processing document may contain macro instructions which result in those same instructions being placed in all documents subsequently opened by the word processor.
This, in itself, may be possible within the scope of the contract of employment of the word processor because the word processor is not causing damage to the system. It is merely receiving instructions, and executing them, which cause it to perform its task in a compromised way such that its future processing and a number of data files associated with it are affected.
There are two approaches to this:
- The contract of employment should be as restrictive as possible, to reduce the consequences of any subversion of the application by its data files. This means that if an application is subverted by its own data then the contract of employment for the application will at least minimise the scope for this. A good contract of employment would do this anyway.
- Ideally, the application should be designed with its own internal security system, similar to that discussed for an entire computer system here, to prevent subversion. In this model, a macro virus attempting to subvert a word processor is similar to a conventional virus attempting to subvert a computer system. Security measures within the word processor are required to deal with it.
Subversion of applications by their own data is not much of a problem for simple programs. A simple program will be incapable of being told to do anything malicious by its own data. The problem becomes more significant for more sophisticated programs that interpret their data in more sophisticated ways. This problem will become more serious as what I call Turingisation occurs, which we will now explore.
Turingisation is the name that I shall give to a process that I view as ongoing in computing. That process is the blurring of the line between applications and data, and it happens as more applications start processing data in more complex ways, so that the data for these applications effectively becomes computer programs. I made the word Turingisation up and apologise to anyone who already made it up: at the time of writing an internet search found no evidence of it. Turingisation has a lot of relevance to the issue of programs being subverted by their own data.
I give the process the name Turingisation because Turing equivalency is a mathematical term for a requirement that a system of processing data must meet to allow its use as a general purpose computer. Modern computer systems tend to be Turing equivalent and this means that they have enough flexibility about how they interpret data (programs) stored on them that they are equivalent to a type of device known as a universal Turing machine and can, in principle, do anything that any other computer can do.
The idea of Turingisation is that an increasing number of applications are being provided with this sort of flexibility, or something close to it, in the way that they process data. This makes the applications themselves Turing equivalent, or close to it, and it makes the data for these applications something that is Turing equivalent, or close to being Turing equivalent.
Here are examples of Turingisation:
- Word processors now allow users to place 'macros' in documents. Macros can be written in a 'macro language' and are effectively small programs that allow automation of various functions.
- Some computer games are actually allowing scripting to be performed by users in order to make objects that can be used within the game.
As the power of computers increases we can expect more of this.
Turingisation is not a disaster for the sort of system that I am proposing. In fact, it increases the need for this sort of system, so that if an application is subverted by its own data then it is more of an issue for the program, its data and the person who wrote it, rather than an issue for the computer system as a whole. It would be unreasonable to expect a security system to be able to guarantee prevention of this sort of issue because it really is an internal matter for a programmer to deal with, provided that the program's scope is adequately limited. A program could be caused to depart from its specification due to devious action taken against it by someone feeding it data of a certain type, but a program can depart from its specification purely as a result of bad programming. Whether or not programs conform to their specifications is not the responsibility of a security system. Controlling the scope of programs is its responsibility.
It is possible to do something to help with this issue, however. We could regard subversion of an application by its data as not being our problem, but ideally the method would provide some sort of facilities which the programmer of an application could use to make it secure. We will now look at this idea.
Security within an Application
A computer system needs a security system that protects it from its own software because it can run computer software. This may seem a trivial point, but it becomes important when we start to consider the issue of subversion of a program by its own data.
If a program interprets its data in a way that provides Turing equivalency, or something close to this, then that program and its data effectively form a computing system within the computer. The logical result is this: if a program interprets its data in a Turing equivalent way, such as a word processor allowing macros of sufficient complexity, and there is any possibility of the program being subverted by its own data, or the data is interpreted in any other way that creates subversion possibilities, then, as the program resembles a computer system, it should preferably have its own internal security system, similar to the one that we have been discussing in this document.
As an example, if a word processor contained a sophisticated macro language that allowed any subversion possibilities, then the word processor could have its own security system that required any documents containing macros to have contracts of employment that would need to be accepted by a user before a document was loaded into the word processor.
Creating an entire security system like this could be impractical for programmers, but there is a solution: programmers could be provided with access to some of the functionality of the security system to use in their own programs. The security system for the entire computer could provide security services to programs and allow contracts to be made by the internal security systems of applications and deal with presentation of these to the user and other tasks such as management of secure dialogues.
Some readers may be concerned that this would weaken the security system, but it would not do so. In a sense, a contract of employment made by an application's internal security system would be less 'real' than a contract of employment made by the computer's main security system. This is because any interpretation of that contract of employment would be done by the application that made it and it would have no authority at all with the main security system. The term 'sub-contract' seems attractive for describing this.
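A sub-contract facility might look like the following sketch, in which the main security system presents a contract on behalf of an application's internal security system and records it as binding only on that application. All names here are invented, and `ask_user` again stands in for the system's secure presentation to the user.

```python
class MainSecuritySystem:
    """Offers security services to applications' internal security systems."""
    def __init__(self, ask_user):
        self.ask_user = ask_user
        self.sub_contracts = {}  # (host application, document) -> contract text

    def present_sub_contract(self, host_app, document, contract_text):
        # Acceptance binds only the host application's own interpreter; the
        # sub-contract has no authority with the main security system.
        prompt = f"'{document}' (inside {host_app}) requests: {contract_text}"
        if self.ask_user(prompt):
            self.sub_contracts[(host_app, document)] = contract_text
            return True
        return False
```

Here a word processor could ask the main system to present a sub-contract for a macro-bearing document before loading it, while interpretation of that sub-contract remains entirely the word processor's own job.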
How would a system like this be made?
The security system could be made as a program that is added onto a computer system and then supervises all the other programs on it. Firewalls are routinely added onto systems in this way, so it is at least conceivable. The security system would need to intercept a very large number of program actions and check that they meet security standards, so this could introduce a lot of complexity.
Another approach would be to build the security system into the computer's operating system. This would probably provide a higher degree of security. One problem here is that current operating systems are very large and the operating system itself could contain bugs that leave holes in the system. Even more security would be achieved if the security system were designed into an operating system from the start and used to supervise all but the most basic parts of the operating system itself. With this sort of approach a simple operating system - as simple as possible - would be made and the security system built into it. The rest of the operating system would be added as programs to the system which would have contracts of employment, meaning that a defect in the rest of the operating system would be limited in scope. I think that it would be some time before a system which went as far as this would be implemented, but I also think it will be necessary eventually if we are to have sufficient confidence in computers.
Still higher security and efficiency might be provided if hardware design changes were made to support the security system.
What about the resources you are consuming?
Some readers will be aware that what I have proposed here could consume significant system resources. The threat to computers and the issue of trustworthiness of software are, in my opinion, so serious that we do have to allocate some of the resources of computers to protecting them from their own software. A system with more resources is of no use if it is not working.
The issue would become more important if technology such as molecular nanotechnology [6, 7, 8, 9, 10] were developed, which would lead to the idea of digital matter processing - the rearrangement of matter into almost any form under software control - and cause software to have enormous power in the real world. At some stage the effect of computers on the world becomes such that malicious interference with these systems cannot be tolerated.
How could extra hardware help?
I have mentioned that hardware could be designed to assist the security system. This would be unlikely to occur immediately after a system like this was first used, but there would be opportunity for specific hardware to improve it later. When hardware was developed it could be used to support the system either for all users or only for certain users in situations where the consequences of an attack on their computers are great, such as in military or government applications.
Hardware support would be likely to take one of two main forms:
- Hardware to support enforcement of contracts of employment.
- Hardware to support presentation of information to the user in a secure way.
Support of enforcement of contracts of employment would involve hardware that takes part in the execution of programs' machine code. This could be done by alterations to the design of CPU (central processing unit) microchips or by adding extra microchips to the computer, possibly between the CPU and the computer's memory. Hardware of this type would make some simple checks on the machine code instructions executed by the computer to ensure some basic compliance with contracts of employment. Software on the computer would still play a major role.
Support of presentation of information to the user in a secure way would involve helping with the presentation of contracts of employment or secure dialogues to the user. When a contract of employment is presented to the user it is important that, if the user agrees to the contract, he/she is really agreeing to the contract that he/she sees on screen and that a program has not somehow managed to hide the real contract with a fake contract on the computer's display screen. Likewise, when a user gives consent in a secure dialogue it is important that the dialogue he/she sees on screen is the one that is in effect and that a program has not managed to alter the information on screen to make the secure dialogue appear different from what it is. Both of these issues could be dealt with in software; for example, when the security system writes information to the display it could prevent any other application from writing any other information to that part of the display - an idea that I have previously discussed. In some situations, however, a higher standard of security may be needed and a hardware device may be used that allows a contract of employment or a secure dialogue to be presented to the user, and a decision to be made, with less risk that any 'exploit' has compromised its presentation. Such a device could be, for example, an LCD display device that only allows display of text without any of the complications of graphics and windows and it may prevent any removal of text from the display, once it has been written, until the user has given an answer. I do not expect that this would be a common solution for most users in the near future, but systems like this could have a role in some institutions.
More on File Ownership and Sharing of Data
This document has mentioned the issue of file ownership. I have suggested the simple idea of each file being owned by the program that made it and using the contracts of employment to specify whether or not a particular program's files are to be visible to other programs or modifiable by them, and whether or not a particular program is allowed to modify files owned by other programs.
As I have said, a complication is that we may not wish to globally ban or allow file access between programs: we may not want a particular program's files to be open to every other program, and we may not want a particular program to be able to access every other program's files.
Secure dialogues provide a possible solution, if a program's contract of employment requires file operations outside its working area to be conditional on user consent. Another way would be for contracts of employment to make specific reference to other programs. I will now suggest another possible way of addressing this.
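The ownership rules described above can be sketched as code. This is a minimal model under my own assumptions: the flag names on `ProgramContract` and the two check functions are illustrative inventions, not part of any proposed specification.

```python
# Illustrative sketch: per-program file ownership with sharing rules
# declared in each program's contract of employment.

class ProgramContract:
    def __init__(self, name, files_visible_to_others=False,
                 files_modifiable_by_others=False,
                 may_modify_others_files=False):
        self.name = name
        self.files_visible_to_others = files_visible_to_others
        self.files_modifiable_by_others = files_modifiable_by_others
        self.may_modify_others_files = may_modify_others_files

def may_read(requester, owner):
    """A program may read another program's files only if the
    owner's contract declares those files visible to others."""
    return requester is owner or owner.files_visible_to_others

def may_write(requester, owner):
    """Writing another program's files needs consent on both sides:
    the owner must allow modification, and the requester's contract
    must claim the right to modify other programs' files."""
    if requester is owner:
        return True
    return (owner.files_modifiable_by_others
            and requester.may_modify_others_files)

database = ProgramContract("database", files_visible_to_others=True)
mailer = ProgramContract("mailer")

# may_read(mailer, database) is True; may_write(mailer, database) is False
```

Requiring consent on both sides of a write mirrors the document's point that a contract must constrain both what a program does and what is done to its files.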
When we consider programs sharing data it may be useful to consider a kind of virtual network existing in a machine. We could draw this in a number of ways. One way would be to represent each application program by a symbol and to show arrows between application programs to represent access permissions.
As an example suppose we have a database containing client records and an emailing program intended to send email mail-shots to people in this database. The emailing program needs to be able to access the data which has been stored by the database program. For a start, this means that the database program must store this data outside its working area, but how should permission for the access be given?
One way would be for the emailing program to ask for permission at some time and then for the permission to be shown on a diagram of the sort that I have described here. Alternatively, the user could view the 'access diagram' for the computer and use a mouse pointer to make connections between programs to indicate sharing permission. Sharing permissions could be given or revoked in this way and the situation could be checked by viewing the diagram at any time.
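The access diagram is, in effect, a directed graph: an arrow from A to B means program A may access files owned by program B. A minimal sketch, with grant and revoke calls standing in for the mouse interaction described above (all names here are my own, purely for illustration):

```python
# Sketch of the 'access diagram' as a directed graph of sharing
# permissions between programs.

class AccessDiagram:
    def __init__(self):
        self.edges = set()  # (accessor, owner) pairs

    def grant(self, accessor, owner):
        """The user draws an arrow on the diagram."""
        self.edges.add((accessor, owner))

    def revoke(self, accessor, owner):
        """The user removes an arrow from the diagram."""
        self.edges.discard((accessor, owner))

    def allowed(self, accessor, owner):
        """Checked whenever one program tries to read another's files."""
        return (accessor, owner) in self.edges

diagram = AccessDiagram()
diagram.grant("mailer", "database")
assert diagram.allowed("mailer", "database")
diagram.revoke("mailer", "database")
assert not diagram.allowed("mailer", "database")
```

Viewing the diagram at any time then amounts to listing the current edges, which makes the security state inspectable in exactly the way the text suggests.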
Such a diagram would need to show any access which contracts of employment allow, as well as any access which has been granted later. An important point is that contracts of employment must never be allowed to be broken by the system, even when an approach like this is being taken. If the user is to be allowed to establish permissions like this on a diagram of this type then the contracts of employment for the programs involved must specify that additional access can be made by them, or made to files that they own, if granted in this way.
As I mentioned earlier, we may want programs to have different areas for files which allow access in different ways, so that a program's contract can specify which of its areas can be accessed, or which areas of another program the program can access, and this may be reflected in this sort of system. The user could see different boxes for each program, each with a meaningful label, and the permissions could be shown and changed in the way that has just been discussed.
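A contract with several labelled areas can be modelled as a simple mapping. Sketch only, assuming nothing beyond the text above; the area labels and the `area_accessible` function are hypothetical:

```python
# Sketch: a program's contract names several labelled file areas and
# declares which other programs each area is open to.

contracts = {
    "database": {
        # area label -> set of programs the contract opens it to
        "client-records": {"mailer"},
        "internal-config": set(),   # private to the database program
    },
}

def area_accessible(owner, area, requester):
    """True if the owner's contract opens the named area to the
    requesting program."""
    allowed = contracts.get(owner, {}).get(area, set())
    return requester in allowed

assert area_accessible("database", "client-records", "mailer")
assert not area_accessible("database", "internal-config", "mailer")
```

Each key in the inner mapping would correspond to one labelled box on the access diagram, so the on-screen representation and the enforced rules stay in step.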
The Multi-User Analogy
Although I have used a real-world social analogy in this document, another analogy could have been used: that of the multi-user computer system. A multi-user computer system is one that allows a number of users to 'log in' simultaneously; such systems are often found in places such as universities and large companies.
Multi-user systems are based around multi-user operating systems - an example is UNIX - and are designed to allow users to do what they need to on the system without compromising other users.
We could consider this a good analogy for the sort of security system being proposed here. Computer programs could be considered analogous to users of a multi-user system. Just as a multi-user system protects the system from users, and users and their data from each other, then this sort of security system needs to protect a computer system from its own programs, and its programs and their data from each other. Each user of a multi-user operating system like UNIX has a 'home area' in which data can be privately stored and this provides an obvious analogy with the 'working area' in the proposed security system.
The analogy is only a basic one, however, and should not be taken too far: a security system to protect computers needs to do more than multi-user systems. The main complication is caused by the fact that a number of programs will need to act on the same data and there must be enough sophistication in the system of contracts of employment and secure dialogues to deal with this.
Conclusion
Existing methods of providing computer security, with their focus on perimeter defence, are inadequate. A major weakness is in the protection of a computer system from programs that are installed on it.
This document has proposed a method, analogous to methods of providing security in social situations, for protecting computer systems and their data from programs installed on them. This uses the idea of associating contracts of employment with programs, allowing programmers to decide what access rights their programs need but also allowing users to examine these rights before using the software. Some operations may require consent from the user at the time that they are performed and this is dealt with by secure dialogues.
The purpose of contracts of employment is to control the actions of programs and demonstrate to users that programs are trustworthy. Creating a contract of employment that provides safety is the responsibility of the programmer and the contract should satisfy requirements for:
- ethics - the user should be able to have confidence that the program is not able to interfere with the computer system any more than is needed to allow it to do its job.
- prudence - the user should be satisfied that the program is reasonably secure against unwanted interference by other programs.
The presentation of contracts of employment to the user for consideration, and the enforcement of such contracts, is managed by a security system.
The system could be used to provide a high level of security. Higher security could be provided if it were built into the operating system and still higher security could be provided if it were built into a very basic operating system around which the rest of the operating system was built as programs subject to the control of the security system. Modifying hardware to work with the system could give even more security and efficiency.