Justice and Design

Penny Duquenoy & Harold Thimbleby
Middlesex University
Bounds Green Road
N11 2NQ

penny2@mdx.ac.uk, harold@mdx.ac.uk

ABSTRACT Within the field of HCI there are a number of preferred approaches towards design. As within other disciplines, these approaches are often irreconcilable. We explore the possibility of using ethics as a way to bridge the gap and re-establish the design focus of doing good towards the user: the idea of using justice to aid design. According to Aristotle, justice is classed as a virtue: to do justice is to act for the good, which is what is wanted for good HCI design. John Rawls' classic A Theory of Justice (1971) presents justice as fairness, and it is in this context that we apply justice to the area of design. We show some surprising links with HCI practice, and hence suggest some new perspectives on HCI.

KEYWORDS Ethics, Justice, Veil of Ignorance, Design.


1. Introduction
This paper introduces the concept of justice to the area of design. HCI is concerned with making things better: improving usability, and making interactive systems (for one or more people) better.

Aristotle defines justice as doing good for others. This is essentially what HCI is: doing good for others through the interactive systems designed, and imposed on others' lives and work. Aristotle warns that justice is the only one of the virtues that can be done accidentally: that is, unlike, say, integrity, justice can be achieved without intention. This paper can therefore safely argue that HCI is like justice (in ways to be elaborated); the fact that this had not been noticed before is not a counter-argument to ours.

Following Rawls' (1971) idea of justice as fairness, we explore the notion of system design from the point of view of an "original position" of equality. From this ethical perspective (the core concept being Rawls' "veil of ignorance") the designer adopts the standpoint of unspecified potential users. From an HCI perspective, we see the veil of ignorance as corresponding to the principle that designers should know the user and not design for themselves; ideally, they should design for people whom they do not know, and whom they know they do not know (Thimbleby, 1998).

Computers are complex systems, and so are humans; the design of complex systems for complex systems can lead to complex design procedures. Where design approaches and methodologies differ, "fairness" in design provides a simple ethical foundation of principles that are commonly understood and promoted within our culture. We begin the paper from this standpoint, examining the concept behind Rawls' theory of justice and explaining how the principles of liberty and equality are derived from that concept.

In so far as these principles are important to us as members of society, they should be equally important in our work, especially when that work directly relates to, and impacts upon, other members of the society to which we belong. This is, of course, not a novel ideal within the design arena (e.g., on the subject of information systems design, see Hirschheim et al., 1995; on the social/democratic design context, see Feng, 1998).

The impact of software on others is recognised by Collins, Miller, Spielman, and Wherry, who not only include the buyers and users of the software, but also recognise "bystanders who fall under the shadow of the behaviour of the software" (Collins et al., 1994, p.81). Their article (based on a case study) used a Rawlsian approach, emphasising responsibilities and obligations during the initial negotiation period of the design proposal. In this paper we are particularly interested in the suitability of Rawls' theory to the field of design, with the emphasis on "being fair to do good," following Rawls' two principles of liberty and equality, which are arrived at from the veil of ignorance.

We finally assess the advantages and disadvantages of Rawls' theory as an aid to good design, and offer some practical ideas on implementation.


2. Rawls' theory of justice
Rawls' theory of justice emphasises justice as fairness, arriving at two fundamental principles: liberty and equality. The theory is intended for application in the political sphere, and as such addresses social, rather than individual, ethics. The essential idea is of a social contract; the key elements of Rawls' theory are the original position (the veil of ignorance) and the two principles of liberty and equality.

2.1. The original position

Rawls uses this idea to provide a justification for the basic principles which constitute his theory. The strategy aims to disassociate the individual from preconceptions and prejudices by adopting a starting point (original position) of ignorance. From this position the individual is free to perceive the world from any potential vantage point unencumbered by inherited social status. Thus the original position is a device for ensuring an equal starting point, and from this point the individual perceives the world through a veil of ignorance. This gives a basis for entering a (fair) social contract.

The next stage is to construct the contract in such a way as to ensure a fair outcome. This, according to Rawls' theory, is best achieved by the parties concerned imagining that they could be at any potential receiving end of the contract. So, for example, it would be unwise to devise a contract that benefited, say, the homeless at the expense of the property owner, if you were to become the property owner. As Dworkin (1977, p.181) says, "Men who do not know to which class they belong cannot design institutions, consciously or unconsciously, to favour their own class."

For institutions, read systems: in HCI, designers create systems (for example, software systems or physical devices). These systems become embedded within the users' world, and constrain what those users can and cannot do. They are social institutions, enforced not by law or convention (as Rawls conceives it) but by design. For example, a hardware device (say, a mobile phone or video recorder) is unalterable by the user; an aircraft flight management system is far too complex for a pilot to change (and, yes, there are social conventions that stop pilots tinkering with the aircraft software!). Thus to design means to create a "world." To design a good world means to act justly. By Rawls, one should design the good world acting under a veil of ignorance. To do otherwise allows the designer to create a special world in which they are treated beneficially, typically at the expense of others.

According to Rawls, "The original position is defined in such a way that it is a status quo in which any agreements reached are fair. It is a state of affairs in which the parties are equally represented as moral persons and the outcome is not conditioned by arbitrary contingencies or the relative balance of social forces." (Rawls, 1972, p.120)

2.2. The two principles

It is Rawls' argument that a search for basic principles to underpin a social contract, from the perspective of the veil of ignorance, must result in the two principles of liberty and equality.

The principle of liberty ensures against persecution, discrimination and political oppression, and the principle of equality allows each person of equal ability and motivation the same chance of success, regardless of social status.


3. Applying the theory to design
This theory, then, addresses issues of rights and of social advantages and disadvantages. These issues are very much part of the design sphere, and are highlighted in today's technological society (and particularly magnified by the Internet). For example, the Internet has raised issues of the right to freedom of expression, and of equality of access (financial and technological capabilities).

There are inequalities between designers and users: by definition, designers will have more knowledge of the systems they design than most users. Rawls recognises such natural inequality in a social system, and accommodates it within a third principle (the "difference" principle), which states that inequalities are justified only if they benefit the worst off. Therefore the inequality in knowledge that exists between the designer and potential user can only be justified if the designer uses that knowledge to benefit the user: those who have the advantage of, say, expertise and knowledge should use it to benefit the otherwise disadvantaged.

The theory (being a political theory) is specifically a group ethic, to be utilised in group situations, rather than an individual ethic. Individual ethics are notoriously difficult to apply in group situations, and design is a group situation: things that are designed are (usually) designed for groups. This applies particularly to technology. In addition, it is usually the case that groups are involved in the design process, and that the resulting artefact will have an impact on groups of people. (The manufacturing and marketing industries are based on these assumptions.)

3.1. Links to HCI slogans

The idea of beginning from a veil of ignorance is in any case enshrined in conventional good practice: "know the user" (cf. Thimbleby, 1990; Landauer, 1995). Rather than merely "knowing" one's way into all the other possible roles, one might more easily, and more reliably, do experiments and surveys with other people (though to do this requires the product, or perhaps an earlier version of it, to exist). It is pleasing that accepted design practice is also just (who wants to be called unjust?).

Conventional HCI has a range of slogans. We briefly show that these slogans have Rawlsian counterparts.


4. Advantages and disadvantages
Rawls' approach is not an automatic solution to good design. The approach has advantages and disadvantages:

Rawls is but one approach to justice, and (for many moral philosophers) is by no means the last word. It seems likely, then, that the approach will not be sufficient for all purposes in HCI.

For example, could designers imagine all possible users, so that they designed properly under an unbiased veil of ignorance? Certainly not! For example, special effort will have to be made to design international user interfaces that work in other cultures.

Is it possible (or even desirable) to design an artefact for all possible users? Something designed to be easy to use for everyone might provide no satisfaction for anyone. Reeves and Nass (1996) suggest that computer systems have personalities, and a generic personality would be disliked by everyone; better, they say, to create a distinctive personality, which is at least liked by some users!

The theory does not work in some areas of design, for example missile design (you would design a missile very differently if you imagined being at the receiving end). Depending on one's politics, one might side with Rawls and claim missiles are wrong; or one might say we need missiles, and that there are some circumstances where Rawls is inappropriate.

Finally, there are difficulties with taking Rawls too seriously. There are duties of just action to non-contracting parties, such as to the environment. How we design things to take their responsible place in a larger ecosystem beyond other users, say to be recyclable, is beyond the scope of this paper; but that is not to imply such issues are optional (see Borenstein, 1998).


5. Practical applications
Rawls' theory of justice makes a nice match with HCI, but can the insight be used creatively or constructively to actually design better?

Abstract theories and discussions help to highlight issues, but what of the practical applications? Although the ideal implementation of this theory in design is unlikely, if not impossible, the basic principles could be incorporated into a design policy. A starting point might be a simple check sheet addressing the principles of liberty, equality and difference.

We now give some more concrete examples.

Suppose someone wished to design ballistic missiles. If they design them under a veil of ignorance, then they are supposed to be creating a future world in which the missiles exist, but where they do not know what roles they will have. Well, they may end up living in the cities targeted by the missiles. Since most designers probably would not wish to live under the threat of being hit by a missile, they should not make them. Of course, in reality, the designers are affiliated to a particular country, and they do not consider it likely that they would live in their own country's enemy's territories. But the Rawlsian conception does not admit "likely": because it is a possibility, the designer should account for it.

It is widely recommended that software writers should include comments in their programs. This advice is often strongly resisted, because when one writes program code it is obvious what it is supposed to do, and a further explanation seems tedious. Yet in the future, the programmer may be a different person. How would the (original) programmer like to be the (future) programmer and not have the privileged insight into how the code is supposed to work? More to the point, in the future the programmer may have forgotten what was going on; in a sense, they will be a different person (their mind will have changed). Thus, acting fairly under the veil of ignorance, a programmer would anticipate that the people reading the code in the future world where it exists might not have the benefit of his or her timely insights. Comments would help!
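To illustrate (the example is ours, not drawn from any cited work): the code below is obvious to its author at the moment of writing, but the comment records the intent that a future reader, lacking the author's privileged insight, could not otherwise recover from the bare expression.

```python
def is_leap_year(year):
    # Gregorian rule, recorded so a future maintainer need not
    # re-derive it: every 4th year is a leap year, except century
    # years, which are leap years only if divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```

Without the comment, the chain of modulus tests works but does not say why; the original programmer, writing fairly under the veil of ignorance, writes for the reader they might become.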

When a sweet bar has to be divided fairly between two people, a standard approach is for one person (A) to divide the bar into approximately equal halves. The other person (B) chooses whichever half they prefer. The intention is that A will not cheat, because if A does so, then B can take the larger piece. This is a good example of creating a world under a veil of ignorance. Person A must create a future world, and there are two possible worlds, "A has this piece" and "A has the other piece"; the protocol of the sharing ensures that A cannot guarantee which of these worlds they will end up in. They are under a veil of ignorance, so they tend to promote equality by making the pieces as nearly equal as they can: whichever world they end up in (owning one piece or the other), they end up as well off.

It would be very interesting to pursue sharing algorithms in the context of CSCW and of sharing resources between users. For more details of sharing algorithms, see Robertson and Webb (1998).
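The divide-and-choose protocol can be sketched in a few lines of code. This is an illustrative sketch of ours (not from Robertson and Webb), modelling the bar as the interval [0, 1] and assuming both players simply value length:

```python
# Divide-and-choose: A cuts the bar into two pieces; B takes the piece
# B prefers; A keeps the remainder. A, not knowing which piece B will
# leave, does best by cutting as equally as possible.

def divide_and_choose(bar, cut, value_to_b):
    piece1, piece2 = cut(bar)
    # B chooses first, taking the piece B values more.
    if value_to_b(piece1) >= value_to_b(piece2):
        return piece2, piece1  # (A's share, B's share)
    return piece1, piece2

# Pieces are sub-intervals (lo, hi); both players value length.
length = lambda piece: piece[1] - piece[0]
equal_cut = lambda bar: ((bar[0], 0.5), (0.5, bar[1]))
greedy_cut = lambda bar: ((bar[0], 0.7), (0.7, bar[1]))

a_fair, b_fair = divide_and_choose((0.0, 1.0), equal_cut, length)
a_greedy, b_greedy = divide_and_choose((0.0, 1.0), greedy_cut, length)
```

With the equal cut, A receives half; with the greedy cut, B simply takes the larger piece, so A's attempt to cheat leaves A worse off. The protocol, not the players' virtue, enforces the fair outcome.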

That is a nice example of using Rawls to promote justice in a practical way. It should be considered an existence proof that there are (interesting) ways in which Rawls can be used to achieve practical and just ends in design.

Now consider a directly HCI example. A typical designer creates a product, and can be certain they will end up in the new world where that product exists as its designer. They are therefore in a privileged position: they will know how it works, and all of its curious features will be "obvious." Now consider a Rawlsian designer. They design a similar system, but, being under a veil of ignorance, they do not know whether they will have the designer's insight into that system. Indeed, they may be on the product support team, having to explain the system to irate users. Or they may be the technical authors who have to explain the system in plain English. Or they may be the pilot who has to land their aeroplane in fog.

Finally, consider the "oracle effect." (Oracles are standard computing science devices.) When a user complains that they do not know how to do something, some expert typically condescends to tell them that "it is obvious" that doing something trivial (like pressing the twiddle key) has the appropriate effect. This is trivial knowledge, but the (ignorant) user had no way of finding the fact out. An oracle was needed. Without the oracle, the system is unusable; with the oracle, the system is trivial. Thus users are often made to feel stupid, because they do not know trivial facts. In a Rawlsian world, designers would have to be more careful, because they would have to consider how to design systems where they themselves might not have access to the oracular knowledge. Probably, they would design their systems to be more self-explanatory.

Since programmed systems are intrinsically complex, it is inevitable that the designers (or at least the programmers) have oracles into a system's detailed behaviour. Thus we see an application of Rawls' difference principle: the designers' superior knowledge is justified only when it is used to benefit the users.


6. MetaHCI
Thus creating systems for other people to use, which is the concern of HCI, can be conceived as an act of justice. Rawls has a particular conception of justice that makes a fruitful correspondence with HCI practice. Moreover, there are alternative conceptions of justice (for example, utilitarianism): we might suggest that some disagreement in HCI methodologies could be fruitfully related to the great ethical traditions. That is, if after several thousand years ethics has not reconciled itself to a single point of view, then HCI is unlikely to reconcile itself to a single view, whether social, computational, psychological, phenomenological, or otherwise. All represent (in ways we do not have space to explore) ethical conceptions, and each suits particular agendas. HCI, then, we surmise, should take a "metaethical" stance: metaHCI is the study of choices in HCI.

What is metaHCI? Some people in HCI consider that any valid contribution must involve empirical evaluation with users; not to involve users would seem to them anathema to the ideals of HCI. This might be equated with utilitarianism: what is the greatest good for the greatest number, and can it be measured? Or we might view HCI as a creative discipline, where expert designers use their artistic intuitions to create innovative new systems; this might be equated with virtue ethics. Our analogies are not intended to be close, but rather to suggest that doing good (in the ethical sense) is as complex as doing good (in the HCI sense), and that the great traditions of ethics have not reconciled themselves but instead lead to higher-level, meta, debate. HCI may well be enriched by taking metaHCI seriously.


7. Conclusions
The hypothetical model of a social contract brings an explicit ethical focus into our working world. Is such a contract applicable in the area of design? We believe that the notions, arguments and concepts presented by Rawls can be applied to the area of design, and that the outcome is as beneficial to the user society as Rawls implies it would be to the political society. Politics refers to rights; in a design context, does the user have rights? If so, according to Rawls' theory, the notion of equal rights comprises not only the right to equal treatment, but also the right to treatment as an equal.

Do designers of things act justly by Rawls' definition? Mostly not. They design things they know they will not use, and even if they did use them, they would have oracular knowledge: designers are never behind a veil of ignorance. Many programmers build systems that they have no intention of using. If, instead, they worked under the Rawlsian veil of ignorance, they might try harder in case they ended up being a user of their system. If they were programming a tax program, they might end up "born as" accountants, tax-payers, civil servants designing tax law, tax evaders, auditors, managers, their own colleagues having to maintain the system at a later date, or even the manual writers: they would have to design their tax program carefully and well, from all points of view.

Perverting the course of justice is one of the most serious crimes. Perhaps if HCI were seen as a primarily ethical discipline, pursuing the good and employing justice, doing HCI diligently would be seen as the serious discipline that it is. To do HCI well is to improve human life.


Acknowledgements
Diane Whitehouse and other members of the IFIP WG 9.2.2 made many valuable suggestions after a presentation of these ideas at Namur, 1999. Nick Merriam suggested some of the algorithmic approaches to justice.


References
Aristotle, Nicomachean Ethics, Book V (in Classics of Western Philosophy, Cahn, S. M. (Ed) Hackett Publishing Co. Inc. 1977).

Borenstein, N. S., 1998, "Whose Net is it Anyway?" Communications of the ACM, 41(4), p.19.

Collins, W. R., Miller, K. W., Spielman, B. J., & Wherry, P., 1994, "How Good is Good Enough?", Communications of the ACM, 37(1), pp.81–91.

Dworkin, R., 1977, Taking Rights Seriously, Gerald Duckworth & Co. Ltd.

Feng, P., 1998, Rethinking technology, revitalizing ethics: overcoming barriers to ethical design, Proceedings of ETHICOMP98, Erasmus University Rotterdam.

Hirschheim, R., Klein, H. K. & Lyytinen, K., 1995, Information Systems Development and Data Modeling: Conceptual and Philosophical Foundations, Cambridge University Press.

Landauer, T., 1995, The Trouble with Computers, MIT Press.

Rawls, J., 1972, A Theory of Justice, Oxford University Press. (Originally published 1971, Harvard University Press.)

Reeves, B. & Nass, C., 1996, The Media Equation, CSLI Publications/Cambridge University Press.

Robertson, J. & Webb, W., 1998, Cake Cutting Algorithms, A. K. Peters.

Thimbleby, H. W., 1990, User Interface Design, Addison-Wesley.

Thimbleby, H. W., 1998, "The Detection and Elimination of Spurious Complexity," Proceedings of Workshop on User Interfaces for Theorem Provers, Backhouse, R. C., ed., Report 98-08, pp.15–22, Eindhoven University of Technology.