In 1972 NASA sent into deep space an interstellar probe called Pioneer 10. It bore a golden plaque.
The art historian Ernst Gombrich offers an insightful commentary on this:
The representation of humans is accompanied by a chart: a pattern of lines beside the figures standing for the 14 pulsars of the Milky Way, the whole being designed to locate the sun of our universe. A second drawing (how are they to know it is not part of the same chart?) 'shows the earth and the other planets in relation to the sun and the path of Pioneer from earth and swinging past Jupiter.' The trajectory, it will be noticed, is endowed with a directional arrowhead; it seems to have escaped the designers that this is a conventional symbol unknown to a race that never had the equivalent of bows and arrows. (Gombrich 1974, 255-8; Gombrich 1982, 150-151).
Gombrich's commentary on this attempt at communication with alien beings highlights the importance of what semioticians call codes. The concept of the 'code' is fundamental in semiotics. Whilst Saussure dealt only with the overall code of language, he did of course stress that signs are not meaningful in isolation, but only when they are interpreted in relation to each other. It was another linguistic structuralist, Roman Jakobson, who emphasized that the production and interpretation of texts depends upon the existence of codes or conventions for communication (Jakobson 1971). Since the meaning of a sign depends on the code within which it is situated, codes provide a framework within which signs make sense. Indeed, we cannot grant something the status of a sign if it does not function within a code. Furthermore, if the relationship between a signifier and its signified is relatively arbitrary, then it is clear that interpreting the conventional meaning of signs requires familiarity with appropriate sets of conventions. Reading a text involves relating it to relevant 'codes'. Even an indexical and iconic sign such as a photograph involves a translation from three dimensions into two, and anthropologists have often reported the initial difficulties experienced by people in primal tribes in making sense of photographs and films (Deregowski 1980), whilst historians note that even in recent times the first instant snapshots confounded Western viewers because they were not accustomed to arrested images of transient movements and needed to go through a process of cultural habituation or training (Gombrich 1982, 100, 273). As Elizabeth Chaplin puts it, 'photography introduced a new way of seeing which had to be learned before it was rendered invisible' (Chaplin 1994, 179). What human beings see does not resemble a sequence of rectangular frames, and camerawork and editing conventions are not direct replications of the way in which we see the everyday world. 
When we look at things around us in everyday life we gain a sense of depth from our binocular vision, by rotating our head or by moving in relation to what we are looking at. To get a clearer view we can adjust the focus of our eyes. But for making sense of depth when we look at a photograph none of this helps. We have to decode the cues. Semioticians argue that, although exposure over time leads 'visual language' to seem 'natural', we need to learn how to 'read' even visual and audio-visual texts (though see Messaris 1982 and 1994 for a critique of this stance). Any Westerners who feel somehow superior to those primal tribesfolk who experience initial difficulties with photography and film should consider what sense they themselves might make of unfamiliar artefacts - such as Oriental lithographs or algebraic equations. The conventions of such forms need to be learned before we can make sense of them.
Some theorists argue that even our perception of the everyday world around us involves codes. Fredric Jameson declares that 'all perceptual systems are already languages in their own right' (Jameson 1972, 152). As Derrida would put it, perception is always already representation. 'Perception depends on coding the world into iconic signs that can re-present it within our mind. The force of the apparent identity is enormous, however. We think that it is the world itself we see in our "mind's eye", rather than a coded picture of it' (Nichols 1981, 11-12). According to the Gestalt psychologists - notably Max Wertheimer (1880-1943), Wolfgang Köhler (1887-1967) and Kurt Koffka (1886-1941) - there are certain universal features in human visual perception which in semiotic terms can be seen as constituting a perceptual code. We owe the concept of 'figure' and 'ground' in perception to this group of psychologists. Confronted by a visual image, we seem to need to separate a dominant shape (a 'figure' with a definite contour) from what our current concerns relegate to 'background' (or 'ground'). An illustration of this is the famous ambiguous figure devised by the Danish psychologist Edgar Rubin.
Images such as this are ambiguous concerning figure and ground. Is the figure a white vase (or goblet, or bird-bath) on a black background or silhouetted profiles on a white background? Perceptual set operates in such cases and we tend to favour one interpretation over the other (though altering the amount of black or white which is visible can create a bias towards one or the other). When we have identified a figure, the contours seem to belong to it, and it appears to be in front of the ground.
In addition to introducing the terms 'figure' and 'ground', the Gestalt psychologists outlined what seemed to be several fundamental and universal principles (sometimes even called 'laws') of perceptual organization. The main ones are as follows (some of the terms vary a little): proximity, similarity, good continuation, closure, smallness, surroundedness, symmetry and prägnanz.
The principle of proximity can be demonstrated thus:
What you are likely to notice fairly quickly is that this is not just a square pattern of dots but rather is a series of columns of dots. The principle of proximity is that features which are close together are associated. Below is another example. Here we are likely to group the dots together in rows.
The principle also applies in the illustration below. We are more likely to associate the lines which are close together than those which are further apart. In this example we tend to see three pairs of lines which are fairly close together (and a lonely line on the far right) rather than three pairs of lines which are further apart (and a lonely line on the far left).
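The proximity principle can be given a rough computational analogy: if grouping is treated as clustering by distance, elements separated by small gaps fall into the same group, and a larger gap starts a new one. The sketch below is a hypothetical illustration only (the function name, positions and threshold are mine, not part of the Gestalt literature); it reproduces the seven-line example above, in which three close pairs and one isolated line emerge.

```python
def group_by_proximity(positions, gap):
    """Group sorted 1-D positions: a new group starts wherever
    the gap to the previous element exceeds the threshold."""
    positions = sorted(positions)
    groups = [[positions[0]]]
    for p in positions[1:]:
        if p - groups[-1][-1] <= gap:
            groups[-1].append(p)  # close enough: same group
        else:
            groups.append([p])    # large gap: new group
    return groups

# Seven lines as in the illustration: three fairly close pairs
# and a lonely line on the far right.
lines = [0, 1, 4, 5, 8, 9, 12]
print(group_by_proximity(lines, gap=2))
# → [[0, 1], [4, 5], [8, 9], [12]]
```

The point of the analogy is simply that 'association by closeness' can be stated as a mechanical rule; human perception, of course, applies no such explicit threshold.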
The significance of this principle on its own is likely to seem unclear initially; it is in their interaction that the principles become more apparent. So we will turn to a second major principle of perceptual organization - that of similarity. Look at the example below.
Here the little circles and squares are evenly spaced both horizontally and vertically so proximity does not come into play. However, we do tend to see alternating columns of circles and squares. This, the Gestalt psychologists would argue, is because of the principle of similarity - features which look similar are associated. Without the two different recurrent features we would see either rows or columns or both...
A third principle of perceptual organization is that of good continuation. This principle is that contours based on smooth continuity are preferred to abrupt changes of direction. Here, for instance, we are more likely to identify lines a-b and c-d crossing than to identify a-d and c-b or a-c and d-b as lines.
Closure is a fourth principle of perceptual organization: interpretations which produce 'closed' rather than 'open' figures are favoured.
Here we tend to see three broken rectangles (and a lonely shape on the far left) rather than three 'girder' profiles (and a lonely shape on the right). In this case the principle of closure cuts across the principle of proximity, since if we remove the bracket shapes, we return to an image used earlier to illustrate proximity...
A fifth principle of perceptual organization is that of smallness. Smaller areas tend to be seen as figures against a larger background. In the figure below we are more likely to see a black cross rather than a white cross within the circle because of this principle.
As an illustration of this Gestalt principle, it has been argued that it is easier to see Rubin's vase when the area it occupies is smaller (Coren et al. 1994, 377). The lower portion of the illustration below offers negative image versions in case this may play a part. To avoid implicating the surroundedness principle I have removed the conventional broad borders from the four versions. The Gestalt principle of smallness would suggest that it should be easier to see the vase rather than the faces in the two versions on the left below.
The principle of symmetry is that symmetrical areas tend to be seen as figures against asymmetrical backgrounds.
Then there is the principle of surroundedness, according to which areas which can be seen as surrounded by others tend to be perceived as figures.
Now that we are in this frame of mind, interpreting the image shown above should not be too difficult. What tends to confuse observers initially is that they assume that the white area is the ground rather than the figure. If you couldn't before, you should now be able to discern the word 'TIE'.
All of these principles of perceptual organization serve the overarching principle of prägnanz, which is that the simplest and most stable interpretations are favoured.
What the Gestalt principles of perceptual organization suggest is that we may be predisposed towards interpreting ambiguous images in one way rather than another by universal principles. We may accept such a proposition at the same time as accepting that such predispositions may also be generated by other factors. Similarly, we may accept the Gestalt principles whilst at the same time regarding other aspects of perception as being learned and culturally variable rather than innate. The Gestalt principles can be seen as reinforcing the notion that the world is not simply and objectively 'out there' but is constructed in the process of perception. As Bill Nichols comments, 'a useful habit formed by our brains must not be mistaken for an essential attribute of reality. Just as we must learn to read an image, we must learn to read the physical world. Once we have developed this skill (which we do very early in life), it is very easy to mistake it for an automatic or unlearned process, just as we may mistake our particular way of reading, or seeing, for a natural, ahistorical and noncultural given' (Nichols 1981, 12).
We are rarely aware of our own habitual ways of seeing the world. It takes deliberate effort to become more aware of everyday visual perception as a code. Its habitual application obscures the traces of its intervention. However, a simple experiment allows us to 'bracket' visual perception at least briefly. For this to be possible, you need to sit facing the same direction without moving your body for a few minutes:
This process of bracketing perception will be more familiar to those who draw or paint, who are used to converting three dimensions into two. For those who do not, this little experiment may be quite surprising. We are routinely anaesthetized to a psychological mechanism called 'perceptual constancy' which stabilizes the relative shifts in the apparent shapes and sizes of people and objects in the world around us as we change our visual viewpoints in relation to them. Without mechanisms such as categorization and perceptual constancy the world would be no more than what William James called 'one great blooming, buzzing confusion' (James 1890, 488). Perceptual constancy ensures that 'the variability of the everyday world becomes translated by reference to less variable codes. The environment becomes a text to be read like any other text' (Nichols 1981, 26):
Key differences between 'bracketed' perception and everyday perception may be summarized as follows (Nichols 1981, 13, 20):
| Bracketed Perception | Normal Perception |
|---|---|
| A bounded visual space: oval, approximately 180° laterally, 150° vertically | Unbounded visual space |
| Clarity of focus at only one point, with a gradient of increasing vagueness toward the margin (clarity of focus corresponds to the space whose light falls upon the fovea) | Clarity of focus throughout |
| Parallel lines appear to converge: the lateral sides of a rectangular surface extending away from the viewer appear to converge | Parallel lines extend without converging: the sides of a rectangular surface extending away from the viewer remain parallel |
| If the head is moved, the shapes of objects appear to be deformed | If the head is moved, shapes remain constant |
| The visual space appears to lack depth | Visual space is never wholly depthless |
| A world of patterns and sensation, of surfaces, edges and gradients | A world of familiar objects and meaning |
The conventions of codes represent a social dimension in semiotics: a code is a set of practices familiar to users of the medium operating within a broad cultural framework. Indeed, as Stuart Hall puts it, 'there is no intelligible discourse without the operation of a code' (Hall 1980, 131). Society itself depends on the existence of such signifying systems.
Codes are not simply 'conventions' of communication but rather procedural systems of related conventions which operate in certain domains. Codes organize signs into meaningful systems which correlate signifiers and signifieds. Codes transcend single texts, linking them together in an interpretative framework. Stephen Heath notes that 'while every code is a system, not every system is a code' (Heath 1981, 130). He adds that 'a code is distinguished by its coherence, its homogeneity, its systematicity, in the face of the heterogeneity of the message, articulated across several codes' (ibid., p.129).
Codes are interpretive frameworks which are used by both producers and interpreters of texts. In creating texts we select and combine signs in relation to the codes with which we are familiar 'in order to limit... the range of possible meanings they are likely to generate when read by others' (Turner 1992, 17). Codes help to simplify phenomena in order to make it easier to communicate experiences (Gombrich 1982, 35). In reading texts, we interpret signs with reference to what seem to be appropriate codes. Usually the appropriate codes are obvious, 'overdetermined' by all sorts of contextual cues. Signs within texts can be seen as embodying cues to the codes which are appropriate for interpreting them. The medium employed clearly influences the choice of codes. Pierre Guiraud notes that 'the frame of a painting or the cover of a book highlights the nature of the code; the title of a work of art refers to the code adopted much more often than to the content of the message' (Guiraud 1975, 9). In this sense we routinely 'judge a book by its cover'. We can typically identify a text as a poem simply by the way in which it is set out on the page. The use of what is sometimes called 'scholarly apparatus' (such as introductions, acknowledgements, section headings, tables, diagrams, notes, references, bibliographies, appendices and indexes) is what makes academic texts immediately identifiable as such to readers. Such cueing is part of the metalingual function of signs. With familiar codes we are rarely conscious of our acts of interpretation, but occasionally a text requires us to work a little harder - for instance, by pinning down the most appropriate signified for a key signifier (as in jokes based on word play) - before we can identify the relevant codes for making sense of the text as a whole.
Even with adequate English vocabulary and grammar, think what sense an inter-planetary visitor to Earth might make of a notice such as 'Dogs must be carried on the escalator'. Does it mean that you must carry a dog if you go on the escalator? Is it forbidden to use it without one? Terry Eagleton comments:
Without realizing it, in understanding even the simplest texts we draw on a repertoire of textual and social codes. Literary texts tend to make greater demands. Eagleton argues that:
Semioticians seek to identify codes and the tacit rules and constraints which underlie the production and interpretation of meaning within each code. They have found it convenient to divide codes into groups. Different theorists favour different taxonomies, and whilst structuralists often follow the 'principle of parsimony' - seeking to find the smallest number of groups deemed necessary - 'necessity' is defined by purposes. No taxonomy is innocently 'neutral' and devoid of ideological assumptions. One might start from a fundamental divide between analogue and digital codes, from a division according to sensory channels, from a distinction between 'verbal' and 'non-verbal', and so on. Many semioticians take human language as their starting point. The primary and most pervasive code in any society is its dominant 'natural' language, within which (as with other codes) there are many 'sub-codes'. A fundamental sub-division of language into spoken and written forms - at least insofar as it relates to whether the text is detached from its maker at the point of reception - is often regarded as representing a broad division into different codes rather than merely sub-codes. One theorist's code is another's sub-code and the value of the distinction needs to be demonstrated. Referring to the codes of film-making, Stephen Heath argues that 'codes are not in competition with one another...; there is no choice between, say, lighting and montage. Choice is given between the various sub-codes of a code, they being in a relation of mutual exclusion' (Heath 1981, 127). Stylistic and personal codes (or idiolects) are often described as sub-codes (e.g. Eco 1976, 263, 272). The various kinds of codes overlap, and the semiotic analysis of any text or practice involves considering several codes and the relationships between them. A range of typologies of codes can be found in the literature of semiotics. 
I refer here only to those which are most widely mentioned in the context of media, communication and cultural studies (this particular tripartite framework is my own).
These three types of codes correspond broadly to three key kinds of knowledge required by interpreters of a text, namely knowledge of:
The 'tightness' of semiotic codes themselves varies from the rule-bound closure of logical codes (such as computer codes) to the interpretative looseness of poetic codes. Pierre Guiraud notes that 'signification is more or less codified', and that some systems are so 'open' that they 'scarcely merit the designation "code" but are merely systems of "hermeneutic" interpretation' (Guiraud 1975, 24). Guiraud makes the distinction that a code is 'a system of explicit social conventions' whilst 'a hermeneutics' is 'a system of implicit, latent and purely contingent signs', adding that 'it is not that the latter are neither conventional nor social, but they are so in a looser, more obscure and often unconscious way' (ibid., 41). His claim that (formal) codes are 'explicit' seems untenable since few codes would be likely to be widely regarded as wholly explicit. He refers to two 'levels of signification', but it may be more productive to refer to a descriptive spectrum based on relative explicitness, with technical codes veering towards one pole and interpretative practices veering towards the other. At one end of the spectrum are what Guiraud refers to as 'explicit, socialized codes in which the meaning is a datum of the message as a result of a formal convention between participants' (ibid., 43-4). In such cases, he argues, 'the code of a message is explicitly given by the sender' (ibid., 65). At the other end of the spectrum are 'the individual and more or less implicit hermeneutics in which meaning is the result of an interpretation on the part of the receiver' (ibid., 43-4). Guiraud refers to interpretative practices as more 'poetic', being 'engendered by the receiver using a system or systems of implicit interpretation which, by virtue of usage, are more or less socialized and conventionalized' (ibid., 41). Later he adds that 'a hermeneutics is a grid supplied by the receiver; a philosophical, aesthetic, or cultural grid which he applies to the text' (ibid., 65). 
Whilst Guiraud's distinctions may be regarded as rather too clearcut, as 'ideal types' they may nevertheless be analytically useful.
When studying cultural practices, semioticians treat as signs any objects or actions which have meaning to members of the cultural group, seeking to identify the rules or conventions of the codes which underlie the production of meanings within that culture. Understanding such codes, their relationships and the contexts in which they are appropriate is part of what it means to be a member of a particular culture. Marcel Danesi has suggested that 'a culture can be defined as a kind of "macro-code", consisting of the numerous codes which a group of individuals habitually use to interpret reality' (Danesi 1994a, 18; see also Danesi 1999, 29, Nichols 1981, 30-1 and Sturrock 1986, 87). For the interested reader, texts on intercultural communication are a useful guide to cultural codes (e.g. Samovar & Porter 1988; Gudykunst & Kim 1992; Scollon & Scollon 1995).
Food is a fundamental example of the cultural variability of codes, as is highlighted in The Raw and the Cooked by the anthropologist Claude Lévi-Strauss (Lévi-Strauss 1970). Food is a clear manifestation of the interaction of nature and culture. It is 'natural' for all animals (including humans) to consume food, but the modes of consumption employed by human beings are a distinctive part of human culture. As Edmund Leach puts it, 'cooking is... universally a means by which Nature is transformed into Culture' (Leach 1970, 34). He adds that 'men do not have to cook their food, they do so for symbolic reasons to show that they are men and not beasts. So fire and cooking are basic symbols by which Culture is distinguished from Nature' (ibid., 92). Unlike other animals, human beings in different cultures follow social conventions which dictate what is edible or inedible, how food should be prepared and when certain foods may be eaten. In various cultures, the eating of certain foods is prohibited either for men, women or children. Thus food categories become mapped onto categories of social differentiation. Lévi-Strauss regards such mapping between categories as of primary importance.
Referring initially to 'totemism', Lévi-Strauss notes that the classification systems of a culture constitute a code which serves to signify social differences. He argues that such systems are like interpretative 'grids' and suggests that they are built upon 'differentiating features' which are detachable from a specific content. This makes them suitable as 'codes... for conveying messages which can be transposed into other codes, and for expressing messages received by means of different codes in terms of their own system'. Such codes, he argued, constitute 'a method for assimilating any kind of content' which 'guarantee the convertibility of ideas between different levels of social reality' (Lévi-Strauss 1974, 75-6; see also 96-7). Such codes are involved in 'mediation between nature and culture' (ibid., 90-91). They are a way of encoding differences within society by analogy with perceived differences in the natural world (somewhat as in Aesop's Fables). They transform what are perceived as natural categories into cultural categories and serve to naturalize cultural practices. 'The mythical system and the modes of representation it employs serve to establish homologies between natural and social conditions or, more accurately, it makes it possible to equate significant contrasts found in different planes: the geographical, meteorological, zoological, botanical, technical, economic, social, ritual, religious and philosophical' (ibid., 93). In the case of the Murngin of Arnhem Land in northern Australia, the mythical system enabled equivalences to be made as in the following table:
| Pure, sacred | male | superior | fertilizing (rains) | bad season |
| Impure, profane | female | inferior | fertilized (land) | good season |
As can be seen, such systems are not without contradictions, and Lévi-Strauss argued that the contradictions within such systems generate explanatory myths - such codes must 'make sense' (Lévi-Strauss 1974, 228). Whilst 'classificatory systems belong to the levels of language' (ibid.), a framework such as this 'is something more than a mere language. It does not just set up rules of compatibility and incompatibility between signs. It is the basis of an ethic which prescribes or prohibits modes of behaviour. Or at least this consequence seems to follow from the very common association of totemic modes of representation with eating prohibitions on the one hand and rules of exogamy on the other' (ibid., 97). Although Lévi-Strauss's analytical approach remains formally synchronic, involving no study of the historical dimension, he does incorporate the possibility of change: oppositions are not fixed and structures are transformable. He notes that we need not regard such frameworks from a purely synchronic perspective. 'Starting from a binary opposition, which affords the simplest possible example of a system, this construction proceeds by the aggregation, at each of the two poles, of new terms, chosen because they stand in relations of opposition, correlation, or analogy to it'. In this way structures may undergo transformation (ibid., 161).
Lee Thayer argues that 'what we learn is not the world, but particular codes into which it has been structured so that we may "share" our experiences of it' (Thayer 1982, 30; cf. Lee 1960). Constructivist theorists argue that linguistic codes play a key role in the construction and maintenance of social realities. The Whorfian hypothesis or Sapir-Whorf theory is named after the American linguists Edward Sapir and Benjamin Lee Whorf. In its most extreme version the Sapir-Whorf hypothesis can be described as relating two associated principles: linguistic determinism and linguistic relativism. Applying these two principles, the Whorfian thesis is that people who speak languages with very different phonological, grammatical and semantic distinctions perceive and think about the world quite differently, their worldviews being shaped or determined by their language. Writing in 1929, Sapir argued in a classic passage that:
This position was extended by his student Whorf, who, writing in 1940 in another widely cited passage, declared that:
The extreme determinist form of the Sapir-Whorf hypothesis is rejected by most contemporary linguists. Critics note that we cannot make inferences about differences in worldview solely on the basis of differences in linguistic structure. Whilst few linguists would accept the Whorfian hypothesis in its 'strong', extreme or deterministic form, many now accept a 'weak', more moderate, or limited Whorfianism, namely that the ways in which we see the world may be influenced by the kind of language we use.
Probably the best-known example of the cultural diversity of verbal and conceptual categories is the claim that Eskimos have dozens of words for 'snow' - an assertion which is frequently attributed to Benjamin Lee Whorf. Actually, Whorf seems never to have claimed that Eskimos had more than five words for snow (Whorf 1956, 216). However, a more recent study - not of the Inuit but of the Koyukon Indians of the subarctic forest - does list 16 terms for snow, representing these distinctions:
This is not the place to explore the controversial issue of the extent to which the way we perceive the world may be influenced by the categories which are embedded in the language available to us. Suffice it to say that words can be found in English (as in the admittedly wordy translations above) to refer to distinctions which we may not habitually make. Not surprisingly, cultural groups tend to have lots of words (and phrases) for differences that are physically or culturally important to them - English-speaking skiers also have many words for snow. Urban myths woven around the theme of 'Eskimos' having many words for snow may reflect a desire to romanticize 'exotic' cultures. This does not, however, rule out the possibility that the categories which we employ may not only reflect our view of the world but may also sometimes exercise subtle influences upon it.
Within a culture, social differentiation is 'over-determined' by a multitude of social codes. We communicate our social identities through the work we do, the way we talk, the clothes we wear, our hairstyles, our eating habits, our domestic environments and possessions, our use of leisure time, our modes of travelling and so on (Fussell 1984). Language use acts as one marker of social identity. In 1954, A S C Ross introduced a distinction between so-called 'U and Non-U' uses of the English language. He observed that members of the British upper class ('U') could be distinguished from other social classes ('Non-U') by their use of words such as those in the following table (Crystal 1987, 39). It is interesting to note that several of these refer to food and eating. Whilst times have changed, similar distinctions still exist in British society.
| U | Non-U |
|---|---|
| luncheon | dinner |
| table-napkin | serviette |
| vegetables | greens |
| jam | preserve |
| pudding | sweet |
| sick | ill |
| lavatory-paper | toilet-paper |
| looking-glass | mirror |
| writing-paper | note-paper |
| wireless | radio |
A controversial distinction regarding British linguistic usage was introduced in the 1960s by the sociologist Basil Bernstein between so-called 'restricted code' and 'elaborated code' (Bernstein 1971). Restricted code was used in informal situations and was characterized by a reliance on situational context, a lack of stylistic variety, an emphasis on the speaker's membership of the group, simple syntax and the frequent use of gestures and tag questions (such as 'Isn't it?'). Elaborated code was used in formal situations and was characterized by less dependence on context, wide stylistic range (including the passive voice), more adjectives, relatively complex syntax and the use of the pronoun 'I'. Bernstein's argument was that middle-class children had access to both of these codes whilst working-class children had access only to restricted codes. Such clear-cut distinctions and correlations with social class are now widely challenged by linguists (Crystal 1987, 40). However, we still routinely use such linguistic cues as a basis for making inferences about people's social backgrounds.
Linguistic codes serve as indicators not only of social class but even of sexual orientation, as in the case of 'Polari', a set of 'camp' terms and expressions which used to be employed by gay men in British theatrical circles. Polari was made better known in the late 1960s by the characters 'Julian and Sandy' in the BBC radio programme, Around the Horne.
| Polari | Standard English | Polari | Standard English |
|---|---|---|---|
| bijou | small | nanti | no, nothing, not |
| bold | outrageous, flamboyant | omi | man |
| bona | good | omi-palone | gay man |
| butch | masculine | palone | girl, young woman |
| drag | clothes, to dress | polari | speak, chat, speech, language |
| eek | face | riah | hair |
| fantabulosa | excellent | trade | casual sex |
| lally | leg | troll | go, walk, wander |
| latty | house, home, accommodation | varda | see, look, a look |
Social differentiation is observable not only from linguistic codes, but from a host of non-verbal codes. A survey of non-verbal codes is not manageable here, and the interested reader should consult some of the classic texts and specialist guides to the literature (e.g. Hall 1959; Hall 1966; Argyle 1969; Birdwhistell 1971; Argyle 1983; Argyle 1988). In the context of the present text a few examples must suffice to illustrate the importance of non-verbal codes.
Social conventions for 'appropriate' dress are explicitly referred to as 'dress codes'. In some institutions, such as in many business organizations and schools, a formal dress code is made explicit as a set of rules (a practice which sometimes leads to subversive challenges). Particular formal occasions - such as weddings, funerals, banquets and so on - involve strong expectations concerning 'appropriate' dress. In other contexts, the wearer has greater choice of what to wear, and their clothes seem to 'say more about them' than about an occasion at which they are present or the institution for which they work. The way that we dress can serve as a marker of social background and subcultural allegiances. This is particularly apparent in youth subcultures. For instance, in Britain in the 1950s 'Teddy boys' or 'Teds' wore drape jackets with moleskin or satin collars, drainpipe trousers, crêpe-soled suede shoes and bootlace ties; the hairstyle was a greased 'D-A', often with sideburns and a quiff. Subsequent British youth subcultures such as mods and rockers, skinheads and hippies, punks and goths have also had distinctive clothes, hairstyles and musical tastes. Two classic studies of postwar British youth subcultures are Stuart Hall and Tony Jefferson's Resistance through Rituals and Dick Hebdige's Subculture: The Meaning of Style (Hall & Jefferson 1976; Hebdige 1979). Marcel Danesi has offered a more recent semiotic account of the social codes of youth subcultures in Canada (Danesi 1994b).
Non-verbal codes which regulate a 'sensory regime' are of particular interest. Within particular cultural contexts there are, for instance, largely inexplicit 'codes of looking' which regulate how people may look at other people (including taboos on certain kinds of looking). Such codes tend to retreat to transparency when the cultural context is one's own. 'Children are instructed to "look at me", not to stare at strangers, and not to look at certain parts of the body... People have to look in order to be polite, but not to look at the wrong people or in the wrong place, e.g. at deformed people' (Argyle 1988, 158). Among the Luo of Kenya one should not look at one's mother-in-law; in Nigeria one should not look at a high-status person; amongst some South American Indians during conversation one should not look at the other person; in Japan one should look at the neck, not the face; and so on (Argyle 1983, 95).
The duration of the gaze is also culturally variable: in 'contact cultures' such as those of the Arabs, Latin Americans and southern Europeans, people look more than the British or white Americans, while black Americans look less (ibid., 158). In contact cultures too little gaze is seen as insincere, dishonest or impolite whilst in non-contact cultures too much gaze ('staring') is seen as threatening, disrespectful and insulting (Argyle 1988, 165; Argyle 1983, 95). Within the bounds of the cultural conventions, people who avoid one's gaze may be seen as nervous, tense, evasive and lacking in confidence whilst people who look a lot may tend to be seen as friendly and self-confident (Argyle 1983, 93). Such codes may sometimes be deliberately violated. In the USA in the 1960s, bigoted white Americans employed a sustained 'hate stare' directed against blacks which was designed to depersonalize the victims (Goffman 1969).
Codes of looking are particularly important in relation to gender differentiation. One woman reported to a male friend: 'One of the things I really envy about men is the right to look'. She pointed out that in public places, 'men could look freely at women, but women could only glance back surreptitiously' (Dyer 1992, 103). Brian Pronger (1990) reports on his investigation of 'the gay gaze':
Just as with codes of looking, there are 'codes of touching' which vary from culture to culture. A study by Barnlund in 1975 depicted the various parts of the body which informants in the USA and Japan reported had been touched by opposite-sex friends, same-sex friends, their mother and their father (Barnlund 1975, cited in Argyle 1988, 217-18). The resulting body-maps show major differences in cultural norms in this regard, with body areas available for touch being far more restricted in Japan than in the United States. An earlier study of American students showed differences in the patterns for males and females in the amount of touching of different areas of the body by the various others (Jourard 1966, cited in Argyle 1983, 37). The students reported that they had been touched most by their mothers and by friends of the opposite sex; their fathers seldom touched more than their hands. Social codes also govern the frequency of physical contact. Jourard also reported the following contacts per hour in different cities: San Juan (Puerto Rico) 180; Paris 110; Gainesville (Florida) 2; London 0 (cited in Argyle 1969, 93). We will allude to the related work of Edward T Hall on the topic of proximity when we discuss 'modes of address'.
Codes are variable not only between different cultures and social groups but also historically. It would be interesting to know, for instance, whether the frequency of touching in various cities around the world which was reported by Jourard in the 1960s is noticeably different now. Saussure, of course, focused on synchronic analysis and saw the development of a language as a series of synchronic states. Similarly, Roman Jakobson and his colleague Yuri Tynyanov saw the history of literature as a hierarchical system in which at any point certain forms and genres were dominant and others were subordinate. When dominant forms became stale, sub-genres took over their functions. Historical change was a matter of shifting relations within the system (Eagleton 1983, 111). Unlike Saussure, the French historian of ideas Michel Foucault focused not on the 'language system' as a homogeneous whole but on specific 'discourses' and 'discursive practices'. Each historical period has its own épistème - a set of relations uniting the various discursive practices which shape its epistemologies. For Foucault, specific discourses such as those of science, law, government and medicine are systems of representational codes for constructing and maintaining particular forms of reality within the ontological domain (or topic) defined as relevant to their concerns. A particular 'discursive formation' is dominant in specific historical and socio-cultural contexts and maintains its own 'regime of truth'. A range of discursive positions is available at any given time, reflecting many determinants (economic, political, sexual etc.). Foucault focused on power relations, noting that within such contexts, the discourses and signifiers of some interpretative communities are privileged and dominant whilst others are marginalized.
The non-employment of dominant codes is a mark of those who are 'outsiders' - a category which includes both foreigners from other cultures and those who are marginalized within a culture. On the other hand people who feel marginalized are often very well-attuned to analogue nuances within dominant social codes - if you want to codify stereotypical straight male behaviour try asking a gay man to describe it.
We learn to read the world in terms of the codes and conventions which are dominant within the specific socio-cultural contexts and roles within which we are socialized. In the process of adopting a 'way of seeing' (to use John Berger's phrase), we also adopt an 'identity'. The most important constancy in our understanding of reality is our sense of who we are as an individual. Our sense of self as a constancy is a social construction which is 'over-determined' by a host of interacting codes within our culture (Berger & Luckmann 1967; Burr 1995). 'Roles, conventions, attitudes, language - to varying degrees these are internalized in order to be repeated, and through the constancies of repetition a consistent locus gradually emerges: the self. Although never fully determined by these internalizations, the self would be entirely undetermined without them' (Nichols 1981, 30). When we first encounter the notion that the self is a social construction we are likely to find it counter-intuitive. We usually take for granted our status as autonomous individuals with unique 'personalities'. We will return later to the notion of our 'positioning' as 'subjects'. For the moment, we will note simply that 'society depends upon the fact that its members grant its founding fictions, myths or codes a taken-for-granted status' (Nichols 1981, 30). Culturally-variable perceptual codes are typically inexplicit, and we are not normally conscious of the roles which they play. To users of the dominant, most widespread codes, meanings generated within such codes tend to appear 'obvious' and 'natural'. Stuart Hall comments:
Learning these codes involves adopting the values, assumptions and 'world-views' which are built into them without normally being aware of their intervention in the construction of reality. The existence of such codes in relation to the interpretation of texts is more obvious when we examine texts which have been produced within and for a different culture, such as advertisements produced indigenously in a different country from our own for the domestic market in that country. Interpreting such texts in the manner intended may require 'cultural competency' relevant to the specific cultural context of that text's production, even where the text is largely visual (Scott 1994a; Scott 1994b; McQuarrie & Mick, 1999).
John Sturrock argues that:
Understanding a sign involves applying the rules of an appropriate code which is familiar to the interpreter. This is a process which Peirce referred to as abduction (a form of inference along with deduction and induction) (see Mick 1986, 199 and Hervey 1982, 19-20). On encountering a signifier we may hypothesise that it is an instance of a familiar rule, and then infer what it signifies from applying that rule (Eco 1976, 131). David Mick offers a useful example. Someone who is confronted by an advertisement showing a woman serving her family three nutritionally balanced meals per day can infer that this woman is a good mother by instantiating the culturally acquired rule that all women who do this are good mothers (Mick 1986, 199). As Mick notes, abduction is particularly powerful if the inference is made about someone or something about whom or which little more is known (such as a new neighbour or a fictional character in an advertisement).
The synchronic perspective of structuralist semioticians tends to give the impression that codes are static. But codes have origins and they do evolve, and studying their evolution is a legitimate semiotic endeavour. Guiraud argues that there is a gradual process of 'codification' whereby systems of implicit interpretation acquire the status of codes (ibid., 41). Codes are dynamic systems which change over time, and are thus historically as well as socio-culturally situated. Codification is a process whereby conventions are established. For instance, Metz shows how in Hollywood cinema the white hat became codified as the signifier of a 'good' cowboy; eventually this convention became over-used and was abandoned (Metz 1974). For useful surveys of changing conventions in cinema see Carey 1974, Carey 1982 and Salt 1983. William Leiss and his colleagues offer an excellent history of the codes of magazine advertising (Leiss et al. 1990, Chapter 9).
In historical perspective, many of the codes of a new medium evolve from those of related existing media (for instance, many televisual techniques owe their origins to their use in film and photography). New conventions also develop to match the technical potential of the medium and the uses to which it is put. Some codes are unique to (or at least characteristic of) a specific medium or to closely-related media (e.g. 'fade to black' in film and television); others are shared by (or similar in) several media (e.g. scene breaks); and some are drawn from cultural practices which are not tied to a medium (e.g. body language) (Monaco 1981, 146ff). Some are more specific to particular genres within a medium. Some are more broadly linked either to the domain of science ('logical codes', suppressing connotation and diversity of interpretation) or to that of the arts ('aesthetic codes', celebrating connotation and diversity of interpretation), though such differences are differences of degree rather than of kind.
Every text is a system of signs organized according to codes and subcodes which reflect certain values, attitudes, beliefs, assumptions and practices. Textual codes do not determine the meanings of texts but dominant codes do tend to constrain them. Social conventions ensure that signs cannot mean whatever an individual wants them to mean. The use of codes helps to guide us towards what Stuart Hall calls 'a preferred reading' and away from what Umberto Eco calls 'aberrant decoding', though media texts do vary in the extent to which they are open to interpretation (Hall 1980, 134).
One of the most fundamental kinds of textual code relates to genre. Traditional definitions of genres tend to be based on the notion that they constitute particular conventions of content (such as themes or settings) and/or form (including structure and style) which are shared by the texts which are regarded as belonging to them. This mode of defining a genre is deeply problematic. For instance, genres overlap and texts often exhibit the conventions of more than one genre. It is seldom hard to find texts which are exceptions to any given definition of a particular genre. Furthermore, the structuralist concern with synchronic analysis ignores the way in which genres are involved in a constant process of change.
An overview of genre taxonomies in various media is beyond the scope of the current text, but it is appropriate here to allude to a few key cross-media genre distinctions. The organization of public libraries suggests that one of the most fundamental contemporary genre distinctions is between fiction and non-fiction - a categorization which highlights the importance of modality judgements. Even such an apparently basic distinction is revealed to be far from straightforward as soon as one tries to apply it to the books on one's own shelves or to an evening's television viewing. Another binary distinction is based on the kinds of language used: poetry and prose - the 'norm' being the latter, as Molière's Monsieur Jourdain famously discovered: 'Good Heavens! For more than forty years I have been speaking prose without knowing it!'. Even here there are grey areas, with literary prose often being regarded as 'poetic'. This is related to the issue of how librarians, critics and academics decide what is 'literature' as opposed to mere 'fiction'. As with the typology of codes in general, no genre taxonomy can be ideologically neutral. Traditional rhetoric distinguishes between four kinds of discourse: exposition, argument, description and narration (Brooks & Warren 1972, 44). These four forms, which relate to primary purposes, are often referred to as different genres (e.g. Fairclough 1995, 88). However, texts frequently involve any combination of these forms and they are perhaps best thought of as 'modes'. More widely described as genres are the four 'modes of emplotment' which Hayden White adopted from Northrop Frye in his study of historiography: romance, tragedy, comedy and satire (White 1973). Useful as such interpretative frameworks can be, however, no taxonomy of textual genres adequately represents the diversity of texts.
Despite such theoretical problems, various interpretative communities (at particular periods in time) do operate on the basis of a negotiated (if somewhat loose and fluid) consensus concerning what they regard as the primary genres relevant to their purposes. Television listings magazines, for instance, invariably allocate genre labels to the films which they broadcast. The accompanying illustration shows the labels used by one such British magazine (What's On TV) over several months in 1993, together with the links with each other which are implied by the nomenclature. A more basic variation on the same theme is found in the labelled sections of video rental shops. Readers may care to check the genre classifications used for films in their own localities.
Whilst there is far more to a genre code than that which may seem to relate to specifically textual features it can still be useful to consider the distinctive properties attributed to a genre by its users. For instance, if we take the case of film, the textual features typically listed by theorists include:
Some film genres tend to be defined primarily by their subject matter (e.g. detective films), some by their setting (e.g. the Western) and others by their narrative form (e.g. the musical). Less easy to place in one of the traditional categories are mood and tone (which are key features of the film noir). In addition to textual features, different genres (in any medium) also involve different purposes, pleasures, audiences, modes of involvement, styles of interpretation and text-reader relationships. A particularly important feature which tends not to figure in traditional accounts and which is often assigned to text-reader relationships rather than to textual features in contemporary accounts is mode of address, which involves inbuilt assumptions about the audience, such as that the 'ideal' viewer is male (the usual categories here are class, age, gender and ethnicity). We will return to this important issue shortly.
In Writing Degree Zero, Roland Barthes sought to demonstrate that the classical textual codes of French writing (from the mid-seventeenth century until the mid-nineteenth century) had been used to suggest that such codes were natural, neutral and transparent conduits for an innocent and objective reflection of reality (i.e. the operation of the codes was masked). Barthes argues that whilst generating the illusion of a 'zero-degree' of style, these codes served the purpose of fabricating reality in accord with the bourgeois view of the world and covertly propagating bourgeois values as self-evident (Barthes 1953; Hawkes 1977, 107-108). In his essay 'Rhetoric of the Image' (1964), Barthes developed this line of argument in relation to the medium of photography arguing that because it appears to record rather than to transform or signify, it serves an ideological function. Photography 'seems to found in nature the signs of culture... masking the constructed meaning under the appearance of the given meaning' (Barthes 1977, 45-6). Many theorists extend this notion to film and television. For instance, Gerard LeBlanc comments:
Textual codes which are 'realistic' are nonetheless conventional. All representations are systems of signs: they signify rather than 'represent', and they do so with primary reference to codes rather than to 'reality'. From the Renaissance until the nineteenth century Western art was dominated by a mimetic or representational purpose which still prevails in popular culture. Such art denies its status as a signifying system, seeking to represent a world which is assumed to exist before, and independently of, the act of representation. Realism involves an instrumental view of the medium as a neutral means of representing reality. The signified is foregrounded at the expense of the signifier. Realist representational practices tend to mask the processes involved in producing texts, as if they were slices of life 'untouched by human hand'. As Catherine Belsey notes, 'realism is plausible not because it reflects the world, but because it is constructed out of what is (discursively) familiar' (Belsey 1980, 47). Ironically, the 'naturalness' of realist texts comes not from their 'reflection of reality' but from their uses of codes which are derived from other texts. The familiarity of particular semiotic practices renders their mediation invisible. Our recognition of the familiar in realist texts repeatedly confirms the 'objectivity' of our habitual ways of seeing.
However, the codes of the various realisms are not always initially familiar. In the context of painting, the art historian Ernst Gombrich has illustrated (for instance, in relation to John Constable) how aesthetic codes which now seem 'almost photographic' to many viewers were regarded at the time of their emergence as strange and radical (Gombrich 1977). Eco adds that early viewers of Impressionist art could not recognize the subjects represented and declared that real life was not like this (Eco 1976, 254; Gombrich 1982, 279). Most people had not previously noticed coloured shadows in nature (Gombrich 1982, 27, 30, 34). In the cinema, 'the gestural codes and the bodily and facial expressions of actors in silent films belonged to conventions which connoted realism when they were made and watched' (Bignell 1997, 193), whereas now such codes stand out as 'unrealistic'. When the pioneering American film-maker D W Griffith initially proposed the use of close-ups, his producers warned him that the audience would be disconcerted since the rest of the actor was missing (Rosenblum & Karen 1979, 37-8). What count as 'realistic' modes of representation are both culturally and historically variable. To most contemporary western audiences the conventions of American cinema seem more 'realistic' than the conventions of modern Indian cinema, for instance, because the latter are so much less familiar. Even within a culture, over historical time particular codes become increasingly less familiar, and as we look back at texts produced centuries ago we are struck by the strangeness of their codes - their maintenance systems having long since been superseded. As Nelson Goodman put it: 'Realism is relative, determined by the system of representation standard for a given culture or person at a given time' (Goodman 1968, 37).
As noted earlier, Peirce referred to signs in (unedited) photographic media as being primarily indexical (rather than iconic) - meaning that the signifiers did not simply 'resemble' their signifieds but were mechanical recordings and reproductions of them (within the limitations of the medium). John Berger also argued in 1968 that photographs are 'automatic' 'records of things seen' and that 'photography has no language of its own' (cited in Tagg 1988, 187). In 'The Photographic Message' (1961), Roland Barthes famously declared that 'the photographic image... is a message without a code' (Barthes 1977, 17). Since this phrase is frequently misunderstood, it may be worth clarifying its context with reference to this essay together with an essay published three years later - 'The Rhetoric of the Image' (ibid., 32-51). Barthes was referring to the 'absolutely analogical, which is to say, continuous' character of the medium (ibid., 20). 'Is it possible', he asks, 'to conceive of an analogical code (as opposed to a digital one)?' (ibid., 32). The relation between the signifier and the thing signified is not arbitrary as in language (ibid., 35). He grants that photography involves both mechanical reduction (flattening, perspective, proportion and colour) and human intervention (choice of subject, framing, composition, optical point-of-view, distance, angle, lighting, focus, speed, exposure, printing and 'trick effects'). However, photography does not involve rule-governed transformation as codes can (ibid., 17, 20-25, 36, 43, 44). 'In the photograph - at least at the level of the literal message - the relationship of signifieds to signifiers is not one of "transformation" but of "recording"'. Alluding to the indexical nature of the medium, he notes that the image is 'captured mechanically' and that this reinforces the myth of its 'objectivity' (ibid., 44). 
Unlike a drawing or a painting, a photograph reproduces 'everything': it 'cannot intervene within the object (except by trick effects)' (ibid., 43). 'In order to move from the reality to the photograph it is in no way necessary to divide up this reality into units and to constitute these units as signs, substantially different from the object they communicate; there is no necessity to set up... a code, between the object and its image' (ibid., 17). In consequence, he noted, photographs cannot be reduced to words.
However, 'every sign supposes a code' and at a level higher than the 'literal' level of denotation, a connotative code can be identified. He noted that at the 'level of production', 'the press photograph is an object that has been worked on, chosen, composed, constructed, treated according to professional or ideological norms' and at the 'level of reception', the photograph 'is not only perceived, received, it is read, connected by the public that consumes it to a traditional stock of signs' (ibid., 19). Reading a photograph involved relating it to a 'rhetoric' (ibid., 18, 19). In addition to the photographic techniques already noted, he refers for instance to the signifying functions of: postures, expressions and gestures; the associations evoked by depicted objects and settings; sequences of photographs, e.g. in magazines (which he refers to as 'syntax'); and relationships with accompanying text (ibid., 21-5). He added that 'thanks to the code of connotation the reading of the photograph is... always historical; it depends on the reader's "knowledge" just as though it were a matter of a real language, intelligible only if one has learned the signs' (ibid., 28).
Clearly, therefore, it would be a misinterpretation of Barthes' declaration that 'the photographic image... is a message without a code' to suggest that he meant that no codes are involved in producing or 'reading' photographs. His main point was that it did not (at least yet) seem possible to reduce the photographic image itself to elementary 'signifying units'. Far from suggesting that photographs are purely denotative, he declared that the 'purely "denotative" status of the photograph... has every chance of being mythical (these are the characteristics that common sense attributes to the photograph)'. At the level of the analogue image itself, whilst the connotative code was implicit and could only be inferred, he was convinced that it was nonetheless 'active' (ibid., 19). Citing Bruner and Piaget, he notes the possibility that 'there is no perception without immediate categorization' (ibid., 28). Reading a photograph also depends closely on the reader's culture, knowledge of the world, and ethical and ideological stances (ibid., 29). Barthes adds that 'the viewer receives at one and the same time the perceptual message and the cultural message' (ibid., 36).
Barthes did not outline the institutional codes involved in photojournalism. Sympathetically pursuing Barthes' insights, the British sociologist Stuart Hall emphasizes the ideological character of news photographs:
Most semioticians emphasize that photography involves visual codes, and that film and television involve both visual and aural codes. John Tagg argues that 'the camera is never neutral. The representations it produces are highly coded' (Tagg 1988, 63-4; cf. 187). Cinematic and televisual codes include: genre; camerawork (shot size, focus, lens movement, camera movement, angle, lens choice, composition); editing (cuts and fades, cutting rate and rhythm); manipulation of time (compression, flashbacks, flashforwards, slow motion); lighting; colour; sound (soundtrack, music); graphics; and narrative style. Christian Metz added authorial style, and distinguished codes from sub-codes, where a sub-code was a particular choice from within a code (e.g. western within genre, or naturalistic or expressionist lighting subcodes within the lighting code). The syntagmatic dimension was a relation of combination between different codes and sub-codes; the paradigmatic dimension was that of the film-maker's choice of particular sub-codes within a code. Since, as Metz noted, 'a film is not "cinema" from one end to another' (cited in Nöth 1990, 468), film and television involve many codes which are not specific to these media.
Whilst some photographic and filmic codes are relatively arbitrary, many of the codes employed in 'realistic' photographic images or films 'reproduce many of the perceptual cues used in encountering the physical world, or correlates of them' (Nichols 1981, 35; see also Messaris 1982 and 1994). This is a key reason for their perceived 'realism'. The depiction of 'reality' even in iconic signs involves variable codes which have to be learned, yet which, with experience, come to be taken-for-granted as transparent and obvious. Eco argues that it is misleading to regard such signs as less 'conventional' than other kinds of signs (Eco 1976, 190ff): even photography and film involve conventional codes. Paul Messaris, however, stresses that the formal conventions of representational visual codes (including paintings and drawings) are not 'arbitrary' (Messaris 1994), and Ernst Gombrich offers a critique of what he sees as the 'extreme conventionalism' of Nelson Goodman's stance (Gombrich 1982, 278-297), stressing that 'the so-called conventions of the visual image [vary] according to the relative ease or difficulty with which they can be learned' (Gombrich 1982, 283) - a notion familiar from the Peircean ranking of signifier-signified relationships in terms of relative conventionality.
Semioticians often refer to 'reading' film or television - a notion which may seem strange since the meaning of filmic images appears not to need decoding at all. When we encounter a shot in which someone is looking offscreen we usually interpret the next shot as what they are looking at. Consider the following example offered by Ralph Rosenblum, a major professional film editor. In an initial shot, 'a man awakens suddenly in the middle of the night, bolts up in bed, stares ahead intensely, and twitches his nose'. If we then cut to 'a room where two people are desperately fighting a billowing blaze, the viewers realize that through clairvoyance, a warning dream, or the smell of smoke, the man in bed has become aware of danger'. Alternatively, if we cut from the first shot to 'a distraught wife defending her decision to commit her husband to a mental institution, they will understand that the man in bed is her husband and that the dramatic tension will surround the couple'. If it's a Hitchcock movie 'the juxtaposition of the man and the wife will immediately raise questions in the viewers' minds about foul play on the part of the woman'. This form of editing may alert us not only to a link between the two consecutive shots but in some cases to a genre. If we cut to an image of clouds drifting before the full moon, we know that we can expect a 'wolf-man' adventure (Rosenblum & Karen 1979, 2).
Such interpretations are not 'self-evident': they are a feature of a filmic editing code. Having internalized such codes at a very young age we then cease to be conscious of their existence. Once we know the code, decoding it is almost automatic and the code retreats to invisibility. This particular convention is known as an eyeline match and it is part of the dominant editing code in film and television narrative which is referred to as 'the continuity system' or as 'invisible editing' (Reisz & Millar 1972; Bordwell et al. 1988, Chapter 16; Bordwell & Thompson 1993, 261ff). Whilst minor elements within the code have been modified over time, most of the main elements are still much the same now as when they were developed many decades ago. This code was originally developed in Hollywood feature films but most narrative films and television dramas now routinely employ it. Editing supports rather than dominates the narrative: the story and the behaviour of its characters are the centre of attention. Whilst nowadays there may be cuts every few seconds, these are intended to be unobtrusive. The technique gives the impression that the edits are always required and are motivated by the events in the 'reality' that the camera is recording rather than the result of a desire to tell a story in a particular way. The 'seamlessness' convinces us of its 'realism', but the code consists of an integrated system of technical conventions. These conventions serve to assist viewers in transforming the two-dimensional screen into a plausible three-dimensional world in which they can become absorbed.
A major cinematic convention is the use of the establishing shot: soon after a cut to a new scene we are given a long shot of it, allowing us to survey the overall space - followed by closer 'cut-in' shots focusing on details of the scene. Re-establishing shots are used when needed, as in the case of the entry of a new character.
Another key convention involved in helping the viewer to make sense of the spatial organization of a scene is the so-called 180° rule. Successive shots are not shown from both sides of the 'axis of action' since this would produce apparent changes of direction on screen. For instance, a character moving right to left across the screen in one shot is not shown moving left to right in the next shot. This helps to establish where the viewer is in relation to the action. In separate shots of speakers in a dialogue, one speaker always looks left whilst the other looks right. Note that even in telephone conversations the characters are oriented as if facing each other.
In point-of-view (POV) shots, the camera is placed (usually briefly) in the spatial position of a character to provide a subjective point-of-view. This is often in the form of alternating shots between two characters - a technique known as shot/reverse shot. Once the 'axis of action' has been established, the alternation of shots with reverse-shots allows the viewer to glance back and forth at the participants in a dialogue (matched shots are used in which the shot-size and framing of the subject is similar). In such sequences, some of these shots are reaction shots. All of the techniques described so far reflect the goal of ensuring that the same characters are always in the same parts of the screen.
Because this code foregrounds the narrative, it employs what are called motivated cuts: changes of view or scene occur only when the action requires it and the viewer expects it. When cuts from one distance and/or angle to another are made, they are normally matches on action: cuts are usually made when the subject is moving, so that viewers are sufficiently distracted by the action to be unaware of the cut. There is a studious avoidance of jump cuts: the so-called 30° rule is that a shot of the same subject as the previous shot must differ in camera angle by at least 30° (otherwise it will feel to the viewer like an apparently pointless shift in position).
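The two spatial conventions just described - the 180° rule and the 30° rule - are unusual among editing codes in being stated quantitatively, so they lend themselves to a small worked illustration. The sketch below is not from the source: it assumes, purely for illustration, that each camera set-up can be summarized as an azimuth in degrees with the axis of action lying along the 0°-180° line, and the function names are invented here.

```python
# Illustrative sketch (assumptions: the axis of action runs along 0-180 degrees,
# and each camera set-up is reduced to a single azimuth in degrees).

def same_side_of_axis(angle_a: float, angle_b: float) -> bool:
    """180-degree rule: both set-ups must stay on one side of the axis of action,
    so that screen direction is not reversed between shots."""
    return (angle_a % 360 < 180) == (angle_b % 360 < 180)

def differs_enough(angle_a: float, angle_b: float, minimum: float = 30.0) -> bool:
    """30-degree rule: successive shots of the same subject should differ in
    camera angle by at least `minimum` degrees, to avoid a jump cut."""
    diff = abs(angle_a - angle_b) % 360
    return min(diff, 360 - diff) >= minimum  # shortest angular distance

def cut_is_conventional(angle_a: float, angle_b: float) -> bool:
    """A cut between two set-ups of the same subject obeys both conventions."""
    return same_side_of_axis(angle_a, angle_b) and differs_enough(angle_a, angle_b)

print(cut_is_conventional(45, 120))  # True: same side, well over 30 degrees apart
print(cut_is_conventional(45, 60))   # False: a 15-degree shift reads as a jump cut
print(cut_is_conventional(45, 300))  # False: crossing the axis reverses screen direction
```

The point of the toy model is only that these particular conventions are mechanically checkable; the interpretive work they support, as the surrounding discussion makes clear, is anything but mechanical.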
This cinematic editing code has become so familiar to us that we no longer consciously notice its conventions until they are broken. Indeed, it seems so 'natural' that some will feel that it closely reflects phenomenal reality and thus find it hard to accept it as a code at all. Do we not mentally 'cut' from one image to another all of the time in everyday visual perception? This case seems strongest when all that is involved is a shift corresponding to a turn of our head or a refocusing of our eyes (Reisz & Millar 1972, 213-16). But of course many cuts would require us to change our viewing position. A common response to this - at least if we limit ourselves to moderate changes of angle or distance and ignore changes of scene - is to say that the editing technique represents a reasonable analogy to the normal mental processes involved in everyday perception. A cut to close-up can thus be seen to reflect as well as direct a purposive shift in attention. Of course, when the shot shifts so radically that it would be a physical impossibility to imitate this in everyday life, then the argument by perceptual analogy breaks down. And cuts involve such radical shifts more often than not; only fleetingly does film editing closely reflect the perceptual experience of 'being there' in person. But of course a gripping narrative will already have led to our 'suspension of disbelief'. We thus routinely and unconsciously grant the film-maker the same 'dramatic licence' with which we are familiar not only from the majority of films which we watch but also from analogous codes employed in other media - such as theatre, the novel or the comic-strip. For an argument questioning the interpretative importance of a cinematic editing code and emphasizing real-life analogies, see the lively and interesting book by Paul Messaris on Visual Literacy (Messaris 1994, 71ff).
However, his main focus of attack is on the stance that the cinematic editing code is totally arbitrary - a position which few would defend. Clearly these techniques were designed where possible to be analogous to familiar codes so that they would quickly become invisible to viewers once they were habituated to them. Messaris argues that context is more important than code; it is likely that where the viewer is in doubt about the meaning of a specific cut, interpretation may be aided by applying knowledge either from other textual codes (such as the logic of the narrative) or from relevant social codes (such as behavioural expectations in analogous situations in everyday life). The interpretation of film draws on knowledge of multiple codes. Adopting a semiotic approach to cinematic editing is not simply to acknowledge the importance of conventions and conventionality but to highlight the process of naturalization involved in the 'editing out' of what 'goes without saying'.
The emphasis given to visual codes by most theorists is perhaps partly due to their use of printed media for their commentaries - media which are inherently biased towards the visual - and may also derive from a Western tendency to privilege the visual over other channels. We need to remind ourselves that it is not only the visual image which is mediated, constructed and codified in the various media - in film, television and radio, this also applies to sound. Film and television are not simply visual media but audio-visual media. Even where the mediated character of the visual is acknowledged, there is a tendency for sound to be regarded as largely unmediated. But codes are involved in the choice and positioning of microphones, the use of particular equipment for recording, editing and reproduction, the use of diegetic sound (ostensibly emanating from the action in the story) versus non-diegetic sound, direct versus post-synchronous (dubbed) recording, simulated sounds (such as the highly conventionalized signifier for a punch) and so on (Stam 2000, 212-223; Altman 1992). In the dominant Hollywood tradition, sound was governed by its own highly conventionalized codes no less than the image was.
Any text uses not one code, but many. Theorists vary in their classification of such codes. In his book S/Z, Roland Barthes itemised five codes employed in literary texts: hermeneutic (narrative turning-points); proairetic (basic narrative actions); cultural (prior social knowledge); semic (connotations attached to persons, places and objects) and symbolic (themes) (Barthes 1974). Yuri Lotman argued that a poem is a 'system of systems' - lexical, syntactical, metrical, morphological, phonological and so on - and that the relations between such systems generated powerful literary effects. Each code sets up expectations which other codes violate (Lotman 1976). The same signifier may play its part in several different codes. The meaning of literary texts may thus be 'overdetermined' by several codes. Just as signs need to be analysed in their relation to other signs, so codes need to be analysed in relation to other codes. Becoming aware of the interplay of such codes requires a potentially recursive process of re-reading. Nor can such readings be confined to the internal structure of a text, since the codes utilized within it extend beyond any specific text - an issue of 'intertextuality' to which we will return.
One simple typology of codes was offered at the start of this section. The typologies of several key theorists are often cited and it may be useful to alert the reader briefly to them here. Pierre Guiraud (1975) proposed three basic kinds of codes: logical, aesthetic and social. Umberto Eco offered ten fundamental codes as instrumental in shaping images: codes of perception, codes of transmission, codes of recognition, tonal codes, iconic codes, iconographic codes, codes of taste and sensibility, rhetorical codes, stylistic codes and codes of the unconscious (Eco 1982, 35-8). The value of any such typologies must clearly be assessed in terms of the interpretive light which they shed on the phenomena which they are used to explore.
Whatever the nature of any embedded ideology, it has been claimed that as a consequence of their internalization of the codes of the medium, 'those born in the age of radio perceive the world differently from those born into the age of television' (Gumpert & Cathcart 1985). Critics have objected to the degree of technological determinism which is sometimes involved in such stances, but this is not to suggest that our use of such tools and techniques is without influence on our habits of mind. If this is so, the subtle phenomenology of new media is worthy of closer attention than is typically accorded to it. Whatever the medium, learning to notice the operation of codes when representations and meanings seem natural, obvious and transparent is clearly not an easy task. Understanding what semioticians have observed about the operation of codes can help us to denaturalize such codes by making their implicit conventions explicit and amenable to analysis. Semiotics offers us some conceptual crowbars with which to deconstruct the codes at work in particular texts and practices, providing that we can find some gaps or fissures which offer us the chance to exert some leverage.