
Hypermedia, non-linearity and the generation of meaning

David Frampton
Director, Educational and Training Technology
Griffith University


Cultural 'linearity' and the critique of instructional practices

An influential philosophical perception of western civilisation in the late modern era has emphasised its cumulative continuity and directionality, expressed most forcibly perhaps through science and technology, but equally through other mechanisms for the transmission of knowledge. There has been a current of thought interpreting this characteristic as the imposition of a fundamentally arbitrary logic on the world and human order. Thus Reiss (1982) writes: 'a discursive order is achieved on the premise that the 'syntactic' order of semiotic systems (particularly language) is coincident both with the logical ordering of 'reason' and with the structural organisation of a world given as exterior to both orders' (p. 31). Similarly, information, in Koch's (1987) view, is 'the mark of a logical ordering imposed on the world'. As arbitrary, this perceived imposition of a linear syntax on objective reality has been seen as oppressive or 'logocentric', and a series of writers has both heralded and encouraged its fragmentation into multiple, heterogeneous and autonomous perspectives.

A perception of linearity has similarly emerged in current thinking about cognition and computer software systems which are held in some way to model cognition, and is now often the mainstay of a critique of instructional practices. It corresponds at many points with the larger canvas sketched out above, but risks running into contradictions which may make it more difficult for us to understand the phenomena and the technologies in question.

We are thus concerned at the same time with a philosophical, social and cultural critique of the implicit claim of the western logos to universal truth and validity, and with a parallel assumption that traditional knowledge organisation and presentation in instruction are inhibiting and discourage possible alternative modes of thinking and conceptual organisation. We shall pursue for a moment the question of the congruence between these two positions.

Baudrillard (1983) suggests that 'our entire linear and accumulative culture would collapse if we could not stockpile the past in plain view' (p. 19): that is, linearity must be visible. Reiss' (1982) interpretation is also of this general order. Linearity is one of the fundamental characteristics of what he sees as the analytico-referential discourse of western civilisation. He instances the thrust of Bacon's new science: 'The linear order of analysis is the structure creative of the very idea of a human-historical 'future' .... it can only constitute itself as the shape of the history to come' (p. 221). The world and reality are simple correlates of the syntactic ordering of discourse, which 'thus provides us with an automatic analysis of the world, a logical analysis: its minimal parts coincide with the minimal parts of the material world' (p. 211).

Moulthrop (1989) proposes that hypertext systems are the practical implementation of resistance to such authoritarian, orthodox and logocentric hierarchies of language, whose modes, he contends, are 'linear and deductive', and he thus provides a bridge between the two conceptual domains. In the latter - the hypermedia and learning literature - there has been a common theme that the linear presentation of text or other media does not allow the underlying structure of its ideas or content to emerge, but rather conceals them beneath the linear syntax (Jonassen, 1986; Churcher, 1989). Thus, lectures may be seen as failing to detail the structural interrelationships that pertain between the ideas presented (Diekhoff and Diekhoff, 1982).

Linearity is also criticised for being at odds with the claimed congruence between hypermedia knowledge representations and human information processing mechanisms. It is claimed that adult learners do not use linear logic in their thinking (Stevens, 1989), while Harris and Cady (1988) argue that students using hypertext follow a cognitive map of the (non-linear) steps taken by a teacher in choosing words. The mapping theme is echoed by Joyce (1988), who finds that readers can, with hypertext, test alternative structures against their own, an insight not available with traditional, linear forms of teaching, which rely heavily on a narrative, story telling tradition (Wilson and Jonassen, 1989). Kearsley (1988) suggests that knowledge must be structured, through hypertextual approaches, to support the mental models which readers may create in using them. It is in this sense that hypermedia can be regarded as a specific aid to thinking (Seyer, 1989), or as an enabling rather than directive environment (Marchionini, 1988). Joyce (1988) also describes a sense of hostility towards linear, authoritarian styles by remedial students using hypertext systems.

Churcher (1989) indicates that hypertext, psychology, education, artificial intelligence and information all share an underlying structure of representation in the basic form of node, link and node, and the case for finding hypermedia technology isomorphic in some senses with human cognitive processes is, as I have indicated above, widely accepted. Human cognition is seen to evince a biological predisposition for intuitive, associative linking between concepts, propositions, ideas, patterns, images, or scripts, ie. whatever may be held to constitute a node in the network or web in particular instances (Shastri and Feldman, 1986; Shuell, 1986). A teaching or instructional design approach which invites and encourages this associating function by activating network nodes may therefore be more efficient and productive (Dede, 1989; Locatis, Letourneau and Banvard, 1989; Harris and Cady, 1988; Stevens, 1989; Hannafin and Rieber, 1989; Jonassen, 1986; Wilson and Jonassen, 1989). Anticipating which associations an enquirer or learner may wish to make is seen to be a valuable strategy, more motivational than a sequential chaining of propositions, facts, images, or whatever, which does not invite, encourage or reward intuitive associations.
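
To make the node/link vocabulary concrete, the sketch below shows one way such an associative structure might be represented and 'activated'. It is a minimal illustration only, assuming a symmetric link structure and a simple spreading-activation rule; the class name, the link strengths and the decay parameter are illustrative inventions of my own, not a reconstruction of Churcher's notation or of any of the cognitive models cited.

```python
# A minimal, illustrative node/link structure: concepts are nodes,
# associations are weighted links, and presenting a cue 'activates'
# neighbouring nodes in proportion to link strength.

from collections import defaultdict

class AssociativeNetwork:
    def __init__(self):
        self.links = defaultdict(dict)   # node -> {neighbour: strength}

    def associate(self, a, b, strength=1.0):
        # Links are symmetric here; strength stands in for how well
        # established the association is (its 'magnitude').
        self.links[a][b] = strength
        self.links[b][a] = strength

    def activate(self, cue, decay=0.5, threshold=0.1):
        # Spread activation outward from the cue, weakening with distance.
        activation = {cue: 1.0}
        frontier = [cue]
        while frontier:
            node = frontier.pop()
            for neighbour, strength in self.links[node].items():
                spread = activation[node] * strength * decay
                if spread > activation.get(neighbour, 0.0) and spread > threshold:
                    activation[neighbour] = spread
                    frontier.append(neighbour)
        return activation

net = AssociativeNetwork()
net.associate("hypertext", "node", 0.9)
net.associate("node", "link", 0.8)
net.associate("hypertext", "non-linearity", 0.6)
print(net.activate("hypertext"))   # the cue plus its associatively reachable neighbours
```

Even in this toy form, it is worth noticing that every traversal of a link is an ordered event in time, a point which becomes important in the argument below.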

The linear logic of traditionally encoded information, the case continues, is at odds with this preferred form of information processing. Processing capacity is therefore expended in translating where it could be used, for example, in the elaboration of content - that is, the knowledge domain - and its efficient transfer into long term memory.

I will briefly summarise these views. Linearity is generally taken to refer to the continuous flow of a communication. It is used to refer to the vectorial parallelism of rationality and knowledge organisation in western culture, and specifically to the continuous, sequential organisation of information in books, films, video productions, sound recordings and so forth. The characteristic of linearity, according to the arguments cited, inhibits awareness of the synchronic or structural composition of the concepts, facts, images or other instantiating elements of the knowledge domain concerned. Linearity also clearly implies, in the readings I have cited, non-interruptibility, in that the message invites or implies continuity, or places barriers in the way of alternative structurings. At the level of the critique of logocentricity in western culture, the linear logic is interruptible, or controvertible, only at the risk of incoherence.

The linearities of non-linearity

My concern here is with the validity and applicability of the key terms we use in research into relationships among hypermedia, multimedia and learning. I will argue that it is misleading to nominate linearity in opposition to associative structures in human information processing and hypermedia, because non-linearity accounts for only one dimension of a multi-dimensional phenomenon. Although not a crucial aspect of my argument, it is worth observing that the vector concept would be more appropriate if one wished to continue to use some such term. It is apparent in the form of argument used by Baudrillard (1983), for example, and supported by Moulthrop (1989), that the phenomenon under consideration has both direction and magnitude. Without the magnitude, in fact, one may reasonably infer that the direction would have changed more easily. In the critique by Diekhoff and Diekhoff (1982) of the lecture as the prime vehicle of instructional delivery, it is again the cumulative magnitude of the tradition which helps to determine its lack of openness to random alternative structurings of the knowledge domain. The use of 'vector' rather than 'line' would tend to have greater explanatory force. Instead of the common perspective of a linear, hierarchical message in opposition to an associative, non-hierarchical web of intuitively accessed nodes, we should, rather, see certain divergent 'vectors' (ie. of individual learners), the magnitude of which is so slight that they merge substantially with the dominant vector.

However, whether we continue to see the characteristic of traditional messages under consideration as linear, or prefer to see it as vectorial, as I have suggested, it can be neutralised only when time is no longer an important factor, ie. when it is considered in some synchronic dimension, or where the constituents are reduced to what we may call the atomic level. The idea of a 'knowledge atom' appears in cognitive science and a possible version is discussed by Smolensky (1986). He presents the cognitive system as 'an engine for activating coherent assemblies of atoms and drawing inferences that are consistent with the knowledge represented by the activated atoms'. Even here there is a residual vectorial quality to the atom, but only through deference to the overall mathematical modelling involved ('each atom is simply a vector of +, - and 0 values').
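
Smolensky's formal apparatus cannot be reproduced here, but the flavour of an atom as 'a vector of +, - and 0 values' can be sketched informally; the feature names, the agreement score and the reading of a 'coherent assembly' in the sketch below are illustrative assumptions of my own rather than harmony theory itself.

```python
# Illustrative only: a 'knowledge atom' as a vector of +1, -1 and 0 values
# over named features (+1 = present, -1 = absent, 0 = indifferent).

FEATURES = ("screen", "link", "image", "sound")

def consistency(atom, state):
    """Agreements minus disagreements between an atom and an observed
    feature state, ignoring features the atom is indifferent to."""
    return sum((1 if a == s else -1) for a, s in zip(atom, state) if a != 0)

# Two hypothetical atoms and one observed state, all in FEATURES order.
atom_text_node  = (+1, +1,  0, -1)
atom_media_node = (+1,  0, +1, +1)
state           = (+1, +1, -1, -1)

# A 'coherent assembly' here is simply the set of atoms whose vectors
# agree with the current state more than they conflict with it.
assembly = [dict(zip(FEATURES, atom))
            for atom in (atom_text_node, atom_media_node)
            if consistency(atom, state) > 0]
print(assembly)   # only the first atom is consistent with this state
```

Even in this toy encoding the residual vectorial quality is visible: the tuple of values is itself an ordered structure, with a direction and magnitude of a kind.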

The point I am making is that the minimalist nature of knowledge atoms, in terms of direction and magnitude, approximates the neutralisation of linearity referred to above and implies the possibility of alternative re-assemblies. If that proposition seems somewhat hypothetical or unlikely, it serves to underline the problematical nature of the supposedly 'non-linear' in the hypertext and hypermedia literature. Hypertext nodes are comparatively large and in fact predominantly linear, as they are in cognitive science models. At their smallest, the 'atomic propositions' of Graesser and Clark (1985) retain obviously linear characteristics and are quite different conceptualisations from Smolensky's. So do the minimal elements in Anderson's (1983) tri-code theory of knowledge representation, which assumes three codes or 'representational types': temporal strings, spatial images and abstract propositions. The node may be presented as a 'chunk' in a knowledge base (Locatis, 1989; Kearsley, 1988), and is described in hypertext terms by Seyer (1989) as 'a small collection of data, usually organised around a single topic'. Shastri and Feldman (1986) describe how 'chunking' may involve strengthening the links between a cluster of committed nodes and a free node (p. 197). According to Jonassen (1986), nodes are instances of propositional structures, or may be 'single screens' in computing terms (Locatis, 1989).

In any event, nodes, chunks, clusters and screens have very obviously temporal and linear dimensions, and also relative strength or magnitude (Shuell, 1986; Shastri and Feldman, 1986). In the course of the user's interactive interventions, held to subvert the traditionally imposed linearity of content presentation in hypermedia environments, we should note, then, that each node maps itself perfectly in conceptual terms onto the linearity with which it is purportedly in opposition. It is consequently arguable that the 'freedom' given to the learner or user of hypermedia is precisely that of re-mapping her or his idiosyncratically patterned linear nodes back onto the preferred vector.

The size of elements in knowledge domains, whether at the nodal level or at the level of the cumulative history of the domain, does not then significantly affect their intrinsic linearity, so it is misleading to suppose that the small size of a node in a hypermedia document, or an unusual ordering of such nodes, somehow escapes the linearity of the historical, cultural or informational document of which they are fragments. The problem has been that in seeking apparently straightforward terms in which to spell out the difference between conventional knowledge representation and the machine supported associative referencing offered by hypertext (Conklin, 1987), researchers and writers have to some extent cut off the domain of enquiry from a broader picture in which the terms become contradictory. If applications of hypermedia, and the design of multimedia applications around hypermedia strategies, are to be conceived according to these misconceptions, even indirectly, we clearly have a strong interest in defining our terms robustly in order to provide a firm basis for future research and development.

Another view: Series, structures and differences

In the second part of this paper, I want to sketch out a tentative line of enquiry which may help in producing some new orientations for research and in exploring the information/meaning dimension. The common ground for any linking of cognitive representations or of external symbolic representations involves some notion of series, and this is the case whether the phenomena we are dealing with are those which have been seen, in the work cited, as linear or hierarchical, or those seen as associatively networked. Supposing that we find certain associatively linked elements (which could be images) not conforming to a linear logic, they remain even so in a serial relation through the simple fact of following one upon the other. In other words, linearity is a continuum: seriality is a concatenation or adjacency. The elements of a series may be heterogeneous (presumably this is why Easterbrook (1990) cautions against 'conceptual fragmentation' in hypertext environments, or why Stevens (1989) has concerns about 'meaningless understanding'); series, whether complete or incomplete, may be heterogeneous to each other. This formal delineation of heterogeneous elements within a series, and of heterogeneity between series, corresponds with a theoretical method adopted by Deleuze (1968, 1969, 1973).

Space here does not allow for a full account of Deleuze's quite complex theoretical strategies for undermining conventional representation (since that is his central project, which has a kinship with the subversion of linearity as it has been outlined earlier in this paper). However, a brief outline of some key elements will serve for present purposes. Deleuze is considered a post-structuralist, that is, someone reacting in particular ways to the structuralist intellectual movement of the fifties and sixties. Structures, in the broad stream of this movement, tend to be defined in terms of differential relations between elements in a synchronic dimension, with some resulting problems in defining structural changes in a diachronic dimension, whether it is a question of how structures originate or how they change.

It will be useful at this point to note certain analogies between the dialectic of post-structuralism and more recent discussions of knowledge representation, such as that of Stillings et al. (1987) in relation to declarative and procedural knowledge. The former is, in their view, 'simply a way of referring to the static, fact-like nature of representations: they are inert structures that are operated on by processes' (p. 18). Or we might see the matter from learning perspectives, as in the case of Wittrock's (1974) concern with 'a process of generating semantic and idiosyncratic associations between stimuli and stored information'. The common element in all these cases is a concern with structures, their components, their static characteristics (or states), the dynamics of change between states, and the meanings which emerge in the course of such interactions.

Deleuze conceptualises virtual structures in a domain of problems or ideas which is 'sub-representative', ie. prior to representation, or, as one might then suppose, transcendental in terms of Kant's schemata or categories of understanding (Bogue, 1989, p. 59). However, Deleuze rejects Kant's notion of schemata which simply define the limits of possible experience. His domain of virtual structures relates to ideas in their sub-representative empiricity (ie. experienced prior to representation) rather than to Kantian concepts of understanding (Deleuze, 1968, p. 40; Bogue, 1989, p. 58). In other words, although the problem space is virtual, its virtuality is not abstract but part of the psychologically experienced world (Deleuze, 1968, p. 89). These virtual structures are seen to exist in a topological space. Their elements, or nodal points, can be thought of as positions or places in structures and have primacy in relation to the things or real beings who come to occupy them in the domain of actuality (1973, p. 305). The elements of structure (or occupiable positions) form series of singular points or singularities which have no meaning in themselves (like Smolensky's knowledge atoms), but only in the actualisations of problems or ideas in events, relations or observable physical structures (p. 306).

The strategies Deleuze develops provide a tool for exploring the dynamism which allows structural states to be changed or transformed. In addressing closely related issues in educational psychology, Ausubel (1968) uses notions such as 'integrative reconciliation or synthesis' (p. 53), that of relating new material to aspects of existing cognitive structure and of recoding material in idiosyncratic terms (p. 56), or that of assimilation (pp. 89-93). An example from cognitive science is afforded by Graesser, Millis and Long (1986), who describe the dynamics of the generation of bridging inferences in the comprehension of text. Among inferences of other kinds, bridging inferences are crucial to 'the dynamic growth of conceptual graph structures' (p. 146) and are seen as the 'guts of the meaning structure' (p. 147). The dynamics in this case involve the wholesale 'pruning' of nodes and links related to a part of the cognitively represented structure of propositions which is no longer relevant, and the reassembly of new and existing node/link chains.
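
The 'pruning' and reassembly which Graesser, Millis and Long describe can be pictured schematically. The sketch below is a deliberately simplified illustration, not their conceptual graph formalism: the relevance rule (reachability from a current focus), the example propositions and the final 'bridging' link are all assumptions introduced for the purpose of illustration.

```python
# A schematic sketch of pruning a conceptual graph: nodes no longer
# reachable from the currently relevant proposition are dropped, and a
# new 'bridging' link is then added among the surviving nodes.

def reachable(graph, start):
    """Return all nodes reachable from `start` by following links in `graph`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, ()))
    return seen

def prune(graph, focus):
    """Keep only nodes still connected to the current focus of comprehension."""
    keep = reachable(graph, focus)
    return {n: [m for m in targets if m in keep]
            for n, targets in graph.items() if n in keep}

graph = {
    "storm": ["flight delayed"],
    "flight delayed": ["missed meeting"],
    "missed meeting": [],
    "breakfast": ["coffee"],   # no longer relevant to the current focus
    "coffee": [],
}
graph = prune(graph, "storm")
graph["missed meeting"] = ["rescheduled"]   # a bridging inference reassembles the chain
graph["rescheduled"] = []
print(graph)
```

The point of the sketch is simply that structural change here consists of deleting and re-linking node/link chains; it is this kind of dynamism which the following paragraphs seek to theorise in more general terms.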

Such formulations can be useful in accounting for the dynamics of structural change. Alongside their empirical grounding, however, we must be aware of the metaphoricity they share with alternative exploratory approaches. Deleuze's approach is to propose that all structure is 'multi-serial' (1973, p. 319), but that, as a minimum requirement, there must be two basic series of terms or singularities which are in themselves sub-representative 'differences' or 'intensities', prior to actualisation (1969, p. 66). It is in this sense that differential relations between terms in a series constitute their heterogeneity, and that relations between series are those of differences between differences of a different order. An example would be that a temperature is not composed of temperatures (nor a speed of speeds): each temperature is already a difference in itself or an intensity (1968, p. 306); in actualised symbolic representations, it might be a question of series of signifiers and series of signifieds.

It is of interest here that, reasoning from a semiotic standpoint, Koch (1987) relates 'idea' and 'information' at a level akin to that on which Deleuze's virtual, differential structures are active. He observes: 'The common thread in the way "information" is used in varying contexts is that it refers to a kind of relationship .... of "difference"'; like Deleuze, he sees this difference as organising matter-energy relationships. The crucial point, however, is that information is not itself 'meaning', but, in the same way as Deleuzean series and structures, 'that which makes meaning possible'. Deleuze perceives the individual human subject as always already situated within meaning when making judgements about meaning and the relations which constitute it (1969, p. 41; Bogue, 1989), and he therefore contends that there is always a surplus of meaning being produced rather than a deficiency. As to the mode of its production, he presents meaning as a surface effect of the distribution of singular points in the heterogeneous series which constitute virtual structures in the idea/problem domain. If structures are presented as multiplicities of differential relations in the way that has been described, the agent of change must be a differentiating factor which constantly affects the distribution of singular points in the system by differentiating between differences. This factor is seen variously by Deleuze as a paradoxical instance, an aleatory point or an 'empty slot'. The latter notion, particularly, indicates that, as in a game where series of event points of different orders constitute the virtual structure of a potential outcome, it is the migration of the paradoxical empty slot - ie. the next 'move' - which generates meaning.

This paradoxical instance circulates in aleatory fashion through all series and sets them in communication (Deleuze, 1969, pp. 67-68). Deleuze's formulation can therefore be seen as implying a 'chaos' theory of meaning, in contrast with theoretical formulations which see a change as affecting only a limited segment of cognitive structure. In language, for example, meaning can be denoted only through a process of infinite regression, that is: 'Meaning is always presupposed as soon as I begin to speak; I could not begin without such a presupposition. In other words, I never say the meaning of what I am saying. On the other hand, I can always take the meaning of what I say as the object of another proposition whose meaning, again, I do not say' (Deleuze, 1969, p. 41). Meaning is thus an elusive, paradoxical and contradictory entity hovering at the limit of words and objects (1968, p. 203), and emerging unpredictably from the transformations of virtual structures. Hence, it can readily be understood as negotiable.

Conclusion

Current concerns in the study of the relationships among hypermedia, learning and knowledge representation involve similar notions of topological information space to those which Deleuze proposes. They also parallel Deleuze's concern with the generation of meaning rather than with the simple recognition of supposed 'fixed' meanings. Jones, Li and Merrill (1990) suggest, in relation to knowledge representation, that such philosophical considerations can safely be ignored in instruction, since 'the developer of instruction explicitly desires that the learner adopt the meaning intended by the developer'. While this is a necessary constraint in certain areas of training and instruction, we need to recognise the limitations of such a view. A notion of 'negotiated' meaning such as Marton and Ramsden (1988) suggest, even where it is a question of students' scientific concepts, seems at odds with it. Again, it would seem only tenuously relatable to the mapping of a differential semantic domain for a concept such as 'freedom fighter', and one might ponder how a fixed 'meaning' should be arrived at through the coordinates of recognition for, say, the 1991 Gulf War.

I argued, in the first part of this paper, that to contrast 'linearity' with 'non-linearity' in relation to human information processing, hypermedia constructs and learning, is a strategy which has outlived its usefulness and occasions contradictions when extended beyond a limited descriptive application. A particular weakness is that it does not allow the domain of meaning to be addressed with any real force. Yet this is crucial. If new technologies somehow give access to new contents, and thus to new meanings (Salomon, 1979), we must have flexible tools for exploring them. I suggest that conceptual strategies of the kind developed by Deleuze can offer a critical exploratory tool in the hypermedia development environment.

References

Anderson, J. R. (1983). The Architecture of Cognition. Cambridge, Mass: Harvard University Press.

Ausubel, D. (1968). Educational psychology: A cognitive view. New York: Holt, Rinehart and Winston.

Baudrillard, J. (1983). Simulations. Trans. P. Foss, P. Patton and P. Beitchman. New York: Semiotext(e).

Bogue, R. (1989). Deleuze and Guattari. London and New York: Routledge.

Churcher, P. R. (1989). A common notation for knowledge representation, cognitive models, learning and hypertext. Hypermedia, 1(3), 235-254.

Conklin, J. (1987). Hypertext: An introduction and survey. IEEE Computer, 20(9), 17-41.

Dede, C. (1989). The evolution of distance learning: Technology mediated interactive learning. ERIC Document Service, ED325099.

Deleuze, G. (1968). Différence et répétition. Paris: Presses Universitaires de France.

Deleuze, G. (1969). Logique du sens. Paris: Éditions de Minuit.

Deleuze, G. (1973). À quoi reconnaît-on le structuralisme? In F. Châtelet (ed.), Histoire de la philosophie. Le XXe siècle. Paris: Hachette.

Diekhoff, S. M. and Diekhoff, R. B. (1982). Cognitive maps as a tool in communicating structural knowledge. Educational Technology, 22(1-6), 28-30.

Easterbrook, S. (1990). An introduction to hypermedia. ITs News, 22, 18-24.

Graesser, A. C. and Clark, L. F. (1985). Structures and procedures of implicit knowledge. Norwood, NJ: Ablex Publishing Corporation.

Graesser, A. C., Millis, K. K. and Long, D. L. (1986). The construction of knowledge structures and inferences during text comprehension. In N. E. Sharkey (ed.), Advances in Cognitive Science, 1. Chichester: Ellis Horwood.

Hannafin, M. J. and Rieber, L. P. (1989). Psychological foundations of instructional design for emerging computer based instructional technologies. Educational Technology Research and Development, 37(2), 91-114.

Harris, M. and Cady, M. (1988). The dynamic process of creating hypertext literature. Educational Technology, 28(11), 33-39.

Jones, M., Li, Z. and Merrill, M. (1990). Domain knowledge representation for knowledge analysis. Educational Technology, 30(10), 7-32.

Joyce, M. (1988). Siren shapes: exploratory and constructive hypertexts. Academic Computing, November, 10-14, 37-42.

Kearsley, G. (1988). Authoring considerations for hypertext. Educational Technology, 28(11), 21-24.

Koch, C. (1987). The being of idea: The relationship of the physical and the non-physical in the concept of the formal sign. Semiotica, 66(4), 345-357.

Locatis, C., Letourneau, C. and Banvard, R. (1989). Hypermedia and instruction. Educational Technology Research and Development, 37(4), 65-77.

Locatis, C. (1989). Information retrieval systems and learning. Performance Improvement Quarterly, 2(3), 4-15.

Jonassen, D. H. (1986). Hypertext principles for text and courseware design. Educational Psychologist, 21(4), 269-291.

Marchionini, G. (1988). Hypermedia and learning: Freedom and chaos. Educational Technology, 28(11), 8-12.

Marton, F. and Ramsden, P. (1988). What does it take to improve learning? In P. Ramsden (ed.), Improving learning: New perspectives. London: Kogan Page.

Moulthrop, S. (1989). Hypertext and "the Hyperreal". Hypertext '89 Proceedings, 259-267. New York: ACM.

Reiss, T. J. (1982). The discourse of modernism. Ithaca: Cornell University Press.

Salomon, G. (1979). Interaction of media, cognition and learning. San Francisco: Jossey-Bass.

Seyer, P. (1989). Performance improvement with hypertext. Performance and Instruction, 28(2), 22-28.

Shastri, L. and Feldman, J. A. (1986). Neural nets, routines, and semantic networks. In N. E. Sharkey (ed.), Advances in Cognitive Science, 1, 158-203. Chichester: Ellis Horwood.

Shuell, T. J. (1986). Cognitive conceptions of learning. Review of Educational Research, 56(4), 411-436.

Smolensky, P. (1986). Formal modelling of sub-symbolic processes: An introduction to harmony theory. In N. E. Sharkey (ed.), Advances in Cognitive Science, 1, 204-235. Chichester: Ellis Horwood.

Stevens, G. H. (1989). Applying hypermedia for performance improvement. Performance and Instruction, 28(6), 42-50.

Stillings, N. A., Feinstein, M. H., Garfield, J. L., Rissland, E. L., Rosenbaum, D. A., Weisler, S. E. and Baker-Ward, L. (1987). Cognitive Science: An Introduction. Cambridge, Mass: MIT Press.

Wilson, B. C. and Jonassen, D. H. (1989). Hypertext and instructional design: Some preliminary guidelines. Performance Improvement Quarterly, 2(3), 34-49.

Author: David Frampton, Director, Educational and Training Technology, Division of Information Services, Griffith University, Brisbane.

Please cite as: Frampton, D. (1992). Hypermedia, non-linearity and the generation of meaning. In Promaco Conventions (Ed.), Proceedings of the International Interactive Multimedia Symposium, 209-218. Perth, Western Australia, 27-31 January. Promaco Conventions. http://www.aset.org.au/confs/iims/1992/frampton.html

