One significant characteristic of hypermedia is that it is multi-disciplinary in nature. As such, it is not a distinct form of information processing, but rather a series of extensions to existing activities. These include activities in the related technologies of object oriented languages and databases, multimedia, knowledge based systems, information retrieval, computer supported cooperative work, human-computer interaction, and document processing/publishing. Hypermedia can thus play an important role as catalyst and unifier for the many exciting ideas that are emerging in these areas. This paper provides a road map to current literature on hypermedia research and practice, surveying the key ideas, issues and trends in a selection of topics.
In a simplistic sense, hypermedia can be described as "multimedia enriched with a notion of linking". [Woodhead 90] calls hypermedia a subset of general interactive multimedia. According to this view, not all multimedia approaches can claim the 'hypermedia' label. Only those that support referential structures can be so termed.
One significant characteristic of hypermedia is that it is multi-disciplinary in nature. As such, it is not a distinct form of information processing, but rather a series of extensions to existing activities. These include activities in the related technologies of object oriented languages and databases, multimedia, knowledge based systems, information retrieval, computer supported cooperative work, human-computer interaction, and document processing/publishing. Hypermedia can thus play an important role as catalyst and unifier for the many exciting ideas that are emerging in these areas.
The purpose of this paper is to provide readers with a road map to current developments in hypermedia research and practice. We attempt to do this by surveying the key ideas, issues and trends in a selection of topics in hypermedia.
For this paper, we select a number of topics for examination and discussion. These include data modelling in hypermedia systems, the relationship between hypermedia and databases, hyperbases, and hypermedia standardisation. Due to length considerations, a number of topics have been excluded. Their omission in no way implies that they are any less significant. For instance, the user interface/cognitive aspects of hypermedia form a huge area of investigation, and much has been reported. There is also growing interest in coupling hypermedia and knowledge base technologies.
Section 4 elaborates on data modelling issues. The rationale for the trend towards greater emphasis on data modelling in hypermedia is explained, and recent attempts at developing formal data models (for hypermedia) are discussed.
Section 5 considers the close relationship between hypermedia and databases. The potential for each technology to leverage off well established techniques in the other is examined. For example, future generation hypermedia systems can benefit from well established database techniques for handling large amounts of data. Conversely, database management systems can add another dimension to information retrieval by incorporating link-following 'navigation' techniques.
'Hyperbases' is one particular spin-off of the hypermedia-database symbiosis. The focus presently is very much on building general purpose hypermedia back end engines or servers. Section 6 takes a closer look into this topic, and surveys some of the research prototypes that have been reported.
Section 7 details the current status of standardisation efforts. The motivation for standardisation is explained. Various hypertext reference models that have been proposed are examined.
Some concluding remarks are offered in Section 8.
In the hypermedia that we know of today, authors provide links between related pieces of information in a manner that allows readers to choose which links they wish to explore. Links may be used to direct readers to additional information in the same way that footnotes and glossaries do in hard copy documents, or they may provide more direct links between related pieces of information. Mechanisms (eg. via mouse driven graphical user interfaces) are provided to allow readers to follow links. Other features may be provided for orientation and navigation support, for example bookmarks, annotation, graphical overviews, backtrack, etc...
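The basic mechanism described above - authored links between chunks of information, reader-driven link following, and backtrack for orientation support - can be sketched in a few lines. The following is purely an illustration; the class and method names are ours, and do not correspond to any system surveyed in this paper.

```python
# Minimal sketch of the core hypertext mechanism: nodes, authored
# links, reader-chosen link following, and a backtrack stack.

class Node:
    def __init__(self, name, content):
        self.name = name
        self.content = content
        self.links = {}          # anchor text -> destination Node

    def link(self, anchor, dest):
        """The author provides a link from this node to another."""
        self.links[anchor] = dest

class Browser:
    def __init__(self, start):
        self.current = start
        self.history = []        # backtrack stack for orientation support

    def follow(self, anchor):
        """Follow a link the reader has chosen, remembering where we were."""
        self.history.append(self.current)
        self.current = self.current.links[anchor]
        return self.current.content

    def backtrack(self):
        """Return to the previously visited node."""
        self.current = self.history.pop()
        return self.current.content

home = Node("home", "Welcome")
gloss = Node("glossary", "hypertext: non-linear text")
home.link("glossary", gloss)     # like a footnote or glossary reference

b = Browser(home)
b.follow("glossary")             # reader chooses to explore the link
b.backtrack()                    # and returns to where they were
```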
Whilst the above is the base understanding, a close inspection of the literature reveals a startling variety in the interpretation of goals for the hypermedia approach. These goals range from simply supporting a single person's activities in 'non-linear' reading and writing, to organising and controlling collections of interactive applications, to managing and guiding almost all aspects of the computer interactions of distributed groups of people.
We have found it most useful to adopt the following position. The hypermedia approach is not a specific form of information processing. It is rather a series of extensions to existing activities in the information science domain, such as object oriented languages and databases, multimedia, knowledge based systems, information retrieval, computer supported cooperative work, and document processing/publishing. Researchers have pushed at the frontiers of these individual fields of study. These fields have now reached a degree of maturity where it is appropriate to consider the convergence of their technologies. The multi-disciplinary nature of hypermedia places it in an ideal position to act as the focal point for this convergence.
[Woodhead 90] puts it another way - "... hypermedia approach is not a specific technology, limited to a narrow informational domain... It goes beyond the means to structure groups of documents seamlessly. Rather, it is potentially a unifying paradigm for the present diversity where each task or material requires a specialised and independent tool. The seamless hypermedia environment is the idealised applications manager".
Early systems started off with a simple, flat 'nodes and links' data model. However, the simplicity of such a model limited its usefulness to small volumes of data. More powerful mechanisms were required to support user navigation through large information spaces.
The next step up in the level of sophistication is to attach attribute/value pairs to the nodes and links. These can be static or dynamic attributes. (The latter is preferable from the point of view of easy evolution of the hypermedia information base, but is more difficult to implement.) This was proposed by Campbell and Goodman in the paper in which they described HAM (Hypertext Abstract Machine) [Campbell & Goodman 88]. The data model part of HAM introduces 5 objects, namely graphs, contexts, nodes, links and attributes. These objects are organised into a hierarchy. For example, the graph is the highest level HAM object; it contains one or more contexts; nodes and links are defined within a context; attributes can be attached to all the other objects.
The ability to attach attribute/value pairs to HAM objects provides a very flexible mechanism to filter and organise these objects. This gives structure to what would have been a flat collection of objects, and allows the representation of multi-level hierarchies and composites.
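The HAM object hierarchy and its attribute-based filtering can be sketched as follows. This is our own hedged illustration of the concepts described in [Campbell & Goodman 88]; the class and method names are not HAM's actual interface.

```python
# Illustrative sketch of the HAM object hierarchy: a graph contains
# contexts; nodes and links live within a context; attribute/value
# pairs may be attached to any object and used to filter collections.

class HamObject:
    def __init__(self, name):
        self.name = name
        self.attributes = {}     # attribute/value pairs

class Node(HamObject):
    pass

class Link(HamObject):
    def __init__(self, name, source, dest):
        super().__init__(name)
        self.source, self.dest = source, dest

class Context(HamObject):
    def __init__(self, name):
        super().__init__(name)
        self.nodes, self.links = [], []

    def filter_nodes(self, attr, value):
        """Select nodes by attribute value - the mechanism that gives
        structure to an otherwise flat collection of objects."""
        return [n for n in self.nodes if n.attributes.get(attr) == value]

class Graph(HamObject):          # the highest level HAM object
    def __init__(self, name):
        super().__init__(name)
        self.contexts = []

g = Graph("manual")
c = Context("draft")
g.contexts.append(c)
n1, n2 = Node("intro"), Node("details")
n1.attributes["level"] = "overview"   # attributes impose a hierarchy
n2.attributes["level"] = "detail"
c.nodes += [n1, n2]
c.links.append(Link("expands", n1, n2))
overview = c.filter_nodes("level", "overview")   # selects just n1
```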
Ideas from semantic data modelling (SDM) and object oriented technology have spilled over into the hypermedia world as well, and led to the development of more sophisticated data models which support concepts of classification, aggregation and generalisation.
There is a clear preference for an object oriented approach (to hypermedia data modelling) presently. This is exemplified in the work of [Schuett & Streitz 90], [Klas & Neuhold 90] and [Lange 90]. These data models also promote the idea of links as first class objects. This is a shift from the traditional treatment of links as being subservient to nodes.
Others have embraced the ideas of SDM even more closely. For example, [Rossiter et al. 90] uses the TAXIS model, whilst [Schnase et al. 91] uses Saberel. Those intending to pursue this approach will find a lot of synergy with so-called structural object oriented systems. These are object oriented database systems which have their roots in semantic data modelling (eg. DAMOKLES, Cactis and SOOM).
The move to greater sophistication in the data models is accompanied by attempts at the use of formal techniques. [Lange 90] uses the Vienna Development Method (VDM) for the specification of an abstract model of hypertext. [Stotts & Furuta 89] shows the potential of Petri Nets as a formal basis for specifying both structure and browsing semantics of hypertext. [Tompa 89] and [Garg 88] have their foundations in set theory. Z is used for the formal description of the Dexter Model [Halasz & Schwartz 90].
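The Petri-net view of browsing semantics can be sketched briefly: in the spirit of [Stotts & Furuta 89], places model displayed content, transitions model link selections, and firing a transition moves the marking (ie. changes what the reader sees). The encoding below is our own minimal illustration, not the authors' formalism.

```python
# A transition is a pair (input places, output places). The marking
# maps each place to its token count; a token on a place means that
# piece of content is currently displayed.

def enabled(marking, transition):
    """A transition may fire when every input place holds a token."""
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, transition):
    """Consume tokens from input places, produce tokens on outputs."""
    inputs, outputs = transition
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

# Two 'pages'; selecting the link replaces page A on screen with page B.
follow_link = (["pageA"], ["pageB"])
marking = {"pageA": 1}                 # pageA is currently displayed
assert enabled(marking, follow_link)   # the link is selectable
marking = fire(marking, follow_link)   # now pageB is displayed instead
```

Because both the document structure (the net) and the browsing behaviour (the firing rule) live in one formalism, properties such as reachability of a given page can, in principle, be analysed with standard Petri net techniques.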
There is still a lack of consensus on formal data models. The way ahead is to devote greater efforts towards a joint push on two fronts, namely data modelling and standardisation, in view of the close relationship between these two areas. Opportunities also exist for alignment with the emerging work by OMG and OODBTG.
The organisation of hypermedia data continues to be an important topic of study. [Delisle & Schwartz 87] proposed the concept of contexts to provide hierarchy and partitioning facilities to support multiple authors working together on a document. [Halasz 88] talked about the need for virtual structures and composites in next generation hypermedia systems. [Garg 88] considered the relevance of various abstraction mechanisms for hypertext.
[Zobel et al. 91] notes that conventional database systems are purely concerned with the organisation, management and retrieval of data. The stored data is generally simple, and has regular/repetitive structures. Data is usually indexed by key values, although increasingly, database systems are including full text capabilities also. Until recently, there has been a lack of sophisticated user interfaces to support high level exploration of stored data.
Hypermedia systems on the other hand have the following characteristics. Information is 'chunked' into small units (nodes, frames, documents, etc...), which are interconnected by links. Users obtain information by navigation - ie. following links. In a sense, one can say that hypermedia systems provide a high level, abstract model of 'documents' and relationships between these 'documents'. The emphasis is on the presentation of information in a non-linear fashion, and on support for user browsing/navigation through the information space. Also, the data that is handled by hypermedia systems is typically complex.
Advances in semantic data modelling (SDM) and in object oriented databases (OODBs) have created great opportunities for the blending of hypermedia and database technologies. The complex nature of hypermedia data is best handled in an object oriented fashion. [Smith & Zdonik 87] reports on a case study to evaluate the use of an object oriented database to meet the storage requirements of Intermedia.
The 'multimedia-lity' of hypermedia data, and thus its database requirements, is also a topic of current study. Here, the special nature of spatial multimedia objects (eg. bitmaps) and linear ones (eg. text, audio) requires a systematic approach. The problems encountered here are similar to those in Office Automation (OA) and CAD/CASE/CAM applications. These include issues such as object sharing, historical and archival versioning, and support for long duration transactions. Constant monitoring of developments in these areas, and in OODB research, will allow us to creatively adapt new techniques to hypermedia.
Figure 1 shows the layered architecture. The Presentation Layer is the user interface of the hypermedia application. The Hypertext Abstract Machine (HAM) layer is the 'nodes and links' knowledgeable layer. It deals with the concepts of nodes and links, and their interrelationships, and node attributes and link types. This layer is the implementation of the hypermedia data model, and much thought needs to be put into this if it is to be sufficiently generic to serve as the back end for a variety of hypermedia applications. The Database Layer deals with the (mundane) task of data storage and retrieval. It also needs to handle other database functions such as multi-user access, data sharing, security, backups, transaction and concurrency control, version management, etc...
[Figure 1. Layered architecture: Presentation Layer / Hypertext Abstract Machine (HAM) / Database Layer]
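The three-layer split can be sketched as a set of interfaces, with the HAM layer translating node and link operations into plain storage calls. This is an illustrative skeleton only, not the architecture of any particular system surveyed here; all names are our own.

```python
# Layered hyperbase sketch: the Database Layer knows nothing about
# hypermedia; the HAM layer implements the data model; the
# Presentation Layer renders whatever the HAM layer hands back.

class DatabaseLayer:
    """Mundane storage and retrieval of opaque records."""
    def __init__(self):
        self._store = {}
    def put(self, key, record):
        self._store[key] = record
    def get(self, key):
        return self._store[key]

class HamLayer:
    """The 'nodes and links' knowledgeable layer."""
    def __init__(self, db):
        self.db = db
        self._next_id = 0
    def create_node(self, content, **attrs):
        self._next_id += 1
        self.db.put(self._next_id, {"content": content, "attrs": attrs})
        return self._next_id
    def create_link(self, src, dst, link_type):
        self._next_id += 1
        self.db.put(self._next_id,
                    {"src": src, "dst": dst, "type": link_type})
        return self._next_id
    def content(self, node_id):
        return self.db.get(node_id)["content"]

class PresentationLayer:
    """User interface of the hypermedia application."""
    def __init__(self, ham):
        self.ham = ham
    def show(self, node_id):
        return self.ham.content(node_id)

db = DatabaseLayer()
ham = HamLayer(db)
ui = PresentationLayer(ham)
n = ham.create_node("Hello, hypertext", author="cyw")
ham.create_link(n, n, "self")
ui.show(n)
```

Note that the Presentation Layer never touches the Database Layer directly - it is exactly this clean separation that makes it possible to swap in an existing database system underneath, as discussed next.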
There are trade-offs in such a layered approach. On the plus side, the clean separation allows the use of existing database systems for the storage layer, thus leveraging off extensive knowledge that went into the design and construction of the database. There are also the benefits of reduced implementation time, general robustness, and the possibility of sharing data between hypermedia applications and other applications which understand the format of the particular database system.
On the negative side, the database is not optimised for the specific types of queries typical for hypermedia systems. The database is also not able to optimise for space and time efficiency since it does not know about the data objects/items it is storing. Finally, the overall system is generally larger and requires more system resources, since it has to carry the 'excess baggage' of features which come with the database but are not used.
There have been quite a number of research prototype hyperbases. The early one that set the pace was Neptune/HAM. About the same time, researchers at Brown University reported a case study which compared the use of a relational database (Ingres) and an object oriented database (Encore) as the back end store for Intermedia.
More recent ones include HyperBase (GMD), HooperText (HP Software Labs), HyperBase (University of Aalborg, Denmark), Hyperion (RMIT, Australia) and HOT/Eggs (AT&T Bell Labs). Some of the interesting aspects of these systems are described below.
HyperBase (GMD) is built on top of a commercial relational database, namely Sybase. The goal of the project was to define an application independent data model for hypermedia, and to develop a query language with respect to that model. Its designers emphasised the separation of object storage management from the interpretation and display (ie. browsing semantics) of those objects.
The goals of the HooperText project were "to explore the use of object oriented technology to build a hypertext system", and to prototype a multi-user hypertext platform which will "provide individuals and groups with effective access to large amounts of complex and dynamic information". Various issues dealing with data modelling and database support were examined by the authors. The database used was PCLOS, which is described as "a persistent storage mechanism for CLOS (Common Lisp Object System)".
HyperBase (University of Aalborg) is a multi-user back end for hypertext systems, built on the client-server model. It was designed especially to support collaboration. Unlike the others, HyperBase was written from scratch. A layered approach was taken. The lowest layer interfaces to the Unix file system, providing storage for the basic entities - ie. node and link objects. The intermediate layer provides operations for building and maintaining hypertext network structures. The highest layer has features to support multi-user use - ie. locking and event handling.
Hyperion (RMIT) is an on-going project to develop a "hypertext database system" suitable for the management of large volumes of text. The intention is to use Titan+, an experimental nested relational database, as the database layer.
Puttress and Guimaraes, the designers of HOT/Eggs (AT&T Bell Labs), point out that "hypermedia systems are usually developed as a single, self contained application, making the system specialised and difficult to retool for other applications". This means that each time a developer needs to write a hypermedia application for a new domain, he/she has to "re-invent and re-implement the HAM layer". To alleviate this problem, they propose a toolkit approach. Their Hypermedia Object Toolkit (HOT) provides a simple hypermedia data model and an object oriented user interface. One component of HOT is the storage system interface (called Eggs), which is a simplified version of the HAM described in [Delisle & Schwartz 86].
Interchange and communication between 'open' hypermedia systems is predicated on the development of appropriate standards for hypermedia. Such standards for the production and encoding of hypermedia content and structure will allow existing and future hypermedia systems to access and reuse material without expensive recoding and processing - the major stumbling block alluded to above.
A first necessary step in discussing standardisation (in any domain) is the development of a reference model for that domain. This reference model can be considered a high level framework within which specific topics for discussion can be defined. A number of reference models have been proposed recently [NIST 90]. The more well known of these are the Dexter Hypertext Reference Model (DHRM) and Trellis Hypertext Reference Model (THRM).
The Dexter model is an attempt to capture formally (using Z) and informally (using English prose) the important abstractions found in a wide range of existing and future hypermedia systems. The aim of the model is to provide a basis for comparing systems, as well as for developing interchange and inter-operability standards.
DHRM takes a layered approach. Figure 1 shows the three layers which make up the model. The Runtime Layer describes the mechanisms for supporting user interactions with the hypermedia system. The Within-Component Layer on the other hand deals with the contents and structures within the hypermedia nodes. The most important layer is the Storage Layer. This is not surprising since it describes the network of nodes and links. The model focuses on this layer, plus the thin veneers where it interfaces with the other two layers. These two interfaces describe the mechanisms of anchoring and presentation specification.
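The Storage Layer abstractions and the anchoring interface can be sketched as follows. This is our paraphrase of the Dexter model's concepts for illustration only, not its Z specification; field names are our own.

```python
# Hedged sketch of Dexter Storage Layer concepts: components addressed
# by unique identifiers, anchors pointing into component content (the
# interface to the Within-Component Layer), and links as sequences of
# endpoint specifiers - so a link may have more than two endpoints.

from dataclasses import dataclass, field

@dataclass
class Anchor:
    anchor_id: int
    value: object        # interpreted only by the Within-Component Layer

@dataclass
class Component:
    uid: int             # unique identifier within the hypertext
    content: object
    anchors: list = field(default_factory=list)

@dataclass
class Specifier:         # one endpoint of a link
    uid: int             # which component
    anchor_id: int       # which anchor inside that component
    direction: str       # eg. "FROM" or "TO"

@dataclass
class Link:
    uid: int
    specifiers: list     # two or more endpoints

# A link from a text span in one component to a span in another.
src = Component(1, "some text", [Anchor(0, (0, 4))])   # span 0..4
dst = Component(2, "a note",    [Anchor(0, (0, 6))])
ln = Link(3, [Specifier(1, 0, "FROM"), Specifier(2, 0, "TO")])
```

The key point the sketch illustrates is that the Storage Layer never looks inside `value` or `content`: within-component structure stays opaque, which is what lets the model cover systems with very different node types.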
The Trellis Hypertext Reference Model (r-model for short) is a 'meta-model' - one that provides an organisation for the architecture of hypermedia models and systems. The r-model is based around a collection of representations of hypermedia at different levels of abstraction. In [Furuta & Stotts 90], use is made of the r-model to classify 4 different hypermedia (or hypermedia-like) systems, namely HyperTies, Trellis, HyperCard, and a fractal browser (!). Whilst the Dexter model is more operational in nature, the focus of the r-model is on taxonomy.
On the standardisation front, HyTime has emerged as a clear leader [Goldfarb 91]. It recently (August 1991) progressed to Draft International Standard (DIS) status. HyTime is being developed as an American and international standard for structured representation of hypermedia information. It is being defined in the notation of the SGML Document Type Definition (DTD), and is interchanged using ASN.1.
The purpose of HyTime is to standardise the mechanisms which have to do with the addressing of portions of hypermedia documents, and their component multimedia information objects (including the linking, alignment and synchronisation aspects). It is important to note that HyTime does not standardise the data content notations (ie. media formats or encodings), the encoding of the information objects or the application processing that is performed on them. Nor does it standardise the overall document architectures, link types or document type definitions. As such, it is able to provide a neutral base for the interchange of a variety of application specific hypermedia information.
Various addressing mechanisms are provided. Name space addressing is the basis of hypermedia linking. Coordinate addressing is used to support scheduling, alignment and synchronisation. Finally, semantic addressing is used to support addressability for arbitrary data. (For this mode of addressing, an interpretation program is required to convert semantic addresses to a name space or coordinate address).
Office Document Architecture (ODA) is an ISO standard for the storage and interchange of complex multimedia documents. This is obviously fertile ground for hypermedia extensions, and indeed work is proceeding in SC18/WG3 and CCITT SGVIII/Q.27 jointly on HyperODA.
Topics considered in this paper include data modelling in hypermedia, the relationship between hypermedia and databases, hyperbases, and hypermedia interchange and standardisation.
[Brailsford & Furuta 90] D. Brailsford and R. Furuta, editorial, Electronic Publishing, 3(3) (Aug 1990).
[Campbell & Goodman 88] B. Campbell and J. Goodman, "HAM: A General Purpose Hypertext Abstract Machine", Comm. ACM, 31(7), 856-861 (July 1988).
[Carando 89] P. Carando, "Shadow: Fusing Hypertext with AI", IEEE Expert, 4(4), 65-76 (Winter 89).
[Delisle & Schwartz 86] N. Delisle and M. Schwartz, "Neptune: a Hypertext System for CAD Applications", Proc. ACM SIGMOD'86 - International Conf. on Management of Data, Washington DC, May 1986, pp. 132-143.
[Delisle & Schwartz 87] N. Delisle and M. Schwartz, "Contexts - a partitioning concept for hypertext", ACM Trans. on Office Information Systems, 5(2), 168-186 (April 1987).
[Edwards & Hardman 89] D. N. Edwards and L. Hardman, "Lost in Hyperspace: Cognitive Mapping and Navigation in a Hypertext Environment" in R. McAleese (ed.), Hypertext: theory into practice, Ablex Publ., 1989, pp. 105-125.
[Garg 88] P. K. Garg, "Abstraction Mechanisms in Hypertext", Comm. ACM, 31(7), 862-870 (July 1988).
[Goldfarb 91] C. F. Goldfarb, "HyTime: A Standard for Structured Hypermedia Interchange" IEEE Computer, pp. 81- 84, August 1991.
[Halasz & Schwartz 90] F. Halasz and M. D. Schwartz, "The Dexter Hypertext Reference Model", Proc. Hypertext Standardization Workshop, NIST, Gaithersburg, MD, Jan 1990.
[Hofmann et al. 90] M. Hofmann, U. Schreiweis and H. Langendorfer, "An Integrated Approach of Knowledge Acquisition by the Hypertext System CONCORDE", Proc. 1st European Conf. on Hypertext (ECHT'90), Versailles. France, Nov 1990.
[Hull & King 87] R. Hull and R. King, "Semantic Data Modelling: Survey, Applications and Research Issues", ACM Computing Surveys, 19(3), 201-260 (Sept 1987).
[Ishikawa 90] H. Ishikawa, "An Object-Oriented Knowledge Base Approach to a Next Generation of Hypermedia System", Proc. COMPCON Spring '90, San Francisco, CA, pp. 520-527, Feb/March 1990.
[Klas & Neuhold 90] W. Klas and E. J. Neuhold, "Designing Intelligent Hypertext Systems Using an Object-Oriented Data Model", Arbeitspapiere der GMD 489, Nov 1990.
[Lange 90] D. Lange, "A Formal Model of Hypertext", Proc. Hypertext Standardization Workshop, NIST, Gaithersburg, MD, Jan 1990.
[McKnight 89] C. McKnight, "Problems in Hyperland? A Human Factors Perspective", Hypermedia, 1(2), 167-178 (1989).
[O'Day et al. 90] V. O'Day, L. Berlin and R. Jeffries, "Group Information Access: A Retrospective Analysis of a Multi-User Hypertext Platform", HP Labs Tech. Rep., HPL-90-53, June 1990.
[Puttress & Guimaraes 90] J. Puttress, N. Guimaraes, "The Toolkit Approach to Hypermedia", Proc. 1st European Conference on Hypertext (ECHT'90), Versailles, France, Nov 1990.
[Rossiter et al. 90] B. N. Rossiter, T. J. Sillitoe, M. A. Heather, "Database Support for Very Large Hypertexts", Electronic Publishing, 3(3), 141-154 (Aug 90).
[Schuett & Streitz 90] H. A. Schuett, N. A. Streitz, "HyperBase: A Hypermedia Engine based on a Relational Database Management System", Proc. 1st European Conference on Hypertext (ECHT'90), Versailles, France, Nov 1990.
[Stotts & Furuta 89] P. David Stotts and R. Furuta, "Petri Net Based Hypertext: Document Structure with Browsing Semantics", ACM Trans. on Office Info. Systems, 7(1), 3-29 (Jan 1989).
[Smith & Zdonik 87] K. E. Smith, S. B. Zdonik, "Intermedia: A Case Study of the Differences between Relational and Object-Oriented Database Systems", Proc. OOPSLA '87, 1987, pp. 452-465.
[Streitz & Neuhold 90] N. A. Streitz and E. J. Neuhold, "Concepts and Methods for the Design of Next Generation Hypermedia Systems", invited paper presented at FRIEND 21 - Workshop "Next Generation's Human Interface Architecture", Oct 1990, Oiso, Japan.
[Schnase et al. 91] J. L. Schnase, J. L. Leggett, D. L. Hicks and R. L. Szabo, "Semantic Data Modelling of Hypermedia Associations", Technical Report TAMU-HRL 91-005, Hypertext Research Lab, Texas A&M University, 1991.
[Tompa 89] F. Tompa, "A Data Model for Flexible Hypertext Database Systems", ACM Trans. on Office Info. Systems, 7(1), 85-100 (Jan 89).
[Tanaka et al. 91] K. Tanaka, N. Nishikawa, S. Hirayama, K. Nanba, "Query Pairs as Hypertext Links", Proc. 7th Intl. Conf. on Data Engineering, Kobe, Japan, April 1991, pp. 456-463.
[Wiil & Osterbye 90] U. K. Wiil, K. Osterbye, "Experiences with HyperBase - a multi-user back-end for hypertext applications with emphasis on collaboration support", Technical Report R 90-38, The University of Aalborg, October 1990.
[Wright 89] P. Wright, "Interface Alternatives for Hypertext", Hypermedia, 1(2), 146-166 (1989).
[Woodhead 90] N. Woodhead, Hypertext and Hypermedia: Theory and Applications, Sigma Press, Wilmslow, England, 1990.
[Zobel et al. 91] J. Zobel, R. Wilkinson, J. Thorn, E. Mackie, R. Sacks-Davis, A. Kent, M. Fuller, "An Architecture for Hyperbase Systems", 1st Australian Multi-Media Communications, Applications and Technology Workshop, Sydney, 1991.
Author: Chek Yoon Wong, Australian Centre for Unisys Software, 115 Wicks Road, North Ryde NSW 2113, Australia. Email: email@example.com
Please cite as: Wong, C. Y. (1992). Research directions in hypermedia. In Promaco Conventions (Ed.), Proceedings of the International Interactive Multimedia Symposium, 299-310. Perth, Western Australia, 27-31 January. Promaco Conventions. http://www.aset.org.au/confs/iims/1992/wong.html