K. Maly, C.M. Overstreet, A. González, M. Denbar,
R. Cutaran, N. Karunaratne, and C.J. Srinivas
Computer Science Department,
Old Dominion University, Norfolk, VA 23529-0162, U.S.A.
Summary Higher education is undergoing structural changes in the composition of its student populations, learning paradigms, and curricula. As distance learning becomes an integral part of post-secondary institutions, student bodies are expanding to include more non-traditional students. In addition, instructional methods in academia are shifting from a teacher-centered paradigm to a student-centered one. In this new paradigm, the student becomes a more active participant in class, and peer collaboration becomes a more important component of the learning process. Advances in computer networking and digital media technology, together with the growth of the Internet, are making virtual classrooms and Web technology an effective framework for supporting active learning. In a virtual classroom the student interacts with other participants in learning activities using a desktop computer; the same computer can also support the active learning paradigm through its modeling capabilities. In this context we examine three particular aspects: the relation between learning paradigms and technology; steering and monitoring of synchronous class sessions; and automatic content generation and Web synthesis.
The pace of learning paradigm shifts has accelerated over the last decade, particularly in higher education, owing to the rapid evolution of communication and computer technologies. Restructuring, distance learning, and virtual classrooms are but a few of the concepts universities cannot ignore lest they become obsolete.
Learning paradigm characteristics We characterize a learning paradigm in terms of the following dimensions:
- scale: the number of participants involved in a learning activity during a particular period;
- symmetry: the degree to which any participant can become the focus of attention;
- synchrony: the time differential at which learning occurs for different class members;
- perception: the quality of the audio/visual input received by participants;
- interactivity: the smallness of the time delay which participants experience when interacting;
- co-location: the distance separating participants from each other;
- tools: the breadth of tools available to any or all participants in the learning experience;
- cost: the cost for one participant to achieve a fixed set of learning objectives;
- time: the amount of control a student participant has in dictating the time needed to achieve a learning objective.
The ideal paradigm we propose, and believe current technology can support, is one which scales well, is symmetric, allows for both asynchronous and synchronous modes, has high perception quality and high interactivity with a delay smaller than 50 ms, allows students to be separated in space, allows all computer learning tools to be used in a shared mode, is cost effective, and can be taken over variable periods of time for a fixed set of learning objectives. The co-location and interactivity scales are not independent: since signals propagate through optical fiber at roughly two-thirds the speed of light, the 50 ms bound on the delay implies that participants cannot be farther apart than about 10,000 km.
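The 50 ms / 10,000 km relation can be checked with a back-of-the-envelope calculation. The propagation speed used below (about 200,000 km/s in optical fiber, i.e. roughly two-thirds the speed of light) is our assumption, not a figure from the paper:

```python
# Back-of-the-envelope check: how far apart can two participants be
# if one-way propagation delay must stay under 50 ms?
# Assumed (not from the paper): signals in optical fiber travel at
# roughly 2/3 the speed of light, about 200,000 km/s.
SPEED_IN_FIBER_KM_PER_S = 200_000
delay_budget_s = 0.050  # the 50 ms interactivity bound

max_distance_km = SPEED_IN_FIBER_KM_PER_S * delay_budget_s
print(max_distance_km)  # 10000.0 -- matching the 10,000 km bound
```

Note that this ignores queuing, routing, and processing delays, so the practical bound is tighter still.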
Tools and environments We distinguish two types of tools: learning tools that help students master a concept, skill, or body of knowledge, and technology tools that enable such learning. In the first category we have seen, through the pervasive availability of computers, a steady replacement of physical learning tools by computer tools. The tools we describe in http://www.cs.odu.edu/~tele/iri/papers/www98/491.html are the technology tools and environments that enable new learning paradigms. A technology tool supports or enables a particular feature of a learning experience; for example, a video conferencing tool supports group interaction. An environment is a set of integrated tools that supports most of the learning experience of a student. Specifically, we describe collaboration tools, video conferencing tools, Web tools, cross-platform tools, and environments. IRI is an environment being developed with the goals of the ideal paradigm described above. The main IRI interface is displayed in Fig. 1.
Web technology in IRI This section focuses on the technical issues involved in building a Web-based interface used to control an IRI session and to replay previously recorded sessions. In the past, a Motif interface was used to control an IRI session and to add the resources (e.g. slides) needed for a session. Since we are in the process of developing a cross-platform implementation of IRI, we decided to switch to a Web-based controller as a first step.
Multiple-user steering We allow several users to steer and monitor a session securely, and integrate their browsers with the IRI interface. IRI is based on reliable multicasting because of the number of students participating in a session. However, we expect only a few people, typically the teacher and perhaps an assistant, to be involved in the steering and monitoring process. Therefore, TCP/IP-based, Web server-to-browser communication can be used to coordinate the browsers belonging to the controller group. The entire interface is a set of CGI scripts/programs, Java programs, and applets that access a protected directory on the server side. These scripts communicate with IRI through Unix sockets that have direct access to IRI files. Each Web page presented to the user requires a proper authentication token. Authentication tokens are obtained through a home page, which validates the user's Unix password and registration with IRI.
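The token flow described above can be sketched as follows. This is a minimal illustration in Python, not the paper's actual CGI implementation; the function names, the HMAC-based token scheme, and the in-memory secret are all our assumptions:

```python
# Hypothetical sketch of the controller-side flow: issue an
# authentication token after the home page validates the user, then
# require that token on every steering request before a command is
# forwarded to the IRI session.
import hmac
import hashlib

SECRET = b"server-side-secret"  # would live in the protected directory

def issue_token(user: str) -> str:
    """Token handed to the browser after password validation."""
    return hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()

def authorized(user: str, token: str) -> bool:
    """Every steering page checks the token before acting."""
    return hmac.compare_digest(issue_token(user), token)

def steer(user: str, token: str, command: str) -> str:
    """Validate the token, then forward the command to IRI.
    In the real system this would write to a Unix socket connected
    to the running IRI session."""
    if not authorized(user, token):
        return "ERROR: not authenticated"
    return f"OK: forwarded {command!r} to IRI"

token = issue_token("teacher")
print(steer("teacher", token, "pause"))   # accepted
print(steer("teacher", "bogus", "pause")) # rejected
```

The point of the sketch is the division of labor: the home page mints the token once, and every subsequent page only needs a cheap stateless check before talking to IRI.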
Content synthesis and presentation IRI, as a software system, records who is speaking, whose video is shown, and what tools are running at any point in time. The recording concept is simple: during a synchronous session, record all the individual streams and insert timing points. This information is later synthesized and presented to the user on demand as a set of Web pages, which can be used to review any portion of the lecture through the Web navigation pages. IRI can run an arbitrary X program, but clearly cannot look inside the program to deduce what events have occurred. However, several specialized tools written especially for IRI track all events and can synthesize a detailed sequence of them in whatever mode a person remembers best; the reviewer can choose whichever stream is most appropriate.
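The record-then-synthesize idea can be illustrated with a small sketch. The class and event names below are ours, not IRI's; the essential point is that each stream logs timestamped events during the live session, and the per-stream timeline is extracted afterwards to build the navigation pages:

```python
# Hypothetical sketch of session recording: timestamped events are
# appended during the live session; afterwards the log is sliced
# per stream to synthesize review/navigation pages.
class SessionRecorder:
    def __init__(self):
        self.events = []  # list of (time_s, stream, description)

    def record(self, time_s, stream, description):
        """Called whenever a tracked tool reports an event."""
        self.events.append((time_s, stream, description))

    def timeline(self, stream):
        """All events for one stream, in time order -- the raw
        material for that stream's set of review pages."""
        return sorted(e for e in self.events if e[1] == stream)

rec = SessionRecorder()
rec.record(0.0, "audio", "teacher starts speaking")
rec.record(12.5, "slides", "slide 2 shown")
rec.record(30.0, "audio", "student question")
print(rec.timeline("audio"))
```

Because every event carries a timing point, a reviewer can jump to any moment in any stream, which is what makes the "choose whichever stream you remember best" mode of review possible.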
Fig. 1. IRI Session with live and recorded streams.