Evaluation Report

MICE - ESPRIT Project 7602

Table of Contents


MICE - ESPRIT Project 7602

Edited by

Hjalmar Martinsen

Gordon C. Joly

1 Introduction

This report covers Phase 1 of the MICE project (MICE-I), which started in December 1992 and finished at the end of February 1994. The aim of the project was to pilot inter-working between researchers using multimedia conferencing (audio, video and shared workspace facilities) over the emerging European 2 Mbps research network infrastructure, and to some sites in the US. The participants in the MICE project during this phase were:

MICE was intended to be a piloting project and feasibility study. During 1993, the project provided, demonstrated and improved multimedia facilities on a trial basis using packet-switched networks and ISDN. The project has been through the following (partly overlapping) phases:

In 1993, the MICE project staged four successful public demonstrations of multimedia conferencing, improving the facilities each time. They were:

Since March 1993, MICE partners have been using the technology to hold weekly project meetings. Regular use of the MICE technology for on-going tasks meant that the partners could form an assessment of its future potential, and identify areas for improvement.

Since October 1993, the project has also been running a series of MICE International Research Seminars. Speakers working at or visiting partner sites gave presentations on recent research work, which were transmitted to MICE partners and up to 20 other sites via the MBONE. Remote participants could attend these presentations from their conference rooms or workstations, and ask questions and engage in a discussion. Most speakers and participants were not members of the project, so we have had a chance to introduce user groups to the technology.

This report is the deliverable of Workpackage 7. The purpose of this WP is to give an overview of the facilities, to summarise our experiences from using them and, most importantly, to provide a critical review of the current state of the technology and an assessment of its future use.

Our general impression at the end of Phase 1 is overwhelmingly positive. Many aims have been achieved in terms of making facilities available and gaining insight into their use and potential for improvement. The software is widely installed on SPARCstations, and we are working with different hardware and software codecs. Other platforms such as DEC, HP and SGI are also in use. The software tools that have been developed by the MICE partners are:

Some tools have been used regularly by the MICE partners in the weekly project meetings. During the public demos mentioned above, we invited various US sites, and they took part actively. The audio tool vat (Visual Audio Tool) and shared workspace tool wb (White Board) from Van Jacobson and his group at LBL (Lawrence Berkeley Laboratory) have been an integral part of the execution of the MICE concept.

Several of the partners, in particular GMD, NTR, UiO and UCL, have developed Conference Rooms (CR) as local activities. Based on their experience with Conference Rooms a "reference Conference Room" has been defined. The Conference Rooms have been used in the public demonstrations and for seminars.

Another key concept in the MICE project is the Conference Multiplexing and Management Centre (CMMC). UCL has developed a CMMC which is accessible from various networks such as EUROPANET, EBONE (as part of the Internet), ISDN and SuperJanet. The CMMC is still under development, and all the tools are described in more detail in the following chapters.

2 Networks and infrastructure for multimedia conferencing

2.1 Standardisation issues

A number of international standards exist for video telephony, videoconferencing and multi-point operation. Each of these standards has been developed with a different model of the intended usage. All are of interest to MICE, yet we have often had to make hard decisions on the adoption of such standards, for example in the multi-point area. Often we have concluded that, whilst suitable for specialised hardware in a circuit-switched environment, they are not appropriate for MICE. In other areas, we try to accommodate the standards.

2.1.1 ETSI and ITU-TSS standards
For our purposes, we examine standards such as H.261, H.221 and others, some of which we do not use directly. Standards and recommendations for videotelephony and videoconferencing exist for the following:

The most important organisations are ITU-TSS (formerly CCITT) and ETSI. The ITU-TSS issues recommendations, while ETSI is responsible for standards. The important distinctions between recommendations and standards are:

2.1.2 Service descriptions
Service descriptions for telecommunication services are based on principles described in CCITT recommendation I.130. The service descriptions consist of the following stages:

Many of ITU-TSS's stage-1 descriptions are structured as one general recommendation followed by several specific recommendations adapted to different networks such as ISDN and B-ISDN. For the videotelephony service, ITU-TSS has made a general stage-1 description (Recommendation F.720). A recommendation for the ISDN videotelephony service is also available (Recommendation F.721). A recommendation for B-ISDN will be available shortly.

ETSI has made a full service description for the ISDN videotelephony service (ETS 300 264, ETS 300 266 and ETS 300 267). ITU-TSS has also made a stage-1 description for the videoconferencing service. The general part is available in Recommendation F.730, while the ISDN videoconferencing service is described in Recommendation F.731. ETSI will start work in 1993 on a stage-1 description of the ISDN videoconferencing service.

2.1.3 Inband signalling
To be able to synchronise audio and video for audiovisual services, and also to control communication between terminals, or between terminals and a multipoint control unit (MCU), it is necessary to have a system for inband signalling. ITU-TSS has defined a general system in Recommendation H.221. The purpose of this system is to support various audiovisual applications, for example conferencing systems. In addition to Recommendation H.221, several control signals for communication between MCU and terminals are defined in Recommendation H.230.

The definitions in these two ITU-TSS recommendations are collected in an ETSI standard, ETS 300 144; all definitions are identical. ITU-TSS Recommendation H.242 describes the procedures for inband signalling. The equivalent ETSI standard is ETS 300 143.

With the introduction of videotelephone terminals in Europe it became obvious that the manufacturers had different interpretations of the CCITT recommendations. As part of the EV (European Videophone) project, an activity was started to identify the problems. Results from this work were used when the ETSI standards ETS 300 143 and ETS 300 144 were defined. These standards are therefore more precise than the ITU-TSS recommendations.

2.1.4 Videoconferencing systems
A 2 Mbps videoconferencing system is described in ITU-TSS Recommendations H.120, H.130 and H.140. Video coding is described in Recommendation H.120, communication and signalling between the systems are described in H.130, and equipment and procedures for multipoint conferences are described in Recommendation H.140. Audio coding should be according to ITU-TSS Recommendation G.711 or G.722. H.130 and H.140 have free capacity for data transmission and chairman control of multipoint conferences. They do not, however, describe how these channels should be used.

2.1.5 Videophones
A systems description of videophone terminals is given in ITU-TSS Recommendation H.320. This recommendation is based on the inband signalling principles in the recommendations and standards mentioned above. The video coding algorithm is described in ITU-TSS Recommendation H.261, and supports transmission speeds of up to 1920 kbps.

Three audio coding algorithms are now available:

There are no ETSI standards available for the audio coding algorithms. ETS 300 142 is equivalent to ITU-TSS Recommendation H.261. ETS 300 145 contains a systems description for 1B and 2B videophones and therefore has a more limited application domain than ITU-TSS Recommendation H.320, which covers systems at up to 1920 kbps.

2.1.6 Multipoint control unit
As mentioned earlier, ITU-TSS Recommendation H.140 specifies a multipoint control unit for a 2 Mbps system. ITU-TSS has also made recommendations for multipoint control units where the signalling is based on the principles described in Recommendations H.221 and H.230. The physical unit is described in ITU-TSS Recommendation H.231, while Recommendation H.243 specifies some additions to Recommendation H.242 for inband signalling and procedures in connection with conference control. These recommendations are relatively general and contain no description of network-related signalling. ETSI will start defining a standard for a multipoint control unit for ISDN. This standard will also contain requirements for D-channel signalling. A first version of the standard will be without the facilities for data communication and chairman control.

2.1.7 Data communication
Both 2 Mbps videoconferencing systems and systems based on inband signalling (ITU-TSS Recommendation H.221/ETS 300 144) have the possibility of data transmission. While the 2 Mbps system offers relatively few choices of transmission rate, H.221/ETS 300 144 offers transmission rates from 300 bps upwards.

There is also work going on within ETSI to define a standard describing an interface for a low-speed data channel based on V.24 connections (DE/TE04114). This solution can be considered as a modem function, and all known communication protocols from data communication within the analogue telephone network can be used.

This draft has also been presented as a proposal for an ITU-TSS recommendation. In addition, ITU-TSS is working on recommendations for a special-purpose communication protocol for data communication in multipoint conferencing. This protocol will also offer functions for controlling multipoint conferences.

2.1.8 Broadband networks
The following tasks will take place in ETSI Sub Technical Committee NA5:

2.1.9 Multimedia
The ETSI Strategic Review Committee 4, Recommendation 24 deals with multimedia. TC TE (Technical Committee Terminal Equipment) has been given responsibility for this work. Two new committees have been created:

2.1.10 Internet Standards
The majority of Internet protocol development and standardization activity takes place in the working groups of the IETF (Internet Engineering Task Force). Protocols which are to become standards in the Internet go through a series of states (proposed standard, draft standard, and standard) involving increasing amounts of scrutiny and experimental testing. At each step, the Internet Engineering Steering Group (IESG) of the IETF must make a recommendation for advancement of the protocol and the IAB must ratify it. If a recommendation is not ratified, the protocol is remanded to the IETF for further work.

To allow time for the Internet community to consider and react to standardization proposals, the IAB imposes a minimum delay of 4 months before a proposed standard can be advanced to a draft standard and 6 months before a draft standard can be promoted to standard. It is general IAB practice that no proposed standard can be promoted to draft standard without at least two independent implementations (and the recommendation of the IESG). Promotion from draft standard to standard generally requires operational experience and demonstrated interoperability of two or more implementations (and the recommendation of the IESG).

Advancement of a protocol to proposed standard is an important step since it marks a protocol as a candidate for eventual standardization (it puts the protocol "on the standards track"). Advancement to draft standard is a major step which warns the community that, unless major objections are raised or flaws are discovered, the protocol is likely to be advanced to standard in six months.

2.1.11 H.261 Description
The CCITT Recommendation H.261 describes the video coding and decoding methods for the moving picture component of audiovisual services at rates of p×64 kbps, where p is in the range 1 to 30: see for example [1], [2], [3]. The compression techniques used are state of the art with regard to video compression encoding methods.
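As a quick illustration, the permissible channel rates form a simple arithmetic family. The sketch below (plain Python, not taken from the recommendation) enumerates them:

```python
# Illustrative sketch: the H.261 "p x 64 kbps" rate family, p = 1..30.
def h261_rates_kbps():
    """Return the permissible H.261 channel rates in kbps."""
    return [p * 64 for p in range(1, 31)]

rates = h261_rates_kbps()
# The lowest rate is a single 64 kbps channel; the highest is 1920 kbps,
# which matches the upper limit quoted for H.320 systems in this report.
```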

The H.261 standard includes descriptions of a coding mechanism and a scheme to organize video data in a hierarchical fashion. The compression techniques used by the coding mechanism include transform coding, quantisation, Huffman encoding and, optionally, motion vector compensation.

In order to allow a single recommendation to cover use between regions employing different television standards, the CCITT has adopted the Common Intermediate Format (CIF) and the Quarter-CIF (QCIF). Pictures are encoded as a luminance component (Y) and two colour-difference components (CB and CR). The Y, CB and CR components are each functions of the primary colour components (red, green and blue) and are defined in CCIR Recommendation 601.

CIF has 352 pixels per line and 288 lines per picture. Since each block of four pixels is encoded with four Y, one CB and one CR component, sampling of the two colour-difference components is 176 pixels per line and 144 lines per image, all in an orthogonal arrangement. QCIF (Quarter-CIF) has half as many pixels and half as many lines as CIF. All codecs must be able to handle QCIF, whereas use of CIF is optional. As a rule, QCIF is used for desktop videophone applications, while CIF is more suitable for videoconferencing applications due to its higher resolution.
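The sample geometry just described can be sketched as follows; this is a minimal illustration of the 2:1 chrominance subsampling, with names of our own choosing:

```python
# Hedged sketch of CIF/QCIF sample geometry: luminance (Y) is sampled at
# full resolution, the two colour-difference components (CB, CR) at half
# resolution both horizontally and vertically.

FORMATS = {
    "CIF":  (352, 288),
    "QCIF": (176, 144),
}

def sample_counts(fmt):
    """Return (luminance samples, samples per chroma component)."""
    w, h = FORMATS[fmt]
    y = w * h                    # full-resolution luminance grid
    c = (w // 2) * (h // 2)      # half-resolution grid per chroma component
    return y, c

# CIF: a 352x288 luma grid and a 176x144 grid for each of CB and CR.
y, c = sample_counts("CIF")
```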

Coding is done either on the input pictures, or on the difference between successive images, i.e. the prediction error. The first case is referred to as intraframe coding (INTRA mode), the second as interframe coding (INTER mode). Intraframe coding means that the image is encoded without reference to previous pictures. This kind of encoding removes only the spatial redundancy in a picture, whereas interframe coding also removes the temporal redundancy between pictures. With interframe coding, it is the difference between the current and the predicted image which is transformed by the discrete cosine transform and then linearly quantised.

Intraframe coding is used for the first image and for later pictures after a change of scene, whereas interframe coding is used for sequences of similar pictures with moving objects. H.261 does not determine a specific refresh rate, but for control and accumulation of inverse transform mismatch error it requires that a macroblock be forcibly updated at least once per 132 times it is transmitted.
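The INTRA/INTER distinction above can be reduced to a toy sketch: intraframe coding transforms the picture samples themselves, interframe coding transforms the prediction error. The DCT and quantisation stages are deliberately omitted; this is an illustration, not the codec:

```python
# Toy sketch of INTRA vs INTER mode: what is handed to the transform stage.

def intra_residual(current):
    """INTRA mode: encode the picture samples directly."""
    return list(current)

def inter_residual(current, previous):
    """INTER mode: encode the prediction error (difference from the
    previous, predicted picture)."""
    return [c - p for c, p in zip(current, previous)]

prev = [10, 12, 11, 13]
curr = [10, 13, 11, 14]
# For similar pictures the residual is mostly near zero, which is why
# interframe coding removes temporal redundancy so effectively.
```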

CIF and QCIF pictures are arranged in a hierarchical structure consisting of four layers, namely the Picture layer (P), the Group Of Blocks layer (GOB), the MacroBlock layer (MB) and the Block layer (B). A CIF picture is divided into 12 GOBs. A QCIF picture is divided into 3 GOBs. A GOB is composed of 3*11 MBs and each MB contains six blocks: four 8x8 luminance blocks (Y) and two colour-difference 8x8 chrominance blocks (CB and CR). Chrominance difference samples are sited such that their block boundaries coincide with luminance block boundaries.
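The layer counts above can be cross-checked with a short sketch (our own naming, following the figures in the text):

```python
# Sketch of the H.261 layer hierarchy: 12 GOBs per CIF picture (3 per
# QCIF), 3*11 = 33 macroblocks per GOB, and 6 blocks per macroblock
# (four 8x8 Y blocks, one CB, one CR).

GOBS = {"CIF": 12, "QCIF": 3}
MBS_PER_GOB = 3 * 11
BLOCKS_PER_MB = 6

def totals(fmt):
    """Return (GOBs, macroblocks, 8x8 blocks) for one picture."""
    gobs = GOBS[fmt]
    mbs = gobs * MBS_PER_GOB
    return gobs, mbs, mbs * BLOCKS_PER_MB

# Cross-check: 396 macroblocks of 16x16 luminance pixels cover exactly
# the 352x288 CIF picture.
assert totals("CIF")[1] * 16 * 16 == 352 * 288
```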

Data for each picture consists of a picture header followed by data for GOBs. Picture headers for dropped pictures are not transmitted. The picture header contains a 20-bit Picture Start Code and some additional information, including the Temporal Reference and the video format used (CIF or QCIF).

Data for each GOB consists of a GOB header followed by MB data. The GOB header is transmitted even if no macroblock data is present in that GOB. It includes a 16-bit GOB start code, the group number, and the quantizer to be used until overridden by any subsequent MB quantizer information. Note that the quantizer can be changed at the MB or at the GOB level.

Data for an MB consists of an MB header followed by data for blocks. The MB header includes a variable-length codeword (VLC) which indicates the position of the MB inside the GOB. It is followed by a VLC for the MB type (type M). This type indicates whether the MB is interframe or intraframe, with or without motion vector estimation and/or loop filter, and with or without a new quantizer. Motion estimation may be used to encode the motion of an MB between two consecutive images. If type M so indicates, a new quantizer, motion vector data and a coded block pattern may follow. The latter gives a pattern number signifying those blocks in the MB for which at least one transform coefficient is transmitted. The quantizer is identical for each block in the same macroblock. Note that an MB in a GOB, or a GOB in an image, need not be transmitted if it contains no information for that part of the picture.

Data for a block consists of codewords for transform coefficients followed by a fixed length code End of Block (EOB) marker. All of the quantized coefficients are ordered into a zigzag sequence. This order of transmission helps to facilitate entropy coding by placing low frequency coefficients (which are more likely to be non-zero) before high frequency coefficients.
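The zigzag ordering referred to above can be generated mechanically; the sketch below produces the scan order for an 8x8 block by walking the anti-diagonals:

```python
# Minimal sketch of the zigzag scan order for an n x n coefficient block.
# The scan starts at the DC coefficient (0, 0) and visits low-frequency
# positions before high-frequency ones.

def zigzag_order(n=8):
    """Return (row, col) pairs in zigzag scan order."""
    order = []
    for s in range(2 * n - 1):  # s = row + col indexes one anti-diagonal
        diag = [(r, s - r) for r in range(n) if 0 <= s - r < n]
        if s % 2 == 0:
            diag.reverse()      # even diagonals run bottom-left to top-right
        order.extend(diag)
    return order

scan = zigzag_order()
```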

The most commonly occurring combinations of successive zeros and the following value are encoded with a VLC. The other sequences are each encoded with a fixed 20-bit codeword. The coefficient with zero frequency in both dimensions is called the DC coefficient and the remaining 63 coefficients are called the AC coefficients. Generally, most of the spatial frequencies have zero or near-zero amplitude and need not be encoded. On the other hand, the DC coefficient frequently contains a significant fraction of the total image energy, so it is treated with more precision than the 63 AC coefficients, using a linear quantizer.
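Before the VLC stage, the zigzag-scanned AC coefficients are reduced to (zero-run, value) pairs. The sketch below shows that reduction only; it is not the actual H.261 VLC table:

```python
# Illustrative sketch: turning zigzag-ordered AC coefficients into
# (zero-run, level) pairs, the form that the variable-length codes then
# compress. Trailing zeros are not emitted; they are implied by the
# End of Block (EOB) marker.

def run_level_pairs(coeffs):
    """coeffs: quantised AC coefficients in zigzag order (DC excluded)."""
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1            # count a run of zero coefficients
        else:
            pairs.append((run, c))
            run = 0
    return pairs

# Two zeros, a 5, one zero, a -2, then trailing zeros (implied by EOB):
assert run_level_pairs([0, 0, 5, 0, -2, 0, 0, 0]) == [(2, 5), (1, -2)]
```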

2.1.12 H.320 Description
The ITU recommendations in the H.320 suite comprise:

This suite of protocols has been in use for some years in the circuit switched video conferencing domain, for which it was designed and is well suited. When working over ISDN this protocol family is a good choice, and is, in fact, the only non-proprietary standard in wide use. However, when we consider conferencing over packet-switched networks, we find that some of these protocols are more suited than others. H.261 video compression in itself is not a synchronous protocol, and if packetised based on the groups of blocks (see H.261 description above), can also be made fairly tolerant to packet loss.

The H.221 serial-line framing protocol consists of 80-byte frames, with the synchronisation, H.230 and H.242 protocols occupying bit 8 of the first 16 bytes in each frame. In the rest of a frame, there are policies determining which bits contain video data, audio data and user data. The video data itself is contained inside 64-byte frames which include 18 bits of cyclic redundancy check to protect the video data against bit errors.
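The framing just described can be sketched as follows. This is a simplified illustration: treating "bit 8" as the least significant bit of each octet is our assumption for the sketch, and the audio/video/user-data bit allocation (negotiated via H.242 in reality) is omitted:

```python
# Hedged sketch of H.221 framing: an 80-octet frame whose service channel
# (frame sync, H.230 and H.242 signalling) occupies bit 8 of the first
# 16 octets. Bit 8 is modelled here as the least significant bit.

FRAME_OCTETS = 80
SERVICE_OCTETS = 16

def service_channel_bits(frame):
    """Extract the 16 service-channel bits from one 80-octet frame."""
    assert len(frame) == FRAME_OCTETS
    return [octet & 1 for octet in frame[:SERVICE_OCTETS]]

frame = bytes(FRAME_OCTETS)        # an all-zero dummy frame
bits = service_channel_bits(frame)
# The remaining bit positions carry video, audio and user data according
# to the capability set negotiated between the endpoints.
```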

If we attempt to transmit H.221-framed video data over a packet network, we find that these three uncorrelated framing schemes (H.261 GOBs, H.261 CRCs and H.221 frames) give no way to packetise the data and sustain all the framing schemes in the face of packet loss. Thus the H.221 standard (and thus the encapsulated H.230 and H.242 standards) is not well suited for transmission over packet networks.

In fact, we find that, if we have to use codecs which produce H.221-framed H.261 video to communicate over a packet network, it is advisable to use a workstation to remove the H.221 protocol and the H.261 CRC at the sending site, and re-insert them at the receiving site. This can be done at rates of up to around 512 kbps on today's workstations. If this is done, the CRC padding frames are also removed, and much of the time the transmitted data rate over the packet network can be considerably less than that of the same stream with H.221 framing.

2.2 Technology trends

Broadband networks seem to be the most interesting topic on the agenda when it comes to future trends.

There has been a tremendous effort to specify a public broadband network for Europe during the last five years, as well as initiatives to develop components and subsystems to make such networks cost-effective. Several long-term EC projects have been initiated; RACE has arguably been the most influential in the context of telecommunications.

Computer vendors and users seem to have run into a serious problem. Current leading LAN technology does not meet their requirements when it comes to speed and the capability to keep up with new generations of workstations (desktop computers), which are doubling their performance every 12 to 18 months. According to a leading computer manufacturer, the following three emerging technologies seem to have the power to overcome the LAN bottleneck:

There are also other emerging technologies, such as Fibre Channel and HIPPI. They are not as flexible and do not have the potential market of the three mentioned above. There is also basic rate ISDN, which has been used successfully in MICE. There is, however, a need for further standardization so that equipment based on national versions of ISDN can interoperate.

According to workstation users, there are certain requirements to be met. First of all, they need a high-speed, low-cost LAN, i.e. above 100 Mbps. Next is the ability to increase LAN speed without having to change the infrastructure (cabling etc.). They are also very interested in support for multimedia applications, such as video multicasting for conferencing. The possibility of expanding the LAN for wide-area interconnection is also of high priority.

2.2.1 Fiber Distributed Data Interface (FDDI)
This is currently the most mature high-speed networking technology, considering the vendors and products presently on the market. It has a very good level of interoperability and a large number of nodes installed. FDDI is primarily a backbone interconnect for lower-speed networks such as 10 Mbps Ethernet or 4/16 Mbps Token Ring. The present version of FDDI and the next generation (FDDI-II) are far more expensive to build than either of the Ethernet versions. They offer a limited upper speed range and are not competitive against ATM. FDDI is estimated to reach a peak in market share in 1995-1996.

2.2.2 Ethernet
The 10 Mbps Ethernet is extensively used around the world, with approximately 20 million nodes currently installed. This number is expected to double within two to three years as PC users connect. The new 100 Mbps Ethernet is currently under review for standardization by the IEEE. If a standard is presented relatively quickly, the 100 Mbps Ethernet could be a success; otherwise most users are expected to go for ATM.

Future workstations and PCs will have the capability to operate at both 10 and 100 Mbps and automatically detect which speed is appropriate for a given situation.

The main emphasis in the standardization task is maintaining compatibility with the existing 10 Mbps Ethernet, preserving users' investment in cabling and other infrastructure. This makes the transition from 10 to 100 Mbps more convenient.

2.2.3 Frame Relay
This is a new standardized, connection-oriented data service in which the data are transported in variable-length, HDLC-based frames between end users. It is intended to be a cost-effective alternative to leased lines for LAN interconnection, and is possibly the most attractive alternative in this respect.

A Frame Relay network is made up of network nodes and DTEs (Data Terminal Equipment). A DTE can be a PC, router, or host with a Frame Relay interface, and connects the user to the network. Both CCITT and ANSI describe the standard in their respective recommendations. The standard defines link-level data and signal transmission (OSI level 2) at the user equipment - network interface.

When a DTE transmits frames to the network, the nodes read the identifier, look up their tables to find the right outgoing channel, and send the frame on to the next node. The use of several identification codes allows several parallel sessions in different directions to coexist on the same physical link. DTEs can therefore communicate with different destinations simultaneously over the same access to the network. This is a prerequisite for LAN users.
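The per-node forwarding just described can be sketched as a simple table lookup. The table contents and names below are illustrative assumptions, not values from the standard:

```python
# Illustrative sketch of Frame Relay forwarding: a node reads the
# identifier (DLCI) from an incoming frame and looks up the outgoing
# channel. DLCIs have link-local significance, so the table is keyed on
# (incoming port, DLCI) and may rewrite the DLCI on the way out.

forwarding_table = {
    # (incoming port, DLCI) -> (outgoing port, DLCI)   [hypothetical]
    (1, 100): (3, 210),
    (1, 101): (2, 340),
}

def forward(port, dlci):
    """Return the outgoing (port, DLCI) for a frame, or None to discard."""
    return forwarding_table.get((port, dlci))

assert forward(1, 100) == (3, 210)
assert forward(1, 999) is None   # unknown identifier: frame is discarded
```

Because two sessions on the same incoming link carry distinct DLCIs, they can be forwarded to different destinations, which is what allows a single network access to serve a whole LAN.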

Frames indicated as erroneous by the Frame Check Sequence are discarded. Higher-level protocols must ensure error-free DTE-to-DTE transmission. If errors occur and are corrected, the data must be retransmitted through the entire network, which takes longer than retransmitting only over the link where the error occurred. The quality of the transmission medium therefore plays an important role for Frame Relay. Frame Relay is considered effective when the bit error rate on individual links is below 10^-6. Such an error rate can be sustained at speeds of several Mbps over most transmission links. A maximum link transmission speed of 2 Mbps is presently defined. Higher speeds, 2-34 Mbps, are under consideration, but other broadband services may be more suitable for this part of the market.

2.2.4 Metropolitan Area Network
A Metropolitan Area Network (MAN) is a network consisting of subnetworks using the IEEE standard 802.6 "Distributed Queue Dual Bus" (DQDB) as the access protocol on dual buses, as well as some principles defined by ETSI. The DQDB protocol supports both connectionless and connection-oriented (including isochronous) services. However, only the connectionless protocols are specified at the moment.

The interest in connectionless service originates from the need to provide cost-effective services with high throughput and low delay at bitrates higher than those provided by Frame Relay. ETSI has specified the Connectionless Broadband Data Service (CBDS). This service offers bitrates of 2 Mbps, 34 Mbps, 139 Mbps and 155 Mbps, which are compatible with the existing transmission hierarchy in public networks.

The connectionless service features make CBDS very suitable for LAN interconnection, particularly in the medium to long term. As FDDI, with its nominal rate of 100 Mbps, becomes widespread, the bandwidth requirement for LAN interconnection can rise to 30-60 Mbps, a rate easily handled by CBDS. CBDS can be used in a MAN or offered over ATM: in a MAN it gives regional coverage, while over ATM it promises virtually unlimited coverage, depending on the penetration of ATM.

2.2.5 Asynchronous Transfer Mode (ATM)
ATM is an emerging technology which is creating great interest in the field of high-speed networks. The basis of ATM is switching small fixed-size packets (called cells) at a considerably faster rate than in more traditional networks, such as Ethernet and FDDI, where large packets of variable length are switched. ATM data rates scale from 51 Mbps to more than 2 Gbps. This technology is well suited to videoconferencing applications: it offers a deterministic Quality of Service, which means that videoconferencing traffic can be delivered smoothly to the receiver.

Figure 1: ATM cell

There is a high level of national activity within the area of high-speed networks, and ATM is certainly the hottest item on the list. It is predicted that its only barrier will be cost; but, as the industry seems to find the technology most promising, the cost problem could be considerably reduced as the volume of ATM products rapidly increases.

An early recommendation is CCITT F.811, "Broadband Connection Oriented Bearer Service". At present, several RACE projects and the ATM Forum act as de facto standardisation bodies. In reference [5], Haugen and Mannsåker (at NTR) suggest that there are two possible scenarios for the introduction of public broadband networks:

ATM infrastructures are being planned all over Europe, although mainly in the West. National research networks in France, Germany, Norway, Sweden and UK will be part of the PTO-ATM pilot. The situation for the MICE partners is:

MICE has been invited to participate as a result of being selected as a very successful ESPRIT project.

2.3 Network and Infrastructure aspects

2.3.1 The Current Networks and the Partners' access
The current networks used in MICE are EBONE, EUROPANET, the Internet, ISDN, and the national distribution networks. Each of them, and their interconnection, is discussed below. It is important to realise that it is not adequate to consider only the bandwidth capability of the networks; MICE is concerned with real-time traffic, so the Quality of Service (QOS) of individual packets of data is of vital importance. In the discussion below it is difficult to distinguish between the technical aspects of the networks and the political ones; often the technical problems arise only because of political choices. The QOS parameters required are often not measured by the operators, so it is not known a priori whether they can be provided. In addition, the nature of the traffic is such that multicast distribution improves the performance; many of the networks used cannot, or sometimes will not, provide multicast inside the network. When the relevant facility is provided outside the network, it actually increases the load on the network - and yet is more difficult to police.

During the MICE project, two partners never had access to the international networks: Nottingham University and ONERA. Nottingham University's problem was that JANET did not permit their access; ONERA had internal difficulties in attaining network access. ULB (Belgium) only achieved access at the end of the project, in November 1993; this had to await the commissioning of the Belgian EUROPANET access. The international network topology and line speeds are changing rapidly. The comments in this section (as in the whole report) represent the situation at the end of 1993.

2.3.2 EUROPANET

EUROPANET now operates reliably, with 2 Mbps access from three of the partners' countries: Belgium, Germany and the UK; the Dutch access has also been used successfully for the IETF demonstration. Some access has been direct IP, some IP/X.25; we have not done detailed measurements of the difference in performance. There is a direct link between EUROPANET and EBONE at Amsterdam; however, the total capacity of this link is only 384 kbps, and its QOS is often poor for MICE links to Sweden and Norway. The situation would be improved if more of the MICE partners had direct 2 Mbps links. The network does not have a direct multicast capability, so the MICE traffic does stress the network when multiple destinations are invoked.

It is important to have powerful enough routers attached to the EUROPANET access point. In the IETF demonstration, the German routers were not adequate, and even 192 kbps MICE traffic wreaked havoc on the DFN international connections (this has been remedied).

2.3.3 EBONE, the US Internet and US European links
The European backbone (EBONE) and the world-wide Multicast Backbone (MBone) of the Internet are accessible directly from many of the MICE sites. The multicast mode of operation requires considerable traffic engineering, but works well for large numbers of participants; all the MICE seminars and most of the other MICE interactions use the multicast facilities.

France (INRIA), NORDUnet (SICS and Oslo University), Germany (GMD Birlinghoven and RUS) and the UK (UCL) all have direct access to EBONE; however, GMD Darmstadt had bandwidth difficulties in accessing international facilities in that way. The traffic properties are well matched to MICE needs; the European bandwidth available is not. Currently the Sweden-Geneva link is at 2 Mbps, and the Geneva-France link is now also at 2 Mbps. The direct UK-France link no longer exists. UK-Sweden links are at 256 kbps; these links are heavily loaded even without MICE traffic, and are quite inadequate for MICE usage. However, traffic between Scandinavia and the UK and Germany can use EBONE to Amsterdam, and then the 512 kbps link to EUROPANET, to carry MICE traffic; that link is usually adequate.

France, Germany, Sweden and the UK all have direct links to the US Internet on links which could be classed as EBONE via the US Internet. The German-US link is only at 256 kbps, and has not been used for MICE purposes. The other three are currently at 1.5 Mbps - though the UK-US one is hard-multiplexed (see below). All have been used for MICE traffic, and normally give adequate performance. This mode of use is permissible for traffic with the US, but is discouraged for inter-European collaboration under MICE auspices. The May JENC demonstration used this mode of access - but it is clearly undesirable politically, and expensive, to traverse the Atlantic twice. The route is quite feasible technically - and in fact is the only one with the present topology which allows INRIA to participate in MICE.

The EBONE-EUROPANET link does allow EBONE to be used for those sites that have reasonable connectivity by that route, and EUROPANET for those with better access to that network. At present SICS, Oslo U and RUS gain from connectivity via the EBONE connection and the EUROPANET EBONE link; when the CERN-Paris link is upgraded, we expect INRIA traffic to use the same route.

2.3.4 MBone - the Multicast Backbone
At the time of writing, it is generally accepted that international meetings such as those of the Internet Engineering Task Force (IETF) will be distributed via the MBone - the Multicast Backbone. Other events that have been multicast include the relay of position data from a robot at the bottom of the Sea of Cortez to schools around the world, a late Saturday night feature movie and a jazz concert. However, the MBone has also been a cause of severe routing problems in the NSFnet backbone, and of the saturation of major international links, rendering them useless. Some sites have been completely disconnected due to ICMP responses flooding the networks.

The MBone is not a production network; it is often emphasised that it is primarily an experiment. However, it is becoming very popular and also rather stable, so many people view it as a regular service. This is due to the tremendous effort put in by volunteers among network providers, researchers and end-users.

Here we will first cover some technical background on the MBone itself, in particular as it operates currently. Then we will list a number of symptoms that have surfaced during operation of the MBone and examine their causes and cures. Last, we will take a broad look at the problems of multicasting in general and the current technology specifically.

The MBone is a virtual network running on top of the Internet. It is composed of islands that can directly support multicast, such as Ethernet and FDDI. These networks are linked by "tunnels". The tunnel end-points are hosts which continuously execute the "mrouted" multicast routing daemon.

Figure 2. MBone topology - islands, tunnels, mrouted

In the figure above there is a simple example of three islands forming part of the MBone. Each island consists of a local network connecting a number of clients ("C") and one host running mrouted ("M"). The mrouted hosts are linked with point-to-point tunnels. There are usually primary tunnels, with other routes acting as backups. All traffic on the MBone uses UDP rather than TCP. One reason is that TCP is a point-to-point, connection-oriented protocol that does not easily lend itself to multicasting. Another is that its reliability and flow control mechanisms are not suitable for live audio. Occasional loss of an audio packet (as with UDP) is usually acceptable, whereas the delay for retransmission (with TCP) is not acceptable in an interactive conference.
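
The receiver side of this UDP-and-multicast model can be sketched in a few lines (a modern Python illustration, not MICE code; the group address and port below are arbitrary example values):

```python
import socket

def make_mreq(group: str, iface: str = "0.0.0.0") -> bytes:
    """Pack an ip_mreq structure: 4-byte group address + 4-byte
    local interface address, as expected by IP_ADD_MEMBERSHIP."""
    return socket.inet_aton(group) + socket.inet_aton(iface)

def receive(group: str = "224.2.0.1", port: int = 5004) -> None:
    # Example group/port, chosen arbitrarily for illustration.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # Joining the group tells the local mrouted (via group membership
    # reports) to forward that group's traffic onto this subnet.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_mreq(group))
    while True:
        data, sender = sock.recvfrom(2048)
        # A lost datagram is simply never seen: there is no
        # retransmission, which keeps latency low for live audio.
        print(len(data), "bytes from", sender)

if __name__ == "__main__":
    receive()
```

Note that the receiver never opens a connection to any sender; it merely declares interest in a group address, which is what makes the scheme scale to large conferences.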

Each tunnel has a metric and a threshold. The metric specifies a cost that is used by the Distance Vector Multicast Routing Protocol (DVMRP), described in RFC 1075 (which builds on the RIP of RFC 1058). To implement the primary (thick) and backup (thin) tunnels in figure 2, the metrics could have been specified as 1 for the thick tunnels and 3 for the thin tunnel. The threshold is the minimum time-to-live that a multicast datagram needs in order to be forwarded onto a given tunnel. With this mechanism we can limit the scope of a multicast transmission. Until recently, tunnels were implemented using the IP Loose Source and Record Route (LSRR) option: mrouted modifies the multicast datagram by appending an LSRR option in which the multicast address is placed, while the IP destination address contains the unicast address of the mrouted on the other side of the tunnel.
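
The threshold mechanism amounts to a simple per-tunnel comparison, which the following sketch illustrates (the threshold values here are invented for illustration and are not actual MBone settings):

```python
# Hedged sketch of TTL-based scope limiting: a datagram is forwarded
# onto a tunnel only if its remaining time-to-live is at least the
# tunnel's threshold. Thresholds below are illustrative only.
TUNNELS = {
    "site": 1,        # local tunnels: almost any TTL passes
    "regional": 32,   # tunnels between regional networks
    "backbone": 64,   # backbone / transatlantic tunnels
}

def forwarded_onto(ttl: int) -> list:
    """Return the names of tunnels a datagram with this TTL crosses."""
    return [name for name, thresh in TUNNELS.items() if ttl >= thresh]

# A sender chooses the initial TTL to bound how far traffic spreads:
print(forwarded_onto(16))   # confined to the local site
print(forwarded_onto(127))  # also crosses regional and backbone tunnels
```

This is why conference tools let the sender set an initial TTL: a small value keeps a test session on the local island, while a large one lets it reach the whole MBone.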

There have been some problems with this approach, which prompted the implementation of encapsulation. In this method the original multicast datagram is placed in the data part of a normal IP datagram that is addressed to the mrouted on the other side of the tunnel. The receiving mrouted strips off the encapsulation and forwards the datagram appropriately. Both methods are available in the current implementation.
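
The encapsulation method can be modelled as follows (an illustrative Python sketch of the packet layout, not the mrouted implementation; header checksum computation is omitted for brevity):

```python
import socket
import struct

IPPROTO_IPIP = 4  # IP protocol number for IP-in-IP encapsulation

def encapsulate(inner: bytes, src: str, dst: str, ttl: int = 64) -> bytes:
    """Wrap a complete multicast datagram in an outer IPv4 header
    unicast-addressed to the mrouted at the far end of the tunnel."""
    ver_ihl = (4 << 4) | 5                    # IPv4, 20-byte header
    total_len = 20 + len(inner)
    header = struct.pack("!BBHHHBBH4s4s",
                         ver_ihl, 0, total_len,
                         0, 0,                # identification, flags
                         ttl, IPPROTO_IPIP,
                         0,                   # checksum left zero here
                         socket.inet_aton(src),
                         socket.inet_aton(dst))
    return header + inner

def decapsulate(packet: bytes) -> bytes:
    """Strip the outer header (assuming no IP options) to recover the
    original multicast datagram for normal multicast forwarding."""
    ihl = (packet[0] & 0x0F) * 4
    return packet[ihl:]
```

Because the inner datagram travels untouched as payload, intermediate routers see only an ordinary unicast packet, which avoids the problems caused by the LSRR option.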

At present there is no pruning of the multicast tree. That is, every multicast datagram is sent to every mrouted in MBone if it passes the threshold limit. The only pruning is done at the leaf subnets, where the local mrouted will only put a datagram onto the local network if there is a client host that has joined a particular multicast group or address.

There is no network provider for the MBone. In the spirit of the Internet, the MBone is loosely coordinated via a mailing list. The intent of this mailing list is to coordinate the top levels of the topology. It mainly consists of the people who administer the backbones (NSFnet/ANSnet) and regional networks such as NEARnet, SURAnet, and so on. The current position is that when a regional network wishes to join the MBone, it makes a request on the mailing list and some MBone node "nearby" sets up a tunnel to the new participant. End-users wanting to connect to the MBone are encouraged to contact their network provider; if that provider is not participating in the MBone and for some reason does not want to, a tunnel can be set up to another point in the MBone. From time to time there have been major overhauls of the topology as the MBone has grown, usually prompted by a forthcoming IETF meeting putting a big strain on the MBone. The IETF multicast traffic has been between 100 and 300 kbps, with peaks of up to 500 kbps.

The use of the MBONE has been growing at a staggering rate. A wide variety of users, ranging across many disciplines, have started to make use of the network. One of the more widely publicised projects has been the JASON project, from the Woods Hole Oceanographic Institution. In 1993, 600,000 school students took part in remote sensing of the sea bed (the Sea of Cortez mentioned previously), although this application was mainly for the multicasting of data (cf. imm).

The Mbone FAQ [8] is currently the best source of information and is updated regularly.

2.3.5 ISDN
GMD, Nottingham University, NTR and UCL currently have ISDN access. All these institutions have Basic Rate (BR) access, with UCL also having Primary Rate (PR) access. Moreover, while the UK PR access does not conform to international standards, and even the national BR standards have some local differences, the international services interconnect well at single BR channel rates. Unfortunately the procedures for using equipment over the ISDN are not well standardised, so its use has been limited. To explain the problem, it is necessary to describe the equipment used in MICE with the ISDN, which is the following:

We have provided very successful multi-way conferencing, including access via BR ISDN. This was achieved with limited effort only because we used the same adaptor equipment and BR ISDN channels in the CMMC as were used in the BRI videophones. We have put in a PR board, connected via a PR ISDN channel to the CMMC - but this has certain characteristics applicable to data traffic which must be extended to video. We have put in some workstations which can incorporate both the facilities of WP3 and the BR ISDN channels, but they need development to switch between the PR equipment and the existing BRI videophones.

These incompatibilities can be overcome in many ways - but we have not done so yet. To have done so during the period of MICE-I would have been possible only by replicating the equipment used for one channel, rather than using the PR interface; this would have been expensive, could not be scaled, and was contrary to the architecture of the rest of the MICE system. We have shown that the ISDN is quite satisfactory for connecting equipment internationally over single channels, and intend to develop our ISDN capability in full; this will require, however, work on several different components to achieve compatibility in the lower levels of data transmission (as distinct from the ISDN signalling) and it has not yet been established that all the systems would operate in any Open Systems way internationally over multiple BR ISDN channels. Clearly the message is that further standardisation of network protocols at the higher levels is still needed - particularly between the worlds represented by the manufacturers of codecs, workstations, and data transmission equipment.

2.3.6 The National Networks
In many cases, the MICE partners have been so close to their international nodes that the national research networks have not been involved; this was true for ULB, SICS and UCL. GMD and RUS have had to use DFN; this has been adequate most of the time, but has given difficulties in accessing the international gateways with adequate performance. Oslo University must go via Trondheim for international connectivity; this gave no problems nationally, but there was difficulty between Oslo and Stockholm until that portion of Nordunet was upgraded. INRIA had few problems in getting access to the international node in Paris - though the performance of such international connections is poor except to the US. Nottingham University would have to link via JANET; MBONE traffic via JANET has been prohibited because of concerns about the lack of capacity.

2.3.7 High Speed Activities
There is an emerging set of national activities, and a more speculative set of international activities in higher speed networks. The MICE-I partners are heavily involved in many of these initiatives, and we review the current status of this work.

In France, Renater is moving towards a national higher speed infrastructure; more immediately, there is a 100 Mbps extended FDDI infrastructure which includes INRIA at Sophia Antipolis. There are some projects which will provide international connectivity at higher speeds - but these are not yet planned to provide general higher speed working.

In Germany, both GMD and RUS are part of national higher speed activities. GMD has interconnections between its own sites at higher speeds. GMD-Focus, in Berlin, is part of the BERKOM-II project, and is building an internal ATM infrastructure.

BERKOM-II itself is building an ATM infrastructure - with DFN collaboration. GMD-Darmstadt, the MICE-I partner, will need to deploy multimedia both inside GMD and on the DFN. It is not yet assured that the GMD deployment will be based on MICE software, but this is certainly being considered. RUS, in addition to being part of the German ATM field trials, is central to the Baden-Wuerttemberg research network (BelWue), which is just being upgraded to 34 Mbps ATM.

In Norway, the University of Oslo is one of the four main centres of the UNINETT national research network, which is currently being upgraded to 34 Mbps. In addition to any work under MICE, UiO is involved in major development projects on electronic classrooms and other distance education tools. In Sweden, SICS is one of the main partners of the Multi-G project, which is providing an ATM infrastructure and also wide-area high-speed education environments.

Figure 3. Schematic of the MICE Conference Management and Multiplexing Centre

In the UK, UCL is one of the original major nodes on SuperJanet - with 155 Mbps access now, and a commitment to provide ATM in early 1994. Like UiO and SICS, UCL is involved in distance education projects. On the international scale, many of the national organisations have discussed participation in other European and intercontinental projects. The French, German, Norwegian, Swedish and UK partners expect to have access to the planned PTO ATM pilot through their national research networks. Independently, the same countries will have access to the emerging EuroCairn facilities - a 34 Mbps infrastructure planned for the RTD community, which is the responsibility of DANTE.

2.3.8 The implications of MICE for the network providers
MICE provides an opportunity to encourage a standard video conferencing infrastructure on a European basis. This should ensure high levels of interoperability across platforms. It is important to capitalise on the development investment by providing adequate end-user support on a national basis. In addition it will be necessary to seek out specialist communities that could benefit from this technology, but who might not be expected to make the transition to its use without some encouragement. For instance, we have already seen interest from European language teachers, practising clinical staff (including pathologists) and custodians of rare artefacts. National network providers have a role in the coordination activity, whilst the existing MICE partners are well placed to provide the support role.

The national network providers must expect a significant increase in traffic as the use of MICE facilities increases. They must prepare for this traffic increase in their forward planning activities right now. It is also of prime importance that a high standard of national network interconnection is provided on a pan-European level.

2.4 Related European Initiatives

This is an area in which there are a large number of similar projects. The MICE partners are aware of what is going on - and in fact participate in many of the projects. We discuss here briefly some of the initiatives in workstations, conferencing and shared workspace which are most relevant.

Most of the projects are developmental in origin, and do not have an emphasis on piloting and operations. Moreover, many of the RACE projects are designed for the circuit-switched, high-bandwidth environment - and would not run over the current research networks. We do not know of any which make extensive use of multicast - which is a prerequisite for large-scale deployment on the current generation of networks, and would also be more efficient on the next generation. The majority of the projects are not open, in the sense that special hardware or software is being developed without the aim of linking outside the project. For example, we know of no other European projects which link into US ones.

Many national research network agencies are sponsoring national projects for use over their emerging high speed networks. Some of these are adopting MICE technology in a pilot form, others have sponsored technologies which have contributed to the MICE technology. While we mention a few of these projects in this chapter, it is more because these might be considered relevant in future MICE activities than any attempt to give an exhaustive list.

2.4.1 Conferencing
Under the RACE projects EUROBRIDGE and NETMART 2079, TELES has developed a conferencing system on an Intel 486 base. The platform can hold two different ISDN cards: a BR-ISDN card with two channels and an H1 card with thirty channels; however, the platform can accommodate three BR-ISDN cards to support H0 (384 kbps). They currently use a version of Unix on the system - but are moving to supporting a version under NT. The system has two parts: workstations and a Conference Multiplexing Unit (CMU). The workstations use a special video compression card that does H.261 on the card. It can do n x 64 kbps compression, where n is between 1 and 30, so the workstations can work at up to a full 2 Mbps.

The video can be sent over both UDP and TCP; the audio uses a different standard. The control information goes over a separate ISDN channel over TCP. The CMU takes a number of multimedia calls as input and reproduces them together - just as the UCL CMMC does; the calls appear as individual windows on the workstations. This system is similar in aim to the MICE workstation, but there is no plan to develop it over multiple platforms, to interwork with other systems, or to have the open MICE development environment. Nevertheless, the system is very interesting, and future collaboration is being explored.

2.4.2 Shared Workspace
There are many European projects in this area. The four systems used in MICE all derive from other projects. For example, the RACE project PAGEIN (Pilot Application on a Gigabit European Integrated Network) involves the major part of European aerospace research, some manufacturers, two universities and INRIA; one of the universities is also in MICE.

The main goal of PAGEIN is to improve the productivity of European aerospace research and industry through advanced computer- and network-supported cooperative working modes, especially in the pre-design phase. The technical goal is to integrate heterogeneous distributed supercomputer simulation, visualization and scientific video with multimedia conferencing facilities into an easy-to-use working environment. The network-political goal is to raise awareness among the relevant political and technical parties of the need for a European high-speed infrastructure by demonstrating PAGEIN results on visible international high-speed network links.

PAGEIN provides a distributed shared memory system on top of both massively parallel and vector architectures as an integrated platform for cooperation-aware modules such as simulation codes or modules of the visualization pipeline. Because of the high dimensionality of the data, no shared-X techniques can be used as the distribution vehicle. A concept for the integration of the INRIA video system was developed. In a first attempt, PAGEIN tried to realise an UltraNet-to-UltraNet Gigabit connection from Paris to Stuttgart via a 140 Mbps PDH link. This attempt was unsuccessful, although several demonstrations on a 140 Mbps PDH infrastructure have been carried out within Germany. PAGEIN was demonstrated internationally using a 2 Mbps link between the Stuttgart SMDS/CBDS field trial and an SMDS island at Interop 93 in Paris. PAGEIN is working towards the use of the international ATM field trial. A PAGEIN prototype is operational; integration of IVS functionality is still under way.

In the CIO (Coordination Implementation and Operation of Multimedia Teleservices) project, which also involves RUS and is multivendor, they are developing both multimedia mail (with X.400 and X.500) and a joint viewing and TeleOperation mode. It involves many partners - including the Dutch and Spanish PTTs, Siemens, Ascom, ETH, GMD, TUB, RUS and Liege University. In this case the media include text, graphics, video and audio. It includes a message store for the multimedia, and uses both ATM and FDDI. It uses a software package (JVTOS) which is available on the SPARCstation, Apple Mac (MacOS 7.1) and IBM PC (Windows 3.1). Its current four sites can now interwork. The system uses G.711 audio and Motion JPEG, running over TCP/IP. The software would be available to all universities if desired, and will be used in the German pilots BelWue (Baden-Wuerttemberg), DFN BERKOM and RARE. This system is inappropriate for lower speed networks, and hence would not fit into the CMMC concept. EUROBRIDGE has also developed a multimedia mail product, but shared workspace is not such an important feature of that project.

2.4.3 Workstations
There is tremendous activity in the development of workstations. Virtually every workstation manufacturer is making multimedia offerings. There is a whole Action Line in RACE on image coding; several of its projects have led to products, or have contributed to the work of one or other of the MICE partners. The MICE partners are reasonably familiar with what is going on - and have used several of the resulting workstations in their work. It is clear that the days of the dedicated $30K codec have largely gone. An add-on workstation board can now perform the same function for approximately $3K. In fact, with the current generation of workstations, the decoding can be done in software; this requires about 50 Mips - easily available these days. It is only for high-bandwidth, good-fidelity work that codecs are still needed.

2.5 Cooperation with US-initiatives

During MICE-I, there has been close collaboration with a number of US groups - in particular LBL and Xerox. The collaboration has been on many levels: provision of equipment, network technology, components, system, participation in events and demonstrations.

On equipment provision, UCL - and hence the MICE project - has benefited from extensive equipment provision by Sun Microsystems. This has provided some of the CMMC facility. At the network technology level, there has been close collaboration - particularly in the MBONE engineering. This activity is global in scope, and transcends the narrow needs of MICE. It has become very clear that resource allocation and reservation are vital ingredients for the success of activities such as MICE, and there has been close collaboration between many sites - in particular LBL, Xerox and UCL - in developing fair-share algorithms which should improve future generations of MBONE. It is also likely that the US router manufacturers will put the results of this collaboration into their next generation of equipment - we must hope that the European manufacturers will follow suit.

At the component level, there has been remarkably close collaboration between INRIA, SICS, UCL, LBL and Xerox - in design, implementation and deployment. The LBL VAT is the voice tool used exclusively in MICE at present - but it has had input from a number of MICE partners in different aspects. The LBL WhiteBoard (WB) is the only shared workspace system used in production on MICE up to November 1993. The INRIA IVS and Xerox software video systems have been used on both sides of the Atlantic - and have both profited from the collaboration; it is the INRIA system which is used mainly in MICE, but we have also used the Xerox system in some of the events.

At the system level, the MICE partners have participated closely in the Internet Engineering Task Force (IETF) working groups in different relevant areas. The most important of these are multicast, resource allocation, multimedia transport and multimedia control. The IETF standards have been influenced strongly by MICE partners - and are often used in the MICE components and systems.

The MICE partners have participated in many US events - and US partners in MICE ones. At all the European major public events (e.g. JENC 93, IETF July 1993 and INTEROP October 1993) there have been both European and US participants. MICE partners have participated in many US events (e.g. other IETF meetings, US INTEROP meetings and the Global Schoolhouse). Xerox and LBL participate in the MICE Seminar series - and MICE partners have participated in many similar US events.

MICE-I has attracted strong interest in the US; official proposals to the NSF from MCNC have specifically highlighted collaboration with MICE; we have received specific statements that proposals to the NSF for common management standards would be received favourably - and have identified two groups who plan to make such proposals to NSF. There is an ongoing collaboration with ARPA sites at least through 1994. There is a firm commitment from at least one DOE site to collaborate with MICE. At least one site funded by ARPA will be collaborating directly with MICE partners using this technology.

There have been initial contacts with several groups in Australia and Japan. They would like to participate but are not yet clear whether the trans-Pacific network connectivity could support such traffic. This question has been raised in the August 1993 CCIRN meeting, and will be explored further during 1994.

3 Overview of existing MICE facilities

In Appendix C we give a statement of the current position at each of the MICE sites. This appendix reflects the position as at February 1994 and notes both differences and similarities in hardware and software. The most common workstation is one from Sun Microsystems.

3.1 CMMC

The current trend in the internet community is to provide multimedia services by applications that multicast directly to all sites in the conference, with no centralised conference control or multiplexing centre. Whilst we agree that an entirely distributed system is desirable, there are many reasons why we believe that many multi-point conferences will require a centralised Conference Management and Multiplexing Centre (CMMC). These reasons include:

UCL has responsibility for the Conferencing Multiplexing and Management Centre for the MICE partners. We believe that there will be many occasions when the multiplexing facilities of the CMMC are not required, as conferences can be entirely workstation and multicast based. When this is the case, it does not make sense for the CMMC to be used as a hub, or to route traffic through the CMMC. However, when the multiplexing facilities of the CMMC are required by some remote sites, the same distributed, multicast-based, conferencing software should be usable at the rest of the remote sites. This greatly affects the way we view the CMMC's role:

The important point here is that the CMMC should be able to move gracefully from one extreme to the other, and this has a strong influence on the way we design the necessary control protocols that link the participants, the CMMC and the conference rooms.

3.2 Conference Rooms

A conference room in the context of MICE is a complex and delicate setup with a lot of equipment, including video cameras and monitors, microphones and speakers, interactive electronic whiteboards, control panels to set up and terminate video conferences, switchers to select between cameras and to control audio and video, and, last but not least, a workstation used for the shared electronic workspace and for controlling at least part of the equipment.

Most of the equipment in a traditional video conference room is designed specifically for video conferencing, under the assumption that one uses only one network and one partner at a time, and the simple metaphor of paper documents and flip-charts or whiteboards. Designers of video conference systems have only lately thought about controlling their equipment by computers and about using computers or computer-controlled devices as input to video conferencing.

Most of the discussion below refers specifically to the Conference Room at GMD, but largely similar considerations apply to the other rooms (at UCL, UiO and elsewhere). Certain specific aspects, such as the way ISDN is handled, the use of video projection facilities and the local multiplexing hardware and software, differ between installations. The aim here is to make general comments on the requirements of Conference Rooms.

The MICE Project has built up an infrastructure for videoconferencing for the European research community (including access to the USA). Participants are either at workstations or in conference rooms (CRs). The principal distinctions drawn between workstations and conference rooms are the following.

The MICE partners at UCL, UiO, NTR and GMD each established a CR which networked, via the CMMC, with the others. The CRs make use of a broad variety of equipment so that they can serve a variety of purposes. Under Work Package 4 (The Provision of a Reference Conference Room), GMD was contracted to produce a Reference Conference Room (RCR) for MICE.

The GMD conference room has been specifically built for teleconferencing, whereas the UCL conference room has been designed to serve both as a regular meeting room and as a teleconferencing room. The various NTR and UiO conference rooms are designed for a special application: distance education. The details of this work can be found in the document entitled Reference Conference Room Specification. The CRs and their inter-operation with other CRs and with workstations were demonstrated at several conferences (references?????). The experiences gained from these demonstrations and tests are described in Section 4 of this report.

3.3 Electronic Classrooms

This section is based largely on the University of Oslo installation. The electronic classroom provides for video and audio communication as well as an electronic white board. The electronic white board provides the user with a large (2 x 1.3 meters) drawing area for a pen. Two electronic white boards are interconnected so that they appear as a shared electronic white board in the participating classrooms. Communication between the electronic classrooms is based on IP networking technology.

Two small studios at USIT and UNIK were established, and part of a one-semester course was run from UNIK utilizing video conferencing to serve students both there and at the main campus. This experiment demonstrated very clearly the need for additional tools; in particular, one main requirement from the teachers was to have a familiar tool like an ordinary blackboard available. Subsequent work on this issue led to the development of the present electronic white board, which we believe may be unique in the world.

We now operate within a framework which we call Distributed Electronic Classrooms, explained further below. New equipment and new designs are continuously being introduced into this concept.

3.3.1 Classroom equipment
The electronic classroom is equipped with two cameras and microphones to record activities for transmission to the remote classroom. It has been a goal that the classroom be designed to facilitate meetings, seminars and standard lecturing situations, collectively termed conferences below. During lectures there may be students only in the remote classroom, or in both the local and remote classrooms. One camera is pointed at the teacher and the other at the students. Which image is sent from the classroom depends on which microphone is active, as the echo cancelling equipment is connected to a video switch.

3.3.2 The electronic white board
The electronic white board has the size of the ordinary blackboard usually present in a lecture room. The board consists of a semi-transparent screen which is used as a display screen for a video projector placed behind it. The projector is attached to the RGB interface of a powerful UNIX workstation that is used to control the board. This makes it possible to do large-scale presentation of everything that can be displayed on the workstation. One may, for instance, display in different windows text, high-resolution pictures, video and animations, and of course also present sound.

Unlike most other similar schemes, the board is active and enables handwriting on the board in much the same way as on an ordinary blackboard. The present version of the electronic board realizes this by letting the semi-transparent screen also be a large-scale digitizing board which can be written on with an electronic pen. The pen movements are conveyed to the controlling machine via the ordinary pen coordinate registration facilities contained in such a board. It is of course also possible to erase previously written material on the board using an electronic sponge.

When the lecturer uses the electronic pen on the active part of the board, the coordinates are continuously transmitted to the workstation via a 9.6 kbps serial interface. The receiving software in the workstation then initiates the drawing of the corresponding lines in the active board window of the electronic board. In passing, it is worth mentioning that it is considered an important presentation issue to make the lines follow the pen in a natural manner.
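
A rough back-of-the-envelope check shows that 9.6 kbps is comfortably fast enough for pen tracking. The figures for report size and serial framing below are assumptions for illustration; the actual digitizer protocol is not specified above:

```python
# How many pen coordinate samples per second can a 9.6 kbps serial
# line carry? Digitizer protocols typically send a few bytes per
# coordinate report; we assume 5 bytes per report, and asynchronous
# serial framing (8N1) costing 10 line bits per byte.
LINE_RATE_BPS = 9600
BITS_PER_BYTE_8N1 = 10     # 8 data bits + start bit + stop bit
BYTES_PER_SAMPLE = 5       # assumed coordinate report size

samples_per_second = LINE_RATE_BPS // (BITS_PER_BYTE_8N1 * BYTES_PER_SAMPLE)
print(samples_per_second)  # ~190 samples/s: ample for smooth drawing
```

Even under these conservative assumptions, the line delivers far more samples per second than are needed for natural-looking strokes, so the serial link is not the bottleneck.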

Persons in all the distributed classrooms can interact via the electronic pens applied to the active parts of the electronic boards. This means that the lecturer and students, no matter where they are located, get the feeling of sharing a common classroom: they can write on and erase the electronic white board, as well as observe and talk to their lecturer(s) and fellow students.

An X11 program has been developed for the electronic white board using the InterViews toolkit. The current program communicates between the classrooms using a simple protocol over TCP/IP.
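The simple protocol is not specified in this report. Purely as an illustration of the kind of wire format such a protocol might use, here is a fixed-size encoding of pen events; every field choice below is an assumption, not the protocol actually implemented:

```python
import struct

# Hypothetical wire format for one whiteboard event: a 1-byte event
# type and two 16-bit coordinates, in network byte order.
EVENT_FMT = "!BHH"
PEN_DOWN, PEN_MOVE, PEN_UP, ERASE = 0, 1, 2, 3

def encode_event(kind, x, y):
    """Pack one pen event into 5 bytes for transmission over TCP."""
    return struct.pack(EVENT_FMT, kind, x, y)

def decode_event(data):
    """Unpack a received event back into (kind, x, y)."""
    return struct.unpack(EVENT_FMT, data)
```

A fixed-size record like this keeps framing trivial on a TCP stream: the receiver simply reads 5 bytes per event.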

At the moment, all of the electronic board is active, but in time we envisage a larger board containing both an active and a passive part: the former allowing handwriting as described above, the latter used primarily as a pure display area showing pictures, animations etc. Furthermore, when there is no more writing space on the active part of the board, the contents of the active part can be moved to (part of) the passive part, keeping the previous writing and other material in view while writing continues on the active part. Thus an effect similar to the well-known sliding blackboards is achieved.

3.3.3 Data communication technology
All communication between the electronic classrooms is carried out over IP networks. As far as possible we are basing our efforts on Internet standards. For the purpose of this project the Norwegian 34 Mbps IP backbone between the four Norwegian universities is used (the Supernet).

For the white board, communication currently takes place over TCP/IP between dedicated applications on UNIX workstations.

All network communication in the current system is point-to-point, limiting conferences to two classrooms. We are in the process of developing the system further to support multi-party conferencing, and we are using IP Multicast as the base for the communication side of such an extension.
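On the receiving side, the IP Multicast extension amounts to joining a group address. A minimal sketch in modern Python follows; the group address and port are illustrative, not the project's values:

```python
import socket
import struct

def make_membership_request(group, interface="0.0.0.0"):
    """Build the ip_mreq structure passed to IP_ADD_MEMBERSHIP."""
    return struct.pack("4s4s",
                       socket.inet_aton(group),
                       socket.inet_aton(interface))

def open_multicast_receiver(group="224.2.0.1", port=5004):
    """Open a UDP socket joined to a multicast group, so traffic sent
    to the group by any classroom is received here."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership_request(group))
    return sock
```

The key point is that senders transmit once to the group address, and the network replicates the traffic to every classroom that has joined.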

3.3.4 Pilot usage and evaluation
The two existing electronic classrooms are currently being used for the teaching of a graduate course twice weekly as well as a weekly seminar and sporadic meetings, including project meetings. This usage forms the basis of an evaluation of the technology being carried out by experts on distance education.

Figure 4. Electronic Whiteboard

3.3.5 Deployment of electronic classrooms
A number of educational institutions in Norway and abroad have shown strong interest in the technology developed by the project. At least two educational institutions in Norway are planning to install electronic classrooms during the next six months. The Royal Institute of Technology in Stockholm is currently considering incorporating the technology in lecture theatres under construction.

3.3.6 Ongoing development
We are continuing to refine the electronic classroom design and are working to package the system for easy replication. We are also working on integration with workstations. Some of the ongoing development activities are listed below:

3.4 IVS

As the bandwidth available on networks and the speed of computers increase, real-time transmission of video between general-purpose workstations becomes a realistic application. However, even with a high-speed network, video has to be compressed before transmission. For example, sending uncompressed NTSC video requires about 60 Mbps. A relatively simple compression scheme can significantly decrease the rate of video flows by exploiting the redundancy in video sequences.

Video compression is generally performed by some form of differential coding, i.e. by sending only the differences between two consecutive images. This leads to highly variable transmission rates because the amount of information to code between two images varies greatly, from very low for still scenes to very high for sequences with many scene changes. Packet-switched networks such as the Internet are very well suited to transmitting such variable bit rate traffic.
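The principle can be illustrated in a few lines: a still scene yields almost nothing to code, while a changed scene yields data proportional to the change. Frames are modelled here as flat lists of block values; this is a sketch of the idea, not a real coder:

```python
def changed_blocks(prev_frame, curr_frame, threshold=0):
    """Return the indices of blocks that differ between two frames.
    With differential coding only these need to be transmitted, so a
    still scene generates almost no data while a busy scene does."""
    return [i for i, (a, b) in enumerate(zip(prev_frame, curr_frame))
            if abs(a - b) > threshold]

still = [10] * 8                              # unchanged scene
moved = [10, 10, 90, 90, 10, 10, 10, 10]      # two blocks changed
```

Here `changed_blocks(still, still)` is empty (near-zero bit rate), while `changed_blocks(still, moved)` names only the two changed blocks.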

Many algorithms have been proposed for the coding of video data. Some have been standardized: JPEG for still images, and MPEG and H.261 for moving images. MPEG coding is suited to high-definition video storage and retrieval. Since the H.261 standard is the best suited to videoconferencing applications, we chose to implement this compression scheme in software in IVS (the INRIA Videoconferencing System).

However, this standard was designed for use over the Integrated Services Digital Network (ISDN), i.e. for a network with fixed-rate channels; packet-switched networks such as the Internet do not provide such channels. It is therefore necessary to adapt H.261 for use over the Internet. We have developed a packetization scheme, an error control scheme and an output rate control scheme that adapts the image coding process to network conditions. The packetization scheme allows interoperability with already available hardware codecs.
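The output rate control can be sketched as a simple feedback rule: back off when receivers report loss, probe upward when the network looks healthy. This is an illustrative reconstruction, not the actual IVS algorithm, and all constants are hypothetical:

```python
def adjust_output_rate(rate_kbps, loss_fraction,
                       min_kbps=16, max_kbps=384,
                       loss_limit=0.05, backoff=0.5, probe=1.1):
    """One step of feedback-driven output rate control: back off
    multiplicatively on significant reported loss, otherwise probe
    gently upward, within [min_kbps, max_kbps]."""
    if loss_fraction > loss_limit:
        return max(min_kbps, rate_kbps * backoff)
    return min(max_kbps, rate_kbps * probe)
```

Run repeatedly against receiver loss reports, a rule like this keeps the encoder's rate near what the network can currently carry.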

Our objectives in developing IVS are the following:

This software brings a new dimension to classic workstations at low cost, since minimal hardware is necessary. For instance, low-quality video can be sent on a 9600 bps link. Interoperability between IVS and hardware H.261 codecs such as GPT and Bitfield has been demonstrated as part of the MICE project. The feedback mechanism which we introduced ensures that IVS behaves as a "good network citizen".

3.5 Shared Workspace

A number of tools for shared workspace operation have been developed as part of the MICE project. The MICE partners at INRIA have developed mscrawl, and at SICS two packages were developed: xspy and multidraw. The standard shared drawing tool is wb, which is currently under development at LBL. These tools differ in what they share and how they share it. We have also studied other tools; a fuller report is given in reference [8]. Note that the ability to operate as a multicast application is considered very important for all of the tools, and in particular for shared workspace. Outside that brief, we evaluated the Collage tool from NCSA, and found that it had features, such as high-quality graphics, which some of the other tools lacked. However, we did not make any further use of Collage since it was not a multicast application, although it could be used in multipoint or point-to-point mode, employing a server to start the remote client applications.

3.6 Workstations

Here we attempt to provide a reference implementation for a multimedia workstation to be used in the MICE project with interactive audio/video capabilities.

3.6.1 Rationale
Increased bandwidth and computational resources have made interactive audio and video communication between workstations across the Internet a realistic application. Workstations are also increasingly being equipped with built-in toll-quality (Sun SPARCstations, DEC workstations) or CD-quality (NeXT) audio hardware. Desktop audio and video conferencing tools have generated a large amount of interest within the IP Internet community: audio tools such as NEVOT and VAT, in coordination with video tools such as IVS and NV, are being used to hold meetings of various sizes and scopes over the Internet and across the world.

Transmitting audio and video data across packet-switched networks offers the well-known benefits of service integration over the circuit-switched approach. However, unlike most other data sent across the network, audio and video data require resource guarantees which are not yet available in the Internet. Audio and video conferencing applications must therefore handle packet loss, maintain synchronization despite randomly distributed packet delays, and control their output rate in order to avoid network congestion.

The Workstation workpackage focused on the above problems, in particular adapting to network conditions. On the Internet, most packet losses are due to congestion rather than transmission errors. In addition, packets can be delayed or received out of order as a result of routing and flow control in the network. Due to real-time requirements, delayed video packets are treated as lost if their delay exceeds a maximum delay value.

With UDP, no mechanism is available at the sender to know whether a packet has been successfully received. It is up to the application to handle packet loss and the re-sequencing of packets delivered out of order. In order to adapt to network conditions we proposed an end-to-end control mechanism; such a mechanism needs two components, namely a network sensor and a throughput controller. We use feedback mechanisms for video sources to adjust the parameters (and hence the output rate) of video encoders based on feedback information about changing capacity in the network.
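The application-level accounting described above is typically done with per-packet sequence numbers. A minimal illustration of the receiver's side, not the actual tool code:

```python
def account_for_packets(seq_numbers):
    """Use per-packet sequence numbers, in arrival order, to detect
    losses and out-of-order delivery at the receiver -- information
    UDP itself does not report."""
    received = set()
    out_of_order = 0
    highest = -1
    for seq in seq_numbers:
        if seq < highest:
            out_of_order += 1  # arrived after a later-numbered packet
        highest = max(highest, seq)
        received.add(seq)
    # Any sequence number below the highest seen that never arrived.
    lost = [s for s in range(highest + 1) if s not in received]
    return lost, out_of_order
```

The loss fraction derived from such accounting is exactly the kind of information a receiver can feed back to the video source's throughput controller.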

3.6.2 Hardware equipment
The Workstation
The workstation should be a high-performance, high-capacity station, since several audio/video decoding and encoding processes may be executed at the same time. For Sun systems we therefore recommend the use of a SPARCstation 10 rather than a Sun IPX or IPC. Note that the conferencing tools used also run on other platforms such as SGI, HP and DECstations.

Video requirements
Video camera
A camera is needed to film the workstation's user. A camera with a zoom lens is not mandatory but is sometimes useful for scanning documents. A different camera can be used to show documents while the first camera is fixed on the workstation. Currently, only one camera can be used at a time, and the selection is made using the different video input ports of the framegrabber. A PAL or NTSC camera is required with the Sun VideoPix board. A SECAM camera can also be used with the PARALLAX board. Note that a simple camcorder is sufficient for this purpose: for example, we have used a Panasonic WV-GL 350 (PAL).

Video framegrabber:
Several framegrabbers can be used on a SPARCstation. Cheap boards exist, such as the VideoPix board, but the performance obtained is limited to 6 frames per second (fps). The PARALLAX board is more expensive, with grabbing performance of up to 20 fps. The new SunVideo board, allowing 30 fps, appeared at the end of 1993.

Audio requirements
The SPARCstations can play back and record sound without additional hardware. SPARCstation audio driver characteristics are:

Machine   Encoding (bits)   Max sampling rate   Output channels
SS 10     u-law, 8, 16      48 kHz              1 (stereo)


A single microphone is sufficient. The microphone supplied by Sun can be used, but its quality is mediocre. Examples of usable microphones are:


Headphones are recommended to avoid echo feedback. When several participants at the same site are listening to the conference, a loudspeaker is preferable. The loudspeaker included with the SS10 workstation is sufficient; an external loudspeaker is recommended when using IPC or IPX workstations.

A combined headphone-microphone is very effective, particularly where there may be other noise and disturbances nearby. Three examples are as follows:

The Beyerdynamic DT 109 requires an interface to connect to a Sun SS 10 speakerbox, with speakers at 2 x 400 ohm and microphone at 200 ohm impedance.

3.6.3 Software requirements
The minimal set of software requirements is as follows:

Workstations must support the IP multicast extensions in order to receive conferences. IP multicasting is not included in SunOS, so the SunOS kernel must be extended with this mechanism. A free implementation of IP multicasting that can be used for this extension is available by anonymous ftp from ftp://gregorio.stanford.edu/vmtp-ip/. This implementation corresponds to RFC-1045 (plus revisions), and can be used on Sun-3, Sun-4, SS10, VAX and DEC MIPS architecture machines.

Audio and video conferencing software
Characteristics of the main software audio/video conferencing systems in frequent use in the Internet are listed in Appendix A.

The LBL Visual Audio Tool (VAT) is a well-established tool for conducting audio conferences over the Internet. For example, it has been used by more than 600 people to listen to the last IETF conference.

Conference Control
Conference control is needed to manage and coordinate all participants and their multiple media. Control functions include dynamic membership management, distributed session management and policy management. Tools such as VAT, IVS or WB already include distributed conference control, which allows them to be run independently. However, when the conference includes several multimedia applications, centralized control functions seem useful. The conference control system for the MICE CMMC is based on the CAR Conference Control system from UCL (cf. Specification of the MICE Conference Management and Multiplexing Centre).

3.6.4 Evaluation of workstations (ivs)
Audio section
Work done
Work to do
Video section
Work done
Work to do
Figure 3 Workstation performing software and hardware codec functions

3.7 Network Systems Configuration

The actual network configurations have been discussed in Section 2.4. The main lesson we would like to draw here concerns the suitability of the infrastructure for MICE purposes.

Internationally, inside Europe, the infrastructure usually works for a single conference using up to 384 kbps to the countries attempted so far. The links to the US also work. Because of the lack of Quality of Service control, there can be problems with quality, and the MICE conferences can have an impact on other traffic in the networks. Moreover, as traffic always seems to rise monotonically, as soon as certain links become more loaded they often become useless from then on for MICE purposes. Examples are the EBONE links between London and both Paris and Stockholm; these are both only 256 kbps links, and have been too saturated throughout MICE-I to be usable for MICE purposes. Only capacity increases in these other networks have made them usable for MICE:

Their continued usability will depend on the scale of traffic growth.

There is a particular problem in Europe due to the conflict between EBONE and Europanet. While the Swedes have ordered a 2 Mbps Europanet link, the French have not; other countries are not increasing their EBONE bandwidth. As a result, all traffic between France and the rest of Europe will have to use the EBONE-Europanet gateway; this is likely to become saturated unless this dispute is resolved soon. In any case, larger scale deployment of MICE technology in Europe depends on the deployment of a higher speed European fabric; we hope this will come from the Framework IV initiatives or in some other form.

The MBONE technology has been shown to work well between Europe and the US; this has been helped by three of the partners (France, Sweden and the UK) having good access to their National links to the US - which have been upgraded to 2 Mbps in each case during the life of MICE. At several stages of the project the US-European links have been so much more robust and lightly loaded, that it has been possible to get better service going twice through the US than adopting a direct European route. Clearly this approach may work - but it is politically unacceptable on both sides of the Atlantic. With current tariffs it is NOT obvious that the approach is uneconomic; a US-European link often costs only 10% more than its equivalent between most European countries (and in its higher bandwidth versions may be more available and even cheaper), so that economies of scale and usage may superficially make upgrade of specific US-European links more attractive than European ones.

The national access networks have given little trouble in most MICE countries. In Belgium the link is direct. In France, since the links to the EBONE node have been improved, there has been little problem. In Germany there have been difficulties in national distribution; they have been localised to problems with specific routers. In Scandinavia there were difficulties between Norway and Sweden on Nordunet until that bandwidth was increased. In the UK, traffic considerations led to a prohibition of the use of JANET for MICE purposes; when SuperJANET became available, there were no further difficulties.

Few of the MICE partners have used the ISDN; only GMD, NTR and UCL have done so. The network itself has been quite satisfactory for international and national access for these sites. The main problems have been the lack of adequate standardisation on the lower levels of protocol (those on the B-channels below transport), the lack of access to these levels in the workstations, and the lack of effort to reprogramme the CMMC primary rate router. This has resulted in the Tandberg videophone (and its badge-engineered equivalents in other countries) being the only device actually used effectively over the ISDN in MICE. This is now changing due to several factors:

Thus we expect to have much better ISDN facilities in the future.

4 Public Demonstrations of MICE Technology

There were four public demonstrations of the MICE technology in 1993 during Phase 1. They were

All the above are the subject of fuller reports, as indicated in the references [10], [11], [12], [13]. Brief summaries are given here.

We believe that the successful demonstrations at JENC indicate that the project has now more than satisfied the following milestones, as specified in the Technical Annex of the MICE Phase I Project [14]:

Both demonstrations were recorded on standard video-tape.

MICE demonstrated at the Internet Engineering Task Force (IETF) meeting, held at the RAI Conference Centre, Amsterdam, July 11-16 1993. The demonstration was very successful: the system was extremely stable and ran almost continuously during the entire week. Claudio Topolcic of CNRI is very interested in taking part in any future demonstration, and the President of the Internet Society, Vinton Cerf, professed himself to be "very impressed by the whole thing".

At Interop, Paris, October 27-29, 1993, MICE was invited by German Telekom to show MICE Multimedia Conferencing as an application using their DATEX-M service (a DQDB-based SMDS service). For this they provided a special 2 Mbps link from Stuttgart to the Interop. As at the IETF, the main goals for the demonstration at the INTEROP were to show, in a four-way demonstration:

In addition we have demonstrated:
The Final Demonstration, on 14 December 1993, was held at UCL. Along with the other features reported in [13], this event was notable for the large number of continuous video streams received by the sites involved and the 4-way video multiplexing at UCL.

5 Other MICE accomplishments

In this section we report on the accomplishments and milestones of MICE-I other than the demonstrations described in the previous section. During MICE-I, the application of MICE technology was in many ways most consistently successful in the weekly meetings and the seminar series; both will continue during MICE-II. The public seminar series has had widespread success and is without parallel: there is no equivalent series of events to which the MICE International seminars can be compared. Several international meetings were multicast during Phase 1, but there have been no other multimedia events operating on a consistent weekly basis.

5.1 The ISDN infrastructure in the MICE project with specific emphasis on workstations

First we describe the components of the MICE system. We then go into more detail on each component, describing the options available with regards to ISDN, both theoretically and practically. Where practical experience has been gained, we note it.

5.1.1 The system
In order to be clear what we are discussing, we must describe the basic system. This consists of the following components:

Each will be considered in turn.

Conference Rooms
These will usually require fairly high quality audio and video. The conference rooms will be fairly costly, making it easier to justify the added cost of codecs and bandwidths of p x 64 kbps, with p of 4, 6, or even 24 (or 30 over SuperJanet, but not for MICE unless international bandwidth is unexpectedly increased by a large factor).

Access to conference rooms may be via Local Area Networks (LANs) to Wide Area Networks (WANs), or directly to WANs. The WANs may be either Packet-Switched Networks (PSN) or multiple channels of the Basic Rate Integrated Services Digital Network (ISDN). Conference rooms will receive/send data either directly via codecs from/to the audio-visual equipment, or use an intermediate computer.

Workstations
These will usually provide lower quality audio and video. Mostly they cannot justify full hardware codecs, and will perform the codec function in software. Access may be over Packet-Switched Networks (PSN) or single/multiple channels of ISDN.

We define a workstation to be of type 1 if it can code/decode at bandwidths of p x 64 kbps, where p will usually be 1, 2, 4, or 6. We could go to a p of 24 or even 30 over SuperJanet, but not for MICE, unless international bandwidth is unexpectedly increased by a large factor. It is impractical with current workstations to attain the larger values of p with software codecs.
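The p x 64 kbps figures used throughout this section translate directly into aggregate bandwidth; a trivial illustration:

```python
def channel_rate_kbps(p):
    """Aggregate rate of p ISDN B-channels at 64 kbps each."""
    return p * 64

# Typical values: basic rate access (p = 1 or 2), the common
# videoconferencing rate (p = 6, i.e. 384 kbps), and a full
# primary rate interface (p = 30, i.e. 1920 kbps, roughly 2 Mbps).
rates = {p: channel_rate_kbps(p) for p in (1, 2, 6, 24, 30)}
```

This is why p = 30 corresponds to the primary rate interface mentioned later, and why software codecs at the larger values of p are out of reach for current workstations.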

We define a type 2 workstation to be a complete unit which has a low-cost hardware codec tailored to 1 x 64 kbps or 2 x 64 kbps.

The Conference Management and Multiplexing Centre
The Conference Management and Multiplexing Centre (CMMC) has a number of functions including:

The Networks
The networks we have agreed to accommodate include:

5.1.2 The CMMC
ISDN Access to the CMMC is currently via a primary rate ISDN interface, i.e. 30 x 64 Kbps channels, which can be multiplexed, thus allowing partners with basic rate interfaces (1 or 2 x 64 Kbps) to connect. This interface supports either Q.931 or the British DASS-2 protocol. There also exists a Sun SPARC 10/LX/Classic, capable of supporting Basic Rate ISDN at 1x64 Kbps, using IP encapsulation over PPP. This would be via a separate, basic rate interface. This will be discussed in more detail later.

Finally, there are also some SPARCstations with the German Diehl card at UCL; these support IP directly over LAPB.

The CMMC supports the following encodings over ISDN:

Also, if the Sun SPARC 10/LX/Classic is set up:

Options (1) and (2) also offer interesting possibilities for sending data other than audio/video. Since IP will encapsulate arbitrary packets, shared workspace information could be sent over ISDN. It remains to be seen whether this will be supported, or whether it is useful.

5.1.3 Conference rooms
Conference rooms need high-quality audio and video, so a hardware codec is usually available. When using ISDN to connect to the CMMC, these options are possible:

This equipment is needed to operate a conference room:

5.1.4 TYPE 1 Workstations with software codecs.
For ISDN Access to the CMMC there are two options available to MICE participants. They can use the German Diehl SBus card in any Sun workstation, or the ISDN-equipped Sun workstations i.e. SPARCstation 10, LX or Classic. The various methods of configuration are:

Diehl Card:
This is an SBus card which can be fitted to any Sun workstation. It requires some kernel modifications in order to work. It requires a Basic Rate ISDN interface, and uses only 1 x 64 Kbps channel. It can be set up to call different addresses on demand. The card takes IP packets and encapsulates them directly in the LAPB protocol. This means it is possible to connect to either the Primary Rate Interface at UCL or another workstation with the Diehl card. The various connection possibilities are:

Sun ISDN SPARCstation:
These machines come equipped with ISDN hardware built in. The hardware supports a Basic Rate interface of 1 or 2 x 64 Kbps channels. Before the ISDN hardware can be used, however, the machine must be running Solaris 2.2 and the ISDN software package from Sun must be installed; Solaris 2.3 is expected to have this software bundled. The software supports 1 x 64 Kbps channel, and uses PPP to encapsulate IP packets over ISDN. This extra layer of PPP, whilst useful for authentication, reliability etc., means that it is incompatible with both the Diehl card and the Primary Rate Interface at UCL, which both use straight IP over LAPB. It is possible that the UCL PRI may be extended to support PPP, but this is not the case at the moment. Thus, this configuration may only be used to connect to other Sun ISDN machines. UCL has, or will shortly have, a machine connected to Basic Rate ISDN which can be used. The various connection possibilities are:

Other Options:
H.221 framing of H.261 + audio directly over LAPB: It should be possible, given a complete ISDN implementation on the card, to write a driver which will frame the audio and video in H.221 and send this directly over LAPB to either another workstation which does exactly the same thing or, more sensibly, to an ISDN hardware codec which supports H.221.

This would turn a workstation into an ISDN videophone. As a variant of this method, just H.261 over LAPB could be implemented, talking to another workstation or an ISDN codec which supports H.261. However, it is currently not possible to do this, since the Sun ISDN implementation is not complete enough and is hard-coded to use one channel, and the Diehl card has no documentation on this point. Also, the processing power needed to achieve this is not known.

Shared Workspace data over ISDN:
Both the Diehl card and the Sun implementation support IP over ISDN, so it is very easy to send shared workspace data over ISDN. This might be desirable if, for example, the packet-switched network had very low bandwidth or was busy. Instead of multicasting the shared workspace data directly from the source site, it would be sent to UCL first, which would then multicast the data normally. It is not clear how easy this would be, or how useful. There would probably be a distinct lack of bandwidth available on a Basic Rate interface if audio and video were already being sent.
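The relay arrangement described above (send to UCL first, which multicasts normally) could look roughly as follows. This is a sketch under assumed addresses, ports and TTL, not an implemented MICE component:

```python
import socket

def relay_packets(inbound, outbound, group_addr, max_packets):
    """Forward packets arriving on a unicast (e.g. ISDN/IP) socket to
    a multicast group address, so one well-connected site can fan data
    out to the MBONE on behalf of a poorly connected one."""
    for _ in range(max_packets):
        data, _src = inbound.recvfrom(2048)
        outbound.sendto(data, group_addr)

def open_relay_sockets(listen_port=6000, ttl=16):
    """Create the sockets used by relay_packets (setup only;
    port and TTL are illustrative)."""
    inbound = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    inbound.bind(("", listen_port))
    outbound = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    outbound.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return inbound, outbound
```

Because the relay only copies opaque payloads, it needs no knowledge of the shared workspace protocol itself.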


GMD possesses a BINTEC card.

The equipment needed to operate such a workstation is:

Diehl Card:

5.1.5 TYPE 2 Workstations with hardware codecs.
At the time of writing, the only way to access the CMMC is to use an ISDN codec, fully independent in operation and merely triggered from the workstation: this delivers full H.221 framing of H.261 and G.711 or G.726 over a Basic Rate Interface (1 or 2 x 64 Kbps).

5.2 Multimedia Conferencing From a User's Point of View

The following sections provide some information on how Multimedia Conferencing looks from a user's point of view, describing in some detail the experience we have gained in using MICE facilities to run Multimedia Conferences. Section 5.2.1 covers Multimedia Conferencing on a workstation; since the user's point of view differs in some details when a conference is run in a conference room, Section 5.2.2 adds information on Multimedia Conferencing in a conference room. Details of the tools we use to run these conferences can be found in section 3.2.

The information included in this document originates from our experience of running Multimedia Conferences within the MICE project using the specific facilities and tools. These conferences were run between the MICE partners as weekly project meetings, during three major project demonstrations at international conferences (JENC, IETF, INTEROP) and during a number of smaller demos. A second source of information is a questionnaire which was sent out to all partners and others using the same set of tools (see Appendix B).

5.2.1 Multimedia Conferencing on a Workstation
Many different workstations are used by MICE partners to participate in Multimedia Conferencing. A detailed description of the reference Multimedia Conference workstations (hardware and software) is given in the workstation document [9].

Running a conference
This part includes information on hardware-specific problems of setting up a Multimedia Conference on a workstation.

Not many problems specific to Multimedia Conferencing were noticed. If one knows how to handle a video camera and how to connect it to the video board in the workstation, everything works fine. Just check that your image is in focus and that the lighting in your room is adequate, which can easily be done using the local control image of the video tool (IVS).

Audio is the most difficult part of a Multimedia Conference. Whereas a conference can sometimes be held without video, or with bad-quality video, it is impossible to run a conference when the audio quality is too bad.

Very often it is the analogue part of the audio transmission (microphones, wrong plugs, cables) which causes problems. Even the position of the standard workstation microphone can cause very disturbing background noise (hum). We use many different types of microphone, and each sounds different with the audio tools. A sound check at the beginning of a conference is therefore essential to adjust the different audio levels.

Another important factor is the loss of audio data during transmission over the network. A typical loss rate at which speech perception becomes impossible is 40% (30% when it is not your mother tongue).

The environment:
First, it has to be noted that running a multimedia conference has an influence on your working environment. Using the loudspeaker on the workstation or other speakerboxes can only be recommended when working alone in your office; otherwise the audio will significantly disturb other people working there. Most of this can be avoided by using a headset, but a headset does not prevent other people from being disturbed when you are talking to your conference partners. This disturbance is greater than that of a phone call for two reasons:

Second, there is an influence of the working environment on the user running a conference. Unlike a closed conference room, most people run Multimedia Conferences from their offices. Here are some examples:

Additional information:
During most of the project demonstrations the conference sessions were recorded on videotape. In addition to this some weekly project meetings were videotaped, too. This is a valuable source of information on how a Multimedia Conference looks to the user, how the tools look and how they are used. In addition it provides details on video and audio quality during such a session. The video tapes are available on request.

5.2.2 Multimedia Conferencing in a Conference Room
Originally, the GMD conference room was connected to the VBN (a 140 Mbps broadband network). This connection has been moved to an in-house broadband switch that allows limited (three-way) multiparty conferencing. With the MICE project, video and audio now have to be shared/switched between the broadband switch and a codec with a workstation. Currently, the switching is done by hand. The rest of this report ignores the broadband switch and considers only the case of the codec with workstation.

Running a conference
MICE conference rooms are equipped with a workstation, a codec, an Ethernet connection, and the UCL software needed to control the codec and to set up a connection to either the CMMC or another conference partner.

Setting up the Codec
There is an explanation by Mark Handley of how to use the UCL software and how to configure a codec [15]. The codec compresses the video and feeds the resulting datastream either via a high-speed serial interface into the workstation for packetization, or via an X.21 board and a terminal adapter into ISDN. Audio needs to be switched manually between the workstation's audio port, for packet audio, and the codec's audio port, for audio over ISDN.

Configuring the codec is not a task for the casual user; it needs expertise. Normally this should not cause problems, because the codec stores its settings and should be in a usable state even after power cycling. After loading the device driver, one must configure the driver for the desired baud rate (the manual points out a weakness: if the baud rate is not set here, before the next steps, you will not succeed in setting the device driver and codec to the same rate). To use the system, one starts the codec controller followed by the user interface. The codec controller and user interface have to be synchronised with the baud rate selected when configuring the driver.

Anyone running a multimedia conference from the conference room has to be familiar with both the codec and the network behaviour. In particular, the baud rate has to be chosen according to the quality requirements and the current network state, and this has to be done before starting the conference; any change of the rate means a restart. Starting the send direction usually works without problems, but the receive direction often crashes and does not resynchronise automatically, so it has to be restarted. The conference operator also has to select the packet size and the TTL, and should watch the codec user interface continuously in case the codec has to be re-synchronised or sending or receiving has to be restarted.
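To make the operator's choices concrete, the sketch below shows how the two hand-selected parameters map onto a multicast sender. This is an illustration only: the group address, port, TTL and packet size are invented values, not settings from the UCL software, and the packetization is deliberately simplified.

```python
import socket

# Hypothetical values for illustration; the actual UCL software sets these
# interactively, and this is not a real MICE session address.
MCAST_GROUP = "224.2.0.1"
PORT = 4000
TTL = 47            # limits how far the packets propagate on the MBONE
PACKET_SIZE = 512   # payload bytes per packet

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, TTL)

def packetize(frame_bytes: bytes, size: int = PACKET_SIZE) -> list:
    """Split one coded video frame into fixed-size packets."""
    return [frame_bytes[i:i + size] for i in range(0, len(frame_bytes), size)]

def send_frame(frame_bytes: bytes) -> int:
    """Send the packets of one frame to the multicast group."""
    packets = packetize(frame_bytes)
    for p in packets:
        sock.sendto(p, (MCAST_GROUP, PORT))
    return len(packets)
```

A smaller packet size trades header overhead for reduced damage per lost packet; the TTL limits the multicast scope, which is why it must be chosen per conference.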

There is no way to find out whether or not the remote partners are receiving the outgoing video except by some other form of communication. Also, when an expected incoming picture is missing, one is never sure whether or not the remote partner is actually sending.

Proper lighting and camera positioning and adjustment were points of concern. Another point that needs consideration, and is sometimes a cause of confusion, is the obvious asynchronous nature of audio and video data streams. Both audio and video feedback from the other participants are important human needs, which in general are tricky to meet in technical terms.

The audio side of the conference room needs much attention. Often the loudspeaker volume was felt to be too low despite the VAT control being turned all the way up; however, straightforward amplification would also amplify noise. There is currently a mismatch in level and impedance between the microphone and speaker ports of the studio, the audio ports of the Sun workstation, and the audio ports of the codec, which degrades audio quality. Further investigation is required to see whether this can be overcome with a few adjustments or whether more sophisticated conversion equipment is needed. Speaking into a microphone without hearing the output on the other side (and thus not being able to self-adapt) makes some people feel uncomfortable.

User interface
Setting up and controlling the equipment in the conference room is currently not well organized. The situation has evolved partly from the transition from a single-network, single-partner video conference room to a multi-network multimedia conference room, and partly because user-friendly interfaces have been missing. For example there are several separate controls:

The videotool is used for grabbing e.g. the whiteboard and feeding it to a monitor so that people not sitting in front of the workstation can see it.

A further problem is that the workstation monitor is usually too small to show all the information and control windows without overlapping and obscuring. Experience has shown that it would be useful to have two monitors: one for the shared workspace and one for conference control. To a casual user it is not obvious which button to press to accomplish a given function. It would be a big help if there were a single user interface giving the user a consistent view, controlling the conference room equipment, hiding the peculiarities of the different networks, and controlling the shared workspace tools.

The current user interfaces to the codec and control software are developer interfaces, so to speak, and require a certain degree of expertise. On the one hand, much of this will eventually be superseded by remote control from the CMMC. On the other hand there will still remain a need for local control of the equipment, e.g. in case of point-to-point conferences not asking for CMMC services. GMD is trying to contribute to the development of a user interface that is more appropriate for the inexperienced user.

A videorecorder has been installed in the conference room, so video and audio can now be recorded. One may record either the received or the transmitted video, the received one being the more interesting. The same applies to audio, where the two can also be mixed. A problem is that the microphones pick up not only the local audio but also the remote audio from the loudspeakers. There are currently no provisions for recording the shared workspace.

Arranging a multimedia conference can well be accomplished via e-mail or a similar medium. Essentials are not only the times of start and finish, but also the exchange of multicast addresses, unicast addresses, ports, codec and other parameters, CMMC booking, as well as pre-tests or rehearsals and preparation arrangements (usually underestimated and causing embarrassing delays when not done beforehand with sufficient care).

5.3 The Current MICE Status

Currently, MICE software is operational on a SPARCstation platform, working with a variety of software and hardware codecs. Audio and video functionality has been ported to DEC, HP and SGI platforms. The shared whiteboard of Jacobson (WB) is used regularly with the system, and other Shared Workspace tools from INRIA and SICS are being tested and stabilised and will be available for the start of MICE-II. These are not yet available on the full range of platforms.

Workstations exist at the sites of all the partners, and eight of the partners have participated in weekly conferences and the public demonstrations. Several US sites have been linked into multiway conferences with MICE partners in the public demonstrations. Conference Rooms now exist at four sites - they have been used in the public demonstrations and will be used more frequently in future events.

The CMMC functionality is still undergoing testing and further development. Currently, it works with distributed processors on one LAN rather than on a multi-processor system, but will be moved shortly. It supports both software and hardware codecs, does full quad-multiplexing in the analogue regime, and is accessible from SuperJanet, EUROPANET, EBONE, the Internet and the ISDN (via a primary channel so that several workstations can access it simultaneously). The performance of the CMMC still needs to be improved: delays up to several seconds can occur, and there is currently no attempt to synchronise audio and video streams.

Much of the code requires additional work to make it more efficient and fault-tolerant. User interfaces and documentation need to be provided to shrink-wrap it. Furthermore, many of the procedures involved in setting up and monitoring conferences could be made more efficient, easier to use, and automated. Whilst the Deliverables required under the Contract have been achieved, substantial additional effort is required to make the system more rugged and accessible to user groups outside the project.

5.4 Workstation Components

At the end of Phase 1 of the project, a robust set of hardware and software exists for the Sun SPARCstation. However, development needs to continue; the following highlights the areas in which progress can be made.

First we consider the video component. Further work needs to be done on bandwidth back-off on transmission. If there is an indication (e.g. through a congestion control scheme) that the conference is requesting too much bandwidth due to the video, it should be possible to request that the video transmit at a lower frame-rate or some other measure of lower bandwidth. While this activity could be implemented in the CMMC, there should be mechanisms for control also at the workstation; this may be more efficient, and it reduces the transmission load before the video reaches the CMMC.
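The back-off mechanism described could be as simple as an additive-increase/multiplicative-decrease controller on the sender's frame rate. The sketch below is a hypothetical illustration of that idea, not the MICE implementation; the rates and constants are invented.

```python
class FrameRateController:
    """Hypothetical sketch of sender-side video back-off: halve the frame
    rate on a congestion signal, recover slowly (one frame/s) otherwise."""

    def __init__(self, max_fps: float = 25.0, min_fps: float = 1.0):
        self.max_fps = max_fps
        self.min_fps = min_fps
        self.fps = max_fps

    def update(self, congested: bool) -> float:
        if congested:
            # Multiplicative decrease: react quickly to congestion.
            self.fps = max(self.min_fps, self.fps / 2)
        else:
            # Additive increase: probe cautiously for spare bandwidth.
            self.fps = min(self.max_fps, self.fps + 1)
        return self.fps
```

Running such a controller at the workstation, as the text suggests, keeps the excess traffic off the path to the CMMC in the first place, rather than discarding it there.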

There needs to be some measure of video quality at reception; this is needed to ensure that the request for video transmission reduction does not cause the video to degrade below a specified acceptable level. Below acceptable levels, specific workstations might have to run without video, and periodic still picture updates could be sent instead. While the measure requires some software in the workstation, the actions would be exercised via the CMMC. At the opposite end of the spectrum, we need the capability to provide higher quality video. This is probably not tolerable on the present networks, but may be required in some applications and could be available shortly on some segments of the international networks.

There needs to be further work on the coding algorithms used. The present set works with H.261 video coding; the potential advantages of using MPEG, and possibly even relays between MPEG and H.261, should be explored. Other coding schemes can be expected to become available on workstations; again the possibility of relaying between such schemes should be investigated.

Next we consider audio. All of the current audio compression schemes are derived from the Telecom world. Other compression schemes may be more appropriate for use over packet-switched networks, by allowing for more graceful recovery from packet loss, either by frequency domain compression allowing interpolation over missing packets, or by increasing redundancy (distributing a sample between a number of packets) so that as loss increases, we simply get a lower sample rate.
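The redundancy idea above, distributing a sample block between a number of packets, can be sketched as round-robin interleaving: each packet carries every n-th sample, so a lost packet thins the sample rate rather than removing a contiguous burst, and the gaps can be crudely interpolated. This is an illustrative sketch under that assumption, not the coding actually used in MICE.

```python
def interleave(samples, n_packets):
    """Round-robin the sample block across n_packets, so packet i carries
    samples i, i+n, i+2n, ... (hypothetical sketch of the redundancy idea)."""
    return [samples[i::n_packets] for i in range(n_packets)]

def deinterleave(packets, n_samples):
    """Rebuild the sample block. A lost packet (None) leaves isolated gaps,
    filled here by holding the previous received sample (silence before
    the first); a real tool might interpolate instead."""
    out = [None] * n_samples
    for i, pkt in enumerate(packets):
        if pkt is None:
            continue
        for j, s in enumerate(pkt):
            out[i + j * len(packets)] = s
    last = 0
    for k in range(n_samples):
        if out[k] is None:
            out[k] = last
        else:
            last = out[k]
    return out
```

With four-way interleaving, losing one packet leaves three of every four samples intact, which is exactly the "as loss increases, we simply get a lower sample rate" behaviour described above.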

A measure of audio quality at reception is required to ensure that there are appropriate mechanisms for realising that the audio quality at specific sites has become unacceptable. Either management levels should then manipulate the bandwidth provided, multicast or audio relay topology, or specific workstations should then be removed from the conference. It is important to liaise with ongoing research both by the MICE partners and in the US under ARPA auspices, on resource allocation which will allow parts of the networks used by MICE to give audio traffic preference over video traffic.
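One way such a reception-quality measure might work, as a rough sketch: estimate loss from gaps in the received sequence numbers and flag a site once loss crosses an acceptability threshold. The threshold value and the simplified sequence-number handling below are illustrative assumptions, not MICE mechanisms.

```python
def loss_rate(seqnos):
    """Estimate packet loss from received sequence numbers (a simplified
    sketch: assumes increasing numbers, no wrap-around or duplicates)."""
    expected = seqnos[-1] - seqnos[0] + 1
    return 1.0 - len(set(seqnos)) / expected

def audio_unacceptable(seqnos, threshold=0.20):
    """Flag a site whose audio loss exceeds a (hypothetical) threshold,
    as a trigger for management action or removal from the conference."""
    return loss_rate(seqnos) > threshold
```

A management entity could poll such a measure per site and then, as suggested above, adjust bandwidth, change the relay topology, or drop the affected workstation.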

It is worth investing effort into improving the audio system, since our experience has shown that audio quality is the most crucial factor by which users judge the effectiveness of the overall conferencing system. Experience from the weekly project meetings has shown that impaired or variable audio quality is particularly a problem in international conferences, where not all participants are able to use their native language. For some applications, such as remote tutoring of foreign languages, prime audio quality is a prerequisite. Such applications will become easier to realise with the move of such services to higher-speed networks.

Efforts should, however, be made to improve audio quality over existing networks to support users who will not have access to higher-speed networks in the foreseeable future. Alternative coding schemes might be more suitable for use over packet-switched networks, and one should look at the possibility of adapting the audio compression ratio to variable network transmission capacities. Packet-level forward error correction may also enhance audio quality in case of packet loss.
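As an illustration of packet-level forward error correction, the sketch below adds one XOR parity packet per group of equal-length audio packets, which lets a receiver reconstruct any single lost packet in the group without retransmission. The group size and packet contents are hypothetical; this is one simple FEC scheme, not a scheme adopted by the project.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Assumes equal-length packets within a group.
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(packets):
    """Append one XOR parity packet covering the whole group."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return packets + [parity]

def recover(group):
    """Rebuild the single missing packet (marked None) by XOR-ing
    all the surviving packets, including the parity packet."""
    missing = group.index(None)
    acc = None
    for i, p in enumerate(group):
        if i == missing:
            continue
        acc = p if acc is None else xor_bytes(acc, p)
    return acc
```

The cost is one extra packet per group of bandwidth; the benefit is that isolated losses, the common case on lightly congested paths, become inaudible.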

It is important to conduct systematic testing of interworking between peripherals, workstations, software, and networks. This expertise needs to be accumulated and made available to user sites. It is likely that this activity would lead to a more radical look at the audio component: development and shrink-wrapping of a new audio tool would make a significant contribution to the usability of multimedia conferencing systems.

We now consider stream synchronisation: the combination of the audio and video. So far, there have been few attempts to synchronise the audio to the video. Synchronisation becomes important above a given level of video quality or delay, and therefore becomes an issue with the move to higher-speed networks. Further research is required to determine how to achieve the best effect as far as users' perception of quality is concerned, and what to do if not all participants receive the same quality of video. Previous work has shown that users' perception of the quality of synchronised multimedia streams cannot be predicted from objective measures.
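A minimal sketch of the synchronisation idea: measure the end-to-end delay of each stream and add playout buffering to whichever arrives first, so that both are rendered at a common target delay. The millisecond figures in the test are invented, and real implementations work from per-packet timestamps rather than fixed path delays.

```python
def playout_delays(audio_delay_ms: float, video_delay_ms: float):
    """Equalise the end-to-end delay of the two streams by computing
    how much extra buffering each needs to reach the common target
    (the later of the two delays). A minimal illustrative sketch."""
    target = max(audio_delay_ms, video_delay_ms)
    return target - audio_delay_ms, target - video_delay_ms
```

For example, with an audio path delay of 80 ms and a video path delay of 240 ms, the audio stream would be buffered a further 160 ms; the trade-off, as the text notes, is that synchronisation is bought at the price of added delay, which is why it only becomes attractive on higher-speed networks.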

Any future developments may depend on the new transport level format and session controls, including CCCP, the Conference Control Channel Protocol developed at UCL by Mark Handley and Ian Wakeman [16]. The Audio-Visual Transport Working Group (AVT-WG) of the IETF is defining a new transport level format. MICE partners are, and will continue to be, contributing to this group. It is important that European activities reflect this work.

Also, the MMUSIC Working Group of the IETF is defining higher-level session control and floor control mechanisms. Discussions with this group at the 27th IETF in Amsterdam revealed that the experience of the MICE partners with development and use of conferencing systems is a much-needed contribution, to ensure applicability of the eventual standard in practice. In this area, the emergence of the CCCP will be of great value if it is accepted. The full details can be found in [16].

The final component is a shared drawing tool. Versions of INRIA's MSCRAWL, Van Jacobson's WhiteBoard, and SICS' X-SPY and MultiDraw were all operational by the end of MICE-I. Even though all four are shared whiteboards or drawing tools, they offer complementary functionality rather than replicating it. The ability to transfer data between these, and possibly other shared drawing tools, is a requirement which has not been addressed in MICE-I. It would be desirable to define a common protocol for Shared Workspace applications to address this requirement. In addition, developments in AVT-WG and MMUSIC are likely to require additional facilities to be added to the Shared Workspace tools. Several of these Shared Workspace tools need to be shrink-wrapped; currently they are somewhat fragile. Finally, it is very important to broaden the base of hardware on which the MICE facilities run. It would also be very desirable to be able to accommodate some of the other multimedia systems which are becoming available.

5.5 Workstation Platforms and Conference Rooms

By the end of the current project, there will be a complete usable system on a Sun SPARCstation. However, the whole platform should be broadened, and a number of specific features improved. The activities required here include the following:

5.5.1 The User Interface
Much work needs to be done on the user interface to the workstation at all levels. The user interfaces to current tools can be improved, and the concurrent handling of audio, video and shared workspace tools in conferences needs to be considered. The following issues should be addressed in the first year of the project:

For all these activities, detailed input should be provided from many parties who are just users - not developers - of MICE; it is important to organise such feedback. It is desirable to develop specifications for application-dependent workstations as a result of usability considerations.

5.5.2 Conference Rooms
It is not necessary to spend significant effort on improving general-purpose conference rooms. Some smaller but important activities still to be done include:

In some of the applications being considered, including distance education and medicine, additional peripherals may be required. It is important to replicate some form of electronic classroom; it is not immediately clear if the current UiO/NTR implementation is a solution which should be adopted by MICE. In particular, cheaper peripherals with similar functionality are becoming available. The opportunity to gain experience with existing classrooms, and to integrate and compare cheaper software solutions, would be extremely valuable. It is clear that the current classroom, with its scanner and touch-sensitive screen, is valuable. It would be desirable to develop a repeatable educational conference room with specific peripherals. For example, in both medicine and education, facilities for adding photographs and slides to conferences should be provided.

5.5.3 ISDN Support
The support for the ISDN in the current workstations is rudimentary - even for the SPARCstation. Experience in MICE-I has shown that the manufacturers of workstations, codecs and terminal adapters have not considered interworking between various pieces of equipment. Eliciting essential technical information has turned out to be a long-winded and arduous process, and what has been achieved was largely done on the basis of guesswork and trials. Much further development is needed here to support conference participants with ISDN tools and access. One necessary development is a software videophone - IVS + H.221 + ISDN. Another is support for PPP, in the form IVS + H.221 + PPP + ISDN - though this may be finished during the current period.

5.5.4 High Speed Network Support
In order to be extendable to substantial numbers of simultaneous conferences, it is essential that the MICE facilities be made to operate over the emerging ATM infrastructure.

5.5.5 Movement to Other Platforms
There are many variants that could be proposed. Some tools already operate on DEC, HP and SGI machines, but work is required to port the entire system. Porting the system to lower-end systems such as DOS/Windows, NT and Mac environments would make MICE software available to much larger user groups. This would be desirable, but the development effort required is significant. Porting to both the PC under NT and to the Mac would be important.

5.6 Conference Control and Management

At present the set-up and diagnostics of conferences are a nightmare. For larger scale deployment, much more sophisticated monitoring of the status and operation of the different components of the conference must be provided. At present many of the systems developers believe that the relevant tools must be provided inside the applications. This may be a correct approach, but in that case the requisite tools must be exercisable from a standard Management system, and the data recorded inside the application must be accessible to the management system.

This activity needs to embrace two quite different aspects: fault diagnosis and traffic measurements. The traffic measurements may be required for keeping the system operational; they may be used also to determine differences of behaviour in different applications.

There will be many things which need to be done in the CMMC. As we move to ATM networks, the multiplexing function itself will remain vital with hardware codecs, but less so with software ones; moreover, we may be able to distribute even this function. All the other functions will remain equally important, and in any case some of the multiplexing should be developed further. The activities already identified are discussed below.

5.6.1 Modifications tracking Workstation changes
Clearly most changes in the protocols adopted in the workstations will imply corresponding changes in the CMMC; examples are the video and audio activities described above. For some applications it may be particularly important to build the stream synchronisation into the CMMC. The move to the AVT-WG formats may be required, and use of the higher-level MMUSIC features will have even more impact on the CMMC than on the workstations. The availability of new hardware will ease the activity of the CMMC even more than that of the workstations; this is partly because it is particularly needed in the CMMC, and partly because the relatively small number of CMMCs may justify greater expenditure on each.

Thus this activity, which is a very big one, is similar to the whole of WP1, and in many cases will do things which are not necessary in the workstation itself.

5.6.2 Additional software functionality for video
Even when it may not be justifiable to put powerful enough processing in the workstation for higher-speed software solutions, the availability of a multiprocessor at the CMMC should open up new possibilities. These will include parallel processing of incoming streams, mixing of video in the compressed domain, and the design and implementation of both video switches and quad mixers in software.

5.6.3 Design and Implementation of Multiple Hubs
Here one activity must be to decide which functions should be retained in further clones of the CMMC, and to decide whether a single clone or a family of CMMCs should be configured. The interaction between multiple hubs is a vital area of investigation; the method of providing a sensible distributed architecture has not yet been determined. At least one, and possibly more, clones of the CMMC should be provided.

5.6.4 Design and implementation of further Video Relays
If it is decided from the Workstation studies that other coding schemes (like MPEG) should be used, then the CMMC must provide the necessary relay functions. It may even be desirable to provide limited CMMCs just for video relay purposes.

5.6.5 Resource Allocation and Booking Schemes
The system available at the end of MICE-I will be rudimentary; it will need to be extended considerably to provide a system which can fully automate the booking and resource allocation functions. Examples are the following:

It is important to consider how resource allocation is handled through such a system from the user sites' point of view - remote control or local control have to be considered, and a code of conduct needs to be developed in which sites may choose to participate at varying levels. Many tools will be distributed irrespective of the distribution of the rest of the CMMC functionality. Network management tools for MICE-type systems are being developed in RACE programmes; their relevance should be assessed.

5.6.6 Scalability
The present system will handle only one conference at a time, and with only a handful of sites; it will require considerable effort to allow the system to handle several simultaneous conferences - some of which may require distribution of the whole CMMC functionality. From our experience, we believe we know how to attack this problem.

5.6.7 Provide support for Higher Speed Networks
While individual workstations may be able to operate over the lower speed networks, even now the CMMC is being constrained by its network connectivity. As the number of channels is increased, or the bandwidth of each raised, migration to higher speed networks will become essential. The UCL CMMC and workstations already have such higher-speed access - even though most countries still have only lower speed access.

5.7 Dissemination of the Technology and User Support

In MICE-I, a self-selected group of partners joined together to do some development and conferencing between each other. In any extension it would be necessary to extend the activity in the following directions:
to replicate the facilities deemed significant in MICE, and to support them;

5.7.1 Deployment and Replication
Each of the countries to which MICE partners belong has deployment plans for interactive multimedia in the context of National networks. In the context of National higher speed initiatives, it may be possible to distribute some of the CMMC functionality. Considering the wider context of lower speed European and European-intercontinental connectivity, however, it becomes clear that the CMMC and the MICE infrastructure will still be required for some time to come. One requirement of any future deployment activity must be to package the requisite software and specifications of the hardware to ensure that it can be replicated - and even to encourage such replication. In the case of workstations, this activity is mainly to ensure that the facilities are provided on a broader range of platforms. In the case of general purpose conference rooms, functional specifications will be provided; it is unlikely that identical equipment will be used. Several of the MICE partners wish to provide rooms equipped as distributed classrooms - though some upgrades will probably be incorporated.

5.7.2 National Support Centres
If the technology is to be used extensively, each participating country should appoint a National Support Centre (NSC) for the technology. Organisations taking on this role would act as the first point of contact for any inquiries from users. They would undertake most of the day-to-day functions of supporting the users.

5.7.3 User Support, training and consultancy
NSCs should build up facilities to provide user support and training both for routine and special events. Their functions should include: advising new users on equipment and software; providing set-up documentation for conference rooms and workstation-based facilities; providing start-up support and training; and organising user groups and workshops.

5.7.4 Shrink-wrapping and Software Support
In any MICE extension, it is essential that there be periodic releases of MICE software to other sites, and that there be a software maintenance commitment. It is also important to allocate the responsibilities between the releasing MICE partner and the NSC. This might, for instance, be on the following lines:

Prior to the release of any MICE software, the releasing MICE partner would run a series of tests with the NSC to ensure that the software is stable and functions with the intended platforms and networks. Software must be accompanied by appropriate documentation. The releasing partner would act to fix any new problems reported by the users.

NSCs would install and use the software to make sure that the accompanying information is correct and sufficient. It would be up to NSCs to identify the user groups to whom MICE software should be released and whom they will support. A general release should, however, be done in consultation with the NSCs and after testing by the NSCs' user groups. Users would report any problems with MICE software to their NSC, and NSCs should be able to deal with most reported problems. They would act as the buffer between the Releasing Party and users.

NSCs should maintain National user databases of MICE software and of the users to whom it has been released. They would disseminate information on developments and changes to MICE software or documentation. In addition, they would maintain information on the hardware and connectivity of their National users, and make this information available to other MICE partners if required.

5.7.5 Broader International Coverage
Belgium, France, Germany, Norway, Sweden and the UK are already represented in MICE. It is important in any future activity to ensure that the coverage is as wide as can be accommodated by the communications infrastructure. Here it should be noted that it is not only the actual capacity of the international links that matters, but also the amount of that capacity that can be made available for this type of traffic. It is possible to bring in communities in other countries with reduced capabilities if the traffic capacity for video is inadequate; it is not really possible to work effectively without good voice capabilities.

5.8 Multimedia Servers

It would clearly be desirable to introduce multimedia database servers into the conferences. Such servers would allow remote retrieval of digitised audio and video data, as well as presentation materials such as slides and images. Work needs to be done to introduce such facilities. Based on existing activities by the partners, we believe that it would be quite feasible to add such servers.

It would be desirable to introduce multimedia recording of seminars and important events. This would allow access to presentations by people who are not able to attend a seminar, e.g. when it is held in another time zone at an inconvenient time. Experience with storing, indexing and retrieval of digital audio and video data will be a prerequisite for applications such as Distance Education. While such activity should be concentrated in the CMMC, some groups might also wish to introduce such a facility in their workstations.

Another facility which would be desirable is the ability to add minute taking and indexing into the archives. This should include portions of the shared whiteboards, for example. These facilities would also provide an ability to embrace more time zones in collaborations; partners not able to attend a particular meeting would be able to review its proceedings at a later time.

5.9 Security

At the moment, there are the following applications in MICE:

To secure their use, a conference access manager (CAM) should be added to the CMMC, capable of carrying out authentication and authorisation procedures. The CMMC would utilise the CAM to allow only specified users to join the conference, and could provide session keys to the conferees for encryption purposes. This would require additions both to the CMMC software and to the software of the remote workstations.

In the case of centrally controlled video, use of the CAM would be part of signing into the CMMC, and the CMMC could reject unauthorised users. In the case of VAT, the authentication procedure would have to be done outside VAT, for example in a dialogue between some software at the user side and the CAM. To authorised users, the CAM would distribute a session key which would have to be used when starting VAT; VAT itself has no way to reject users.
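The division of labour described above might be sketched as follows. The class name, the password-hash check and the key size are illustrative assumptions for this sketch, not procedures specified by the project or by PASSWORD.

```python
import hashlib
import os

class ConferenceAccessManager:
    """Hypothetical sketch of the proposed CAM: authenticate a user,
    then hand out the conference session key for tools such as VAT."""

    def __init__(self, authorised):
        # user -> hex digest of the user's password (illustrative scheme;
        # a real CAM would use certificates and a key-management service).
        self.authorised = dict(authorised)
        self.session_key = os.urandom(8).hex()   # e.g. a DES-sized key

    def request_key(self, user: str, password: str) -> str:
        digest = hashlib.sha256(password.encode()).hexdigest()
        if self.authorised.get(user) != digest:
            raise PermissionError("user not authorised for this conference")
        return self.session_key
```

Since VAT cannot reject users itself, the protection here comes entirely from the key: only conferees who obtained the session key from the CAM can encrypt and decrypt the audio stream.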

It would be necessary to provide the normal infrastructure for key management in open systems. For this purpose there might be certification authorities and software in every workstation to communicate with a certification authority - it would be appropriate to consider the procedures adopted within the European PASSWORD project.

It would also be important to use the security infrastructure to provide access control (to limit the participants in a conference) and encryption (to ensure confidentiality even if there is unauthorised joining of a conference).

It would be desirable to have a user interface which integrates security functions and application functions (hiding most of the security details from the user).

5.10 Traffic Measurements, Analysis and Congestion Control

User behaviour and traffic characteristics are still not well understood for integrated broadband communication. Multimedia applications will generate traffic of a nature different from that of regular data and telecommunications. During MICE-I, we experienced the tendency among network operators and non-multimedia users to blame any network congestion and related problems on multimedia traffic. It is, therefore, imperative to collect and analyse data on multimedia traffic and its impact on networks. MICE partners are keen to show that users of multimedia conferencing facilities are good citizens of the network community. Traffic measurement and analysis is a first step towards this, and will help with the design and scaling of future networks. On the basis of the data collected, we plan to design and test automatic congestion control schemes.

Due to the project's broad scope, data can be collected under realistic scenarios of everyday network usage, and not solely from, for example, an academic institution where most users are computer literate. Specifically, such scenarios reflect: a large number of end-users with a wide variety of technical expertise

The traffic measurements can be used in the development of traffic models and dimensioning guidelines for multimedia networks and services.

Traffic should be measured from selected locations in different countries to capture as much of the variety in the data as possible. Both single streams and statistically multiplexed streams will be measured. Traffic will be collected at a few end-user sites, as well as at network nodes.

Performance studies are important - including the effect of multiplexing various services from each workstation as well as from using dedicated workstations. For example, users could be asked to describe on a scale of 1 to 10 their satisfaction with a particular service and this could be related to the load generated by the various applications. The data collected during this project would be extremely helpful in judging the viability of multimedia services over future IBCs.
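Relating the 1-to-10 satisfaction scores to the generated load, as proposed above, needs only elementary statistics; a plain Pearson coefficient, sketched below, would suffice as a first analysis. The pairing of scores with load figures is hypothetical, and no handling of constant-valued series is included.

```python
def pearson(xs, ys):
    """Pearson correlation between, say, application load and users'
    1-10 satisfaction scores (illustrative analysis sketch only;
    assumes neither series is constant)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

A strongly negative coefficient between load and satisfaction would be the quantitative form of the expected finding: satisfaction falls as the applications load the network.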

5.10.1 Traffic Analysis and Modelling
It is important to ascertain the properties of packet traffic generated by multimedia applications over a distributed network, and to investigate the validity of a number of currently used traffic models. For example, recent packet traffic measurements from a wide spectrum of packet-based networks indicate the fractal nature of packet traffic; the significance of these properties for the analysis, design and modelling of packet-based networks could be examined. The traffic measurements could also be used to examine the suitability of currently proposed packet traffic models for multimedia services.
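
One common test for the fractal (self-similar) nature mentioned above is the aggregated-variance method: for self-similar traffic, the variance of the m-aggregated packet-count series decays as m^(2H-2), so the slope of log-variance against log-m estimates the Hurst parameter H. The sketch below applies this to synthetic packet counts; real measurements would replace the generated series.

```python
# A minimal sketch of the aggregated-variance test for self-similarity.
# For independent traffic H is near 0.5; self-similar traffic gives H > 0.5.
import math
import random

def aggregate(series, m):
    """Average the series over non-overlapping blocks of size m."""
    blocks = len(series) // m
    return [sum(series[i * m:(i + 1) * m]) / m for i in range(blocks)]

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

def hurst_estimate(series, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate H from the slope of log-variance vs log-block-size."""
    pts = [(math.log(m), math.log(variance(aggregate(series, m))))
           for m in block_sizes]
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    slope = (sum((x - mx) * (y - my) for x, y in pts) /
             sum((x - mx) ** 2 for x, _ in pts))
    return 1 + slope / 2          # slope = 2H - 2

random.seed(1)
counts = [random.gauss(100, 10) for _ in range(4096)]  # packets per interval
print(f"estimated H: {hurst_estimate(counts):.2f}")
```

Applied to real multimedia traces, an estimate of H well above 0.5 would confirm that the traditional Poisson-style models are unsuitable for dimensioning multimedia networks.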

5.10.2 Congestion Control
In MICE-I we found that the performance of the European network infrastructure was much more fragile than had been realised, and that it was not difficult to endanger it. Where this fragility is due to fundamental shortcomings, the infrastructure will be strengthened; if we stress it beyond permitted levels, MICE traffic will be prohibited. As a result, it is essential that MICE traffic be measured under normal loads to predict the stress on the networks. A corollary is that, from the measurement data, the network providers can assess whether they wish to provide the relevant networks. It is desirable to investigate whether current work on congestion control in other projects, including work on multicast and on source coding rates, could be incorporated into a unified control scheme.
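
One family of source-rate control schemes of the kind referred to above adjusts the coder's output rate from packet-loss feedback reported by receivers. The sketch below is illustrative only, not the project's actual scheme; the thresholds and rate bounds are assumptions chosen to match the data rates used in MICE.

```python
# Illustrative feedback-based rate control: multiplicative decrease under
# congestion, additive increase when the network appears unloaded.
# Rate bounds and thresholds are assumed values, not project parameters.

MAX_RATE_KBPS = 800
MIN_RATE_KBPS = 64

def adapt_rate(current_kbps, loss_rate,
               loss_threshold=0.05, decrease=0.5, increase_kbps=32):
    """Return the new sending rate given the reported packet-loss rate."""
    if loss_rate > loss_threshold:
        new_rate = current_kbps * decrease       # back off under congestion
    else:
        new_rate = current_kbps + increase_kbps  # probe for spare capacity
    return max(MIN_RATE_KBPS, min(MAX_RATE_KBPS, new_rate))

rate = 512
for loss in [0.0, 0.01, 0.12, 0.08, 0.0]:        # loss reports per interval
    rate = adapt_rate(rate, loss)
    print(f"loss={loss:.2f} -> rate={rate:.0f} kbps")
```

The point of a unified scheme would be that the multicast distribution, the coder and the sender all react consistently to the same congestion signal, rather than each component backing off independently.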

5.11 Applications

The current MICE Partners have suggested pursuing different applications activities. Clearly there is ample scope for such activities, but since these are the province of many CEC initiatives, it is unnecessary to enumerate them in this report. In general we conclude that there is vast scope for applying the MICE technology. The Research and Development community is an excellent environment for collaborations, as are education, medicine and any of the Grand Challenges. Most such activities would more properly be the province of other initiatives of a CEC or National co-funding nature. However, some applications should be pursued in any MICE extension, both because of their intrinsic interest and as exemplars to validate the MICE approach. One example is the MICE seminar series, now being held between UCL, SICS, Oslo U, GMD and RUS; Xerox PARC and EuroPARC are participating via their conference rooms too, and the series should be broadcast to the US MBONE.

In addition to supporting specific applications identified by the MICE partners, we feel it important to identify key user communities. We are less clear, however, about the extent to which support for such communities should come from any extension project itself. Our present view is that we should commit to making our hardware specifications available to such communities, and to providing them with software releases. Most direct support or development should be contracted in some way by the various communities themselves. The communities suggested so far are the following:

Working with key users, it should be possible within a year to demonstrate Distance Learning and Collaborative Working applications which stay close to multimedia conferencing. It would be important to identify specific communities which would gain from such collaborations.

5.12 Large scale deployment

5.12.1 Further Developments Needed
We have already indicated our recommendations for further work. Many of these are desirable in their own right; others are needed for large-scale deployment.

On the whole, the workstation technology has become adequate for proper deployment. There is a part of the community which finds it difficult to consider any workstations other than PCs running Windows. This is less because of the relative costs of the equipment or its peripherals than because of in-house expertise and the deployed base. It is essential for large-scale deployment that a PC platform compatible with MICE be available; we intend to pursue possible collaborations to achieve this aim.

While workstation manufacturers are continually improving the boards available in their PCs, there remains a niche market for high-quality voice or video over lower-cost networks which will continue to be met by the codec manufacturers for some years. There is always a long and unpredictable timescale before new standards emerge; thus we may expect to have to work with the current H.320 standards for some time. Attaching such codecs to workstations requires rather better serial interfaces from the workstation vendors, because of the rigid timing constraints on the codec interfaces. While we may expect the present systems to become obsolete, this is no reason not to deploy the technology more widely now.

Of particular importance is the need for a high-performance transmission infrastructure. With the present international networks, we do not believe that we are able to support much more than one multimedia conference at a time in Europe. Wide deployment really needs at least a 34 Mbps international infrastructure, which is being proposed in the EUROCAIRN proposal and being considered in various Framework IV initiatives.

Until the higher bandwidth is available, it is particularly important to have access to a central reservation facility which all projects using the international infrastructure can access. We are particularly concerned that different projects may attempt to use the European infrastructure at the same time; it is essential that there be a Europe-wide reservation facility which transcends individual projects and networks.

5.12.2 Recommended Systems Architecture
The present MICE infrastructure can scale to much wider deployment, though clearly the CMMC would need replication, and there needs to be communication between the different CMMCs. The MBONE needs to be expanded, but rather more automated management (and monitoring of status) is required. For MICE deployment, resource allocation of much better quality is needed, including resource requirements for each node and a resource booking and configuration set-up for each conference. For proper deployment, we need to be able to install security mechanisms on each conference, and hence need appropriate key management, authentication and confidentiality systems.

We need to continue to support a variety of communication subsystems, though these do not need to be located at CMMCs. It is probably desirable to have communications protocol distribution centres in several countries; these would act as relays between the local communications (ISDN, ATM and research networks) and the international infrastructure. The relays should also be able to mediate between different coding algorithms, for both audio and video. For the time being, there will continue to be a shortage of network bandwidth. For this reason, we need to install fairness algorithms at the critical routers, at least at those providing international access, to ensure that MBONE traffic does not exceed a value agreed as supportable by the national funding bodies.
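
A simple way to cap an aggregate such as MBONE traffic at a router is a token bucket: packets are forwarded while the agreed allowance holds and discarded once it is exhausted. The sketch below is illustrative; the rate and burst size are assumed values, not figures agreed with any funding body.

```python
# Token-bucket sketch of the fairness mechanism described above.
# Rate and capacity are illustrative assumptions.

class TokenBucket:
    """Limit traffic to `rate` bytes/s with bursts up to `capacity` bytes."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0.0

    def allow(self, packet_bytes, now):
        # Refill tokens for the time elapsed since the last packet.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True          # forward the packet
        return False             # drop: agreed rate exceeded

# Cap traffic at 125 kB/s (1 Mbps) with an 8 kB burst allowance, then offer
# 1 kB packets every millisecond (i.e. 1 MB/s, eight times the agreed rate).
bucket = TokenBucket(rate=125_000, capacity=8_000)
forwarded = sum(bucket.allow(1_000, now=i * 0.001) for i in range(100))
print(f"forwarded {forwarded} of 100 packets")
```

After the initial burst allowance is spent, the bucket admits roughly one packet per 8 ms, holding the aggregate to the agreed 1 Mbps regardless of the offered load.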

5.12.3 Industrial Involvement
The MICE project already has loose involvement with at least one workstation vendor (Sun Microsystems), which has supplied equipment for the UCL CMMC. It is vital that industrial involvement be broadened. This should take place in several directions: workstation vendors, communications equipment providers and network operators; each will be discussed in turn. Involvement with one of each would be useful; broader involvement with many would be even more useful.

Among workstation providers, there are several needs. First, a number of the organisations which would like to use the MICE technology are constrained by other considerations to use PCs, and often Windows. Several European vendors (Teles, Bull and Siemens, for example) are doing similar work; we should have further involvement with these vendors to ensure compatibility, and have already started discussions with Teles. Some other vendors, e.g. NCR/ATT, have agreements with PNOs to provide value-added services (e.g. ATT and BT); we should investigate whether such offerings could become more compatible with MICE. Finally, we have already obtained some advantages from using advanced boards from the one vendor we work closely with; it would be desirable to form closer links with others also.

The communications equipment vendors are producing higher-performance codecs than the boards currently in workstations, but some of the interfaces are particularly unfriendly for interfacing to workstations. Inside the codecs there are better interfaces; they are just not usually available to customers. We should work more closely with codec manufacturers to interface to codecs at levels more suitable to the networks we use. This type of supplier also provides Multipoint Control Units (MCUs); here again we should work with them to provide MCU protocols more appropriate to the MICE architecture.

Service providers already offer many of the services we need. Under RACE auspices, additional value-added services (booking, service engineering, network and applications management) are being developed. We should work with the service providers either to furnish MICE with some of their implementations, or to discuss whether they might like to be involved with MICE not only to provide transmission services but possibly also some of the value-added services. It is in this area that it may be possible at least to get some more permanent national distribution in countries where this is still lacking.

APPENDIX A. Bitfield board data

Video is coded according to the CCITT H.261 standard, while audio is coded using PCM. The actual coding and decoding of the audio and video takes place on a dedicated ISA-type card developed by the Finnish company Bitfield. The IBM-PC compatible machine in which this card is installed runs the Mach/BSD-UNIX operating system.

The video codec is configurable over a large number of parameters, resulting in coder output data rates ranging from 10-20 kbps up to 2 Mbps. In our application we generally use 400 to 800 kbps for video and audio.

The card is controlled from software, and most settings can be adjusted at any time (PAL or NTSC is set at start-up). The card can also display overlays, and it is possible to grab images directly from the board or put images onto the board (the display memory consists of one luminance bank (Y) and two colour-difference banks (CB, CR)).
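
The split into one luminance bank and two colour-difference banks follows the YCbCr representation used by H.261. The sketch below shows how RGB pixels map onto the three planes, using the standard ITU-R BT.601 conversion coefficients; it illustrates the memory layout in principle, not the Bitfield board's actual software interface.

```python
# Sketch of mapping an image onto three display-memory banks: one luminance
# plane (Y) and two colour-difference planes (CB, CR), via ITU-R BT.601.

def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to BT.601 Y, CB, CR components."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128
    return round(y), round(cb), round(cr)

def to_planes(rgb_pixels):
    """Split pixels into separate Y, CB and CR banks, as on the board."""
    y_bank, cb_bank, cr_bank = [], [], []
    for r, g, b in rgb_pixels:
        y, cb, cr = rgb_to_ycbcr(r, g, b)
        y_bank.append(y)
        cb_bank.append(cb)
        cr_bank.append(cr)
    return y_bank, cb_bank, cr_bank

# Mid-grey is pure luminance: CB and CR sit at their neutral value (128).
y, cb, cr = to_planes([(128, 128, 128)])
print(y, cb, cr)
```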

The price is about ECU 6000 for the basic card with G.711 audio; the optional motion estimation daughter board costs about ECU 2400.

To initiate an audio and video connection between the classrooms, one logs in on the machines with the Bitfield boards using a special account that starts the necessary processes. We also have an X11 application that is used to set different parameters on the Bitfield board while it is running.

Coding algorithm: CCITT H.261

Video format: NTSC or PAL

Frame rate: NTSC 7.5, 10, 15, 30 frames/s

PAL 6.25, 8.33, 12.5, 25 frames/s

Resolution: CIF (352x288)

QCIF (176x144)

Data rate: 0 - 2048 kbps

Video inputs: 1. composite or Y/C

2. composite

Video output: composite, Y/C or RGB

Audio: G.711 (PCM), 64 or 56 kbps, 3.5 kHz

7 kHz optional daughter board

Address to Bitfield Oy:

Bitfield Oy

Tekniikantie 6

SF-02150 Espoo


Phone: +358-0-70018660

Fax: +358-0-4552240

APPENDIX B. Characteristics of the audio/video conferencing tools

IVS 3.2
From: INRIA Sophia Antipolis - RODEO Project.


Sun SPARCstation + VideoPix or Parallax framegrabber.

HP station + RasterOps framegrabber.

Silicon Graphics station + Indigo framegrabber.

DEC station + VIDEOTX framegrabber

VAT (2.14v)
SD (1.14v)

APPENDIX C. Facilities and configuration at each MICE partner

MICE equipment and configuration (RUS, Stuttgart)

multicast IP.

video: Parallax Videoboard,

audio: Sun speakerbox and microphone, Headset

Multimedia software:

Multicast connections:

Equipment and configuration at Brussels University (STC)
video: VideoPix Board, video camera (Fujix fh80) to be updated

audio: Sun speaker and microphone, Headset

Software: VAT, ivs, wb, nv

MICE equipment (GMD, Darmstadt) 26.1.94

Conference Room at GMD:

Size about 4 m x 6 m; 6 seats, each equipped with a microphone, arranged on one side of an oval segment table, facing 4 television screens.


6 microphones, 1 microphone is assigned to each participant

no technical coordination of speech.

2 loudspeakers

4 television screens placed side by side, used for sent and received pictures. One screen serves for checking and adjusting pictures to be sent.

4 cameras: 2 person cameras, each of which covers 3 persons, 1 camera for the white board and 1 for documents.

1 white board

Video/audio compression:




Multimedia lab at NTR:

6 References

[1] Recommendation H.261: "Video codec for audiovisual services at px64 kbits/s", CCITT White book, 1990.

[2] M. Liou, "Overview of the px64 kbit/s video coding standard", Communications of the ACM, no 4, April 1991, pp. 60-63.

[3] "Description Reference Model 8", Specialists group on coding for visual telephony, COST 211-bis, Paris, May 1989.

[4] Sun Microsystems Computer Company: "LANs of the future"

[5] R. Haugen, B. Mannsåker: Scenarios for Introduction of Broadband Services in Telecommunication Networks, Norwegian Telecom Research.

[6] S. Casner, "Frequently Asked Questions (FAQ) on the Multicast Backbone", May 6, 1993

ftp://venera.isi.edu/mbone/faq.txt and see also http://www.research.att.com/mbone-faq.html

[7] S. Clayman and B. Hestnes (1994), ISDN report. MICE Deliverable ESPRIT 7602.

[8] H. Eriksson (1994), Shared Workspace. MICE Deliverable ESPRIT 7602

[9] M. Handley, T.Turletti, K. Bahr (1993): Detailed Specification of MICE CMMC, Conference Rooms and Workstations. MICE Deliverable ESPRIT 7602.

[10] M. Handley, A. Sasse & P. Kirstein (1993): The MICE demonstration at JENC, 10-14th May, 1993. MICE Deliverable ESPRIT 7602.

[11] M. Handley, A. Sasse, S. Clayman, A. Ghosh, P. Kirstein (1993): The MICE demonstration at the Internet Engineering Task Force (IETF), RAI Conference Centre, Amsterdam, July 11-16 1993. MICE Deliverable ESPRIT 7602.

[12] C.-D. Schulz (1993), The MICE Demonstration at Interop. MICE Deliverable ESPRIT 7602.

[13] M. Handley (1993), The MICE Phase I final demonstration on 14th December 1993. MICE Deliverable ESPRIT 7602.

[14] Technical Annex of the MICE Phase I Project.

[15] Mark J. Handley, (1993), "Using the UCL H261 Codec Controller" UCL CS Research Note

[16] Mark Handley and Ian Wakeman , (1993), "CCCP: Conference Control Channel Protocol - A Scalable base for building conference control applications."