This questionnaire was completed by participants in a computer-mediated Business Game held between Westminster, UCL and Glasgow Universities. Two hour-long games were run, with five participants (plus tutor) in the first group and three (plus tutor) in the second. Seven of the eight participants completed an evaluation questionnaire administered immediately after the game.
SUMMARY OF THE DATA
The participants reported a number of problems with the audio, making it difficult to hear the other participants: the overall adequacy rating for the audio was only 56%. Reasons given were missing words/sentences, variation in quality between different speakers, and a lack of helpful visual cues. All participants reported using the video images either some of the time or a lot of the time, but felt they were missing non-verbal behaviours from the others. However, the video images increased the feeling of being part of a group. The overall adequacy rating for the video was 50.7%. The participants generally felt that the technology had slowed down the pace of the game, had not allowed them to get to know the other participants, and that the activity would have been more effective in a face-to-face setting. Five of the seven participants would have preferred to participate face-to-face.
N.B. The data show that the participants reacted quite negatively to the technology used for the Business Game. It should be borne in mind that the session was a one-off; as participants become used to such technology over time, impressions tend to become more favourable.
Scope of the questionnaire
The participants' experiences and attitudes regarding the following issues were measured: audio quality; use and quality of the video; the shared workspace (NTE); and overall satisfaction with the method and outcome of the game.
The data summary is followed by some brief conclusions and recommendations.
Seven participants completed questionnaires. They ranged in age from 22 to 45; six were female and one male. Five of the participants had taken part in business games in FTF conditions before, but only one had participated in a similar game in a computer-mediated environment.
The participants were asked to provide information about the effort they had to employ in listening to the speech of the others, and about how the audio quality sounded in general.
The participants were asked how easy it was to hear the other participants in the session and whether they felt it required more effort than in a FTF session.
On a scale of 1-5, where 1 = very easy and 5 = very difficult, the mean rating for ease of hearing the other participants in the session was 3.28, s.d. 0.88. Ratings ranged between 2 and 5.
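The summary statistics quoted throughout this report (mean, s.d., range) can be reproduced mechanically from the raw ratings. The sketch below uses hypothetical 1-5 ratings invented for illustration (the actual raw data are available only on request), and assumes the report's s.d. is the population form rather than the sample form:

```python
import statistics

# Hypothetical 1-5 Likert ratings from seven participants
# (illustrative only -- not the study's actual raw data).
ratings = [2, 3, 3, 3, 3, 4, 5]

mean = statistics.mean(ratings)      # arithmetic mean
sd = statistics.pstdev(ratings)      # population standard deviation (assumed form)
lo, hi = min(ratings), max(ratings)  # range of ratings

print(f"mean {mean:.2f}, s.d. {sd:.2f}, range {lo}-{hi}")
```

With a sample this small (n = 7), the choice between population and sample s.d. changes the result noticeably, which may account for small rounding differences against the figures reported here.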
Asked to describe any difficulty/problems they had experienced, reasons cited included volume variations between participants, intermittent connection problems, and simultaneous speech resulting in nobody being heard.
Six of the participants agreed that, yes, it had required more effort on their part to understand what the other participants were saying than it would have in a FTF setting; one disagreed. Of those who answered Yes, reasons given included the absence of gestural, facial and lip-movement cues, the small size of the video images, and difficulty telling who was talking (due to delay and lack of synchronisation).
The participants were asked to rate the overall audio quality in the session and whether the quality varied. They were asked to tick audio factors that caused dissatisfaction, and rank these selections in order of importance. Finally they were asked to rate the overall adequacy of the audio in meeting the needs of the session.
On a scale of 1-5, where 1 = very good and 5 = very poor, the mean rating for overall quality of the audio in the session was 2.86, s.d. 0.64. The range of ratings was between 2 and 4.
Asked whether they felt that the quality of the audio had varied during the session, 4 of the participants answered Yes, 2 answered No, and 1 answered Don’t Know. Those who answered Yes were asked to describe the extent to which they found this disruptive. The responses varied from 'not very' to 'extremely'. Those who answered Yes were also asked whether they felt that the audio quality differed according to who was speaking. All answered Yes, and then described which person, and the nature of the degradation. The problems cited were volume too low, speech breaking up, and 'tinny' voice quality.
The participants were asked to select options from a list of eight factors that might cause dissatisfaction with the audio links. They were then asked to rank their selections in order of importance. Two factors were chosen 6 times as causing dissatisfaction: 'missing words/incomplete sentences' and 'variation in quality between different speakers'. Of these two, 'missing words' was ranked as the most important problem 3 times, whereas 'variation in quality' was ranked as the most important problem 4 times. The second most frequently chosen factor was that of 'variation in volume between different speakers', which was chosen 5 times and was awarded 3 'second-most important' rankings. 'Background hiss' and 'unnatural/metallic sounding voice(s)' were both selected twice as causing dissatisfaction. 'Feedback/echo', 'variation in quality from one particular speaker' and 'variation in volume from one particular speaker' were each selected once only.
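Tallies like those above (how often each factor was selected, and how often it received a given importance rank) can be computed directly from the per-participant rankings. The responses below are hypothetical, invented only to illustrate the counting; each participant's answer is modelled as a list of selected factors ordered most-important first:

```python
from collections import Counter

# Hypothetical per-participant rankings (illustrative only): each list holds
# the factors one participant selected, ordered most-important first.
responses = [
    ["missing words", "variation in volume"],
    ["variation in quality", "missing words"],
    ["variation in quality", "variation in volume", "background hiss"],
    ["missing words", "variation in quality"],
]

# Times each factor was chosen at all, across participants.
selected = Counter(factor for response in responses for factor in response)

# Times each factor was ranked as the most important problem.
top_ranked = Counter(response[0] for response in responses if response)

print(selected.most_common())
print(top_ranked.most_common())
```

The same approach extends to second-place rankings (count `response[1]` where present), matching the 'second-most important' figures reported above.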
Asked to rate the overall adequacy of the audio during the session on a scale from 1-100, where 1 = audio met none of the needs of the session and 100 = audio met all of the needs of the session, the mean rating was 56.14, s.d. 21.7. The range was between 6 and 75.
Questions addressing video issues focused on how the participants had used the video facility, and perceived video quality.
The participants were asked how often they looked at the video images of the other participants and themselves, and whether the availability of these images was helpful or not. They were asked whether they felt they were missing non-verbal behaviours, and the extent to which this mattered. Lastly, they were asked to agree or disagree with certain statements about the video images.
On a scale of 1-5, where 1 = never and 5 = all of the time, the mean rating awarded for how often participants looked at the video images of the other participants was 3.71, s.d. 0.45. All responses were either of 3 or 4.
Asked whether they felt that the availability of video images of the others helped them to follow the discussion, 4 of the participants answered Yes and 3 answered No. Reasons given for Yes answers included being able to see that the others were 'there', how engaged they were in the session, and that the images provided a focus for attention. Reasons given for No answers included that the images were too small and indistinct, and that the frame rate was too low. One participant suggested highlighting the video image of the current speaker.
On a scale of 1-5, where 1 = never and 5 = all of the time, the mean rating awarded for how often participants looked at their own video image was 2.86, s.d. 0.83. The range of responses was between 2 and 4.
Asked to comment on the usefulness of their own image, the comments were mainly favourable, mentioning that it was useful for checking whether they were on screen and how the others were seeing them. However, some found the field of view somewhat disconcerting (not being able to make eye contact with oneself), and thought that it made the effect of the time lag more noticeable.
Six of the participants answered Yes, they felt they were missing certain non-verbal behaviours made by the other participants. The seventh participant answered Don't Know. Those who answered Yes were asked to what extent they thought the ease of communication between themselves and the others had been affected by this. The responses included that it made the communication slower and turn-taking harder, that interpretation of some comments and reactions became harder, but that in a way it made the communication more disciplined 'i.e. less scope for argument'.
The participants were asked to agree or disagree with a number of negative and positive statements:
The video images were not beneficial as I found them too slow: 4 agreed, 3 disagreed.
The video images were not beneficial as I found them distracting: 7 disagreed.
The video images were not beneficial as audio-only was sufficient for communication purposes: 1 agreed, 6 disagreed.
The video images were not beneficial as they were out of sync with the sound: 3 agreed, 4 disagreed.
The video images were not beneficial as I found them too small: 5 agreed, 2 disagreed.
The video images were beneficial as they allowed me to see who was talking: 3 agreed, 4 disagreed.
The video images were beneficial as I could tell if there were technical problems at the other sites: 3 agreed, 3 disagreed (1 did not respond).
The video images were beneficial as they increased my feeling of being part of a group: 7 agreed.
The video images were beneficial as they allowed me to see if the other participants were listening to what I was saying: 1 agreed, 5 disagreed (1 did not respond).
In summary, despite the low frame rates, the participants did not find the video images distracting, although most felt they were too small. The presence of the images increased the feeling of being part of a group. The images did not allow the participants to see whether others were listening to what they were saying, but audio-only communication was nevertheless not thought sufficient.
Questions were asked addressing whether and how the quality of the images varied during a session. Participants were asked to rate the overall quality of the video in the session, and the overall adequacy of the video.
Four of the participants felt that the quality of the video images did not vary during the session, 1 answered that it did, and 2 answered that they did not know. The participant who answered Yes claimed that this was not disruptive (in any case, the video images were not relied upon because they 'often appeared static').
Asked whether the quality received varied according to participant, 1 respondent claimed that it did, 5 claimed that the quality was the same from different participants, and 1 did not know. The participant who answered that it differed explained that 'some people's video was lost during the session, or froze'.
On a scale of 1-5, where 1 = Very good and 5 = very poor, the mean overall quality rating for the video in the session was 3, s.d. 1.07. The range of responses was from 2 to 5.
Asked to rate the overall adequacy of the video during the session on a scale from 1-100, where 1 = video met none of the needs of the session and 100 = video met all of the needs of the session, the mean rating was 50.7, s.d. 24.2. The range was between 18 and 86.
Questions addressing the ease of use of the shared workspace (NTE) were asked.
On a scale of 1-5, where 1 = Very easy to use and 5 = Very difficult to use, the mean rating for ease of use of the shared workspace was 3.29, s.d. 1.48. Responses spanned the full range, from 1 to 5.
The participants were asked to describe any difficulties they had experienced. The main difficulty reported was that text was overlapping and duplicated [this was caused by different versions of NTE being used]; participants were also unsure how to use the tool.
Asked for suggestions as to how the shared workspace could have been made easier to use, the following comments were made: that cut-and-paste editing should be supported, that each participant's name should be attached to their text (in the manner 'Jane: I think that…'), that different people should use different colours, and that a practice session would have been invaluable.
Additional questions were asked addressing overall satisfaction with the outcome and method of playing the game; the impact the technology had on the outcome of the game; the degree to which it was possible to get to know the other participants; the level of involvement in the game; and whether a FTF setting would have been preferred.
On a scale of 1-5, where 1 = Very satisfied and 5 = Very dissatisfied, the mean result for satisfaction with this method of taking part in the Business Game was 3.28, s.d. 0.88. Responses ranged from 2 to 4.
Asked how satisfied they were with the outcome of the game, 2 of the respondents were very positive, 2 said 'fairly', 1 would have liked more time, and 2 were negative, citing technical problems and not getting a sense of involvement with the group.
Asked what impact they thought the technology had on the overall outcome of the game, most respondents (with one exception) thought that the impact had been quite major. They believed that the technology had slowed down the interaction as a whole, and that the outcome would have been 'better' had the game taken place FTF.
The participants were asked whether they felt the technology had allowed them to get to know the other group members to the same extent as they might have done in a FTF setting. 6 answered No, and 1 answered Don't Know. Reasons given for the No answers included the inability to judge physical aspects such as height and stature, the lack of gestural information and gaze to facilitate turn-taking, the difficulty with identifying who was talking, and the inability to give flippant asides (!).
On a scale of 1-5, where 1 = A lot greater than FTF and 5 = A lot less than FTF, the mean rating for level of involvement in the game compared to how involved they might have been in a FTF setting was 3.86, s.d. 0.83. Responses ranged from 3 to 5.
Reasons provided for a different level of involvement compared to a FTF setting were unreliable connections, difficulty of turn-taking, and the observation that FTF would have been more spontaneous and interactive.
When asked whether they would have preferred to have participated in a FTF setting, 5 answered Yes and 2 answered No. Reasons given for Yes answers included that the task would have been performed better in a FTF setting, where not knowing how the technology worked would not have mattered; that it would have allowed people to get to know each other better; and that it would have been more fun to be with people. Reasons given for No answers included that mediated communication is less intrusive, less embarrassing (through greater distance) and more interesting (novelty value).
Finally, the participants were asked for any additional comments. Suggestions made included the highlighting of the current video window, a means of indicating that you would like to take the floor (like a button on a gameshow), an adequate training period, faster video frame rate and that audio levels should be checked and adjusted before and maybe during the session.
Tentative conclusions and recommendations
It is likely that the satisfaction of the participants could have been improved through a number of 'simple' measures suggested by the participants themselves: an adequate training/practice period, checking and adjusting audio levels before (and perhaps during) the session, highlighting the video window of the current speaker, and a means of indicating a wish to take the floor.
Raw data are available from Anna Watson on request.