SUMMARY DATA FROM ANALYSIS OF THE C363 COURSE EVALUATION QUESTIONNAIRE
This questionnaire was completed by students attending the computer mediated tutorials for the C363 course held between UCL and Westminster. Three groups of 4 students attended a series of tutorials and completed the questionnaire as part of a course evaluation programme.
SUMMARY OF THE DATA
Despite students reporting problems such as delay, word drop-out, frozen images and variability, audio and video quality was rated reasonably highly and judged adequate for the needs of the tutorials. In line with this, students did not report experiencing particular difficulties hearing what others were saying, or following the tutorial. The availability of video images of fellow students and the tutor was believed to provide useful information for both social and task orientated activities, and the tutor's image in particular was reported to aid understanding of the tutorial. Overall, students rated this method of delivery as satisfactory. Students reported that the degree to which they felt involved in the tutorial was similar to that in a face to face setting, and some students had particularly positive experiences. There was no clear-cut preference for this method of delivery, with 58% preferring computer mediated delivery and 42% preferring face to face. Those preferring face to face tended to cite technical difficulties rather than psychological/social factors.
Scope of Questionnaire.
Students' experiences of, and attitudes to, the following issues were measured: audio and video quality; ease of hearing and understanding one another; the usefulness of the video images; the shared workspace; and overall satisfaction with, involvement in, and preference for this method of delivery.
Scope of analysis.
The following is a descriptive summary of the data from a first-pass analysis of the questionnaire. Further analysis of the responses of individuals who experienced particular problems, or who were particularly positive or negative about their experience, may be a useful next step. It is appreciated that this questionnaire is only part of the evaluation programme the students took part in, and it would be useful to tie the data in with results from the other activity (it is not yet clear who is doing that analysis).
The students ranged in age from 20 to 23 years, with a mean age of 21 years. Half were male and half female, and within tutorial groups the mix of males and females was roughly 50/50. A high attendance rate was observed, with the majority, 83%, attending all tutorials.
Audio and Video Quality
Students were asked about the quality of audio and video images during the tutorials, how adequate they were for the needs of the tutorial and whether they believed audio and video image quality varied during tutorials.
Quality ratings for audio and video, when at their best, were reasonably high. The mean ratings on a five point scale, where 1 = very good and 5 = very poor, were 1.8 for audio and 2.36 for video. Ratings ranged from 1 to 3 for audio quality and from 1 to 4 for video.
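As an illustration of how these scale summaries were derived, the sketch below computes a mean and range from a set of ratings. The ratings shown are hypothetical (chosen to lie close to the reported audio figures); the actual raw figures are available on request, as noted at the end of this report.

```python
# Hypothetical audio-quality ratings for the 12 students on the
# five point scale (1 = very good, 5 = very poor). Illustrative
# values only, not the actual questionnaire data.
audio_ratings = [1, 2, 2, 1, 3, 2, 1, 2, 2, 3, 1, 2]

mean_rating = sum(audio_ratings) / len(audio_ratings)
lowest, highest = min(audio_ratings), max(audio_ratings)

print(f"mean = {mean_rating:.2f}, range = {lowest}-{highest}")
```

The same summary (mean plus observed range) is reported for each rating-scale question in the sections that follow.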
Reasons for less than acceptable audio quality were obtained from an open ended question. Students cited crackling, break up, loss of sound, and difficulty hearing properly.
In relation to the adequacy of audio and video images, ratings were again at the high end of the scale. Students were asked to indicate the degree to which audio and video met the needs of the tutorial on a scale from 1-100, where 1 = met none of the needs of the tutorial and 100 = met all of the needs of the tutorial. The mean rating for audio was 73.4, and for video 82.5. Ratings for audio ranged from 57-93, and for video from 48-100. When rating the adequacy of video, all but one student gave ratings over 70. The two students who rated video quality as poor (a rating of 4) in the previous question rated adequacy highly, with ratings of 81 and 84.
Students were asked whether they believed audio and video varied a great deal during the tutorials. For audio at its best, 50% of students reported that it did vary a great deal and 50% that it did not. The percentage believing it varied a great deal increased to 92% (11/12) for audio at its worst. Ratings for variability in video, when at best, were similar to those for audio, with 42% (5/12) reporting it varied a great deal. A corresponding question on variability when video image quality was at its worst was not asked.
Students were also asked whether the quality of the video images of the students and the tutor differed at all. Of those who expressed an opinion (10/12), half believed they differed and the other half believed they were the same. Explanations for how the video images differed were obtained from an open ended question, and included: a different view of the tutor, different levels of lighting for the tutor's image, and the tutor's image appearing to be of higher quality.
How easy it was to hear one another, and whether more effort was required to understand what was being said compared to a face to face setting, were investigated for both student-student and student-tutor communication.
Differences in responses for the two types of communication were observed. In relation to ease of hearing, students didn't appear to experience particular difficulties. On a five point scale, where 1 = very easy and 5 = very difficult, similar mean ratings were found for student-student and student-tutor communication: 2.91 and 2.64 respectively. However, for student-student communication 4/11 students expressed difficulty, giving a rating of 4, compared to 2/11 students for student-tutor communication.
When asked whether it required more effort to understand what was being said, compared to face to face, a greater percentage of students responded "yes" (it did require more effort) for student-student communication than for student-tutor communication. The percentages responding "yes" were 45% and 27% respectively.
Explanations for difficulty in hearing, for both student-student and student-tutor communication, included factors to do with A/V quality (break up of sound, echo, delay, loss of words, frozen picture) and with audio levels (audio too high or too low). Explanations given as to why it required more effort to understand the students or tutor compared to face to face were similar: break up of audio and video, interference and delay, missing words, less clear speech, and inadequacy of the shared editor.
Measures of the usefulness of video were obtained from a series of questions. These included: how often the students looked at the images of their fellow students and the tutor, whether the availability of these images aided understanding of the tutorial, and if video images were not deemed useful, why this was so.
They were also asked whether they believed video mediated communication reduced the availability of non-verbal signals, and which benefits the presence of video images bestowed in a computer mediated tutorial.
During the tutorials, the video images of the tutor and students were looked at frequently and at a similar rate. On a scale of 1-5, where 1 = never and 5 = all of the time, the mean rating for looking at fellow students' images was 4.0, and for the tutor's image 3.83. For both types of image, ratings ranged between 3-5, and the majority, 75% (18/24), gave a rating of 4.
In relation to usefulness for following the tutorial and understanding explanations, the video images of the tutor and fellow students were rated differently. The tutor's image was deemed more useful than the students' images, with all but one student (11/12) reporting that the tutor's image aided understanding, compared to 66% (8/12) for the students' images.
As a measure of why the images were not useful, students were asked to select, from a series of statements, those which described their situation or feelings. For the students' images, most gave more than one reason. The reasons were varied, though the central theme appeared to be that students were busy engaged in the task and believed there was no additional information to be gained from looking at their fellow students. Other factors included the images being too small and not updated frequently enough, and finding the images a distraction.
In relation to the tutor's image, the one respondent who didn't think it was a help reported being too busy engaged in task orientated activities.
Students were also asked how often they looked at their own image. There was a spread of responses, though overall this behaviour occurred neither frequently nor infrequently. The mean rating was 2.9, on a scale of 1-5, where 1 = never and 5 = all of the time. Scores ranged from 2 to 4, and the majority, 58% (7/12), responded 3. Students were also asked to comment on their responses to this question. Of those who did (10/12), most said they looked at themselves to monitor the image they were sending to the other participants. A couple said they felt self-conscious looking at their own image.
When asked to what extent they thought they were missing certain non-verbal signals over this medium of communication, compared to a face to face setting, students' responses suggested they thought this information was available to them most of the time. On a scale of 1-5, where 1 = never and 5 = all of the time, the mean rating was 2.9, with scores ranging from 1-5. Seventy-five percent (9/12) gave a rating of 2 or 3.
Six students answered the follow-on question concerning how important missing non-verbal signals would be to them. Their responses did not appear to relate to the question; however, one participant commented that missing non-verbal signals wasn't very important as he was already familiar with the other students.
Finally in this section on video use, students were asked to indicate which statements about the potential benefits or disadvantages of video images in a tutorial setting they agreed or disagreed with. Half the statements were positive about the presence of video images; the other half were not.
In summary, the participants mostly disagreed with the negative statements and agreed with the positive ones. Thus the majority of participants agreed with statements that video images were of benefit because they allowed you to: tell who was talking, tell when the sound connection was lost, see when the tutor was available, see the rest of the students in the group, and see when others were paying attention.
The students, by and large, disagreed with statements that video images were not of benefit because they were: too slow, distracting, adding nothing to audio, out of sync with sound, or too small. Negative statements were disagreed with 90% of the time, and positive statements agreed with 85% of the time.
The small number of negative statements that were agreed with were:
1) video images were too slow - 1/12 respondents
2) they were out of synch with sound - 3/12
3) they were too small - 2/12
The positive statements which were disagreed with were:
1) video images allowed me to see who was talking - 1/12
2) they enabled me to tell when the sound connection was lost - 2/12
3) they allowed me to see if other students were paying attention - 5/12
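The aggregate rates quoted above can be roughly reproduced from these per-statement counts, assuming all 12 students responded to each of the five statements of each type (an assumption, since non-responses are not reported). Under that assumption the negative-statement figure matches the reported 90%, while the positive-statement figure comes to roughly 87%, so the reported 85% presumably reflects rounding or a few non-responses. A minimal sketch:

```python
# Per-statement counts, from the lists above, of respondents who agreed
# with a negative statement or disagreed with a positive one.
# Assumes (hypothetically) that every student answered every statement.
negative_agreed = {"too slow": 1, "out of sync": 3, "too small": 2}
positive_disagreed = {"see who was talking": 1,
                      "tell when sound lost": 2,
                      "see if others attending": 5}

n_statements, n_students = 5, 12      # five statements of each type
total = n_statements * n_students     # 60 responses per statement type

neg_disagree_rate = 100 * (total - sum(negative_agreed.values())) / total
pos_agree_rate = 100 * (total - sum(positive_disagreed.values())) / total

print(f"negative statements disagreed with: {neg_disagree_rate:.0f}%")  # 90%
print(f"positive statements agreed with: {pos_agree_rate:.0f}%")
```

Statements not listed in either dictionary attracted no dissenting responses at all.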
Students were asked to rate how easy they found the shared workspace to use, and to make any comments. Students evidently experienced some difficulty: on a scale of 1-5, where 1 = very easy to use and 5 = very difficult to use, the mean rating was 3.08. Scores ranged from 2-5, with the majority, 75% (9/12), giving a rating of 3 or 4. The comments they made supported this, and were mainly to do with difficulty editing text, unfamiliarity and lack of training, and slow refresh rates.
In the final section of the questionnaire, students were asked about their level of satisfaction with this method of delivery, how involved they felt compared to face to face setting, and whether they would have preferred face to face tutorials.
There was a lot of support for the computer mediated method of delivery; however, some responses were not entirely favourable. Students gave a mean satisfaction rating of 2.45, on a scale of 1-5, where 1 = very satisfied and 5 = very dissatisfied. Scores ranged between 1-4, with the majority, 75% (9/12), giving a response of 2 or 3. Only 2 participants reported being dissatisfied, giving a rating of 4.
Asked to rate their level of involvement compared to face to face tutorials, similarities with a face to face setting were observed. A mean rating of 3.08 was obtained, on a scale of 1-5, where 1 = a lot greater than face to face and 5 = a lot less than face to face. Scores ranged from 2-4, with the majority, 58%, responding 3. Of the three students who responded "less than face to face", only one had reported being dissatisfied with the method of delivery. The other two had given middle-ground responses.
Various reasons were given for levels of involvement being different compared to face to face tutorials. Seven students made comments. On the negative side, reasons given included time wasted at the start on set-up, and during the tutorial due to equipment malfunction. Other comments included a reduction in informal interaction, increased involvement (which didn't suit the student), and a reduction in the richness of visual information.
On the positive side, the teaching style was found to be different but liked, and students reported reduced feelings of embarrassment, which allowed greater contribution and more task focused interaction.
When asked whether they would have preferred face to face tutorials, most students, 58% (7/12), said "no", while 42% (5/12) said "yes". Reasons given for not liking computer mediated delivery included: lack of personal contact, feeling rushed due to the time limit, and time wasted on set-up and technical failure. Positive comments were made in the "additional comments" section, where despite initial hiccups students appeared to have had a good experience.
NB. Raw figures are available on request from Rachel McEwan or Anna Watson. Copies of the questionnaire are available from Anna Watson.