Italian for Engineers Video Conference Tutorials

There were 5 respondents, all male students. The exact number of sessions held is not known (at least 4 took place), and no data on attendance rates were collected.

  1. Students were asked the following questions about audio and video quality:

     Please grade the overall quality of the tutor's audio in the sessions between 0 and 100, where 100 is the best quality you can imagine and 0 means totally inaudible.

     Please grade the overall quality of the tutor's video in the sessions between 0 and 100, where 100 is the best quality you can imagine and 0 means that the video picture was no use at all.

    TABLE 1: MEAN OVERALL QUALITY RATING FOR AUDIO AND VIDEO OF TUTOR

    (see above for description of the scale)

    no. of respondents = 5

                              AUDIO    VIDEO
    MIN-MAX                   70-90    40-94
    STANDARD DEVIATION         7.89     27.0
    MEAN RATING OUT OF 100     78.6     62.8

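As a point of reference for Table 1, the mean, standard deviation, and min-max range are ordinary sample statistics over the five ratings. A minimal sketch of the computation, using hypothetical ratings (the individual responses are not recorded in this report):

```python
import statistics

# Hypothetical ratings for 5 respondents -- NOT the actual survey data,
# which is not recorded in this report.
ratings = [70, 74, 79, 84, 90]

mean = statistics.mean(ratings)      # arithmetic mean
sd = statistics.stdev(ratings)       # sample standard deviation (n - 1 denominator)
rng = (min(ratings), max(ratings))   # min-max range, as reported in Table 1

print(f"mean = {mean:.1f}, sd = {sd:.2f}, min-max = {rng[0]}-{rng[1]}")
```

Note that `statistics.stdev` uses the sample (n - 1) formula; whether the reported 7.89 and 27.0 were computed with the sample or population formula is not stated in the report.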
  2. Students were asked the following question about the amount of effort required to understand the tutor compared to face to face:

     Did you feel it required more effort to understand the tutor than it would have done in a face-to-face tutorial? (yes/no response)

    TABLE 2: MORE EFFORT TO UNDERSTAND TUTOR COMPARED TO FACE TO FACE?

    no. of respondents = 4

    RESPONSE    YES    NO
                  2     2

    The other respondent reported: "not when it was working well".

  3. Students were asked whether they thought the audio quality varied in the following question:

     Did you feel that the quality of audio varied during a session (i.e. it was better at some points and worse at others)? (yes/no response)

    TABLE 3: VARIATION IN AUDIO DURING A SESSION

    no. of respondents = 5

    RESPONSE    YES    NO
                  4     1

  4. Students were asked to give their impression of audio problems using a list of descriptors in the following question:

     If you experienced problems with the quality of audio, please indicate your impression of the audio by selecting three of the following words to describe the audio quality:

broken up, crackly, bubbly, cut up, irregular, choppy, echoed, disconnected, lossy, fuzzy, distant

Table 4 gives the descriptors chosen and the frequency with which they were chosen for all students.

    TABLE 4: WORDS CHOSEN TO DESCRIBE AUDIO PROBLEMS

    no. of respondents = 5; all chose at least 1 descriptor (4 chose 3 descriptors, 1 chose 1)

    DESCRIPTOR      FREQUENCY
    ECHOED          4
    BROKEN UP       4
    CRACKLY         2
    DISCONNECTED    2
    LOSSY           1
Students were asked an open-ended question about what would make the audio better. Their responses are given below:
 

  5. Students were asked if they found the tutor's image helpful in the following question:

     Did you find the image of the tutor helpful for the tutorial, or was it irrelevant (i.e. it might as well not have been there at all)? (helpful/irrelevant response)

    TABLE 5: TUTOR IMAGE HELPFUL OR IRRELEVANT

    no. of respondents = 4

    RESPONSE    HELPFUL    IRRELEVANT
                      0             4

    The other respondent reported: "it seems a bit irrelevant but it would have seemed like we were just on the phone without it. Useful for seeing who's connected and when they can't hear."

  6. Students were asked to give their impression of video problems using a list of descriptors in the following question:

     If you experienced problems with the quality of the video image of the tutor, please indicate your impression of the video by selecting three of the following words to describe the video quality:

    frozen, delayed, blocky, broken up, patchy, variable, inconsistent, disjointed, jerky, fuzzy

    Table 6 gives the descriptors chosen and the frequency with which they were chosen for all students.

    TABLE 6: WORDS CHOSEN TO DESCRIBE PROBLEMS WITH VIDEO IMAGE OF TUTOR

    no. of respondents = 3 chose at least 1 descriptor (2 chose 3 descriptors, 1 chose 1)

    DESCRIPTOR    FREQUENCY
    DELAYED       3
    FROZEN        2
    DISJOINTED    2

  7. Students were asked an open-ended question about what would make the video better. All students responded; their responses are given below:

  8. Students were asked how important the lack of lip synchronisation was in the following question:

     Audio and video are said to be synchronised when the lip movements of the speaker closely match the words of the speaker. For this videoconference the audio and video were not synchronised. Was this important? (yes/no response)

    TABLE 7: HOW IMPORTANT WAS IT THAT AUDIO/VIDEO WERE NOT SYNCHRONISED?

    no. of respondents = 3

    RESPONSE    YES    NO
                  1     2

    The other respondents reported:

  9. Students were asked: did seeing the tutor's image make it easier to understand her? (yes/no response)

    TABLE 8: DID IMAGE OF TUTOR HELP UNDERSTANDING?

    no. of respondents = 5

    RESPONSE    YES    NO
                  0     5

  10. Students were asked: did you often look at the images of the other students in the group? (yes/no response)

    TABLE 9: LOOKED AT IMAGES OF OTHER STUDENTS OFTEN?

    no. of respondents = 5

    RESPONSE    YES    NO
                  5     0

  11. Students were asked: did you find the text editor useful in the tutorials? (yes/no response)

    TABLE 10: USEFULNESS OF TEXT EDITOR

    no. of respondents = 5

    RESPONSE    YES    NO
                  5     0

  12. Students were asked: did you experience problems using the text editor? (yes/no response)

    TABLE 11: ANY PROBLEMS WITH TEXT EDITOR?

    no. of respondents = 5

    RESPONSE    YES    NO
                  1     4

    The respondent who experienced problems reported:

  13. Students were asked: would you recommend a friend to take a similar videoconference seminar? (yes/no response)

    TABLE 12: RECOMMEND TO A FRIEND?

    no. of respondents = 5

    RESPONSE    YES    NO
                  5     0

  14. Students were asked: if you wanted to ask a question at the end of the seminar, would you feel more intimidated than doing so in a face-to-face seminar? (yes/no response)

    TABLE 13: FELT INTIMIDATED ASKING A QUESTION COMPARED TO FACE TO FACE?

    no. of respondents = 5

    RESPONSE    YES    NO
                  0     5