Ron Natalie (email@example.com)
Sat, 14 Nov 87 12:36:54 EST
On the coax there is no difference electrically between Version I,
Version II, and IEEE 802.3. There is an encoding difference in the
bytes: 802.3 uses the two bytes following the source address as a
length field, while the older Ethernet standards use them as a type field
for determining what protocol to use for the rest of the packet. Most
IP networks these days are constructed using the old Ethernet interpretation,
regardless of what kind of transceiver they use.
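For the curious, the two encodings can coexist on one wire because valid 802.3 lengths never exceed 1500, while type codes (such as 0x0800 for IP) were assigned above that range; the reconciling rule, later written into the 802.3 standard itself, is that values of 1500 or less are lengths and 1536 (0x0600) or more are types. A minimal sketch of that discrimination in Python (the function name is just for illustration):

```python
import struct

def frame_kind(frame: bytes) -> str:
    """Classify the 16-bit field after the 6-byte destination
    and 6-byte source addresses of an Ethernet frame."""
    if len(frame) < 14:
        raise ValueError("truncated Ethernet header")
    # Network byte order, offset 12 = dest (6) + source (6) addresses
    (field,) = struct.unpack("!H", frame[12:14])
    if field <= 1500:
        return "802.3 length = %d" % field
    if field >= 1536:
        return "Ethernet type = 0x%04X" % field
    return "undefined (1501-1535)"
```

So a frame carrying an IP datagram the old-Ethernet way would show `Ethernet type = 0x0800`, while an 802.3-encoded frame of the same size would show a small length value instead.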
The difference between the Version I transceiver and the Version II
is the presence of the so-called "heartbeat" signal, or SQE, which
blips the collision-detect line after each transmission. This
is an added protection for detecting broken transceivers and cabling that
may be jabbering on the net.
The IEEE 802.3 transceiver is similar to the Version II transceiver, but
has one additional signal state on the collision-detect line for something
like MAU (that's what they call the transceiver) not ready. I'm not sure
what anybody does with this (if anything).
Of course, as stated earlier, the various standards call for different
sizes of conductors and grounding considerations, although the essential
signal conductors are the same.
This archive was generated by hypermail 2.0b3 on Thu Mar 09 2000 - 14:39:55 GMT