Re: IEEE 802.2, 802.3 & Ethernet questions


Charles Hedrick (caip.rutgers.edu!aramis.rutgers.edu!athos.rutgers.edu!hedrick@rutgers.edu)
11 Aug 88 20:16:21 GMT


Arrgggg. There are two issues: electrical and software. Electrically,
Ethernet v1, Ethernet v2, and IEEE 802.3 all put out signals that are
more or less compatible on the cable. We've seen problems with
Ethernet v1 talking to IEEE now and then (where we have a choice, we
standardize on Ethernet v2), but by and large you can mix all three
types of equipment on the same cable. However between the transceiver
and the computer things are less free. Each given computer's
interface board is designed with a particular type of transceiver in
mind. You must use the one they had in mind. These days most
equipment is designed to take either Ethernet v2 or IEEE 802.3
transceivers. The two standards are very close electrically. Some
equipment can even deal with all three types of transceiver. But
there is still equipment around (and probably even being sold) that
can deal only with Ethernet v1 transceivers. So make sure your vendor
tells you. (To make things worse, the people you can get to normally
don't know the difference between the three standards, so it is
sometimes hard to find out. We normally assume Ethernet v2 unless
stated otherwise.)

There are also differences that show up in the software rather than
the hardware. 802.3, which is nominally about the hardware, does
reflect this difference in its terminology, but it shows up mostly in
802.2. Ethernet uses an Ethernet type code in the Ethernet header.
It does not use 802.2. By using the type codes, the device-level
software is able to determine the protocol and dispatch the packet to
the right protocol-handling software. IEEE did one of those wonderful
political numbers on us and changed the type field to a length field.
They then added 802.2 which has ways to figure out what the packet is
about. Mostly TCP/IP implementors have ignored 802.2 and continued
to use the Ethernet type code rather than the length code that 802.3
describes. The vendors generally claim that their systems support
802.3, but to the extent that they are putting a type code rather than
a length code in the Ethernet header, one could claim that they are
actually supporting Ethernet v2 rather than 802.3.
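The two framings can coexist on one wire because the fields can't be
confused: assigned Ethernet type codes start at 0x0600 (1536 decimal),
while a legal 802.3 length field can be at most 1500. Here is a sketch
(not from the original post; the function name is invented) of how a
driver might use the 2-byte field after the source address to tell
them apart:

```python
ETHERTYPE_MIN = 0x0600   # smallest assigned Ethernet type code
MAX_8023_LEN = 1500      # largest legal 802.3 payload length

def classify_frame(two_bytes):
    """Classify a frame from the 2-byte field that follows the source
    address (big-endian on the wire).  Returns ('ethernet', type) or
    ('802.3', length)."""
    value = (two_bytes[0] << 8) | two_bytes[1]
    if value >= ETHERTYPE_MIN:
        # An Ethernet v2 type code, e.g. 0x0800 for IP, 0x0806 for ARP;
        # the driver dispatches on it directly.
        return ("ethernet", value)
    if value <= MAX_8023_LEN:
        # An 802.3 length; an 802.2 LLC header follows, and the
        # protocol must be worked out from that instead.
        return ("802.3", value)
    raise ValueError("field falls in the illegal 1501-0x05FF gap")
```

A frame carrying IP the old way classifies as
`classify_frame(bytes([0x08, 0x00]))`, i.e. an Ethernet type code,
while a 46-byte 802.3 payload shows up as a length.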

This is all done for compatibility. Since the original
implementations were done in the days before the IEEE standards, it is
natural that they used the Ethernet type code rather than 802.2.
Newer implementations then continued this, in order to be able to talk
to the older ones. Vendors that put out a product that insists on
using 802.2 find that they can't talk to anyone. (This happened to
H-P with the 3000.) Their customers tend to demand a change. However
for newer networks that follow the 802.x standards (e.g. token and
broadband networks), it's almost certain that full 802.x, including
802.2, will be used. That's what the new 802.x RFC is about. It
should probably contain a warning explaining this situation, so that
people realize that it typically does not apply to Ethernet. In order
to make it easier to design bridges that will connect a new network to
an old one, some vendors are probably going to support both formats on
Ethernet/802.3. It is possible to design software that can deal with
both types of encapsulation. When you want to talk to somebody, you
encode an ARP request both ways, send them both, and see which one he
answers. In the ARP table, you have a flag indicating which
encapsulation to use with that host. However at the moment most
software for 802.3 was really designed for Ethernet, and does not use
802.2.
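The "send both, see which one he answers" scheme above can be
sketched like this (a hedged illustration; the class and function
names are invented, not any particular vendor's driver):

```python
def probe_host(ip, transmit):
    """Send an ARP request for `ip` in both encapsulations.
    transmit(ip, encap) puts the request on the wire in the given
    framing; a host only answers the framing its stack understands."""
    for encap in ("ethernet-v2", "802.3/802.2"):
        transmit(ip, encap)

class ArpTable:
    """ARP table carrying the per-host encapsulation flag."""

    def __init__(self):
        self.entries = {}            # ip -> (mac, encap)

    def learn(self, ip, mac, encap):
        # Called when an ARP reply arrives: the framing of the reply
        # itself tells us which encapsulation to use with this host.
        self.entries[ip] = (mac, encap)

    def encap_for(self, ip):
        # All later traffic to this host uses the recorded framing.
        return self.entries[ip][1]
```

In use: probe a neighbor, and when the reply comes back in (say) the
Ethernet v2 framing, record that and keep using it for that host.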



This archive was generated by hypermail 2.0b3 on Thu Mar 09 2000 - 14:43:12 GMT