Re: Fiber Ethernet problem solved


Murray.pa@Xerox.COM
Thu, 25 Sep 86 20:12:05 PDT


"This seems to imply that the interface should not monitor carrier
during transmit. Could someone more familiar with the spec elaborate?"

The main idea is that there should be a 9.6 microsecond minimum gap
between packets so that the receiver can get ready to grab the next
packet. Dropping packets can easily have a disastrous impact on
performance. A bit of time will normally simplify the hardware design.

The fine print is trying to say (I think) that after the transmitter
waits 9.6 microseconds, it shouldn't wait again/more (as if it were
starting fresh and the middle of a packet were already on the wire) just
because it now looks like there is a packet already on the wire. That
packet started just a very short while ago, probably less than a bit
time ago (if everybody is following the rules).
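
In case a concrete picture helps, here is a rough C sketch of that
deference rule as I read it. Everything in it is invented for
illustration (carrier_sense, wait_bit_times, start_transmit, and the
structure itself); real controllers do this in silicon, not in driver
code.

#include <stdbool.h>

/* Invented names, not any real controller's registers or entry points. */
#define IFG_BIT_TIMES 96                  /* 9.6 us at 10 Mb/s = 96 bit times */

extern bool carrier_sense(void);          /* true while the wire is busy    */
extern void wait_bit_times(int n);        /* spin for n 100 ns bit times    */
extern void start_transmit(void);         /* hand the frame to the encoder  */

void transmit_when_ready(void)
{
    /* Defer to whatever packet is currently on the wire. */
    while (carrier_sense())
        ;

    /* Wait out the 9.6 microsecond interframe gap -- once. */
    wait_bit_times(IFG_BIT_TIMES);

    /* Don't start the deference over just because carrier reappeared a
     * fraction of a bit time ago.  Transmit anyway; if somebody else made
     * the same decision, the result is a collision, and that collision is
     * the intended outcome. */
    start_transmit();
}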

If nothing else, the fraction of a bit difference in the phase of the
transmit clocks at the two stations could easily provoke this case. When
the (second/interesting) transmitter does start to transmit, it will
cause a collision. That's the desired result when two stations try to
transmit at the "same" time.

Note that the fractional-bit race condition actually happens quite
often. Consider three stations on an Ethernet. Call them, left to right,
A, B, and C. Suppose A is transmitting and B and C are waiting to send.
When A finishes, the end of the packet will sweep down the wire. When it
gets to B, B's 9.6 microsecond clock starts ticking. A while later, C's
clock will start too. When B's clock expires, the wire (around B) is
empty, so B starts transmitting. When C's clock goes off, B's new packet
is just about to arrive at C or has just arrived at C. Because the wire
delays cancel out in this configuration, fractions of a bit due to
clock synchronization are important.
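
To put numbers on that cancellation: the little program below uses
made-up propagation delays (2 and 3 microseconds; only the 9.6
microsecond gap comes from the spec) and shows that B's packet reaches
C at exactly the instant C's own gap timer expires, so the sub-bit-time
clock phase at B and C is what decides which way it goes.

#include <stdio.h>

/* Illustrative numbers only; the delays depend on the actual cable run. */
#define IFG_US       9.6     /* interframe gap                          */
#define DELAY_AB_US  2.0     /* propagation delay, A to B (assumed)     */
#define DELAY_BC_US  3.0     /* propagation delay, B to C (assumed)     */

int main(void)
{
    /* t = 0: the end of A's packet leaves A. */
    double end_at_B    = DELAY_AB_US;                /* carrier drops at B    */
    double end_at_C    = DELAY_AB_US + DELAY_BC_US;  /* carrier drops at C    */

    double B_starts    = end_at_B + IFG_US;          /* B begins transmitting */
    double B_reaches_C = B_starts + DELAY_BC_US;     /* B's preamble hits C   */
    double C_timer_up  = end_at_C + IFG_US;          /* C's gap timer expires */

    printf("B's packet reaches C at  %.1f us\n", B_reaches_C);
    printf("C's IFG timer expires at %.1f us\n", C_timer_up);
    /* Both print 14.6 us: the B-to-C delay appears in both paths and
     * cancels, leaving only the clock phase difference to break the tie. */
    return 0;
}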


