Re: X.25 problems


Lars Poulsen (lars@ACC-SB-UNIX.ARPA)
Wed, 14 Oct 87 08:59:57 PDT


> Date: 14 Oct 1987 05:13-EDT
> Subject: X.25 problems
> From: CERF@A.ISI.EDU
> To: service@ACC-SB-UNIX.ARPA
>
> I don't understand why the introduction of release 7.0 should
> exacerbate X.25 VC shortages - the limitation is in the ACC
> software, isn't it (maximum VCs set at 64?) and this would be
> a bottleneck regardless of the IMP release (7 or its predecessor)?
> Why would these problems surface only with release 7?
>
> Vint

Vint,
        The limitation is actually in firmware rather than in software.
We run the entire packet level on a 68000 on our X.25 board. And yes,
the limit is 64.
        Our device driver closes virtual circuits after a period of idle
time (currently 10 minutes). Since the timer value is a source code #define
and we provide source code, any system manager can tighten this and free
up VCs after, say, two minutes, if they so desire.
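To illustrate the mechanism, the idle-close scan might look something like the sketch below. This is purely hypothetical code for illustration; the names (VC_IDLE_MINUTES, struct vc, vc_idle_scan) and structure are mine, not the actual ACC driver source.

```c
#include <time.h>

/* Hypothetical sketch of an idle-circuit reaper.
 * A site manager could lower VC_IDLE_MINUTES (e.g. to 2)
 * and rebuild to free up virtual circuits sooner. */
#define VC_IDLE_MINUTES 10

struct vc {
    int    open;            /* nonzero if the circuit is up */
    time_t last_activity;   /* time of last packet on this VC */
};

/* Close every circuit idle longer than the timeout;
 * return the number of circuits closed. */
static int vc_idle_scan(struct vc *tab, int n, time_t now)
{
    int closed = 0;
    for (int i = 0; i < n; i++) {
        if (tab[i].open &&
            now - tab[i].last_activity >= VC_IDLE_MINUTES * 60) {
            tab[i].open = 0;   /* real driver would issue an
                                  X.25 CLEAR REQUEST here */
            closed++;
        }
    }
    return closed;
}
```

A periodic timer would call vc_idle_scan() over the (at most 64-entry) circuit table; the trade-off discussed below is simply how small the timeout can be made before circuits get torn down while a slow transfer is still in progress.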
        The timer is set fairly long; at one time we closed circuits after
one idle minute, only to find that we were thrashing VCs: under certain
network conditions, the packet round trip time could go up to 80 seconds.
Under pathological conditions (buffer shortage in the PSN) we have even seen
30 seconds round trip time for an ICMP echo addressed to the host itself.
This can only be explained by the X.25 equivalent of 1822 "blocking".
We are REALLY looking forward to the new End-to-end module curing this
problem.
        The lack of virtual circuits usually becomes a problem when
the network becomes pathologically slow. We speculate that this is
because transfers that normally complete in a couple of minutes
may take up to a half hour under these conditions, and thus there is
much more overlap.
        The transition release PSN7.0 has more code in it than either
PSN6.0 or PSN7.1; this means fewer buffers, which tends to provoke
the situation described above.
        Finally, I should mention that I have seen that hosts that use
many virtual circuits tend to have a few of these with bursts of real
traffic, such as you would expect for "normal" TCP use (SMTP, FTP,
TELNET) and a large number of VCs with very short bursts (< 5 packets)
with large intervals (one burst every 15 minutes or so). Invariably,
these VCs are to GATEWAYS, which is why I speculated that this might
be EGP traffic (I have never really read up on routing protocols).
I am told that each gateway peers with no more than 3 core EGP-mumblers.
My current speculation is that some gateway daemons like to ping each
gateway that they hear about to make sure it is reachable, but this
is only a guess.
        I hope this helps you understand why we are concerned about
the transition.

        / Lars Poulsen
          ACC Customer Service
          Service@ACC-SB-UNIX.ARPA



This archive was generated by hypermail 2.0b3 on Thu Mar 09 2000 - 14:39:35 GMT