Bob Braden (firstname.lastname@example.org)
Mon, 10 Nov 86 15:50:32 PST
I have been experimenting with high-rate ICMP echoes across the ISI
Ethernet between SUN 3/75's. The pinger program sets an interval timer to
fire off every 20 ms (the minimum resolution). Each time it fires,
the signal routine calls sendto() to send N successive ICMP echo requests
without a pause. sendto() is bound to a RAW socket. The remote SUN
echoes, and a recvfrom() (called in an endless loop) gathers statistics
on RTT, etc. All very trivial.
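
For concreteness, here is a minimal sketch of the kind of pinger described above (not the actual ISI program; names, the burst size constant, and the omitted RTT bookkeeping are my own simplifications). A SIGALRM handler fires every 20 ms and fires off N ICMP echo requests back-to-back on a raw socket; the main loop just collects replies:

    /* Sketch of a signal-driven pinger: burst of ICMP echoes every 20 ms. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <signal.h>
    #include <sys/time.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>
    #include <netinet/ip_icmp.h>
    #include <arpa/inet.h>

    #define BURST 3                 /* N echo requests per timer tick (assumed) */

    static int raw_sock;
    static struct sockaddr_in dest;
    static int seq = 0;

    /* Standard one's-complement Internet checksum over the ICMP message. */
    static unsigned short in_cksum(unsigned short *buf, int len)
    {
        long sum = 0;
        while (len > 1) { sum += *buf++; len -= 2; }
        if (len == 1) sum += *(unsigned char *)buf;
        sum = (sum >> 16) + (sum & 0xffff);
        sum += (sum >> 16);
        return (unsigned short)~sum;
    }

    /* SIGALRM handler: send BURST echo requests with no pause between them. */
    static void fire(int sig)
    {
        struct icmp pkt;
        int i;

        (void)sig;
        for (i = 0; i < BURST; i++) {
            memset(&pkt, 0, sizeof(pkt));
            pkt.icmp_type = ICMP_ECHO;
            pkt.icmp_code = 0;
            pkt.icmp_id = getpid() & 0xffff;
            pkt.icmp_seq = seq++;
            pkt.icmp_cksum = in_cksum((unsigned short *)&pkt, ICMP_MINLEN);
            sendto(raw_sock, &pkt, ICMP_MINLEN, 0,
                   (struct sockaddr *)&dest, sizeof(dest));
        }
    }

    int main(int argc, char **argv)
    {
        struct itimerval iv;
        char buf[1024];
        long replies = 0;

        if (argc != 2) {
            fprintf(stderr, "usage: %s dest-addr\n", argv[0]);
            exit(1);
        }
        raw_sock = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);   /* needs root */
        memset(&dest, 0, sizeof(dest));
        dest.sin_family = AF_INET;
        dest.sin_addr.s_addr = inet_addr(argv[1]);

        signal(SIGALRM, fire);

        /* 20 ms interval timer -- the minimum resolution mentioned above. */
        iv.it_interval.tv_sec = 0;
        iv.it_interval.tv_usec = 20000;
        iv.it_value = iv.it_interval;
        setitimer(ITIMER_REAL, &iv, NULL);

        /* Endless receive loop; the real pinger timestamps and computes RTTs. */
        for (;;) {
            if (recvfrom(raw_sock, buf, sizeof(buf), 0, NULL, NULL) > 0)
                replies++;
        }
        return 0;
    }

With a 20 ms timer, the sender fires 50 times a second, so N requests per tick gives 50*N packets per second (150 for N = 3, 200 for N = 4).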
Now, if N = 3 (150 packets per second), no packets are dropped, and it
uses 6% of the SUN CPU (1% in user state, 5% in system state). If N = 4
(200 packets per second), 7% of the packets are dropped, but it uses
57% of my SUN CPU time (10% in user state, 47% in system state).
Obviously, at 200 per second we are overrunning something and queues are
building up. That would account for the packet loss. But I cannot
understand why the CPU time should build up in this non-linear fashion,
unless there is some (heaven forbid) linear search process going on in
some system queue. Can anyone suggest to me what is going on?