Sean McLinden (email@example.com)
13 Nov 88 14:40:11 GMT
I feel, in part, responsible for some of this discussion, since I commented
that the bug(s) were well known to many system programmers, prompting the
responses "Well, I didn't know about it!" and "Why didn't you tell anyone?"
It is clear from Rick Adams' comments that 'not wanting to tip anyone off'
is no excuse. Even binary-only sites can be protected fairly rapidly if
the appropriate channels are used.
fingerd.c: This bug (the use of gets() with a fixed buffer size) is
commonly used as an example of poor programming technique in C programming
courses. There are a lot of these in user-contributed software, and a
few more were present in earlier versions of Berkeley Unix. It didn't
occur to me to look for it in daemon sources until we detected the worm,
because I never really had occasion to look at fingerd, but the problematic
nature of that particular programming style is well known. One problem
may be that many people learn C by example, not by formal instruction. In
that mode, you look more toward what can go right than what can go
wrong. Perhaps someone could write a book on 'How NOT to program in C!'
sendmail: In the context in which it was intended to be used, this is
not really a bug, but a gaping hole. In fact, there is nothing wrong
with being able to mail to a process (uucp would be in trouble if
you couldn't). The problem is that the mechanism by which this capability
is controlled should rest in the 'aliases' file, or with compile- and/or
run-time options that can be applied under appropriate conditions by
the system administrator or software developers. Again, the problem is
not the existence of this "feature" but the fact that it was the default
capability on distributed BSD systems.
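The distinction above can be made concrete. Delivery to a process is a
legitimate, administrator-controlled feature when it goes through the
aliases file; a hypothetical entry (the names here are illustrative):

```
# /etc/aliases -- program delivery, explicitly configured by
# the administrator (entry is illustrative):
msgs: "|/usr/ucb/msgs -s"
```

The hole was that a remote SMTP client could get sendmail, as distributed,
to deliver to a program directly, without the administrator having
configured any such entry.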
Again, in the context of a programmer's assistant, this feature was
known to many people. When the worm appeared, a grep of the syslog
file was all that was needed to determine what in sendmail (actually
the smtp server, to be precise) was allowing all the trouble to occur.
In that context, what was a convenience became a security hole (please,
I don't want to argue semantics).
Most of my system work is goal-oriented. My default mode is not to
be always thinking "How can I exploit this to invade thousands of
machines across the country?". The best I am capable of is to remember
those things that I have noticed in the past and reconsider them in
light of a new context (that of a security problem). Now I am prompted
to look more closely at sources, not with the idea of making things
more efficient, but with the goal of making them more secure.
In the 13th century, Marco Polo brought Chinese technology to the West.
In particular, he brought fireworks, which the Chinese had used to amuse
themselves for centuries. It was Western culture that first exploited
this technology for warfare; the Chinese had only peaceful applications in mind.
It accomplishes little to flame those people who knew the flaws of
BSD, any more than it does to blame the Chinese for modern warfare. Maybe
we should all be a little more suspicious (after what has happened,
we probably will be). The point is that what happened with the
worm could be attributed as much to the mindset of the Unix community
as to Morris' programming skills (probably more so the former). What
seem to be obvious problems in the system are only so in the context
of their exploitation by people with different orientations than
our own. It was an oversight (and I, for one, am reassured by it) that
the potential for harm did not occur to all of us earlier.
There are people whose job it is not to promote open systems but to do
nothing except determine the security problems of any given system.
They constantly operate in the mindset of someone attempting to
break a system. They work for industry, DoD, NSA, the FBI, and a lot
of specialty security firms. A better question is: if those people
knew, why didn't THEY tell anyone?
Decision Systems Laboratory
This archive was generated by hypermail 2.0b3 on Thu Mar 09 2000 - 14:44:30 GMT