
Desktop
System Security Architecture
February 1, 2004
Over the last few days, Windows users have been hit with yet another
email worm. Like many others, this one, the so-called "MyDoom" worm,
entices the user to click on a file sent as an email attachment. This
mode of attack has been around for a while, though this most recent
incarnation is particularly clever; it masquerades as an email system
error.
This particular worm doesn't seem to exploit a buffer overrun or other
similar weakness in order to do its evil; it simply runs when the user
opens it. Once it's running, it attaches itself to operating system
components in a manner calculated to make it difficult to remove, then
replicates by mailing copies of itself to addresses found in the victim's
address book.
What I find particularly fascinating about all of this is the fact that
this is being treated primarily as a user education issue. While it's
true that a savvy user can dodge this attack completely by simply not
opening the attachment in question, one might still rightly ask, "Why
should users have to be security-savvy just to use their computers
effectively?" Many of
the security problems that we see are, in fact, caused by architectural
flaws.
The first problem is the weak distinction between executable files and
data. Windows tells programs apart from data files purely by filename
convention; a suitably constructed filename is all it takes to get the
operating system to attempt to run the file if the user should happen to
click on it within the GUI.
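To make that concrete, here is a minimal Python sketch of the rule as I
understand it; the filenames and the short list of extensions are my own
illustrative choices, not anything taken from Windows itself:

    import os

    # A small, illustrative subset of extensions Windows treats as runnable.
    EXECUTABLE_EXTENSIONS = {".exe", ".com", ".scr", ".pif", ".bat", ".cmd"}

    def looks_runnable_to_windows(filename):
        # Only the final extension matters, so "statement.txt.scr" qualifies.
        _, ext = os.path.splitext(filename)
        return ext.lower() in EXECUTABLE_EXTENSIONS

    print(looks_runnable_to_windows("statement.txt"))      # False: just data
    print(looks_runnable_to_windows("statement.txt.scr"))  # True: runs on a double-click

Nothing about the file's contents enters into the decision; the name alone
decides.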
Other operating systems don't do this. Unix systems have an attribute
separate from the filename that indicates that the file is executable
code. This attribute (a permission bit, actually) must be set in order
for the code to execute in response to a click from within the GUI (or,
for that matter, in response to actions in the command-line interface).
For this worm to have worked on a Unix system, the user would have had to
save the attachment to disk, set its executable permission, and then run
it explicitly. Most other
non-Unix systems with which I've worked are similar; you have to either
explicitly communicate to the operating system "run this file as a
program" or somehow bless the file in order to turn it into an
application.
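The permission bit is visible from any scripting language. The following
Python sketch (my own, with a made-up filename) shows that a freshly
saved file is inert data until someone deliberately marks it executable:

    import os
    import stat

    path = "attachment.bin"              # hypothetical saved attachment
    with open(path, "wb") as f:          # simulate saving it to disk
        f.write(b"#!/bin/sh\necho hello\n")

    mode = os.stat(path).st_mode
    print(bool(mode & stat.S_IXUSR))     # False: saved files start out non-executable

    # The deliberate "bless this file" step a Unix user would have to take:
    os.chmod(path, mode | stat.S_IXUSR)
    print(bool(os.stat(path).st_mode & stat.S_IXUSR))  # True: now it can be run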
Once the application is running, we discover the next major
architectural flaw: it's possible for most users of Windows to modify
the behavior of the operating system itself without realizing it. Most
modern operating systems require a user to be in some sort of a
privileged mode in order to install applications or otherwise change
the behavior of the system. The "su" command (or, better yet, the
"sudo" command) in Unix allows one to assume "superuser" privileges for
this purpose. In Windows, you have to be logged in as a user with
administrative rights to the computer, but there's no simple way to
assume and release privileges for the purpose of installing an
application. So most users (outside the most restrictive of corporate
environments) use their Windows environments from a login with full
administrative privileges. This is the equivalent of running one's Unix
environment while logged in as "root," a practice regarded as reckless
and incompetent. Unfortunately, it's very hard to get work done in
Windows any other way.
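Here is a rough Python sketch of the check a program might make to see
which situation it is in; the Unix branch relies on user id 0 being the
superuser, and the Windows branch on the shell's IsUserAnAdmin call:

    import ctypes
    import os

    def running_with_admin_rights():
        if hasattr(os, "geteuid"):
            # Unix: the superuser is always user id 0.
            return os.geteuid() == 0
        # Windows: ask the shell whether this process token has administrator rights.
        return bool(ctypes.windll.shell32.IsUserAnAdmin())

    if running_with_admin_rights():
        print("Anything run from this session can reconfigure the operating system.")
    else:
        print("Programs run from this session are confined to this user's own files.")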
As a result, malware like the MyDoom worm can take advantage of these
administrative privileges in order to make itself harder to remove.
It's quite common for such applications to add themselves to the list
of things that run when the computer is started up. One variant of the
MyDoom worm even goes so far as to rewrite a network configuration file
(the system's hosts file) so that antivirus software can no longer reach
the servers that supply updated signature files. These attacks work only
because the worm is
easily able to gain administrative rights to the computer. There's
certainly plenty of mischief that can be perpetrated as an ordinary
user, but it's quite a bit easier to prevent when the OS is off-limits.
And, when bad things do happen, it's vastly easier to clean up the
damage when the integrity of the operating system itself isn't in
question.
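As an illustration of why an ordinary-user login limits the damage, here
is a Windows-specific Python sketch (again my own, not anything from the
worm) that tries to open the machine-wide startup list, one of the places
such programs register themselves, for writing:

    import winreg

    # The machine-wide "Run" key: programs listed here start for every user.
    RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

    try:
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, RUN_KEY, 0,
                             winreg.KEY_SET_VALUE)
        winreg.CloseKey(key)
        print("This account could add a program to every user's startup list.")
    except PermissionError:
        print("Access denied: a limited account cannot touch machine-wide startup.")

Run from an administrative login, the attempt succeeds; run from a
limited account, it fails, and so would the worm's attempt to entrench
itself there.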
So, the next time you hear the claim that a security problem is caused
by a user acting stupid, consider this: is it really the case that the user is
stupid, or is the real stupidity the set of architectural decisions
that make such mistakes so easy to commit and so costly to clean up?