Originally published August 14, 2001
What’s the most frequently seen message produced by Java programs?
It's got to be the NullPointerException. In efforts to avoid this dreaded message, programmers have adopted an idiom that looks something like:
if(object != null) object.doSomething();
Which basically means: only send the message if there's something there to receive it. That's how you keep the exception from being thrown. The idiom must be burdensome, as NullPointerExceptions continue to pop up at unexpected times, and the fix is invariably to add this sort of test on the line that raised the exception.
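The guard and its absence look like this in practice (a minimal sketch; the method name is illustrative, and returning 0 stands in for "do nothing" when there's no receiver):

```java
public class GuardDemo {
    // The guard idiom wrapped in a method: send the message only
    // when there is a receiver; 0 stands in for "do nothing" here.
    static int lengthOrZero(String s) {
        if (s != null) return s.length();
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(lengthOrZero("hello")); // prints 5
        System.out.println(lengthOrZero(null));    // prints 0, no exception

        try {
            String s = null;
            s.length(); // without the guard: the dreaded exception
        } catch (NullPointerException e) {
            System.out.println("NullPointerException");
        }
    }
}
```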
This is an idiom adapted from C, and later C++, where null is memory address zero by convention. Since important chunks of the operating system live down in that memory region, modern operating systems protect it from fiddling and treat any attempt to reference memory at or near address zero as a likely programmer error. So it's not allowed, in the name of protecting the operating system.
C and C++ programmers, in an effort to keep their programs alive long enough to report what's going on, check pointers to make sure they aren't null before using them. This is because the penalty for using a null pointer in these environments is death.
These days, applications are developed in higher-level languages like Java and Smalltalk, which have no pointers at all. Instead we have object references, and it's not possible for the programmer to endanger the operating system by messaging a null object reference. (Objective C uses pointers as object references, but the programmer never directly dereferences them; it's all done by the Objective C runtime.)
So now, released from the need to steer clear of the operating system’s defense mechanisms, we can take a step back and think about what sending a message to nothing means.
Shouting across the street to someone who isn’t there may make you feel silly – but nothing really happens. Same for sending a letter to a non-existent address. If there’s no return address, the post office will eventually give up and toss it. Nothing too terrible there. There’s simply no one to receive the message.
So why does Java take the position that messaging null is so great a catastrophe that the programmer must be burdened with handling an exception? Especially when most of the time, the programmer’s method of avoiding the unwanted exception is to add a test for null before sending the message. In other words, the programmer will fix it with:
if(object != null) object.doSomething();
which is just a way of saying "if there's no one to receive the message, do nothing". Worse, the programmer has to add this check to every single location in the code that tries to make use of the object reference.
This is the same nuisance condition we identified with the typing of object references. Recall that the dynamic languages provided a means of handling that condition in a central location via the doesNotUnderstand message, while the Java version required the programmer to handle it at each call site.
A similar situation exists with the use of null in Java. It must be checked for and handled at each call site. There must be a better way.
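One option available even in Java is the Null Object pattern: instead of handing out null, hand out a do-nothing implementation, so the "no one to receive the message, do nothing" decision lives in one class rather than at every call site. A minimal sketch, with the Logger names purely illustrative:

```java
// Instead of returning null, return an object that deliberately
// does nothing -- callers never need the "!= null" guard.
interface Logger {
    void log(String message);
}

class ConsoleLogger implements Logger {
    public void log(String message) { System.out.println(message); }
}

class NullLogger implements Logger {
    public void log(String message) { /* deliberately do nothing */ }
}

public class NullObjectDemo {
    // Hypothetical lookup: when no logger is configured, answer the
    // do-nothing object rather than null.
    static Logger find(boolean configured) {
        return configured ? new ConsoleLogger() : new NullLogger();
    }

    public static void main(String[] args) {
        Logger logger = find(false);
        logger.log("hello"); // safe to send; no null check needed
    }
}
```

The decision about what "messaging nothing" means is made once, in NullLogger, instead of being scattered across the codebase.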
On the dynamic side of the world, things are simpler. In Smalltalk, null (actually, in Smalltalk it's referred to as 'nil') is a global singleton instance of the class UndefinedObject. It implements hardly any messages, so messaging nil results in nil receiving doesNotUnderstand. The default behavior of doesNotUnderstand in nil is to halt the program and throw a debugger around it. Many systems change this behavior on deployment to log the message and stack trace, or to simply return nil.
In Objective C, messaging nil results in nil being returned by default. This behavior can be changed by installing a hook function in the runtime that can log the message or raise an exception.
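Java can approximate this message-absorbing nil for interface types with a dynamic proxy. A sketch, where the nil() helper and the Greeter interface are purely illustrative (note this only works for interfaces, and a primitive return type would still fail on unboxing):

```java
import java.lang.reflect.Proxy;

public class NilDemo {
    interface Greeter {
        String greet(String name);
    }

    // Hypothetical helper: answers a proxy that silently absorbs
    // every message and returns null -- roughly what messaging
    // Objective C's nil does by default.
    @SuppressWarnings("unchecked")
    static <T> T nil(Class<T> type) {
        return (T) Proxy.newProxyInstance(
                type.getClassLoader(),
                new Class<?>[] { type },
                (proxy, method, args) -> null); // every call answers null
    }

    public static void main(String[] args) {
        Greeter g = nil(Greeter.class);
        System.out.println(g.greet("world")); // prints "null": absorbed, no exception
    }
}
```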
In either case, the consequences of messaging nil are under the control of the programmer and can range from totally benign to fatal, depending on the developer’s preference and the application domain. Experience with developing applications in this environment has shown that messaging nil is nearly always harmless, and not having to place tests for nil before every message send results in smaller, cleaner, and easier to understand code.
Plus, the applications don’t crash nearly as often. This is yet another example of how a feature in Java that is intended to improve software reliability, actually undermines it.