All of the systematic techniques for writing high-quality code are controversial, because they require discipline plus a concession from the programmer that he isn't a rock star. Actual rock stars go broke faster than session musicians, though, and the analogy holds for programmers and their creations.
There is a theory of writing bug-free code (or rather, bug-minimal code), and each technique that applies it combines the same two ingredients in different ways:

1. Expressing your intent more than once, in more than one way.
2. A mechanism--a compiler, a static analyzer, or a test runner--that checks those expressions against each other.
These ingredients are mixed together to create the special sauce: redundancy of meaning. When the same idea gets expressed in two different ways, you can use runtime or static analysis to look for consistency between the two expressions. Then whenever there's an inconsistency you get a red squiggly underline, or a failure report, or an exception that tells you exactly where and how you screwed up.
The following are some common recipes that use the above ingredients. Starting with the easiest and ending with the hardest, these techniques will catch most bugs early in the development stage.
In RAM everything is just bits, and so data types are used to help the compiler do two things: invoke the correct methods for any given expression and check for invalid assignments.
You can use statically typed languages to minimize bugs by creating wrappers for values that are representable with primitive types but have special meaning. The compiler will then prevent you from accidentally assigning a value of one wrapped type to a variable of another, even though both are just an integer or a double underneath.
Wrappers also give you the opportunity to test assumptions about those values when you initialize them. For example, in the constructor for each type you can check for bounds that apply to your problem domain. Maybe in your application a percentage can never be negative or exceed 100, so the wrapper's constructor rejects such values the instant they appear.
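A minimal sketch of the idea in a statically typed language (Java here; `CustomerId` and `Percentage` are invented examples, not types from any particular codebase):

```java
public class Wrappers {
    // Wraps an int so a customer id can't be confused with any other int.
    public static final class CustomerId {
        public final int value;
        public CustomerId(int value) {
            if (value <= 0) throw new IllegalArgumentException("ids are positive");
            this.value = value;
        }
    }

    // Wraps a double and enforces the 0..100 bound in exactly one place.
    public static final class Percentage {
        public final double value;
        public Percentage(double value) {
            if (value < 0 || value > 100)
                throw new IllegalArgumentException("percentage out of range: " + value);
            this.value = value;
        }
    }

    static void applyDiscount(CustomerId who, Percentage discount) {
        // ... business logic elided ...
    }

    public static void main(String[] args) {
        applyDiscount(new CustomerId(42), new Percentage(15.0));
        // applyDiscount(new Percentage(15.0), new CustomerId(42)); // won't compile: arguments swapped
        // new Percentage(150.0);  // compiles, but fails fast in the constructor
        System.out.println("ok");
    }
}
```

Swapping the two arguments becomes a compile-time error rather than a silent data corruption, and the bounds check lives with the type instead of being repeated at every call site.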
New twists to the concept of types occasionally enter mainstream languages. One very interesting twist is support for units of measure in F#. These are used not merely to prevent you from making invalid assignments, but also to tell the compiler what their rules are and how they relate to other units. This makes the following syntax possible:
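A minimal sketch of that syntax (the quantities are invented for illustration):

```fsharp
[<Measure>] type m          // metres
[<Measure>] type s          // seconds

let distance = 100.0<m>
let time     = 9.58<s>

let speed = distance / time        // inferred as float<m/s>
// let nonsense = distance + time  // compile error: can't add metres to seconds
```

The compiler carries the units through every expression, so dividing metres by seconds yields a metres-per-second value, and mixing incompatible units is rejected before the program ever runs.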
Assertions are statements made at the beginning and end of a method that check assumptions the programmer has made about the method's parameters, environment, and output. Method Contracts formalize this technique as a language feature and can shift their enforcement from runtime to design-time (static analysis).
A simple entry-assertion is something like "My parameters should never be null". An exit-assertion might be "I won't return a null" or "My result will always be divisible by 2". They're particularly useful for checking the relationship between multiple parameters, such as asserting that a key passed in one parameter must be present in a Dictionary collection passed as another parameter or through the environment. When writing methods that work on an XML DOM I've used assertions to test for expected attributes and child elements on node parameters. Eg:
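A sketch of entry and exit assertions in that spirit (the method, its rules, and the salary data are invented for illustration):

```java
import java.util.Map;

public class Assertions {
    // Doubles the salary found for the given employee. The asserts encode the
    // assumptions described above: non-null parameters, a key that must be
    // present in the map, and a result that is always divisible by 2.
    static int doubledSalary(String employee, Map<String, Integer> salaries) {
        // Entry assertions: assumptions about the parameters.
        assert employee != null : "employee must not be null";
        assert salaries != null : "salaries must not be null";
        assert salaries.containsKey(employee) : "unknown employee: " + employee;

        int result = salaries.get(employee) * 2;

        // Exit assertion: an assumption about the output.
        assert result % 2 == 0 : "result must be divisible by 2";
        return result;
    }

    public static void main(String[] args) {
        System.out.println(doubledSalary("ann", Map.of("ann", 50_000)));  // 100000
    }
}
```

In Java these checks only fire when assertions are enabled (the `-ea` flag), which mirrors the release-build switch-off described below.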
A runtime assertion can be written any way the programmer likes, but more frameworks now come with helper classes (like Debug.Assert in the .NET Framework) that halt the program and throw up a message box describing the assumption that failed.
These ugly message boxes are desirable because they're meant to help you discover bugs before you even release the code to customers. Depending on which framework you use, you can even switch them off with a compiler directive before you release to manufacturing.
Method Contracts are typically defined outside of the method body in an attribute or some other meta-data, and they have an advantage over runtime assertions because they can be used in static analysis of the code--performed either by the compiler or a separate tool integrated with the IDE. These tools can follow the chain of contracts in your program and predict where violations and unchecked assumptions are being made.
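In .NET this takes the shape of Contract.Requires and Contract.Ensures from System.Diagnostics.Contracts, which the static checker can follow across method boundaries. A hand-rolled runtime sketch of the same idea (Java has no built-in contract system, so the helper names here are invented):

```java
public class Contracts {
    // Stand-ins for a contract system's Requires/Ensures; a real contract
    // tool would also check these statically, not just at runtime.
    static void requires(boolean condition, String rule) {
        if (!condition) throw new IllegalStateException("precondition violated: " + rule);
    }
    static void ensures(boolean condition, String rule) {
        if (!condition) throw new IllegalStateException("postcondition violated: " + rule);
    }

    static double sqrt(double x) {
        requires(x >= 0, "x must be non-negative");
        double result = Math.sqrt(x);
        ensures(result >= 0 && Math.abs(result * result - x) < 1e-9,
                "result squared must equal x");
        return result;
    }

    public static void main(String[] args) {
        System.out.println(sqrt(9.0));  // 3.0
    }
}
```

The runtime version only catches violations on the inputs you happen to feed it; the static version's advantage is that the analyzer can prove a caller might pass a negative number without ever running the program.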
What makes assertions work is also their greatest weak spot: developer attitude. An empirical study performed at Microsoft[1] found that assertions can improve code quality only if the developers "get it" and use them voluntarily and enthusiastically. Little was gained when managers forced their programmers to write assertions, because the programmers sighed and banged out boilerplate code that satisfied the new requirement but didn't encode much redundant meaning.
Where contracts reason about the code's behavior in or around the method itself, unit tests reason about it from outside, in a separate assembly. And while an assertion is checked at runtime and a contract at design-time, a unit test runs on its own in a separate process whenever the programmer wants it to.
A basic unit test prepares an input, runs the method it's testing, and checks for the correctness of the output. The advantage of unit tests over assertions is that they can check for answers that are known to be correct for a given input. Eg:
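A sketch of such a test (NetAnnualSalary, its input, and the flat 20% tax rule are all invented for this example):

```java
public class NetSalaryTest {
    // Code under test: gross salary minus a flat 20% tax.
    static double netAnnualSalary(double grossAnnual) {
        return grossAnnual - grossAnnual / 5.0;
    }

    public static void main(String[] args) {
        // Arrange: a scenario whose answer was worked out by hand in advance.
        double gross = 50_000.0;
        double expected = 40_000.0;

        // Act.
        double actual = netAnnualSalary(gross);

        // Assert: fail loudly if the known-correct answer doesn't come back.
        if (actual != expected)
            throw new AssertionError("expected " + expected + " but got " + actual);
        System.out.println("PASS");
    }
}
```

A real test framework (NUnit, JUnit, and the like) supplies the arrange/act/assert scaffolding and the failure reporting; the shape of the test is the same.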
In this example the unit test prepares a scenario that has been checked by hand, so the answer is known in advance. If calling NetAnnualSalary with the known inputs doesn't give the correct answer, then the test fails.
A unit test can check as many scenarios as it likes and take as long as it likes before it's satisfied that the code under test has passed. Unit tests are therefore often run manually after changing a method, to verify that the change didn't create new bugs in the process of fixing old ones or adding features. Unit testing can also be automated, and tests are particularly valuable when set to run automatically every time new code is checked into a repository--this is part of what a development team calls a Build Server.
Unit tests also enable a complementary practice called Test Driven Development (TDD), where instead of writing the code before the test, you write the test first, make sure it fails, then write and debug the method until it passes the test. Or, applied to the above example:
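A sketch of that cycle, reusing the invented salary example: the test is written first, run against a stub so it fails, and only then is the method filled in until the test goes green.

```java
public class TddCycle {
    // Step 2: the first implementation is a deliberate stub, so the test
    // below starts out red:
    //     static double netAnnualSalary(double gross) {
    //         throw new UnsupportedOperationException("not written yet");
    //     }
    // Step 3: the real implementation, written only after seeing the test fail.
    static double netAnnualSalary(double gross) {
        return gross - gross / 5.0;   // flat 20% tax, invented for the example
    }

    // Step 1: the test, written before any real implementation existed.
    public static void main(String[] args) {
        if (netAnnualSalary(50_000.0) != 40_000.0)
            throw new AssertionError("net salary is wrong");
        System.out.println("GREEN");
    }
}
```

Seeing the test fail first matters: it proves the test is actually capable of detecting the bug it was written to catch.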
At first it wasn't clear what the benefits and trade-offs of TDD were, but an empirical study conducted on teams working at Microsoft and IBM[2] found that teams practicing TDD shipped up to 40% fewer bugs in their released code, at the cost of taking up to 35% longer than a team developing the same software without TDD. That neither vindicates TDD nor condemns it, but it tells a product manager what he could get if he had the time to pay for it.
If the whole idea is to leverage redundancy then the groups who author the launch control software for the Space Shuttle have made this an extreme sport. They have two separate isolated teams--working in different parts of the country--who both implement against exactly the same spec but are allowed no cross-communication at all. They plan their design independently and code independently to produce two programs that both do exactly the same thing, but not the same way. Both programs are installed on redundant computers aboard the Shuttle, each receiving the same input and issuing the same commands to control thrusters and flight surfaces. Only one program is allowed to be dominant, but its output is compared against the output of its sister program to make sure they're exactly the same. If there's a deviation then ground control can pick which one "wins" the argument, or abort the mission.
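The mechanic scales down to a toy sketch (both implementations are invented): run two independently written versions of the same spec on the same inputs and demand identical output.

```java
public class TwoVersions {
    // Version A: sums 1..n iteratively.
    static long sumA(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) total += i;
        return total;
    }

    // Version B: same spec, implemented independently via the closed form.
    static long sumB(int n) {
        return (long) n * (n + 1) / 2;
    }

    public static void main(String[] args) {
        // Feed both versions the same inputs and compare, as the redundant
        // flight computers do; any disagreement means at least one has a bug.
        for (int n = 0; n <= 1_000; n++) {
            if (sumA(n) != sumB(n))
                throw new AssertionError("versions disagree at n=" + n);
        }
        System.out.println("versions agree");
    }
}
```

A disagreement doesn't tell you which version is wrong, only that the redundancy has caught an inconsistency worth investigating, which is exactly the arbitration problem ground control faces.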
It doesn't usually get that far, because the two programs are run side by side in simulated launches to discover bugs months before they're ever installed in a spacecraft.
The practice doesn't have to be limited to Lockheed Martin and NASA; I'm sure you'd like that same degree of care put into the software that runs your life support machine. At more mundane levels it's an option whenever the cost of a bug is greater than the cost of having two teams implement the same spec.
The other key difference with the Shuttle group was the deliberate suppression of creativity expressed in code, and the channeling of that creativity into changing and fixing the process. Melvin Conway observed in 1968 that "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations". Or: if your organization had 12 groups, they'd usually design a system with 12 major modules. Furthermore, the quality of the interfaces between those modules will reflect the quality of the communication between employees of each group. Conway's observation was picked up and made popular in The Mythical Man-Month by Fred Brooks.
And since we're busy praising NASA and Lockheed Martin, it's ironic that the best example of organizational communication failure being reflected in software is the Mars Climate Orbiter, which was lost because Lockheed Martin used imperial units of measure while NASA was using metric. Not everyone does it right all the time.
What has begun to make sense to some software houses is that to make the perfect software you have to build a team whose organization and communication channels reflect the design of the software. We've now stepped up a level of abstraction: out of the realm of code and into wetware and meatspace. We have redundancy of meaning in the code, and now we have redundancy of meaning in the people who write the code. The key to propagating that meaning into the organization is to have its members be the engineers of their own organization. To get to the ultimate level of perfection--a level that isn't appropriate for every application, mind--you need to do what NASA did: shut down creativity expressed in code and have it expressed instead in the processes followed to write the code.
The point of the Turing Test was to detect intelligence by how it behaves, and the key to the interrogator's job was to seek consistency in the answers given to his questions. As contestants for the Loebner Prize found, it wasn't enough to hard-code the answers to anticipated questions: the computer had to actually know how to say the same thing in different ways. Redundancy of meaning proves understanding. What we call a bug has to be distinguished from design flaws and misunderstandings of the problem: the program has a bug, but the person who defines the program has a misunderstanding. If you assume that there's no misunderstanding, then bugs are just the difference between what we thought we wanted and what we got.
Redundancy is coded into programs to prove that the program knows what we wanted, and the rest is up to us. If--in spite of the above and every other method proposed to defeat software flaws--we find that our creations still don't do what we wanted, then there's no one to blame but ourselves.