
System Architecture Tips

A short introduction to systems and the terms used to describe them

 Simple systems are made of inputs, outputs and stocks. A bathtub is a simple system: the taps are the input, the drain is the output, and the water in the tub is the stock. If you plug the output and turn on the input, the level of the stock will increase. If you unplug the output and shut off the input, the stock level will decrease. If you turn on the input so that it equals the rate of output, the level of the stock will stay the same and the system is said to have reached equilibrium. A baseball is also a simple system: throwing it adds kinetic energy to the stock--the ball's momentum--and the output is energy lost to friction or transferred to another object (a catcher's mitt).
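
To make the terms concrete, here is a minimal sketch in Python (the function name, rates, and step counts are invented purely for illustration): the stock changes each step by inflow minus outflow, and when the two rates match the level stops changing.

```python
# A toy bathtub: the stock changes each step by inflow minus outflow.
def simulate(stock, inflow, outflow, steps):
    for _ in range(steps):
        stock = max(0.0, stock + inflow - outflow)
    return stock

print(simulate(stock=10.0, inflow=2.0, outflow=1.0, steps=5))  # 15.0: input > output, stock rises
print(simulate(stock=10.0, inflow=0.0, outflow=1.0, steps=5))  # 5.0: taps off, drain open, stock falls
print(simulate(stock=10.0, inflow=1.0, outflow=1.0, steps=5))  # 10.0: rates equal, equilibrium
```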

 A feedback loop is something that changes either the input or the output according to the level of stock. Say that you're in the bathtub and there's a leak, or evaporation, or whatever. When the level goes below your knees you turn on the input. When the level rises to your chin you shut off the input. This particular kind of loop is called a negative feedback loop because an increase in the stock's level results in a decrease of the input, whereas a decrease in the level leads to an increase of input. Negative feedback loops are used to keep systems in equilibrium. The cruise control on your car and the thermostat on your house heating system are both negative feedback loops.
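
The bathtub-with-a-leak loop, sketched the same way (the thresholds and rates are made up): the tap is switched on or off according to the level of the stock, which keeps the level bounded between the two thresholds.

```python
# Negative feedback: the input (the tap) is controlled by the level of the stock.
KNEES, CHIN = 20.0, 60.0     # turn the tap on below the knees, off at the chin
LEAK, TAP_RATE = 1.0, 3.0

level, tap_on = 40.0, False
for _ in range(200):
    if level <= KNEES:
        tap_on = True        # level too low: increase the input
    elif level >= CHIN:
        tap_on = False       # level too high: cut the input
    level += (TAP_RATE if tap_on else 0.0) - LEAK

print(KNEES <= level <= CHIN)  # True: the loop holds the stock between the thresholds
```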

 A savings account that earns interest is a positive feedback loop. The more money is in the account, the more it earns in interest, which increases the level of the stock, which increases the amount of input. Positive feedback loops can lead to runaway systems. Chernobyl was a positive feedback loop because the nuclear reaction increased as more of the cooling water boiled away.
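
And the savings account (the balance and rate are arbitrary): because the input is a fraction of the stock itself, the stock grows faster the larger it gets.

```python
# Positive feedback: the input (interest) is proportional to the stock (the balance).
balance, rate = 1000.0, 0.05
for year in range(1, 31):
    balance += balance * rate            # this year's input depends on the current stock
    if year in (10, 20, 30):
        print(year, round(balance, 2))   # 10 1628.89, 20 2653.3, 30 4321.94
```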

 Because the level of the stock changes over time rather than instantaneously, stocks are buffers. Your checking account balance is a stock that acts as a buffer, because it enables you to spend at a different rate and frequency than you get paid. The Earth is a thermodynamic buffer, storing heat in the summer and discharging it in the winter.

 If the input to your bathtub is greater than the output, and this condition doesn't change, then the tub will overflow and the system is said to have entered a failure mode. This is also the case when the output of your checking account exceeds the input for too long. Failure modes pass the buck to another system, e.g. a bucket and mop, or an overdraft line of credit.

 Failure modes are one of the ways that systems interact with each other. The other way is when the output of one system is connected to the input of another. Your bathtub's output is the input for a sewer system. The bathtub's input is the output of a house plumbing system. Your checking account's input is the output of a payroll system, and so on.

System Design Tips

  1. Resist creating new systems. See if an old system will suffice
    Old systems have already been debugged, and new systems will always bring new problems

    It could be said that in software development we're now at the point where most of our effort goes into replacing old systems, so does this advice still stand? It does, but only if the new system won't solve new problems and the aging of the old system hasn't created new problems. When we say "if it ain't broke, don't fix it," we don't mean you should ignore things like new business opportunities, new media, changing culture, etc. If you do plunge into creating a new system, focus on building modular pieces that perform the minimum tasks necessary and that interact with other sub-systems. You'll give your descendants more options: they can stick with the old parts that still work and replace only the few pieces they must.

  2. Don't try to fix a broken system by adding more parts
    Don't fix bad data with more data, don't throw good money after bad, don't put perfume on a pig, and don't rig the CD tray to push the reset button

  3. Eliminate points of self-reference
    Avoid systems that modify their own source code or data structures (including saving SQL in table cells), that give themselves their own "Partner ID", that invoke their own API in the process of providing that API, etc. These risk creating a positive feedback loop that makes the system unstable. You are not building an AI. Distinguish between hierarchical/stack designs and these strange loops

  4. Design a system to run downhill
    Do you need to store data in the most efficient format, or the most widely understood? Should you validate every input before you save it, or only when it's time to act on it? Some advice: don't take a shower in a basement apartment during a power blackout, don't enter a building where the fire escapes open inward, and don't design an email client that requires the letter to be already written

  5. Do what you can with what you're given
    Match the degree of output with the degree of input. Must the user fill every field of a Form 27B/6 before the system can do anything, or will the system provide basic behavior for minimal input? If some resources require secure validation, can you at least provide unclassified results for unauthenticated requests? 

  6. Don't reject conflicting facts, store and report them
    Say your system gets a notification that a package has shipped, but the order is marked as cancelled. If you reject the notification because it conflicts with internal state, then you'll lose information. If you think it's okay because you fired off an error report to somebody, what happens when that report gets lost? The system's state will never agree with reality. Store conflicting data and provide tools for discovering it and fixing it (a sketch of this follows the list below). There is no such thing as a single version of the truth

  7. Automate the mundane, not the exceptional (Don't Build Airtight Systems)
    It should be possible to do everything the system's automation does by using a manual tool, even if it takes longer and is more tedious. When you see which parts are long and tedious, simplify them first, then automate them and then focus on making the manual tools easier to use

  8. Every input to the system should be logged in a way that can be played back like a tape to recreate the stock
    This makes it trivial to replicate a problem state exactly in a debugging environment (a sketch follows the list below). Use queues, transaction logs, and other machine-readable records that can easily be put back into order of entry, or that don't depend on order at all. And if you have the discipline to take it this far, that "tape" should include every line of executable code: have a source-control system, a build server, and a script that can rebuild Rome in a day from its blueprints. If you have to fix an installation running on an earlier version, you want to be able to plug in the version number and play back the tape of input to get a laboratory copy of the specimen that went rogue

  9. Never save state in a file format that only one program can understand
    You need to be able to inspect, tweak and re-process data with a variety of tools, not just the original document creator. This is why XML is so popular, in spite of itself

  10. Be zealous with the scalpel
    Customer databases, product catalogs, accounting, inventory, Orders-to-Cash (order management) and Point of Sale should all be cut out into separate systems with separate stores, developed independently. Make them talk to each other SOA-style. If it looks like it's a separate concern, then it is. If it turns out you were wrong, and the service shouldn't stand on its own, then at least that component is already modularized and easy to integrate into wherever it belongs

  11. Choosing the right name is everything
    Having intuitive, accurate, descriptive names for a system's parts is as important as what those parts do. But there's more: choosing a name can make the thing. The wrong name can lead you to develop the wrong solution by making you picture the wrong idea

  12. Know the difference between experience and speculation
    Focusing on the long term implies that you can predict the future. Engineering for speculated needs, rather than needs you know about from experience, is hubris: systems invent their own needs and you cannot know what they'll be until the system needs them. It is better to pay the cost of refactoring and re-engineering later than it is to pay for the wrong optimizations now

  13. Do not turn the system's artifacts into assets
    It must be possible to jettison all or part of a system's pieces without incurring a loss. Do not try to convert any part of it into a product. Your people--after they have come to understand the problem and ways of solving it--are the real asset
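
A couple of the tips above are easier to see in code. First, a sketch of tip 6 (the order, event, and conflict structures here are invented, not any particular system's API): a shipment notification that conflicts with a cancelled order gets recorded and flagged instead of rejected.

```python
# Tip 6, sketched: store the conflicting fact and surface it; don't reject it.
orders = {"42": {"status": "cancelled"}}
events, conflicts = [], []

def record_shipment(order_id):
    event = {"order_id": order_id, "type": "shipped"}
    events.append(event)                              # always keep the fact
    if orders.get(order_id, {}).get("status") == "cancelled":
        conflicts.append(event)                       # flag the disagreement for a repair tool

record_shipment("42")
print(len(events), len(conflicts))                    # 1 1 -- nothing lost, conflict visible
```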
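
And a sketch of tip 8's tape (the log file name and record format are invented): every input is appended to a machine-readable log, and the stock can be rebuilt in a debugging environment by replaying that log through the same code path.

```python
# Tip 8, sketched: append every input to a log, rebuild the stock by playing it back.
import json

LOG = "inputs.log"
open(LOG, "w").close()                                # start with an empty tape for this demo

def apply_entry(entry, state):
    state[entry["account"]] = state.get(entry["account"], 0) + entry["amount"]

def accept(entry, state):
    with open(LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")             # the "tape"
    apply_entry(entry, state)

def replay():
    state = {}
    with open(LOG) as f:
        for line in f:
            apply_entry(json.loads(line), state)      # same code path as live input
    return state                                      # a laboratory copy of the stock

live = {}
accept({"account": "checking", "amount": 100}, live)
accept({"account": "checking", "amount": -30}, live)
print(live == replay())                               # True
```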

Observations About Systems

With credit (or apologies) to John Gall
  1. Systems in general work poorly, or not at all
    Someone who knows what they're doing will always beat an organization (or an ERP system) that's following fixed rules

  2. All complex systems that work invariably evolved from simple systems that worked
    Never build a large and complex system from scratch. Never assume you can make a broken system work by making it more complex

  3. Complex systems always present unexpected behavior
    This is the reason why you must not build an airtight system. If the system is sealed, then you won't be able to deal with the unexpected

  4. A "complex" system has 2 or more stocks
    Think of complexity as an exponential effect. Each new part increases complexity a little, but it's the stocks that double it. A database is simple. A database with a cache is complex (a short demonstration follows this list)

  5. New systems that replace old systems always bring new problems
    Supporting Unicode in URLs made web addresses friendlier to other languages, but gave scammers the ability to spoof "paypal.com" by registering the domain name using a Cyrillic letter "a" (see the sketch after this list)

  6. Fail-Safe systems fail by failing to fail safe
    The fail-safe valve on the top of the reactor vessel at Three Mile Island failed in the safe (open) state, but the sensor which told the control room it was open failed by reporting that it was shut. Hilarity ensued

  7. The divergence between what a system outputs and what it was intended to output increases with complexity
    Supermarket Breakfast Cereal is not cereal, it's baked pieces of injection-molded paste. Supermarket Bread is not bread, it's baked starch foam. A fast-food milkshake is not a milk shake, it's whipped corn sugar. Cinnamon isn't cinnamon, it's a cheaper spice called Cinnamomum burmannii. Do not assume that just because the System calls it a "Table", or a "Transaction", or a "Customer", or a "Required Ship Date", that it actually is one

  8. Complex systems tend to oppose their own intended function (Le Chatelier's Principle)
    The widespread use of antibiotics has resulted in the creation of new antibiotic-resistant germs. Nasal sprays, when overused, cause a rebound effect that's worse than the original stuffy nose. Your accounting system will inevitably make it impossible to record certain kinds of transactions

  9. Reality, to a system, is whatever is reported to it
    Don't pass data through an interpretation phase before presenting it to the system. Give the system tools to interpret data after it's been encoded and stored, and don't change that data under its nose without telling it why at the same time

  10. Systems attract Systems People
    A Systems Person won't see how absurd a procedure is as long as it happens to agree with the rules of the System. When a person sitting within arm's reach of you refuses to answer a question until you've put it through the ticketing system, then you are sitting next to a Systems Person. The IRS still makes people mail them checks for as little as $0.10, even though it costs them ten times that much to pay someone to open the envelope and process it

  11. All systems are an attempt to create an artificial intelligence
    The human brain works because it throws away most of its work. Systems fail because they never throw away any of their work
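
Two of the observations above can be shown in a few lines. Observation 4 (the dictionaries stand in for a real database and its cache): once there are two stocks, they can disagree.

```python
# Observation 4, sketched: a database plus a cache is two stocks, and two stocks can diverge.
database = {"price": 10}
cache = dict(database)            # second stock: a copy of the first

database["price"] = 12            # the database is updated...
print(cache["price"])             # ...but the cache still answers 10
```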
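
And observation 5: a Cyrillic "а" (U+0430) renders just like a Latin "a", but the two domain names are different strings.

```python
# Observation 5, sketched: a homograph spoof using the Cyrillic letter "а" (U+0430).
spoof = "p\u0430ypal.com"                       # looks like paypal.com when rendered
print(spoof, spoof == "paypal.com", sep="  ")   # pаypal.com  False
```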

Further Reading

  • Thinking in Systems, by Donella H. Meadows. The nuts and bolts of system theory in easily digested form

  • The Systems Bible, by John Gall. This is one of those books you'd steal from school. If you liked the horrible examples of systems gone awry that I used above, then you'll nom on this book like it was breakfast, lunch and dinner. While the textbooks can teach you the theory, this book tells you what it all means