While the "5 rules represent 500 lines of code" idea has appeal, we are often asked, "How can such simple rules address complex problems?" It's a fair question - however much you want to get to the other side of the river, you still need reasonable confidence that the bridge is up to the task.
A certain amount of confidence can be built by investigating the complex examples. Still, one longs for a more fundamental understanding of how complexity is addressed.
This paper draws parallels between rule operation and remarkable biological examples, in a perspective that is both whimsical and serious. In particular, there has been interest in Emergent Behavior, which, as it turns out, has a striking number of parallels with rule operation.
An often-cited example is that of the cathedral termite. No engineer, this unremarkable creature somehow manages to build structures of remarkable complexity (Photo Credit: Wikipedia).
How is this possible? The termites are not engineers, they are not organized in any discernible way, they don't communicate (no boss!), they could not even conceptualize the resultant cathedral. Yet...
Study has revealed that the termites act in concert, governed by a small number of very simple rules:
This elegant simplicity has a few key elements that result in enormous power:
You're probably already seeing the parallels here in business logic:
Filling out our chart:
The critical distinction from traditional programming is how complexity is managed. Here, complexity means the linkages between the rules (derivations, etc.).
In traditional programming, this is managed by the programmer, reflected in how code is ordered. As we have seen, this takes time, and it responds poorly to requirements changes, since the ordering must be redesigned for each change.
The underlying problem is that the number of dependencies between data items is very large. And worse, the problem is combinatorial: the number of possible orderings among 5 interacting agents is 5! = 120, but 6 agents have 6! = 720.
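To make the combinatorial growth concrete, here is a small worked calculation (the "agents" label is just illustrative shorthand for interdependent rules):

```python
import math

# The number of possible execution orderings among n interdependent
# agents grows factorially -- a hand-designed ordering must be
# reconsidered every time an agent is added.
for n in (5, 6, 7):
    print(n, "agents ->", math.factorial(n), "possible orderings")
# 5 agents -> 120 possible orderings
# 6 agents -> 720 possible orderings
# 7 agents -> 5040 possible orderings
```

Adding just one more agent multiplies the orderings the programmer would otherwise have to reason about.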
Such a centralized approach, where the programmer manually controls the interactions, has built-in issues of scalability and adaptation to change. But the declarative approach of rules interacting via their ability to sense and alter their environment scales linearly.
At its worst, Emergent Behavior feels like a bug. Termites can be constructive (build cathedrals) or destructive (ruin your house). So it's reasonable to ask which is true for business logic.
Observe that the derivation rules are not simply pieces of code. In fact, they are the definition of your (derived) attributes: the customer's balance is the sum of the unpaid order totals.
So our rule agents simply maintain these derivation rules (and related rules such as constraints and actions) in the face of a complex set of updates to the attributes on which they depend. In data-processing terms, you get automation for transactions you did not explicitly design, as in our example where declaring the Place Order rules automatically implements the related Use Cases such as Delete Order, Pay Order, Change Item Quantity, and so forth.
The difference is that you don't need to spell these out in a combinatorial series of tests. You simply state the rules, and the transaction logic engine imbues them with constructive emergent behavior.
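The idea can be sketched in a few lines. This is a minimal illustration with hypothetical names (Customer, Order, recalc, update) and not any specific engine's API: one declared derivation - balance = sum of unpaid order totals - is re-fired after every change, so each related Use Case is handled without being separately coded:

```python
class Order:
    def __init__(self, total, paid=False):
        self.total = total
        self.paid = paid

class Customer:
    def __init__(self):
        self.orders = []
        self.balance = 0

def recalc(customer):
    """The declared rule: balance = sum of the unpaid order totals."""
    customer.balance = sum(o.total for o in customer.orders if not o.paid)

def update(customer, change):
    """Tiny stand-in for the engine: apply any change, then re-fire the rule."""
    change(customer)
    recalc(customer)

c = Customer()
update(c, lambda c: c.orders.append(Order(100)))          # Place Order
update(c, lambda c: c.orders.append(Order(50)))           # Place Order
assert c.balance == 150
update(c, lambda c: setattr(c.orders[0], "paid", True))   # Pay Order
assert c.balance == 50
update(c, lambda c: setattr(c.orders[1], "total", 75))    # Change Item Quantity
assert c.balance == 75
update(c, lambda c: c.orders.pop(1))                      # Delete Order
assert c.balance == 0
```

Note that Pay, Change, and Delete were never explicitly programmed: the single declared rule, re-evaluated on every change, covers them all - the "constructive" emergent behavior described above.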