
Emergent Behavior

While the "5 rules represent 500 lines of code" idea has appeal, we are often asked: "How can such simple rules address complex problems?"  It's a good question - however much you want to get to the other side of the river, you still need reasonable confidence that the bridge is up to the task.

A certain amount of confidence can be built by investigating the complex examples.  Still, one longs for a more fundamental understanding of how complexity is addressed.

This paper draws parallels between rule operation and some remarkable biological examples, in a perspective that is both whimsical and serious.  In particular, there has been interest in Emergent Behavior, which, as it turns out, has a striking number of parallels with rule operation.


What is Emergent Behavior?

An often-cited example is that of the cathedral termite.  No engineer, this unremarkable creature somehow manages to build structures of astonishing complexity (Photo Credit: Wikipedia).

How is this possible?  The termites are not engineers, they are not organized in any discernible way, and they don't communicate (no boss!); they could not even conceptualize the resultant cathedral.  Yet...

Study has revealed that the termites act in concert, governed by a small number of very simple rules:

1. When you see a rock, move it to a pile.  (Aside: this leaves irresistible termite pheromones on the pile.)

2. If there are no piles, start one.

3. If there are multiple piles, use the one with pheromones.
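As a thought experiment, the three rules can be sketched as a tiny simulation.  The world model below (a bag of loose stones, piles as lists, and pile size standing in for pheromone strength) is an illustrative assumption, not part of the termite study:

```python
# A minimal sketch of the three termite rules.  Pile size stands in
# for pheromone strength; this modeling choice is an assumption.
def termite_step(loose_stones, piles):
    """One termite acts once, following only the three rules."""
    if not loose_stones:
        return
    stone = loose_stones.pop()          # Rule 1: you see a rock, move it
    if not piles:
        piles.append([stone])           # Rule 2: no piles -> start one
    else:
        # Rule 3: prefer the pile with the most pheromones
        target = max(piles, key=len)
        target.append(stone)

loose = list(range(20))                 # 20 loose stones in the world
piles = []
for _ in range(20):                     # 20 independent "termite" actions
    termite_step(loose, piles)

print(len(piles), [len(p) for p in piles])
```

Even though each step knows nothing beyond the three rules, every stone ends up concentrated on a single pile - the "cathedral" emerges from the pheromone preference alone, with no central coordination.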

This elegant simplicity has a few key elements that result in enormous power:

| Key element | The termites |
| --- | --- |
| Multiple agents | the termites |
| Interact with their environment: sense it | see stone; see pile; smell pheromones |
| Interact with their environment: affect it | add stone to pile; (add pheromones) |
| Effect governed by simple rules, relative to the environment | the rules above |

Complexity: rules + environmental feedback loop

The sense/affect cycle is the real key: it sets up a feedback loop where the behavior of one termite affects the behavior of another.  Importantly, this is not because the termites communicate around some grand plan.

The feedback loop is via their ability to respond to - and affect - the environment, subject to a few simple rules.  In particular, the actions of termite-1 affect the environment in a way that impacts the rule-based behavior of termite-2.


Rules exhibit these same characteristics

You're probably already seeing the parallels to business logic:
  • the rules are the agents
  • the environment has two parts: the incoming transaction, matched against the current database state
  • the agent rule is: execute your derivation if any dependent data is changed
Filling out our chart:

| Key element | The termites | The rules |
| --- | --- | --- |
| Multiple agents, acting independently | the termites | the rules |
| Interact with their environment: sense it | see stone; see pile; smell pheromones | incoming transaction; current database state |
| Interact with their environment: affect it | add stone to pile; (add pheromones) | set attribute value |
| Effect governed by simple rules, relative to the environment | the rules above | execute your derivation if any dependent data is changed |
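The right-hand column can be sketched in code.  The class below is a hedged illustration - the names and the dependency mechanism are assumptions, not any engine's actual API - but it shows the agent rule in action: a derivation re-fires whenever any of its dependent data changes:

```python
# A minimal sketch of rules as agents: each derivation declares the
# attributes it depends on, and re-fires whenever one of them changes.
# Class and method names here are illustrative assumptions.
class Row:
    def __init__(self, **attrs):
        object.__setattr__(self, "_attrs", dict(attrs))
        object.__setattr__(self, "_rules", [])   # (target, depends_on, fn)

    def derive(self, target, depends_on, fn):
        self._rules.append((target, depends_on, fn))
        self._attrs[target] = fn(self)           # initial derivation

    def __getattr__(self, name):
        return self._attrs[name]

    def __setattr__(self, name, value):
        self._attrs[name] = value                # affect the environment...
        for target, deps, fn in self._rules:     # ...and the agents sense it
            if name in deps:
                self._attrs[target] = fn(self)

order = Row(qty=2, price=10)
order.derive("total", depends_on={"qty", "price"},
             fn=lambda r: r.qty * r.price)
print(order.total)      # derived on declaration
order.qty = 3           # change dependent data -> rule re-fires
print(order.total)
```

Note that the rule was declared once, yet it responds to any update touching qty or price - the sense/affect loop, in miniature.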


Complexity Management: network vs. centralized

The critical distinction with respect to traditional programming is how complexity is managed.  Here, complexity means the linkages between the rules (derivations, etc.).

In traditional programming, this is managed by the programmer, reflected in how code is ordered.  As we have seen, this takes time, and is very non-responsive to requirements changes since the ordering must be re-designed for each change.

The underlying problem is that the number of dependencies between data is very large.  And worse, the problem grows factorially: 5 interacting agents can be ordered in 5! = 120 ways, while 6 agents have 6! = 720 orderings.
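That growth is easy to verify - the counts of 120 and 720 are the factorials of the number of agents:

```python
import math

# Possible orderings the programmer must reason about when
# sequencing n interdependent derivations by hand.
for n in range(3, 8):
    print(n, "agents:", math.factorial(n), "orderings")
```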

Such a centralized approach, where the programmer manually controls the interactions, has built-in issues of scalability and adaptation to change.  But the declarative approach of rules interacting via their ability to sense and alter their environment scales linearly.


Is Emergent Behavior Predictable?

At its worst, Emergent Behavior feels like a bug.  Termites can be constructive (build cathedrals) or destructive (ruin your house).  So it's reasonable to ask which is true for business logic.

Observe that the derivation rules are not simply pieces of code.  In fact, they are the definition of your (derived) attributes: the customer balance is the sum of the unpaid order totals.

So our rule agents are simply maintaining these derivations (and related rules such as constraints and actions) in the face of a complex set of updates to the attributes on which they depend.  In data processing terms, you are getting automation for transactions you did not explicitly design, as in our example where declaring the Place Order rules automatically implements the related Use Cases such as Delete Order, Pay Order, Change Item Quantity, and so forth.
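A sketch of that claim, with hypothetical names (Customer, recalc, and the order dictionaries are illustrative, not any engine's API): one declared derivation services several distinct transactions:

```python
# One declared rule -- balance is the sum of unpaid order totals --
# maintained across several distinct transactions.  All names here
# are hypothetical, for illustration only.
class Customer:
    def __init__(self):
        self.orders = []        # each order: {"total": ..., "paid": ...}
        self.balance = 0

    def recalc(self):
        # the single derivation rule, re-applied on any dependent change
        self.balance = sum(o["total"] for o in self.orders if not o["paid"])

cust = Customer()

# Place Order: the use case the rule was written for
cust.orders.append({"total": 100, "paid": False}); cust.recalc()
assert cust.balance == 100

# Pay Order: a use case we never explicitly coded
cust.orders[0]["paid"] = True; cust.recalc()
assert cust.balance == 0

# Delete Order: likewise handled by the same rule
cust.orders.pop(); cust.recalc()
assert cust.balance == 0
```

The same derivation covers placing, paying, and deleting an order - none of the latter two were spelled out anywhere.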

The difference is that you don't need to spell these out in a combinatorial explosion of tests.  You simply state the rules, and the transaction logic engine imbues them with constructive emergent behavior.

