Many large organizations spend significant sums identifying, installing, operating, and servicing the large technology platforms that underpin their operations. Every industry has its own well-known platforms and tools for automating critical business functions. Done correctly, such automation promises to improve a firm’s profitability or output. But as more and more business functions become automated, our technology inventories grow denser, and that increased density pushes them further and further away from agility and interoperability.
This hardening of technology assets impedes business responsiveness and drives up operational expenses even as business value declines. Stated differently, an organization’s technology investments suffer an erosion of incremental ROI over time, one that often continues well past the point of diminishing returns and into the realm of negative returns. Many organizations want to untangle these ever-growing technology inventories, and they attempt to do so by implementing some form of Enterprise Architecture. Before discussing the fitness of EA (in its current form) or any other discipline to address the problem, however, we must consider the nature of the problem itself.
If organizations operated in a completely fixed and unchanging environment, then careful upfront design and implementation would naturally yield finely tuned, highly optimized technology platforms. Such platforms would continuously improve efficiency, quality, and throughput. However, such a fixed and unchanging environment is nothing more than a comforting fiction. The real context in which enterprise technology systems must operate is highly dynamic and volatile.
This leads us to a simple truth: we are asking too much of software solutions.
Fundamentally, software is a set of digitized instructions that tells a computer how to respond under various circumstances. It serves to execute human-defined algorithms at high speed. When we introduce a logic machine designed to operate in a fully prescribed and change-resistant environment (an environment that doesn’t exist), we have created a tool with an existential flaw. Built for a non-existent reality of well-defined and predictable inputs, software begins a steady drift into misalignment with reality from the moment it is deployed.
This is what gives us ‘bugs’: unpredictable behaviors that occur when an application encounters input conditions its author did not anticipate. Given the dynamic reality of human organizations and the many forces of change acting upon them, one can expect these bugs to multiply as the application meets more and more unanticipated scenarios. This lack of perfect prediction becomes more pronounced the further into the future the instructions, and the predictions embedded in them, are expected to remain useful and accurate. In this manner, the utility of most software systems degrades in some form over time.
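A minimal sketch makes this concrete. The pricing function and the business change below are invented purely for illustration, not taken from any real system; the point is that the code misbehaves without a single line of it changing.

```python
# Hypothetical pricing routine, written when the business sold exactly
# two product tiers. The author's predictions were accurate on the day
# it was deployed.
def monthly_price(tier: str) -> float:
    if tier == "premium":
        return 25.0
    # At the time of writing, anything that wasn't premium was basic,
    # so this fallback was perfectly safe.
    return 10.0

# Years later, the business launches an "enterprise" tier. The code is
# unchanged, yet reality has drifted away from the assumptions baked
# into its instructions: every enterprise customer is now silently
# billed at the basic rate.
print(monthly_price("enterprise"))  # 10.0 -- a 'bug' is born
```

Note that nothing in the code “broke”; the environment simply produced an input condition the instructions never accounted for.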
Stated another way: no software written and implemented today can be expected to continuously perform without failure over time.
Let’s take this one step further. We’ve already said software is essentially a set of instructions for a computer to interpret. But that is merely an observation on its nature, not its value.
Software does not have any intrinsic value by virtue of its existence.
Much like a plow that never cuts into the earth, a software application has no inherent utility in itself. It derives its utility only in the hands of human beings. When a human uses a software application, they are in effect accelerating a series of decisions in order to reach a valuable outcome sooner. That valuable outcome is the reason the technology exists, and the value of that outcome is what justifies the cost and complexity of accelerating it with software. Thus, the value of software, or of technology more broadly, can only be properly understood in the context of the human-driven outcomes it accelerates. This means that the scope of consideration for evaluating software’s usefulness is not limited to the software itself; it must include the people and the means (processes) that define the context in which value is realized. Some refer to this larger whole as an Information System.
The Information System is the interconnected web of people, processes, and technologies that produces a valuable outcome.
Why this extended treatise on the nature of software? Because enterprise technology platforms are aggregations of software. As systems of software applications, they weave a multiplicative web of growing uncertainty and unpredictability over time. An Information System, the more intricate and complicated big sibling of a software application, is subject to all the growth in unpredictable states that besets software, and compounds it with all of the unpredictable states that beset the people and the processes completing the system. This progression and proliferation of unpredictable states puts enormous pressure on an Information System, pressure that begins almost as soon as the system is implemented. It often brings unintended consequences, but it should never be unexpected: continuous and unanticipated change in the system states surrounding an Information System is the underlying problem. We can borrow a term from the discipline of thermodynamics to name this problem of value erosion through increasing unpredictability: entropy.
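A rough back-of-the-envelope sketch shows why this uncertainty is multiplicative rather than additive. The components and their state counts below are invented purely for illustration:

```python
# Hypothetical illustration: the number of distinct states an
# integrated system can occupy is the product, not the sum, of the
# states of its parts.
from math import prod

# Invented numbers: distinct operating states per component.
component_states = {
    "billing_app": 50,
    "crm_platform": 80,
    "order_process": 30,  # a human-run process, not just software
    "staff_roles": 12,    # the people side of the Information System
}

additive = sum(component_states.values())          # 172 states
multiplicative = prod(component_states.values())   # 1,440,000 states

print(f"Components in isolation: {additive} states")
print(f"As one Information System: {multiplicative:,} combined states")
```

Even with these modest, assumed numbers, the combined state space dwarfs what any upfront design can enumerate, which is why unpredictable states proliferate as more components, processes, and people are integrated.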
I will pick up the problem of technological entropy in my next post to explore the question:
Is Enterprise Architecture prepared to solve this problem?