In the first and second posts in this series, I took you through a high-level look at microservices as a phenomenon, and a detailed examination of the issues that we face with our software, respectively. Today, I present the qualitative and quantitative evidence for the claim I made in the first post: that “enterprise software development is broken”.
The qualitative evidence
Anecdotal evidence is not data. However, very large numbers of personal stories from very large numbers of relatively trustworthy sources do constitute a hint that some effect is present. The large number of authors who have been willing to expend professional capital on writing books on the subject of software delivery alone is suggestive. Further anecdotal evidence is found in the volumes of opinionated personal narratives on the subject of best practices for software development in blog posts and discussion forums.
Many detailed theories and hypotheses grounded in personal experience are proposed and vigorously defended, yet few have any real data to support their positions. There are fashions, but no commonly accepted paradigms, no base knowledge. Compare this to traditional engineering, where there are established, fundamental techniques and solutions. This common ground is sufficiently solid to facilitate the establishment of professional organizations, and for these organizations to have credibility as gate-keepers to the right to practice engineering.
No such common ground exists for software development as an engineering discipline. Software developers do not have a shared set of ethics. Our most common offense is to be blatantly over-optimistic, secretly allocating weekends to ensure that we can demonstrate mastery of our craft. We almost never attempt to understand where the real value of what we are building lies, with the result that we waste considerable resources building superfluous or low-value features. We waste further resources enforcing rules to support the various fashionable best practices. We also waste resources coding for its own sake, because we become infatuated with our castles in the sky.
When projects still fail, we exonerate the methodologies because, of course, team members simply executed the method incorrectly. We have all heard the refrain: “If only the proper procedures had been followed, we would have delivered on time”.
Let’s assume for a moment that this refrain is true. That means that all known effective software development methodologies are incredibly fragile; that they work only with near-perfect implementation. This is no basis for an engineering discipline. Engineering requires techniques that are robust on contact with the real world, including the imperfect nature of the human intellect.
Let’s attempt to benchmark ourselves by looking at the practice of project management in a wider engineering context. There, we find far greater degrees of success. No discipline is free from failed projects, of course, but most show a consistent level of project success. Failure to correctly manage a project and deliver close to requirements is considered a professional failing. Now compare this to attitudes in the software industry, where we accept failure as being in the nature of the task. Those associated with failed projects are repeatedly given a clean slate, free to start again.
We have come to accept trial and error. We don’t call it that, of course; we call it refactoring, and promote it as an accepted best practice. This is the equivalent of tearing down a bridge halfway through construction and starting over, because you realized that your design was inadequate. In civil engineering, that’s a firing offense.
Predictable outcomes are a cornerstone of traditional engineering disciplines. Contemporary software development fails on this count.
Apologists claim that software is different because it is so malleable. This malleability, they say, means that we must do design and construction at the same time. I forcefully reject this claim. It is an observation of the current state of affairs, not a defense of it.
The malleability of software does not imply that it cannot be designed. It does not imply that you should design and build at the same time just because you can. Rather, the ineffectiveness of this behavior suggests that the current practices are inadequate.
The quantitative evidence
We do have some observational data from industry surveys. Keep in mind that this data is still considered weak, as there are so many confounding variables and much of the data is self-reported. Nonetheless, some examples are helpful:
- 49% of US federal projects were poorly planned or performing poorly (GAO-08-1051T – United States Government Accountability Office testimony – July 2008).
- Of 600 organizations in 22 countries in a 2005 KPMG study, 49% reported at least one project failure, where formal commitments were not delivered (KPMG, Global IT Project Management Survey, 2005).
- 75% of software project managers expect their projects to ultimately fail (Geneca Industry Survey 2011).
Despite the unreliability of this data, it does give us some uncertainty reduction. In other words, the proportion of software projects that go badly is far higher than is acceptable.
There is also some data to back up my observation that our acceptance of failure in software development is out of step with other engineering disciplines. A 2012 McKinsey and Company comparative survey (Bloch, Blumberg, Laartz, Delivering large-scale IT projects on time, on budget and on value) reports that, in their study group of 5,400 large projects, non-software projects overran by an average of 3.6%, while software projects overran by an average of 33%.
This data suggests that even when traditional engineering projects suffer cost and schedule overruns, they do so to a much lower degree than is the case in software. Software projects fail harder: even when delivered, they are often barely fit for purpose. Traditional engineering at least delivers buildings that stay standing, planes that actually fly, and trains that actually run.
In the absence of sufficient data to support the adoption of new solutions, we need to rely on a more analytical approach. In other words, we need to look at the features of the problem, try to understand why they occur, and develop compensating tactics. In my next post, I will explain and defend the mechanisms that I believe make these tactics effective.