The New Stack and Methodology: Interplay
(This piece — written in collaboration with Bhushan Nigale — is the fifth in a series that explores the evolution of the software technology stack and software development methodologies in the last two decades. In this instalment Bhushan and I examine the interplay between the old and the new worlds, and also look at how the stack and methodologies play together in this evolution)
The first four articles in this series examined the transition from the traditional stack and Waterfall methodology (common two decades ago) to the New Stack we see today in cloud-native products and the Agile/Lean methodologies now common in software development. Those articles looked at the drivers that led to these changes and at their impact; they also discussed some of the key challenges the new stack and methodologies brought in.
Given all this, it’s fair to ask: where does this leave the traditional stack or the Waterfall methodology? Where are they used (or relevant) even today? Do they have a role to play in the future? How do they co-exist with the new stack and methodologies?
This article explores the interplay between old and new, and also how the stack and methodologies play together.
The traditional stack today
The traditional stack, dominant two decades ago, is still widely in use today. It figures mostly in enterprise software products built around the 1990s and deployed ‘on-premise’. Some of these products have been rewritten for the cloud, others have followed the ‘lift and shift’ path to the cloud, but a majority — close to 60 percent [1] — remain where they were originally deployed: in the on-premise data centres maintained by the IT departments of enterprises.
These legacy enterprise products — and thus the traditional stack they are based on — can be expected to stay operational for decades. The reasons for this are many.
Firstly, the large investment (in both hardware and software) that has gone into these systems creates a lot of inertia. Having invested so much in these systems, the natural inclination is to keep them running for a long time.
Next, there’s the tricky matter of switching costs — costs that include not just building or buying new software, but also migration costs, end-user training costs, and so on — which need to be justified: unless there’s a compelling business reason, such transformation projects do not get the budget.
Then there’s the question of skills. Enterprise IT departments are experienced in maintaining and operating the traditional stack, but they lack the skills the new stack demands. Unless there’s a demographic change — which can take decades — this factor will continue to play a role in decisions involving a move to a new architecture.
Ultimately, it is a matter of business priority. These enterprise products are also typically ‘systems of record’, which do not face the same demands — to change fast or scale flexibly — as ‘systems of engagement’ (or, in the B2C world, consumer-facing apps) do. And while they may be mission-critical, these transactional systems are often not seen as strategic: so why touch them if most of the innovation is happening elsewhere anyway? As long as the data from these systems of record can be accessed quickly and used (for AI-related capabilities, for instance), there’s little business need to rebuild these solutions on the new stack.
So these legacy products built on the traditional stack will continue to be in use for the foreseeable future. One important consequence of this is the rise of Robotic Process Automation (RPA) tools in the software industry [2]. These tools make up for deficiencies in legacy software (like missing APIs or fragmented toolsets) and add a layer that further removes the need to modernise legacy solution landscapes among enterprise customers.
Co-existence of the two stacks
The traditional stack and the new one come together in the context of hybrid landscapes. As the name suggests, such a constellation spans products based on different architectures (the traditional and new stacks) and deployment models (on-premise and cloud). In B2B enterprises, such hybrid landscapes are all too common. Legacy products based on the traditional stack come together — in various business scenarios — with newer solutions built using cloud-native architectures.
Such legacy products based on the traditional stack are also being deployed — using virtual machines or containers — in the cloud. This ‘lift and shift’ approach involves moving a legacy application (running on-premise) to a public cloud provider like AWS or Azure at minimal cost and with little to no code change. The advantages of such a move include reduced costs, higher availability, better agility and improved reliability. It can also pave the way for the next step: re-writing the application natively for the cloud.
Modernising the traditional stack (or parts of it)
The traditional stack, as we saw in the first article, was superseded by innovations that led to the new stack. But the software community has continued to work on parts of that old stack, and we now have flavours of some “old” technologies and methods that are well adapted to today’s environment.
Perhaps the most visible example here is the relational database. A couple of decades ago relational databases, having survived the hype wave of object databases, were dominant. But they were hard to scale, which was a key requirement of web-based software catering to a large (and growing) number of users. The SQL-based relational database was designed to run best on a single machine, not a cluster of nodes. This gap paved the way for a new class of databases (later placed under the umbrella of ‘NoSQL databases’) that abandon the relational model and are designed to run on large clusters. But in the last few years, we’ve seen the rise of relational databases designed to work in a distributed cluster. Prominent examples here include Google Cloud Spanner and CockroachDB, solutions which allow one logical database to be distributed across multiple nodes in a cluster. The advantage of distributed SQL databases like these (over the NoSQL ones) is that they are built from the ground up to offer transactional consistency. So you get all the advantages of the relational database plus the scaling benefits a distributed architecture brings.
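To make that last point concrete, here is a minimal sketch of what that transactional consistency looks like from application code. It assumes a CockroachDB cluster reachable over the PostgreSQL wire protocol and the stock psycopg2 driver; the connection string, the accounts table and the account ids are hypothetical placeholders, not taken from any of the products mentioned above.

```python
# A minimal sketch of the transactional consistency a distributed SQL
# database offers. CockroachDB speaks the PostgreSQL wire protocol, so
# the standard psycopg2 driver works; the connection string, table and
# account ids below are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect("postgresql://app_user@db-node-1:26257/bank")


def transfer(src: int, dst: int, amount: int) -> None:
    """Move funds between two accounts atomically.

    Both UPDATEs commit or roll back together, even if the rows live
    on different nodes of the cluster, which is the guarantee most
    NoSQL stores do not give across records.
    """
    with conn:  # commits on success, rolls back on any exception
        with conn.cursor() as cur:
            cur.execute(
                "UPDATE accounts SET balance = balance - %s WHERE id = %s",
                (amount, src),
            )
            cur.execute(
                "UPDATE accounts SET balance = balance + %s WHERE id = %s",
                (amount, dst),
            )


transfer(src=1, dst=2, amount=100)
```

Because the database itself coordinates the commit across nodes, the application code looks exactly as it would against a single-node PostgreSQL instance; that is the appeal of distributed SQL over hand-rolling consistency on top of a NoSQL store.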
The second example is not a technology per se, but more of a construct. The monolith, thought to be neatly buried under the hype of microservices, has been resurrected following well-documented failures that showed the drawbacks of using microservices in contexts where monoliths were more appropriate. Even in contexts where microservices eventually turn out to be the better approach, we’ve learned that it’s prudent to start with the monolith and later break it into microservices. The “modernisation”, in this case, is ideological: the old myth about monoliths was gradually replaced with a new one.
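As an illustration of the ‘monolith first’ idea, here is a small sketch of a modular monolith: one deployable unit whose modules talk to each other only through explicit interfaces, which are the seams along which microservices can later be cut. The module and class names are ours and purely hypothetical.

```python
# A hypothetical modular monolith: everything runs in one process, but
# OrderModule depends only on BillingModule's public interface, so the
# billing module could later be extracted into its own service without
# touching the ordering logic.
from dataclasses import dataclass


@dataclass
class Order:
    order_id: str
    amount: float


class BillingModule:
    """Runs in-process today; a candidate for a separate service later."""

    def charge(self, order: Order) -> bool:
        # Placeholder for real payment logic.
        return order.amount > 0


class OrderModule:
    """Never reaches into billing internals, only its public interface."""

    def __init__(self, billing: BillingModule) -> None:
        self._billing = billing

    def place_order(self, order: Order) -> str:
        if self._billing.charge(order):
            return f"order {order.order_id} confirmed"
        return f"order {order.order_id} rejected"


if __name__ == "__main__":
    orders = OrderModule(BillingModule())
    print(orders.place_order(Order("A-100", 42.0)))
```

If billing ever needs to scale or be released independently, the charge() call is the obvious place to swap an in-process call for a remote one; until then, the team avoids the operational overhead of running many services.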
Beyond these public examples, there have been other proprietary (and thus largely unheard-of) efforts to modernise the traditional stack. At SAP, where I worked previously, efforts were made to optimise the ABAP application server — which has been around for decades — to run better in VMs and containers in the cloud. What was earlier a standalone application server is now part of a ‘multi-cloud’ PaaS: the ABAP runtime is offered as a deployment option within the SAP Business Technology Platform (alongside other runtimes like Cloud Foundry, Kubernetes and Serverless).
Moving towards a hybrid methodology
As is to be expected of a widely adopted methodology originally conceived almost two decades ago, flaws have crept into Agile. The Agile rituals (daily scrums and backlogs) are the easiest aspects of Agile to ‘adopt’, because they don’t require the heavy lifting of internalising its fundamental principle: putting people before processes. Without an appreciation of that essence, the rituals peter out, or even become counter-productive: a ‘Dark Scrum’, where old hierarchies persist instead of the self-organisation around a mission that Agile expects.
The ballooning of the Agile consultancy industry (memorably termed the ‘Agile Industrial Complex’ by Martin Fowler) has led to a variety of flavours being propagated, with elaborate process steps and, as mentioned in the fourth article, a zoo of tools that enforce their own processes. This ‘processification’ of Agile belies the founding principles outlined in the Agile Manifesto. Lastly, if imposed by management without explanation (and the consequent buy-in), the processes might dampen the sheer individual brilliance of gifted developers.
Partly for these reasons, and for all its primacy in ‘born-digital’ companies, Agile hasn’t made headway in traditional software development and maintenance organisations. As this report shows, ~51% of surveyed organisations still use Waterfall, always or sometimes. Some organisations consciously continue to follow the Waterfall methodology because, under certain conditions, it makes sense to.
For instance, the project might be running on fixed costs or timelines, making uncertainty about delivery dates undesirable. Not all organisations or departments have the appetite for expensive, long-running Agile training (an effect of the Agile consulting industry mentioned above) and thus might not have the skills (also owing to a generational gap) to work with Agile. Lastly, for massive, multi-million-budget programs running across several departments, management might insist on a structured approach with defined milestones, oversight through regular reports, and the ability to steer.
In such instances, a ‘hybrid’ methodology comes to the fore. Via a structured process, sometimes involving external agencies, a project team of specialists collects requirements upfront. Typically, this is a small team of cross-functional experts led by an experienced program manager. Through an iterative process, the team then outlines the target state of the program – a transition to a new system, a new suite of products, or an upgrade of a complex system to a new release. After approval from a steering committee – again, drawn from the leadership of the departments involved – teams are set up.
The teams themselves work in a fully Agile mode, with a well-defined release backlog, regular stakeholder interaction, and releases in smaller iterations. Across the program, coordination is usually handled by the program office, with regular updates to the steering committee. As the release, upgrade or cut-over approaches, detailed system acceptance tests are conducted, with a centralised release decision.
This federated approach grants teams the autonomy to work in an Agile mode, while still keeping elements of centralised control over the timelines (usually fixed well in advance) and enforcing across-the-board conditions, such as adherence to guidelines (e.g. use of a particular technology as part of the company strategy).
Conclusion
What we’ve outlined above shows that ample conditions exist for the continuation of the old technology stack and older practices. Situations unique to an organisation further complicate the choice of transitioning to a new technology or practice, and overcoming the status quo sometimes needs actions that have less to do with the merits of a technology or methodology than with the operating constraints decision-makers find themselves battling. Decision-makers need to evaluate their position in the matrix below and weigh the risks of making a transition to the new world.
| Combination | Cost of keeping the status quo | Moving to the new world – considerations |
| --- | --- | --- |
| New technology adopted using old practices | Tension between fast-moving LoBs and traditional IT, potentially giving rise to shadow IT | Retraining of staff on Agile; starting small with pilot projects before a mass switch |
| Traditional systems with a conservative set-up | Inability to respond to changing requirements; generational gap as a younger workforce/management demands transformation | Massive switching costs; resistance to sudden change; regulatory considerations |
| Product teams developing on-premise software | Risk of being seen as legacy; lack of support from the ecosystem | Gradual modernisation of parts of the stack, from low- to high-risk of disruption |
There is a transition underway from the traditional stack towards the new or a hybrid model, and the paths taken depend on how organisations navigate the cost-benefit-risk considerations outlined above. This trend is reflected in the options cloud infrastructure vendors (like AWS, Azure or Google Cloud Platform) offer for modernising the legacy stack, and in the consulting solutions IT service providers offer in this space.
This shift to the new stack and methodology is a generational change, and technological merits alone cannot drive such a change.
Footnotes/References:
[1] A 2020 survey from the Uptime Institute showed that 56% of enterprises run “most of their workloads in corporate data centres — that is enterprise-owned, on-premise facilities.”
[2] According to a 2020 Gartner study, the $1.5 billion market for RPA solutions is expected to grow at double-digit rates until 2024.