The New Stack: Impact
(This piece is the third in a series that explores the evolution of the software technology stack and software development methodologies over the last two decades. It examines the first- and second-order effects of the new stack and explores the challenges this stack has given rise to.)
The first article in this series began with an outline of the “traditional” technology stack that was common in the early 2000s. It then examined how the internet, mobile, and cloud revolutions exposed the limitations of this stack, deficiencies that led to the new stack we see today. The article outlined the key characteristics of the new stack and showed how those traits solved problems the traditional stack could not.
The stack today looks very different from the one we saw two decades ago. It consists of small, loosely-coupled (and mostly open-source) pieces that are distributed over a network and communicate using APIs. These aspects — the breakdown of the stack into smaller components, the ubiquity of APIs, the widespread adoption of open-source, and the distributed architecture — have had a huge impact in the last decade or so. This article will look at these consequences, both positive and negative.
First-order effects
Perhaps the most important consequences of this shift from the traditional stack to the new one have been the creation of a software supply chain and an API economy.
With the traditional stack, it was common for vendors to build most parts of the stack themselves. Vertical integration was seen as a competitive advantage, and software companies like Oracle even acquired hardware vendors (like Sun Microsystems) to offer the full stack, from infrastructure to user interface. And it was common for enterprise consumers to go to a small set of vendors to meet their software needs.
What we see today — thanks to the new stack that leans towards single-purpose solutions — is a best-of-breed approach to constructing the stack. Vendors (or open-source projects) offer specialised solutions or frameworks across the stack and across different stages of the software lifecycle [1]. The entire supply chain of software — from planning and development to delivery and operations — can now be composed of tools from niche vendors or open-source offerings [2]. This trend highlights the growing maturity of the software industry: we’ve gone from a model where most parts of the solution come from one vendor (or a few vendors) to a model where a rich ecosystem of vendors powers the entire software supply chain.
This explosion of choice (and the explosion of start-ups offering this choice) is one feature of the API economy. Several capabilities have come together to make this possible: standardised communication protocols, easier ways to publish and consume APIs, tools to manage the API lifecycle, among others. APIs are now the main building blocks of solutions built on the new stack.
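To make this concrete, here is a minimal sketch of consuming one such building block. The endpoint, credential, and response shape below are hypothetical; the point is that a single authenticated HTTPS call now stands in for what once required running your own infrastructure (in this case, a mail server).

```python
import requests

# Hypothetical third-party email API: one authenticated HTTPS call
# replaces running and operating a mail server of your own.
API_URL = "https://api.example-mail.com/v1/send"  # placeholder endpoint
API_KEY = "sk_test_123"                           # placeholder credential

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "to": "customer@example.com",
        "subject": "Welcome!",
        "body": "Thanks for signing up.",
    },
    timeout=10,
)
response.raise_for_status()
print(response.json())  # assumed response shape, e.g. {"status": "queued"}
```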
In parallel with this broadening of the tools ecosystem, we’ve seen the rise of software platforms. By offering ‘opinionated’ ways to build solutions, platforms offer a way out of this conundrum of choice. Platforms also provide a wide array of integrations that can grow over time, keeping pace with the growing number of vendors offering tools in each domain. As one provider put it, “a platform creates a stable center of gravity for your tech stack.”
The widespread availability of APIs has raised the level of abstraction used to build software. The software primitives we use today make it a lot cheaper and faster to build and manage complex software solutions. The emergence of low-code and no-code tools is another manifestation of the higher level of abstraction available today via APIs. Entry barriers for developers are now significantly lower than they were with the traditional stack. And a new class of software creators has emerged, driving the so-called ‘citizen-developer’ movement.
This API revolution has also led to more software using software. Driven by the need for more automation, software has taken over large parts of activities previously done by humans. These days it is common to see software tools setting up server farms (via infrastructure-as-code solutions like Terraform), bots automating complex workflows (via Robotic Process Automation solutions like UiPath), and even bots orchestrating other bots [3]. The new stack, in contrast to the traditional one, has far less need for humans to do such orchestration.
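Terraform describes such setups declaratively in its own configuration language (HCL). To keep the examples in this article in one language, here is a rough Python sketch of the same infrastructure-as-code idea, using Pulumi’s AWS SDK; the resource name and machine image id are placeholders.

```python
import pulumi
import pulumi_aws as aws

# Declarative infrastructure-as-code: describe the desired state and let
# the engine create, update, or destroy cloud resources to match it.
server = aws.ec2.Instance(
    "web-server",                  # logical resource name (placeholder)
    ami="ami-0abcdef1234567890",   # placeholder machine image id
    instance_type="t3.micro",
    tags={"Name": "web-server"},
)

# The public IP exists only after the cloud provider allocates it.
pulumi.export("public_ip", server.public_ip)
```

Such a program is executed by the Pulumi engine (via “pulumi up”), which computes the difference between declared and actual state; that diffing is precisely what lets software, rather than humans, keep server farms in shape.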
The API economy has enabled the growth of marketplaces where these APIs (and their underlying software components) are published and consumed. Efficiency gains driven by the new stack (like the zero marginal cost of adding a new user) also enable consumers to try out these components before they buy them. Such software marketplaces — and their adoption models — were unthinkable two decades ago, given the architecture of the traditional stack.
Second-order effects
The new stack has had some second-order effects that are worth examining.
Compared to the situation a couple of decades ago, today we see the demand for more specialised skills across the stack. Apart from the obvious specialisation in front-ends and back-ends, the importance of understanding how to build and operate distributed systems has grown significantly; what was once a mostly academic pursuit is now at the heart of the software architect’s practice. On the flip side, all this specialisation has also increased the value of people who can work on the entire stack. ‘Full-Stack Developer’ was a meaningless phrase two decades ago, but these days it’s a skill that carries weight.
Site Reliability Engineers (SREs) and DevOps specialists are in high demand to manage the operational complexity of the new stack. These disciplines, non-existent two decades ago, are the result of new practices that have evolved in the last decade or so — a topic that will be covered in a separate article on engineering practices.
In Product Management, the new generation of internet/cloud/mobile applications has led to a shift towards relying on data and quantitative methods of evaluating impact. Measuring usage/adoption, defining metrics and tracking them are practices that are commonplace now.
Another impact has been the new types of applications that the new stack has enabled.
As the volume of data gathered increased over the last two decades, we saw the rise of machine learning techniques that took advantage of Big Data. Without the distributed architecture of the new stack (and the economies of scale it led to), it’s hard to imagine machine learning reaching the maturity it has today. Blockchain is another application type enabled by the distributed nature of today’s technology stack.
It’s easy to miss the steps that led to these technologies, the factors that made them effective. While the drivers that led to the new stack tell one story (of how certain kinds of applications drove the change in architecture), the consequences tell another: of how the new architecture in turn led to new types of applications. Machine learning and Blockchain were not on the minds of the engineers who adopted a distributed approach for internet-based applications, but that approach is what made these new technologies possible. Distributed storage techniques and the compute power we have today in the cloud are key enablers for both technologies.
The new stack has also driven the rise of ubiquitous computing. By unshackling itself from its ‘on-premise’ roots, the new stack paved the way for the stack — in its different forms — to be present ‘everywhere’. The spread that began with mobile has now reached every imaginable physical device that needs to be controlled.
Challenges
The new stack solved many of the problems the traditional one could not, but it also created new ones. Most of these were the result of the split of the stack into small pieces across the network: in other words, the challenges of building and managing distributed systems.
Integration, which was a challenge even with the traditional stack, grew more complex with the new stack. Business processes that were earlier running within one application are now spread across multiple services in the network. These services can come from multiple vendors, and run in different “clouds”. In such a setup, the first challenge is to wire together these independent services into a coherent business process [4]. Next, getting those business processes to reach the same level of consistency as before is a lot harder; transactional data, now spread across multiple databases, brings up new challenges in data consistency and aggregation. Managing master data — also spread across different services in the network — becomes harder too.
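A small sketch illustrates the problem. The two services, their routes, and their response fields below are invented for illustration; the crux is that no single transaction spans their separate databases, so failure handling has to be explicit.

```python
import requests

# Hypothetical internal services, each owning its own database.
ORDERS = "https://orders.internal.example.com"
PAYMENTS = "https://payments.internal.example.com"

def place_order(cart: dict) -> dict:
    """Wire two independent services into one business process."""
    resp = requests.post(f"{ORDERS}/orders", json=cart, timeout=5)
    resp.raise_for_status()
    order = resp.json()  # assumed to contain "id" and "total"

    try:
        requests.post(
            f"{PAYMENTS}/charges",
            json={"order_id": order["id"], "amount": order["total"]},
            timeout=5,
        ).raise_for_status()
    except requests.RequestException:
        # No shared transaction spans the two databases: if the charge
        # fails we must compensate explicitly (cancel the order), where a
        # monolith could simply roll back one database transaction.
        requests.delete(f"{ORDERS}/orders/{order['id']}", timeout=5)
        raise
    return order
```

Patterns like sagas and outbox queues exist precisely to make this kind of compensation reliable; none of them were needed when the whole process lived inside one application.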
Operations is another major challenge with the new stack. Distributed systems are not easy to understand, observe, and manage — there are many points of failure, and the monitoring data comes from heterogeneous sources, which makes it hard to find the root cause of an issue. Microservices complicate this picture further: what was earlier one (technical) process in a monolithic application is now probably a dozen processes that need to be observed and managed. The increase in the number of abstractions in the stack has also driven up complexity: from an application running inside a virtual machine, we have evolved to a collection of services running in containers, managed by a container orchestration tool like Kubernetes, and wired together via a service mesh like Istio. The growth of the ‘observability’ tool ecosystem — with dozens of tools to monitor cloud-native environments [5] — shows how much more complex (and important) this space has become. Things were a lot simpler with the traditional stack.
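As a taste of what observing even one of these dozen processes involves, here is a minimal sketch using the official Python client for Prometheus (one of the tools listed in [5]); the metric names and the simulated work are invented for illustration.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Each microservice process must expose its own telemetry for the
# monitoring system to scrape and correlate with the others.
REQUESTS = Counter("orders_requests_total", "Requests handled")
LATENCY = Histogram("orders_request_seconds", "Request latency in seconds")

@LATENCY.time()  # records how long each call takes
def handle_request() -> None:
    REQUESTS.inc()
    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at :8000/metrics
    while True:
        handle_request()
```

Multiply this by every service, add logs and traces from heterogeneous sources, and the scale of the problem the observability vendors are tackling becomes clear.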
Security (and in particular Identity and Access Management) is another key challenge the new stack brings up. Earlier, with applications on the traditional stack, there were a small number of resources (applications, data) running on a few servers and used by a (relatively) small number of users. This has now ballooned into a complex matrix spanning a large number of resources, distributed over clusters of servers, and used by a vast number of users (at internet scale). The resulting whitespace has led to the rise of firms like Okta and SailPoint that have taken up the identity and security challenge in the cloud.
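One common pattern for taming this matrix is token-based access control: a central identity provider issues signed tokens, and each service validates them locally. Here is a minimal sketch using the PyJWT library; the issuer, audience, key, and scope convention are assumptions for illustration, not any particular vendor’s API.

```python
import jwt  # the PyJWT library

# Assumed identity-provider details; in practice these would come from
# the provider's discovery document and published key set.
ISSUER = "https://id.example.com/"
AUDIENCE = "https://api.example.com/"
PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"

def authorize(token: str, required_scope: str) -> dict:
    """Validate a signed bearer token and check one coarse permission."""
    claims = jwt.decode(
        token,
        PUBLIC_KEY,
        algorithms=["RS256"],  # pin the algorithm; never trust the header
        issuer=ISSUER,
        audience=AUDIENCE,
    )  # raises jwt.InvalidTokenError if expired, forged, or misdirected
    if required_scope not in claims.get("scope", "").split():
        raise PermissionError(f"missing scope: {required_scope}")
    return claims
```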
Conclusion
This tour reveals that the comparison between the traditional stack and the new one isn’t really black and white. While the new stack ushered in a lot of benefits and changed the IT industry landscape, it has also brought with it several challenges that we are still grappling with.
No wonder, then, that the traditional stack — or many elements of it — is still around and not going away anytime soon. In fact, some elements of the traditional stack — like the relational database — are making a comeback in this new world of distributed systems. Will the traditional stack be able to revive itself? In what contexts is it still relevant?
In article five of this series, we will take a closer look at these aspects and examine the interplay between the traditional and new stack.
Footnotes
[1] Examples across the stack include Docker and Kubernetes at the infrastructure layer; MongoDB, PostgreSQL, and Redis at the database layer; Spring Boot at the application layer; and AngularJS or React at the UI layer. A zoo of tools has also emerged for managing the lifecycle of cloud-native applications: Terraform for infrastructure setup, Jira for issue tracking, GitHub for versioning, the ELK stack for observability, StatusPage for incident communication, and so on.
[2] It is striking to note that even a capability like publishing release notes is now a small domain in itself, with multiple start-ups attracting VC funding. Examples include AnnounceKit, ReleaseNotes, Noticeable.
[3] This level of automation is giving rise to segments like B2R2C — Business to Robot to Consumer — where bots make buying decisions for consumers. Now marketing folks need to target bots, not just humans!
[4] Tools like Zapier, IFTTT, Tonkean, etc. allow the wiring of a host of other tools into a complex business workflow.
[5] Examples include Dynatrace, New Relic, Datadog, Splunk, Prometheus, Honeycomb, the ELK stack — the list can go on.