The utilitarian computing landscape has changed dramatically in just a few short years. It continues to change, but let’s focus on those recent few years to keep the scope of this retrospective manageable.
Compute consumption is now firmly centred on the cloud. Going further, we can’t ignore the growing trend of consuming cloud resources with a very specific purpose in mind. It’s no longer as simple as choosing the cloud for compute; now there are specific enterprise features to consider and plan for. This new model of enterprise cloud consumption is driving a real shift of power, giving developers direct access to capabilities in the interconnected network of virtual machines we take for granted and simply call ‘cloud’.
Not only does this paradigm shift bring speed and agility to development teams, it also takes some of the onus of “what to use” away from management and director levels. This is a good thing. In fact, it’s a positive influence that lets businesses and people build in open source with less worry about the underlying infrastructure. This shouldn’t scare the ops teams; it should make them more confident, through practices like Infrastructure-as-Code and Infrastructure-as-a-Service. These practices can bring speed and agility to every department that consumes and collaborates with technology.
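As a concrete (and deliberately tiny) sketch of what Infrastructure-as-Code looks like in practice, here is a hypothetical Terraform snippet that declares a single object-storage bucket on GCP. The project ID, bucket name, and region are placeholder assumptions, not values from any real environment:

```hcl
# Minimal Infrastructure-as-Code sketch (Terraform, Google provider).
# All names and IDs below are placeholders.
provider "google" {
  project = "example-project-id"
  region  = "us-central1"
}

# Declaring the bucket in code makes it reviewable, versionable, and
# reproducible -- the properties that give ops teams confidence.
resource "google_storage_bucket" "demo" {
  name     = "example-demo-bucket"
  location = "US"
}
```

The point isn’t the bucket itself; it’s that the desired state lives in a file that can be code-reviewed, version-controlled, and re-applied identically, which is exactly what lets operations teams trust self-service infrastructure.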
At the risk of sounding like clickbait, I’ll state my case: organizations should be starting, or well on their way through, a full benefits analysis of cloud opportunities, even where no opportunity appears on the surface. Consider the trend of application owners wanting a container platform: the simple existence of Google’s GKE (Kubernetes as a service from Google on GCP) offers a great way to jump into containers for any business that wants the gains container workloads can bring. Nearly every major cloud provider now offers a Kubernetes platform, if that isn’t a big enough hint about where the industry is focused. While we’re at it, shifting responsibility for storage from infra teams to project teams frees operations teams to take on more modernization projects. Project teams can manage their own already edge-cached object storage, for example.
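To make the container-platform idea concrete, here is a minimal, hedged sketch of the kind of workload definition a project team would hand to GKE (or any managed Kubernetes). The application name and image are placeholders chosen for illustration:

```yaml
# Minimal Kubernetes Deployment manifest -- a sketch of a workload
# a project team might run on a managed Kubernetes service like GKE.
# Names and the container image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo
spec:
  replicas: 3                  # the platform keeps three copies running
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
        - name: web
          image: nginx:1.25    # placeholder image
          ports:
            - containerPort: 80
```

With a managed service, everything below this manifest (nodes, control plane, upgrades) is the provider’s problem, which is precisely the gain the article argues for.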
Earlier approaches to enterprise IT consumption were tied to centralized data centres, where everything was owned by one IT department. Over time, segmented IT became the norm: a storage team, a network team, a server team, a dev team. We know these well as ‘silos’ (I shudder at the word). Cloud has vastly changed this landscape. A single project team can be the sole owner of its entire IT infrastructure with as little as one credit card and a laptop. Drop in some code to execute, and that same team is up and running with a fully capable, plumbed, replicated, load-balanced, backed-up, multi-regional web application.
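As one small, hedged illustration of how little code the “load-balanced” part of that claim requires on a managed Kubernetes platform, this manifest asks the cloud provider to provision a public load balancer in front of a set of application pods (the names and label selector are placeholders):

```yaml
# Sketch: exposing an application behind a cloud load balancer.
# On a managed platform, "type: LoadBalancer" causes the provider to
# provision and plumb the external load balancer automatically.
# Names and labels are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: web-demo
spec:
  type: LoadBalancer
  selector:
    app: web-demo        # routes to pods carrying this label
  ports:
    - port: 80
      targetPort: 80
```

A decade ago this was a hardware purchase and a network-team ticket; today it is roughly a dozen lines in version control.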
Gone are the days when the barrier to entry for getting complex applications in front of a worldwide internet audience was a six-figure project cost. This is the shift in power. This is the future of infrastructure, where the use case fits. There will likely always be workloads that simply need their own hardware, but it has become commonplace for businesses (and nearly universal for Arctiq architectures) to be destined purely for this not-so-new cloud model.
At the same time, we don’t want to risk alienating operations teams by suggesting they’re no longer needed. Instead, their focus needs to shift towards the efficiencies found in Infrastructure-as-Code practices, GitOps methods, and other practices bound by immutability. Ideally, these operations teams will become the authors, guardians, and custodians of the code that builds their organizations’ cloud architectures: architectures that are often large and complex, and hopefully deployable with a single click (yes, that’s possible now).