Cloud computing consumption is often compared to electricity usage because both provide resources on demand under so-called “pay as you go” models. Unlike cloud computing, though, electricity isn’t bought in chunks whose size you must guess in advance, and the bill doesn’t double when you use just a bit more power.
Cloud computing has been a major technological step for both its ease of deployment and its cost efficiency. One could call it the most disruptive technology of the last decade.
In this time of continual change, the cloud itself is now ripe for disruption. That’s because a significant amount of cloud resources are wasted, revealing inefficiency in this oh-so-efficient technology. This particularly applies to cloud computing resources and storage. Many companies don’t do routine checks to see how much capacity they are using, so in most cases, they are overpaying for resources that never get used.
When you deploy a particular instance from a cloud vendor, you’re given a wide range of virtual machine (VM) sizes from which to choose. For example, below are some of the options from AWS.
The same approach is used by Azure and Google Cloud (both of which Unispace integrates with via Jelastic PaaS), DigitalOcean and many others.
The first challenge is to find a size that delivers good performance under average load while leaving breathing room to scale within the machine. The second challenge arises when your current VM becomes too small for the project’s needs and you have to graduate to a higher-powered VM, which will usually be twice as large.
The problem is that you’re likely always over-allocating beyond what you need, especially during low-use or idle times. As a result, you’re still paying for these reserved but unused computing resources. When you start growing your infrastructure horizontally — adding more VMs — you compound the problem by having multiple VMs with unused capacity. Wasted resources increase proportionally and, as a result, the efficiency declines even further.
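A quick back-of-the-envelope sketch shows how this waste compounds as a fleet grows. All the numbers below (vCPU counts, utilization, price) are invented for illustration, not taken from any vendor’s price list:

```python
# Hypothetical illustration: waste from fixed-size VMs (all numbers invented).
# Suppose each VM offers 8 vCPUs but the workload averages 3 vCPUs of real use.
vm_vcpus = 8
avg_used_vcpus = 3
price_per_vcpu_hour = 0.05  # assumed rate, USD


def monthly_waste(num_vms, hours=730):
    """Cost of reserved-but-idle vCPUs across a fleet of identical VMs."""
    idle_vcpus = (vm_vcpus - avg_used_vcpus) * num_vms
    return idle_vcpus * price_per_vcpu_hour * hours


for fleet in (1, 4, 16):
    print(f"{fleet:>2} VMs -> ${monthly_waste(fleet):,.2f} wasted per month")
```

The wasted spend scales linearly with the number of VMs: every machine you add for horizontal growth carries its own slice of reserved-but-idle capacity.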
The “pay as you go” billing model in cloud computing isn’t nearly as flexible as billing for electricity. You simply cannot order a VM that precisely suits your project’s requirements at the current moment and that scales, without extra configuration and migration effort, as the load grows. As a result, you order bigger VMs and continue to pay for unused resources. Google Cloud, for example, admits the problem exists and even tries to provide customers with “hints” when they over-allocate.
Traffic at AWS, Microsoft Azure and many other clouds is already billed on a “pay as you use” basis, so end users do not buy capacity in advance but are charged based on real consumption. This billing approach became possible for the whole platform-as-a-service (PaaS) layer with the introduction of containers, which create more flexibility based on how big your load is at any given moment. Over the last three or four years, we have seen a noticeable shift to container technology, which added game-changing granularity for resource slicing. As a result, each container can be scaled vertically on the fly as the load changes, so you pay for actual consumption and don’t need complex reconfiguration to keep up with project growth.
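The difference between the two billing models can be made concrete with a toy daily load profile. The profile and the per-vCPU rate below are assumptions for illustration only:

```python
# Hypothetical comparison of two billing models over a 24-hour load profile
# (all numbers invented for illustration).
hourly_load_vcpus = [1, 1, 1, 1, 2, 3, 5, 7, 8, 8, 7, 6,
                     6, 7, 8, 8, 7, 5, 4, 3, 2, 2, 1, 1]
price_per_vcpu_hour = 0.05  # assumed rate, USD

# Fixed VM: must be provisioned for the peak load, billed whether used or not.
vm_size = max(hourly_load_vcpus)
fixed_vm_cost = vm_size * price_per_vcpu_hour * len(hourly_load_vcpus)

# Pay-as-you-use container: billed only for vCPUs actually consumed each hour.
container_cost = sum(hourly_load_vcpus) * price_per_vcpu_hour

print(f"fixed VM:  ${fixed_vm_cost:.2f}/day")
print(f"container: ${container_cost:.2f}/day")
```

The fixed VM bills at the peak all day long, while the container bill tracks the area under the load curve; the gap between the two lines is exactly the waste the article describes.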
Even so, there is no broad movement to the “pay as you use” model yet, as most vendors do not offer pure container-based clouds. If you host containers inside a VM, you are still stuck with its size and pay for the unused resources.
Obviously, your spend depends heavily on the chosen cloud vendor, the resource unit taken as a scaling step, the availability of automatic scaling and so on. To reach maximum efficiency, ask your cloud vendor to shift to a “pay as you use” pricing model with small scaling steps and smooth, load-based resizing, so that you don’t have to reserve extra resources in advance without real need.
Today, IT is being asked to continually do more with less, and cloud efficiency, once disruptive, is now being disrupted. You have a right to demand changes from cloud vendors. The success of containerization is more than a current opportunity. It’s a wake-up call that new models of cloud computing continue to emerge, disrupting what currently seems like the most cost-efficient approach. Ultimately, that is good news.