Building the Open Cloud, Part 2: Re-Imagining the Cloud With Akash
by Anil Murty, Head of Product at Overclock Labs
This is the second entry in our four-part blog series. In this post, we build on our summary from Part 1, which focused on the key challenges in web3 infrastructure today. If you prefer to watch a presentation of this, you can find the recording here.
Before we dig in and talk about how Akash Network addresses these challenges, let's start with a refresher on how public clouds work today.
The Status Quo for Public Cloud Infrastructure
Any public cloud today — AWS, Azure, Google Cloud, Alibaba, Tencent, Oracle, Digital Ocean, or any of the smaller players in the space — is structured around the idea of a single large entity (like Amazon in the case of AWS, or Microsoft in the case of Azure) owning all the infrastructure. All users (thousands, if not millions) connect to that infrastructure through the interfaces that the cloud provider exposes.
The model above revolutionized the software industry. It enabled people to get started without needing to spin up and manage their own infrastructure, and it significantly lowered the capital expenditure (CapEx) required of small startups. The programmability of cloud infrastructure also supports DevOps principles, allowing development teams to move faster by managing the infrastructure their applications run on without depending on operations teams. Its elastic autoscaling lets companies keep costs low by scaling fluidly in response to demand.
While this model has contributed immensely to the pace of innovation across many industries, it has some drawbacks.
A Decentralized Vision
Imagine a decentralized version of a public cloud that addresses these concerns. It would look something like this:
Instead of a single, large entity owning and controlling all the infrastructure, you would have several smaller entities owning portions of the infrastructure. Users could choose which provider they want to run on — and if they choose, they could run on more than one provider.
While this solves some of the problems we outlined earlier, it introduces new ones. The users in this scenario are now faced with the daunting task of negotiating contractual agreements with multiple providers, each of which could run for a different length of time. Assuming they manage that, they would then have to work out how to deploy to each of these providers, which may have different interfaces and feature sets. All of this would take an army of legal, operations, and engineering talent — making the task unrealistic for a small company.
This is where Overclock Labs saw an opportunity and decided to build a solution.
The Akash Network Approach
Overclock Labs pioneered a two-sided marketplace for cloud compute in which “Tenants” lease compute resources from “Providers” to deploy their applications and services. The marketplace is built on a software abstraction layer that gives users a consistent experience regardless of which provider they choose for their deployment.
There are clear benefits to this approach:
In addition, Akash is an open-source, community-driven network powered by a blockchain. It adheres to a governance model that has been proven by many other open-source projects and blockchains.
Akash Network “Under the Hood”
If the above has you excited to learn more, here is a quick overview of how Akash Network works.
Akash is built on Open Source Software (OSS). The two major technologies Akash uses are Docker and Kubernetes: Docker is the world’s most widely used container technology, and Kubernetes is the world’s most popular container orchestrator.
Providers on Akash Network run Kubernetes clusters and Tenants (users of Akash Network) deploy their applications as Docker containers onto these Kubernetes clusters.
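As a rough illustration of what that means in practice, here is a minimal Kubernetes Deployment manifest for a single containerized web app. This is not Akash-specific output — the names and image below are placeholders, and on Akash the provider generates the equivalent objects for you from the tenant's deployment request:

# Minimal Kubernetes Deployment for one containerized web app.
# Illustrative only: on Akash, the provider creates the equivalent
# objects automatically; the name and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tenant-web-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tenant-web-app
  template:
    metadata:
      labels:
        app: tenant-web-app
    spec:
      containers:
        - name: web
          image: example/web-app:1.0   # placeholder: the tenant's Docker image
          ports:
            - containerPort: 80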
A typical deployment workflow on Akash looks like the block diagram below:
Tenants begin by containerizing their application into a Docker image so that it is portable (it can easily be moved between providers). Then, they reference that image in an SDL file. SDL stands for “Stack Definition Language” and, as the name suggests, it is a way for users to specify which infrastructure stack they would like for their application. The SDL file includes compute needs, locations, and pricing.
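For example, the portion of an SDL that references the container image and exposes it to the internet might look like the following sketch (the service name and image are placeholders):

services:
  web:
    image: myorg/my-app:1.0     # placeholder: the tenant's Docker image
    expose:
      - port: 80
        as: 80
        to:
          - global: true        # reachable from the internet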
Once an SDL file is prepared, it is submitted to the Akash marketplace as a “request for bids”. The Akash software takes the information contained in the SDL file and broadcasts it to all the providers on the Akash marketplace (more than 50 at the time of writing). The providers inspect the request and decide whether it is something they can satisfy. If a provider decides it can satisfy the request, it responds with a bid.
The tenant ends up with a set of bids from the providers that respond. The number of bids can range from a few to a few dozen, depending on the request. The tenant then decides which bid best suits their needs; they may, for example, choose a lower-cost provider even if that provider offers fewer features. Once the tenant selects a provider and instructs Akash to deploy, the application container is scheduled to run on that provider and, within a few minutes, the application is deployed and running.
To summarize, it is a simple, four-step process:
1. Containerize the application as a Docker image and describe the desired infrastructure in an SDL file.
2. Submit the SDL file to the Akash marketplace as a request for bids.
3. Review the bids that come back and select a provider.
4. Instruct Akash to deploy; the container is scheduled on the chosen provider and the application goes live within minutes.
In case you are wondering what an SDL file looks like, here is an example of one that is used to spin up a Tetris game on the web. Let’s break it down and understand the various parts.
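A minimal SDL along these lines might look like the following. The image name, resource sizes, profile names, and price here are illustrative placeholders:

---
version: "2.0"

services:
  web:
    image: bsord/tetris            # placeholder: a publicly available Tetris web image
    expose:
      - port: 80
        as: 80
        to:
          - global: true           # make the game reachable from the internet

profiles:
  compute:
    web:
      resources:
        cpu:
          units: 0.5               # half a vCPU
        memory:
          size: 512Mi
        storage:
          size: 512Mi
  placement:
    westcoast:                     # arbitrary placement profile name
      pricing:
        web:
          denom: uakt              # priced in uakt (micro-AKT)
          amount: 1000             # maximum price the tenant is willing to pay

deployment:
  web:
    westcoast:
      profile: web                 # use the "web" compute profile above
      count: 1                     # run a single instance

The same file structure applies to any containerized application; only the image, resources, placement, and pricing change.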
The SDL file answers five questions that help Akash decide which provider to choose and how to deploy the app: which container image to run, how it should be exposed to the internet, what compute resources (CPU, memory, and storage) it needs, where it may be placed and at what maximum price, and how many instances of it to run.
And, that’s it! Isn’t that way easier than you thought it would be? Many people at Overclock Labs, as well as small teams of independent developers in the Akash community, are hard at work building deployment and provider-setup solutions that will make interacting with Akash Network much easier. Eventually, someone with little to no technical expertise will be able to deploy on Akash. Stay tuned for subsequent posts that will dig into some of these products.
Meanwhile, if you are eager to get started, head over to Akash Network’s documentation.
(This post originally appeared on akash.network: https://akash.network/blog/building-the-open-cloud-part-2-re-imagining-the-cloud-with-akash)