TEAM BLOG
xqizit cloud. So what’s the big idea?
Authored by Cameron Purdy and Gene Gleyzer
Last edited: January 12, 2024
When we set out to create the xqiz.it cloud, we had a simple goal in mind:
Make it simple and affordable to create and host web applications. Useful, data-rich applications, of all sizes, that grow and evolve over time.
To achieve this, we had to radically increase server density and automation. These efficiency gains translate directly into cost savings, but they also deliver important environmental benefits: Fewer data centers, fewer servers, and drastically reduced energy consumption.
Good for users. Good for developers. Good for business. And good for the planet.
When we look at back-end web services that support iOS, Android, and web applications, very few of these are uniformly busy 24 hours a day, every day of the year. We see load peaks at certain times of the day, certain days of the week, and certain weeks of the year. Viewed over years, applications tend to follow a growth curve, followed by a long tail-off. Unfortunately, it's difficult to provision applications in a way that reflects these load patterns.
Applications are typically provisioned at a virtual machine or container level and often have to be force-fit into one of a small number of predefined, strangely-shaped configurations. It often feels like the resource limitations for hosting — such as CPU, RAM, and storage — are designed with someone else’s unbelievably skinny application in mind. And each option beyond the most constrained choice comes with a shocking jump in price that seems completely out of proportion to the hardware costs involved. It’s now the job of the application developer to build an application to squeeze into some predefined box, instead of it being the responsibility of the host to provide a box that will reliably host the application.
And even worse, that application hosting “box” has to be provisioned for the worst-case scenario — it must be based on the anticipated peak load and size. And that is true even if the hosting is configured to add hosts with Kubernetes or some other auto-scaler, because running out of resources like memory or disk space will usually result in a hard crash.
Even if an application spends half of its time with no load at all, its over-provisioned box will still run up a bill, occupy server hardware, and consume electricity — 24 hours a day, 365 days a year!
That’s the problem that we set out to solve.
Why this wasn’t solved already
The concept of sharing a computer’s resources automatically across multiple applications dates back to the 1950s. To do it effectively for modern workloads, a server’s resources need to be automatically re-allocated in real-time, taking resources from applications with low or no load, and providing resources to applications that are receiving and processing requests. But several things are working against real-time resource re-allocation:
- For security reasons, each application will be deployed to its own VM or — if security is less of a concern — its own OS container.
- VMs have rigid hardware allocations. Even when a VM is idle, its compute and memory resources can’t be utilized by other VMs on that same machine.
- Even with only two applications, and even if they're running in the same OS with no VM boundary, the applications don't cooperatively share and reallocate resources. Instead, they compete for resources, which will cause thrashing as soon as the applications' combined resource demands exceed the machine's capacity.
- While operating systems do support memory paging (aka "swap"), the use of memory paging in modern applications causes thrashing, because it interacts poorly with widely used technologies like garbage collection (JavaScript/Node, Java, C#, etc.). To be safe, most deployments are sized pessimistically, so that the OS will never have to page memory.
- Due to the traditional way that languages, libraries, and operating systems have been built, it’s quite challenging to securely limit what an application can access and modify within its environment — and that’s not even considering any security exploits. That’s why each application is usually placed in its own VM or container, and that increased isolation makes it even harder to share resources efficiently.
The xqizit cloud solution
To solve this, we needed an approach that was fundamentally new from the ground up, starting with the Ecstasy programming language (xtclang):
- Ecstasy programs have no direct access to hardware, OS, or external libraries. All such access (called capabilities) is provided by secure injection.
- Ecstasy builds on the code verification concepts pioneered by languages like Java, using both compile- and link-time code verification that statically identifies all dependencies and capabilities required by an application.
- The code of a running Ecstasy application cannot be dynamically modified (including self-modification) after it is verified and linked. This eliminates potential exploits like code injection.
- In addition to its strong isolation guarantees, the Ecstasy runtime is fully managed and internally containerized: each injected capability is container-specific, and CPU and memory resources are managed down to the individual container level.
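To make the injection model concrete, here is the canonical Ecstasy "hello world". Note what is missing: the program never imports, constructs, or configures a console. The `Console` capability is injected by the container hosting the module, which can just as easily withhold it or substitute a restricted implementation:

```
module HelloWorld {
    void run() {
        // The Console is not constructed by the application; it is a
        // capability injected by the hosting container.
        @Inject Console console;
        console.print("Hello, world!");
    }
}
```

Because every external capability arrives this way, the set of capabilities an application needs is statically known at link time, and the host — not the application — decides exactly what the application can touch.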
Leveraging these capabilities, the xqizit cloud provides a secure deployment environment for each application, running inside its own managed Ecstasy container, and with the server resources movable from application to application in real-time — without requiring the application’s consent or cooperation:
- Running applications can be paused, and later resumed.
- Running applications can be moved out to persistent storage (freeing up memory), and later restored. They can even be moved from one server to another.
- CPU and memory resources can be added to and removed from applications in real-time.
- Applications are completely sand-boxed, and their interaction with the outside world is enabled solely by injected capabilities, which can be tightly locked down.
This allows an autonomic management system to pack many applications onto a smaller number of servers: it can rapidly grow and shrink resource allocations on an application-by-application basis, and it can completely offload applications to persistent storage when they are idle, or when the system is oversubscribed.
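As a purely illustrative sketch of how such a manager could use the operations listed above — the type, method, and threshold names here are hypothetical, not part of any actual xqizit cloud API — a rebalancing pass might look something like this:

```
// Hypothetical sketch only; illustrative names, not a real API.
void rebalance(HostedApp[] apps) {
    for (HostedApp app : apps) {
        if (app.idleDuration > IdleLimit) {
            app.pause();        // stop scheduling the application
            app.hibernate();    // offload its state to persistent storage
        } else if (app.load > HighWater) {
            app.grow();         // grant more CPU/memory in real-time
        } else if (app.load < LowWater) {
            app.shrink();       // reclaim resources for other applications
        }
    }
}
```

The key point is that none of these operations require the application's consent or cooperation; the managed runtime makes them safe to apply from the outside, at any time.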
Set it and forget it!
The purpose of this design is to make it super simple and affordable to create and host web applications of all sizes, that grow and evolve over time. The xqizit cloud manages the deployments autonomically, actively eliminating waste when possible, and dynamically scaling a deployment when the application gets busy.
Interesting? Like to learn more?
If the ideas in this blog post resonate with you, get started with Ecstasy and xqizit cloud today, or explore our blog posts, tutorials, and guides to learn more!