It takes a team to manage servers, networks, databases, and infrastructure, and performing maintenance and changes can cost a company in both overtime pay and employee morale. Even the best static provisioning leaves some 80 percent of available computing power unused. A few years ago, YP began working on an elastic compute solution: a concept that enables computing resources (such as CPU processing power, memory, bandwidth, and disk usage) to be easily scaled up and down as needed. Using cutting-edge technologies and some in-house ingenuity, we created a solution that has enabled us to overcome two primary problems and manage the workload dynamically.
Elastic Compute can help companies overcome two main problems caused by static provisioning. First, static provisioning of computing resources leads to over-provisioning and very low use of available computing power.
For most IT organizations across the spectrum, the typical situation looks like this: companies provision static resources for their applications, but when those applications are actually profiled for their use of computing resources, it becomes clear they are over-provisioned, using just 20 percent or less of available computing power on average.
Organizations have been well aware of the problem, but what can they do about it? Buy cheaper hardware, try to optimize applications to consume fewer resources, or pack as many applications as possible onto a single machine.
Second, statically provisioned computing infrastructure requires a dedicated team to manage, and that team is costly.
Organizations that don’t have dynamic elastic compute technology need a dedicated team to watch over servers, networks, databases, and the rest of the infrastructure, and to perform changes and maintenance. Any change to that infrastructure requires a long maintenance window involving all stakeholders and lots of overtime pay, and results in increased team member frustration and a less than favorable work/personal life balance.
The solution, incorporating new technologies, solves both problems by putting systems on auto-pilot
With the advent of Mesos, Docker, and others, the ecosystem has unleashed a wide range of technologies that enable you to put systems and infrastructure on auto-pilot, freeing IT to spend its time adding value to the business.
Understanding the potential of these technologies, YP embarked on a journey a couple of years ago to bring effective change to the organization. We quickly realized that while the new technologies were genuinely cutting-edge, it would take a lot of work to turn them into an enterprise-level offering that could support a production workload.
Because the ecosystem for running workloads in Docker containers was still young, it lacked several core features such as centralized logging, provisioning application secrets in the containers, persistent storage, and application configuration management. We didn’t want to wait for these features to become available in Docker and Mesos, so our talented engineering team took the bull by the horns and developed those solutions in-house. We made some of those contributions open-source, so you can check them out on the YP Engineering GitHub account. With those solutions in place, we are running a heterogeneous, containerized workload in production.
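To illustrate one of those gaps, consider provisioning application secrets in a container. A common pattern (this is a hedged sketch, not YP’s actual implementation; the `/run/secrets` path and environment-variable fallback are assumed conventions) is for the application to read a secret from a file mounted into the container, falling back to an environment variable:

```python
import os

def load_secret(name: str, secrets_dir: str = "/run/secrets") -> str:
    """Read a secret provisioned into the container.

    Prefers a file mounted under `secrets_dir` (a hypothetical
    convention), falling back to an upper-cased environment variable.
    """
    path = os.path.join(secrets_dir, name)
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    value = os.environ.get(name.upper())
    if value is None:
        raise KeyError(f"secret {name!r} not provided to this container")
    return value
```

The point of a platform-level secrets component is that the application only uses a convention like this one, while the infrastructure decides how the secret actually gets into the container.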
Through this experience, we learned valuable lessons about sustaining and scaling that workload dynamically. We incorporated several key components on top of Mesos and Docker that work together to make Elastic Compute an enterprise-level solution.
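The heart of scaling a workload dynamically is a simple control decision: given current utilization, how many instances should be running? A minimal sketch of that decision (an illustrative example, not YP’s component; the target and bounds are assumed parameters):

```python
import math

def desired_replicas(current: int, avg_utilization: float,
                     target: float = 0.70,
                     min_replicas: int = 1,
                     max_replicas: int = 100) -> int:
    """Choose a replica count so average utilization approaches `target`.

    Scales the current count by the ratio of observed to target
    utilization, then clamps to configured bounds.
    """
    if current <= 0:
        return min_replicas
    needed = math.ceil(current * avg_utilization / target)
    return max(min_replicas, min(max_replicas, needed))

# A statically provisioned service idling at 20% utilization shrinks:
print(desired_replicas(10, 0.20))  # -> 3
# ...and grows again when load spikes:
print(desired_replicas(3, 0.95))   # -> 5
```

An elastic compute layer runs a loop like this continuously and lets the scheduler (Mesos, in our case) place or remove the containers, which is what reclaims the roughly 80 percent of capacity that static provisioning leaves idle.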
Our engineering team has been invited to share our expertise at a number of conferences, including USENIX, SCALE (Southern California Linux Expo), and MesosCon, where we presented “Lessons Learned from Running Heterogeneous Workload on Mesos.” To watch the video, click here.