Static provisioning – Resources & People

It takes a team to manage servers, networks, databases, and infrastructure, and performing maintenance and changes can cost a company in both overtime pay and employee morale. Even the best provisioning of static resources leaves some 80 percent of available computing power unused. A few years ago, YP began working on an elastic compute solution, a concept that enables computing resources (such as CPU processing power, memory, bandwidth, and disk space) to be easily scaled up and down as needed. Using cutting-edge technologies and some in-house ingenuity, we created a solution that has enabled us to overcome two primary issues and manage our workload dynamically.

Elastic Compute can help companies overcome two main problems caused by static provisioning. First, static provisioning of computing resources leads to over-provisioning and extremely low utilization of the available computing power.

For most IT organizations, the typical situation looks like this: companies provision static resources for their applications, but when those applications are actually profiled for their use of computing resources, it becomes clear they are over-provisioned. On average, they use just 20 percent or less of the available computing power.
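To make that concrete, here is a minimal sketch of the kind of measurement involved (not the profiling tooling we used at YP): it samples host CPU and memory utilization with the psutil Python package over a short window, the sort of check that, on many statically provisioned servers, averages out far below full capacity.

```python
# Minimal sketch (not YP's actual profiling tooling): sample host CPU and
# memory utilization over a short window with psutil (pip install psutil).
import psutil

samples = []
for _ in range(30):                        # 30 one-second samples
    cpu = psutil.cpu_percent(interval=1)   # CPU utilization over the interval
    mem = psutil.virtual_memory().percent  # memory in use, as a percentage
    samples.append((cpu, mem))

avg_cpu = sum(c for c, _ in samples) / len(samples)
avg_mem = sum(m for _, m in samples) / len(samples)
print(f"average CPU: {avg_cpu:.1f}%  average memory: {avg_mem:.1f}%")
```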

Organizations have been well aware of the problem, but what can they do about it? They can buy cheaper hardware, try to optimize applications so they consume fewer resources, or pack as many applications as possible onto a single machine.

Second, static provisioning of people who manage that computing infrastructure requires a team and can be costly.

Organizations that don’t have dynamic elastic compute technology need a dedicated team to watch over servers, networks, databases, and the rest of the infrastructure, and to perform changes and maintenance. Any change to that infrastructure requires a long maintenance window involving all stakeholders and lots of overtime pay, and it results in increased frustration among team members and a less-than-favorable work/life balance.

The solution, incorporating new technologies, solves both problems by putting systems on auto-pilot

With the advent of Mesos, Docker, and related projects, the ecosystem has unleashed a wide range of technologies that enable you to put systems and infrastructure on auto-pilot. Mesos and Docker provide the tools that allow IT to add value to the business.
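As a hedged illustration of what “auto-pilot” looks like in practice, the sketch below declares a Docker-containerized application to Marathon, a scheduler that runs on top of Mesos, through its REST API. The endpoint, image, and resource numbers are assumptions for the example, not our production configuration.

```python
# Hedged sketch: declare a Docker-containerized app to Marathon (a scheduler
# on top of Mesos) via its REST API, then scale it by changing "instances".
# The Marathon URL, app id, image, and resource numbers are illustrative
# assumptions only.
import requests

MARATHON = "http://marathon.example.com:8080"   # hypothetical endpoint

app = {
    "id": "/demo/web",
    "cpus": 0.5,            # fraction of a CPU per instance
    "mem": 256,             # MB per instance
    "instances": 3,         # the scheduler keeps this many copies running
    "container": {
        "type": "DOCKER",
        "docker": {"image": "nginx:stable", "network": "BRIDGE"},
    },
}

# Create the application definition.
resp = requests.post(f"{MARATHON}/v2/apps", json=app)
resp.raise_for_status()

# "Elastic" scaling is just another declaration: ask for more instances and
# the scheduler converges the cluster to the new desired state.
requests.put(f"{MARATHON}/v2/apps/demo/web", json={"instances": 6}).raise_for_status()
```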

Enterprise-level Solution

Understanding the potential of these technologies, YP embarked on a journey a couple of years ago to bring effective change to the organization. We quickly realized that while the new technologies were genuinely cutting-edge, it would take a lot of work to turn them into a real enterprise-level offering capable of supporting production workloads.

Because the workload was to run in Docker containers, the platform lacked several core features, such as centralized logging, provisioning of application secrets into the containers, persistent storage, and application configuration management. We didn’t want to wait for these features to become available in Docker and Mesos, so our talented engineering team took the bull by the horns and developed those solutions in-house. We have made some of those contributions open-source, so you can check them out on the YP Engineering GitHub account. With those solutions in place, we are running a heterogeneous, containerized workload in production.
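To give a flavor of one of those gaps, centralized logging, here is a hedged sketch of the general idea rather than our actual implementation: an agent tails a container’s log stream through the Docker SDK for Python and forwards each line to a central collector. The collector URL and container name are hypothetical.

```python
# Hedged sketch of the centralized-logging idea: tail a running container's
# log stream with the Docker SDK for Python (pip install docker) and forward
# each line to a central collector. The collector URL and container name are
# hypothetical; the real YP tooling lives in its open-source repositories.
import docker
import requests

COLLECTOR = "http://log-collector.example.com/ingest"  # hypothetical endpoint

client = docker.from_env()
container = client.containers.get("my-app")            # hypothetical container

# stream=True yields log output as it is written; follow=True keeps the
# stream open, which is what a log-shipping agent or sidecar would do.
for line in container.logs(stream=True, follow=True):
    requests.post(
        COLLECTOR,
        json={"container": container.name, "line": line.decode(errors="replace")},
        timeout=5,
    )
```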

Through this experience, we gained valuable knowledge about sustaining and scaling that workload dynamically. We incorporated several key components on top of Mesos and Docker that work together to make Elastic Compute an enterprise-level solution.

Our engineering team has been invited to share our expertise at a number of conferences, including USENIX, SCALE (Southern California Linux Expo), and MesosCon, where we presented “Lessons Learned from Running Heterogeneous Workload on Mesos.” To watch the video, click here.

IT shouldn’t be a cost-center with Docker

IT is critical for every organization, big and small. It touches all the business processes and data in every company. Every year, organizations allocate budget to keep the “lights on.” The #1 priority of IT is to make sure that the business keeps running as usual, which reinforces the view of IT as a cost-center. Every aspect of IT is focused on minimizing downtime rather than providing value to the organization. That is how things are, and that is how most companies are run. The C-suite bosses are happy to cut the fat checks and are content with it. There is nothing wrong with that, except that you can’t improve on it and can’t convert that cost-center into a profit-center.

Whether you run your own data center, outsource it to cloud services, or operate a hybrid cloud, you need an entire team to manage your systems, networks, databases, infrastructure, and more. They babysit your infrastructure, perform changes, and do maintenance. Any change to the state of production requires a long maintenance window involving all the stakeholders, tons of overtime pay, increased frustration, and no family life. It is not only the people and processes: you also accumulate computing power along the way and keep piling up racks in your data centers. You end up with tons of wasted resources and hardly use even 20% of the computing power you have available. Unfortunately, this is the sorry state of affairs for most IT departments in the majority of companies.

Fortunately, the IT gods have finally smiled upon us. With the advent of technologies like Docker, things don’t have to stay the way they were. Docker and its ecosystem have unleashed a myriad of technologies that let you put your systems and infrastructure on auto-pilot, and they provide the tools that allow your IT department to add value to your business. I am not claiming we will be able to solve every issue with old-school IT, but there are already tools at our disposal that, when combined with Docker, can help us solve the complex IT problems we have been dreading all these years. Much has been said about Docker being a developer’s friend, but it can truly help alleviate IT Ops problems and help us run a lean DevOps IT organization.

We have used Docker successfully to manage and run part of our IT operations at YellowPages, leveraging its capabilities alongside technologies like push-based metrics, service discovery in ephemeral environments, orchestration tools, build pipelines, and message queues to solve real-world IT problems. We overcame plenty of new challenges along the way, and there is still a lot of work to be done before Docker becomes an IT department’s best friend. In short, things are looking up, and we can already do quite a few things with Docker that will help the ecosystem evolve and make IT a true profit-center.
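Push-based metrics are a good example of why containers change the monitoring picture: an ephemeral container may be gone before a poller ever reaches it, so it pushes its measurements out instead. Below is a hedged sketch using the Prometheus Pushgateway client library; the gateway address, job name, and workload function are assumptions, not a description of our actual monitoring stack.

```python
# Hedged sketch of push-based metrics from an ephemeral container: instead of
# waiting to be scraped, the workload pushes its measurements to a Prometheus
# Pushgateway before it exits. The gateway address, job name, and workload
# function are assumptions for the example.
import time
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

def run_batch_job():
    """Stand-in for the real containerized workload."""
    time.sleep(2)

registry = CollectorRegistry()
duration = Gauge(
    "batch_job_duration_seconds",
    "How long the containerized batch job took to run",
    registry=registry,
)

with duration.time():   # records the elapsed time of the with-block
    run_batch_job()

# Push the result; the gateway holds it until Prometheus scrapes the gateway.
push_to_gateway(
    "pushgateway.example.com:9091",  # hypothetical gateway address
    job="nightly-batch",
    registry=registry,
)
```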