How Is Pinterest Using Kubernetes to Solve Its Challenges?
→ Use Cases Solved by Kubernetes
Here is my new article on containerization technology. In it, I discuss the use cases solved by Kubernetes and how Kubernetes is used in industry to solve the challenges faced by Pinterest and many other companies.
Pinterest is an American image-sharing and social media service designed to enable the saving and discovery of information on the internet using images and, on a smaller scale, animated GIFs and videos, in the form of pinboards. It is operated by Pinterest, Inc., based in San Francisco, California.
Challenges faced by Pinterest
After eight years in existence, Pinterest had grown to 1,000 microservices, multiple layers of infrastructure, and a diverse set of setup tools and platforms. In 2016 the company launched a roadmap toward a new compute platform, led by the vision of creating the fastest path from an idea to production, without making engineers worry about the underlying infrastructure.
Impact of Kubernetes
“By moving to Kubernetes the team was able to build on-demand scaling and new failover policies, in addition to simplifying the overall deployment and management of a complicated piece of infrastructure such as Jenkins,” says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group at Pinterest. “We not only saw reduced build times but also huge efficiency wins. For instance, the team reclaimed over 80 percent of capacity during non-peak hours. As a result, the Jenkins Kubernetes cluster now uses 30 percent fewer instance-hours per day compared to the previous static cluster.”
Since its launch, Pinterest has become a household name, with more than 200 million active monthly users and 100 billion objects saved. Underneath the hood, there are 1,000 microservices running and hundreds of thousands of data jobs.
With such growth came layers of infrastructure and diverse setup tools and platforms for the different workloads, resulting in an inconsistent and complex end-to-end developer experience and, ultimately, less velocity in getting to production. So the company launched its roadmap toward a new compute platform, aiming to give engineers the fastest path from an idea to production without having to worry about the underlying infrastructure.
The first phase involved moving to Docker. “Pinterest has been heavily running on virtual machines, on EC2 instances directly, for the longest time,” says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group. “To solve the problem around packaging software and not make engineers own portions of the fleet and those kinds of challenges, we standardized the packaging mechanism and then moved that to the container on top of the VM. Not many drastic changes. We didn’t want to boil the ocean at that point.”
“So far it’s been good, especially the elasticity around how we can configure our Jenkins workloads on that Kubernetes shared cluster,” Benedict says. “That is the win we were pushing for.”
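That elasticity comes from standard Kubernetes primitives. The manifest below is a minimal, hypothetical sketch (the names and numbers are illustrative, not Pinterest's actual configuration) of how a pool of build agents can be paired with a HorizontalPodAutoscaler so capacity grows during build spikes and shrinks back during non-peak hours:

```yaml
# Hypothetical sketch -- illustrative only, not Pinterest's real setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-agent          # illustrative name
spec:
  replicas: 2                  # baseline; the HPA below adjusts this
  selector:
    matchLabels:
      app: jenkins-agent
  template:
    metadata:
      labels:
        app: jenkins-agent
    spec:
      containers:
        - name: agent
          image: jenkins/inbound-agent:latest
          resources:
            requests:
              cpu: "500m"      # the HPA's utilization target is measured against this request
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: jenkins-agent
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: jenkins-agent
  minReplicas: 2               # floor during non-peak hours, reclaiming idle capacity
  maxReplicas: 20              # ceiling during build spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70% of requests
```

In practice, Jenkins agents on Kubernetes are often launched per build via the Jenkins Kubernetes plugin rather than run as a long-lived autoscaled Deployment; the sketch above only illustrates the general on-demand scaling mechanism that makes a shared cluster cheaper than a static fleet.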
Benedict points to a “pretty robust roadmap” going forward. In addition to the Pinterest big data team’s experiments with Spark on Kubernetes, the company collaborated with Amazon’s EKS team on an ENI/CNI plug-in.
Once the Jenkins cluster is up and running out of dark mode, Benedict hopes to establish best practices, including governance primitives such as integration with the chargeback system, before moving on to migrating the next service. “We have a healthy pipeline of use cases to be onboarded. After Jenkins, we want to enable support for TensorFlow and Apache Spark. If we move that and understand the complexity around that, it builds our confidence,” says Benedict. “It sets us up for migration of all our other services.”
Earlier, the team had been occupied with moving services to Docker containers. Once these services went into production, they began looking at orchestration to help create efficiencies and manage them in a decentralized way. After evaluating various solutions, Pinterest went with Kubernetes.