Mistakes to Avoid when Deploying on a Kubernetes Cluster

Devtron
May 12, 2020



Kubernetes is one of the highest-velocity projects in the history of open source. In this post, we look at five common mistakes to avoid when working with Kubernetes, along with techniques to keep your deployments highly available.

1.) Moving Kubernetes into Production too Quickly

There are significant differences between running Kubernetes in a dev/test environment and running it in production. Plan the move carefully to minimize issues; otherwise, you are likely to have a hard time.

The biggest mistakes made while moving Kubernetes into production are rooted in a lethal combination of overconfidence, ignorance, and pressing deadlines. Don't rush the move; be prepared with the right policies, processes, and test coverage, or it could cost your organization dearly.

2.) Assuming You're Secure by Default with Kubernetes

This is the most common misunderstanding people have when deploying Kubernetes into production. It is true that the Kubernetes community has shown a strong commitment to security and that the orchestrator itself ships with many security-oriented features.

However, you need to configure those security features properly when you deploy. For example, by default no network policies are applied, so every resource can talk to every other resource, and here is the catch: this open setup vastly increases the risk of attackers moving through your Kubernetes cluster.
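As a minimal sketch, a default-deny ingress NetworkPolicy like the one below locks down pod-to-pod traffic in a namespace so that you can then explicitly allow only what is needed. The namespace name is illustrative, and enforcement requires a CNI plugin that supports NetworkPolicy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app          # hypothetical namespace
spec:
  podSelector: {}            # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress                  # no ingress rules are listed, so all inbound traffic is denied

With this in place, each allowed traffic path has to be opened deliberately with an additional NetworkPolicy, rather than being open by default.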

3.) Not Configuring Pod Disruption Budgets in Production

Another common mistake is not protecting your applications with a PodDisruptionBudget (PDB). First, think about which applications you want to protect and how they react to disruption, i.e., how many instances can be down simultaneously for a short period due to a voluntary disruption. Then you can configure the pod disruption budget in YAML.

An example of a pod disruption budget specified using maxUnavailable:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: animal

The use of maxUnavailable is recommended because it automatically responds to changes in the number of replicas of the corresponding controller.
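For comparison, here is a sketch of the same budget expressed with minAvailable, which instead fixes how many pods (as a count or a percentage) must stay up; the values are illustrative:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  minAvailable: 2            # at least two matching pods must remain available
  selector:
    matchLabels:
      app: animal

Unlike maxUnavailable, a fixed minAvailable count does not adapt if you scale the controller's replica count up or down, which is why the document recommends maxUnavailable.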

4.) Not Doing Proper Monitoring

Without monitoring, resource overuse can easily go unnoticed. Implementing an effective resource monitoring system often takes time because it gets deprioritized behind the everyday tasks DevOps teams face.

To make good use of Kubernetes, monitoring is critical. The lack of it leads to resource exhaustion on one side and, as application developers climb the learning curve, to clusters full of reserved but idle resources on the other, which means waste and increased cost. Your organization needs a monitoring system that gives a crystal-clear view of how your teams utilize the Kubernetes cluster.
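As one possible sketch, if the Prometheus Operator is running in your cluster, a ServiceMonitor like the one below tells Prometheus to scrape metrics from an application's service; the names, labels, and port shown here are hypothetical:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-monitor        # hypothetical name
  labels:
    release: prometheus       # must match the Operator's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: my-app             # scrape services carrying this label
  endpoints:
  - port: metrics             # named service port that exposes /metrics
    interval: 30s

Any monitoring stack works; the point is to have per-team and per-namespace visibility into requested versus actually used resources.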

5.) Not Adding Default Memory and CPU Limits to Namespaces

Even before you start running your own applications, you should know how many resources Kubernetes itself consumes; on small VMs, the kube-system components alone can eat roughly 70% of a node, leaving little room for your workloads.

This happens because the various Kubernetes system components come with their own resource requirements, and you may find you don't have enough left when you deploy your applications to a production cluster. It is therefore advisable to use nodes with at least two CPUs; later you can tune things to run on fewer resources, but avoid doing so at the beginning.

Suppose, for example, someone writes an application that opens a database connection every second but never closes it, causing a memory leak. If it is deployed to your production cluster with no limits set, it can crash a node. Preventing this is as simple as creating a LimitRange YAML and applying it to the namespace.

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
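Since the heading also mentions CPU, a similar LimitRange can set default CPU requests and limits for a namespace; the values below are a sketch, not a recommendation:

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
  - default:
      cpu: "1"               # default CPU limit for containers that set none
    defaultRequest:
      cpu: 500m              # default CPU request for containers that set none
    type: Container

Once applied to a namespace with kubectl apply, any container created there without its own requests and limits inherits these defaults, so a single leaky workload can no longer starve the whole node.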

Originally published at https://devtron.ai on May 12, 2020.

Written by Devtron

Devtron is an open source no-code solution for Kubernetes deployments. https://github.com/devtron-labs/devtron