Kubernetes-1: Overview about Container Orchestration
Outline
- Docker Image Vs Container
- Container Orchestration
- Kubernetes Overview
- Kubernetes Nodes Requirements
- Kubernetes Cluster
- Life Cycle of the K8s Orchestrator
Docker Image Vs Container
An image is a static snapshot of an environment, while a container runs the software.
A Docker image is a lightweight, standalone, executable package that includes everything needed to run a piece of software: the code, runtime, libraries, and dependencies.
A Docker container is a running instance of a Docker image. It is created from an image and includes the application code as well as the environment needed to run it.
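The difference is easy to see with the Docker CLI. A minimal sketch, assuming Docker is installed and the current directory contains a Dockerfile (the image tag myapp:1.0 and the container names are placeholders):

```shell
# Build an image: a static, layered snapshot of the app and its dependencies
docker build -t myapp:1.0 .

# Run containers: live instances created from that one image
docker run -d --name myapp-a myapp:1.0
docker run -d --name myapp-b myapp:1.0

# Images and containers are listed separately
docker images        # shows myapp:1.0 (the snapshot)
docker ps            # shows myapp-a and myapp-b (the running instances)
```

One image can back any number of containers; stopping or deleting a container does not affect the image it was created from.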
Container Orchestration
Container orchestration automates the deployment, management, scaling, and networking of containers. It helps enterprises deploy and manage multiple containers across many hosts.
Purposes
A container orchestrator automatically deploys and manages containerized apps:
- It responds dynamically to changes in the environment.
- It ensures all deployed container instances are updated when a new version of a service is released.
- It deploys the same application consistently across different environments.
In short, container orchestration automates and manages tasks such as provisioning, deployment, scaling, networking, and load balancing of containers.
How Does Container Orchestration Work?
The configuration of an application is described in a YAML or JSON file. The file specifies where to find the container images, how to establish the network, and where to store logs. When a new container is deployed, the orchestration tool automatically schedules the deployment onto a cluster, then manages the container's life cycle based on the specifications in the config file.
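As an illustration, a minimal declarative config of this kind might look as follows (a hypothetical Pod manifest; the name, image, and port are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx:1.25      # where to find the container image
    ports:
    - containerPort: 80    # how the container is exposed on the network
```

The orchestrator reads this file, schedules the Pod onto a suitable node, and then keeps the running state in line with what the file declares.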
Container Orchestration Tools
Container orchestration tools provide a framework for managing containers and microservices architectures at scale. They simplify container management and allow multiple containers to be managed as one entity. Popular tools for container lifecycle management include Kubernetes, Docker Swarm, and Apache Mesos.
Kubernetes Overview
Kubernetes is a powerful open-source orchestration platform designed to manage containerized applications. It aims to provide better ways of managing related distributed components and services across varied infrastructure. It is also known as K8s or "kube". Kubernetes was originally developed by Google and later donated to the Cloud Native Computing Foundation (CNCF). Major cloud providers offer it as a managed service, such as Elastic Kubernetes Service (EKS) on AWS and Google Kubernetes Engine (GKE) on GCP.
Features of Kubernetes
Benefits of Kubernetes
Tools for Kubernetes
Minikube (https://minikube.sigs.k8s.io/): Minikube is a tool for running a single-node Kubernetes cluster locally, useful for learning how to deploy and manage applications in Kubernetes. It can also be used to develop Kubernetes applications and test them before deploying to production.
KIND (https://kind.sigs.k8s.io/): kind is a tool for running local Kubernetes clusters using Docker container "nodes". kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
Lens (https://k8slens.dev/): Lens is an Integrated Development Environment (IDE) that enables users to connect and manage multiple Kubernetes clusters from Mac, Windows, and Linux platforms.
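For a quick start with the local-cluster tools above, the basic commands look like this (a sketch, assuming the binaries are installed; the cluster name dev is a placeholder):

```shell
# Start a single-node local cluster with Minikube
minikube start

# Or create a local cluster with kind (nodes run as Docker containers)
kind create cluster --name dev

# Either way, verify the cluster is reachable
kubectl get nodes
```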
Kubernetes Nodes Requirements
The number of master and worker nodes you should set up for a container orchestration platform, like Kubernetes, can vary based on your specific requirements and use cases. Here's a general guideline for both development/test and production environments:
Development/Test Environment
Master Nodes (1 Node): A single master node is typically sufficient for development or testing. It simplifies management and is cost-effective.
Worker Nodes (1-3 Nodes): You can start with one worker node for small-scale testing. If your application needs more resources or to test scaling, consider adding up to three worker nodes.
Production Environment
Master Nodes (3 Nodes): For production, it's common to have at least three master nodes to ensure high availability. This setup helps avoid a single point of failure and allows for leader election among masters.
Worker Nodes (3-5+ Nodes): The number of worker nodes will depend on your application's resource requirements, expected load, and redundancy needs. Generally, start with three worker nodes and scale up as necessary. More complex applications may require more nodes to handle traffic and provide failover capabilities.
Minimum Configuration for Node
CPU: 2 vCPUs
Memory: 2 GB RAM
Storage: 20 GB (or similar)
OS: minimal installation (CLI only, no GUI)
Why 3 Nodes?
Quorum follows the (n/2) + 1 rule: the minimum number of nodes that must agree on a decision for the system to continue functioning correctly. With 3 master nodes the quorum is 2, so the cluster tolerates the failure of one node; with 2 nodes the quorum is also 2, so a single failure halts the cluster.
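The quorum arithmetic can be checked with a few lines (a small sketch; the function names are ours):

```python
def quorum(n: int) -> int:
    """Minimum number of nodes that must agree: floor(n/2) + 1."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """How many nodes can fail while a quorum still exists."""
    return n - quorum(n)

for n in range(1, 6):
    print(f"{n} nodes: quorum={quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

Three nodes is the smallest cluster that tolerates a failure (quorum 2 of 3); two nodes tolerate none, and four nodes tolerate no more than three do, which is why odd-sized control planes are preferred.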
Kubernetes Cluster
Master Node
Master node components:
- Kube-apiserver: The API server is the central management component that exposes the Kubernetes API. It handles requests from users, clients, and other components, ensuring communication and consistent state management. DevOps engineers communicate with the master node via "kubectl", which reads its connection settings from a kubeconfig file. You can run several instances of kube-apiserver and balance traffic between them.
- Kube-scheduler: It watches for newly created Pods with no assigned node and selects a node for each to run on, based on resource requirements, constraints, and other factors such as affinity rules.
- Etcd: A consistent and highly available distributed key-value store used as the Kubernetes backing store for all cluster data, including configuration data and the current state of the cluster.
- Kube-controller-manager: It runs the controller processes. Each controller is logically a separate process, such as the node controller, replication controller, and endpoints controller, compiled into a single binary for simplicity.
Worker Node
Worker node core components:
Kubelet: The Kubelet is an agent that ensures that containers are running in a Pod. It communicates with the Kubernetes API server to receive instructions and report the status of the containers and Pods it manages. The Kubelet monitors the health of the containers and restarts them if necessary.
Kube-proxy: Kube-proxy is responsible for implementing a network proxy and acts as a load balancer in a Kubernetes cluster. It manages network traffic by routing requests to the appropriate containers in a Pod based on incoming port and IP details. Kube-proxy facilitates communication between services and ensures that traffic is directed correctly.
Container Runtime: The container runtime is the software responsible for running containers on a Kubernetes cluster. It fetches container images and starts and stops containers. Popular container runtimes include Docker and containerd.
Pod: A Pod is the smallest and simplest unit in the Kubernetes object model. It represents a single instance of an application and can host one or more closely related containers. Pods are the components of the application workload that run on the worker node and share the same network namespace, allowing them to communicate easily.
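To make the shared network namespace concrete, here is a hypothetical two-container Pod (the image names and command are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app
    image: nginx:1.25          # main application container
  - name: sidecar
    image: busybox:1.36        # helper container in the same Pod
    command: ["sh", "-c", "wget -qO- http://localhost:80; sleep 3600"]
    # localhost works here because both containers share the Pod's network namespace
```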
Life Cycle of the K8s Orchestrator
1. Command Execution by DevOps Engineer
The DevOps engineer runs the command "kubectl run pod_name --image=image_name", which communicates with the Kubernetes API server.
2. Kube-apiserver (Master Node)
The Kube-apiserver receives the request and validates it. It stores the desired state (the new Pod specification) in etcd.
3. Kube-controller-manager (Master Node)
The Kube-controller-manager detects the new desired state and manages the creation of the Pod. It ensures that the state in etcd is reflected in the cluster.
4. Kube-scheduler (Master Node)
The Kube-scheduler finds an appropriate worker node for the new Pod based on resource availability and constraints. It updates the Pod specification in etcd with the assigned node.
5. Kubelet (Worker Node)
The Kubelet on the assigned worker node learns of the new Pod specification via the API server. It pulls the specified container image from a container registry using the Container Runtime.
6. Container Runtime (Worker Node)
The Container Runtime starts the container based on the image and configuration provided by the Kubelet. It ensures that the container is running as part of the Pod.
7. Kube-proxy (Worker Node)
Kube-proxy sets up the necessary network rules to enable communication with the Pod. It manages load balancing and traffic routing for the Pod's services.
8. Pod Running
The Pod is now running on the worker node, and the Kubelet continues to monitor its health. The Pod is ready to accept traffic based on the configurations set.
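The flow above can be observed from the client side with a few kubectl commands (a sketch, assuming access to a running cluster; the Pod name web and the image are placeholders):

```shell
# Step 1: submit the desired state to the kube-apiserver
kubectl run web --image=nginx:1.25

# Step 4: see which worker node the scheduler assigned
kubectl get pod web -o wide

# Step 8: watch the Pod move through Pending -> ContainerCreating -> Running
kubectl get pod web --watch
```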