Thursday, January 11, 2024

Deploying a Simple Web Application with Helm

Helm is a powerful package manager for Kubernetes. It helps you define, install, and upgrade even the most complex Kubernetes applications using charts. Here’s a comprehensive example to illustrate how you can use Helm for package management.

Example: Deploying a Simple Web Application with Helm

1. Install Helm

Before you begin, ensure you have Helm installed. You can install Helm using the following command:

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Verify the installation:


helm version

2. Create a Helm Chart

To create a new Helm chart for your application, use the following command:


helm create my-web-app

This command creates a new directory named my-web-app with a basic Helm chart structure:


my-web-app/
├── .helmignore
├── Chart.yaml
├── values.yaml
├── charts/
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    ├── _helpers.tpl
    └── tests/

3. Customize the Chart

Edit the Chart.yaml file to define your chart’s metadata:


apiVersion: v2
name: my-web-app
description: A Helm chart for deploying a simple web application
version: 0.1.0
appVersion: "1.0"

Update the values.yaml file to configure default values for your application:


replicaCount: 1

image:
  repository: nginx
  pullPolicy: IfNotPresent
  tag: "latest"

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  name: ""
  annotations: {}
  path: /
  hosts:
    - host: chart-example.local
      paths: []
  tls: []

Edit templates/deployment.yaml to define your application’s deployment:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-web-app.fullname" . }}
  labels:
    app: {{ include "my-web-app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ include "my-web-app.name" . }}
  template:
    metadata:
      labels:
        app: {{ include "my-web-app.name" . }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80

Edit templates/service.yaml to define your service:


apiVersion: v1
kind: Service
metadata:
  name: {{ include "my-web-app.fullname" . }}
  labels:
    app: {{ include "my-web-app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
  selector:
    app: {{ include "my-web-app.name" . }}

4. Package the Chart

To package your Helm chart into a .tgz file, use the following command:


helm package my-web-app

This creates a my-web-app-0.1.0.tgz file that you can distribute or upload to a Helm repository.

5. Install the Chart

To install your chart into a Kubernetes cluster, use the following command:


helm install my-web-app ./my-web-app-0.1.0.tgz
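
You can verify that the release was created and that its pods are running:


helm list
kubectl get pods,svc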

To specify custom values during installation, use:


helm install my-web-app ./my-web-app-0.1.0.tgz --values custom-values.yaml
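
For example, a minimal custom-values.yaml that overrides a few of the chart’s defaults might look like this (the specific values are illustrative):


replicaCount: 3
image:
  tag: "1.25"
service:
  type: NodePort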

6. Upgrade the Chart

To upgrade your release with new chart changes, use:


helm upgrade my-web-app ./my-web-app-0.1.0.tgz
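
You can also override individual values at upgrade time with --set, and roll back to an earlier revision if an upgrade misbehaves (helm history lists the revision numbers):


helm upgrade my-web-app ./my-web-app-0.1.0.tgz --set replicaCount=3
helm history my-web-app
helm rollback my-web-app 1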

7. Uninstall the Chart

To remove the deployed release, use:


helm uninstall my-web-app

8. Helm Repositories

You can also use Helm repositories to manage and share charts. To add a Helm repository:


helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

To search for charts in the repository:


helm search repo bitnami

To install a chart from a repository:


helm install my-web-app bitnami/nginx
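
It is also useful to inspect a chart’s configurable defaults before installing it:


helm show values bitnami/nginx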

Summary

  • Helm Charts are used to define, install, and upgrade Kubernetes applications.
  • Helm allows you to package, configure, and deploy applications with a single command.
  • Charts consist of templates and values to customize the Kubernetes manifests for your application.

By following these steps, you can effectively manage Kubernetes applications using Helm, making it easier to deploy, update, and maintain complex applications in a Kubernetes environment.

Thursday, January 4, 2024

Docker concepts and commands

Here’s a comprehensive guide to Docker concepts and commands, complete with explanations to help you understand each one.

1. Basic Docker Commands

1.1. Check Docker Version


docker --version

Explanation: This command displays the installed Docker version. It's useful for verifying that Docker is installed and checking its version.

1.2. List Running Containers


docker ps

Explanation: Lists all currently running containers. By default, it shows the container ID, image, command, creation time, status, ports, and names.

1.3. List All Containers (including stopped ones)


docker ps -a

Explanation: Lists all containers, both running and stopped. This helps in managing and inspecting containers that are not currently active.

1.4. Pull an Image from Docker Hub


docker pull nginx

Explanation: Downloads the nginx image (the latest tag by default) from Docker Hub, the default image registry. If the image already exists locally, Docker downloads only the layers that have changed.

1.5. Run a Container


docker run -d -p 80:80 --name webserver nginx

Explanation:

  • -d: Runs the container in detached mode (in the background).
  • -p 80:80: Maps port 80 on the host to port 80 in the container.
  • --name webserver: Assigns the name "webserver" to the container.
  • nginx: Specifies the image to use.

1.6. Stop a Container


docker stop webserver

Explanation: Stops the running container named "webserver." It sends a SIGTERM signal, followed by SIGKILL after a grace period if the container doesn’t stop.

1.7. Remove a Container


docker rm webserver

Explanation: Removes the stopped container named "webserver." The container must be stopped before it can be removed.

1.8. Remove an Image


docker rmi nginx

Explanation: Deletes the nginx image from your local Docker repository. If any containers are using this image, Docker will prevent its removal unless forced.

2. Dockerfile Basics

A Dockerfile is a script used to automate the building of Docker images.

2.1. Simple Dockerfile

Create a file named Dockerfile with the following content:


# Use an official Python runtime as a parent image
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]

Explanation:

  • FROM python:3.9-slim: Sets the base image to Python 3.9 slim version.
  • WORKDIR /app: Sets the working directory inside the container.
  • COPY . /app: Copies files from the current directory to the container’s /app directory.
  • RUN pip install --no-cache-dir -r requirements.txt: Installs Python dependencies listed in requirements.txt.
  • EXPOSE 80: Documents that the container listens on port 80.
  • ENV NAME World: Sets an environment variable NAME with value World.
  • CMD ["python", "app.py"]: Specifies the default command to run when the container starts.
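
For the build in the next step to succeed, the build context must also contain the app.py and requirements.txt that this Dockerfile references. A minimal example (a hypothetical Flask app; any application listening on port 80 would do) could be:


# app.py
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Greet using the NAME environment variable set in the Dockerfile
    return f"Hello, {os.environ.get('NAME', 'World')}!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)

Here requirements.txt would contain a single line: flask.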

2.2. Build the Docker Image


docker build -t my-python-app .

Explanation: Builds an image from the Dockerfile in the current directory (.) and tags it as my-python-app.

2.3. Run a Container from the Image


docker run -p 4000:80 my-python-app

Explanation: Runs a container from the my-python-app image, mapping port 4000 on the host to port 80 in the container.
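
With the container running, you can confirm the port mapping from the host (assuming the app responds on the root path):


curl http://localhost:4000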

3. Docker Compose

Docker Compose is a tool for defining and running multi-container Docker applications.

3.1. Basic docker-compose.yml File

Create a file named docker-compose.yml with the following content:


version: '3'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  redis:
    image: redis

Explanation:

  • version: '3': Specifies the version of Docker Compose syntax.
  • services: Defines the services (containers) to run.
    • web: A service using the nginx image, exposing port 8080.
    • redis: A service using the redis image.

3.2. Start Services with Docker Compose


docker-compose up

Explanation: Starts the services defined in docker-compose.yml. If the images are not available locally, Docker Compose will pull them.
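
To start the services in the background, add the -d (detached) flag:


docker-compose up -d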

3.3. Stop and Remove Containers with Docker Compose


docker-compose down

Explanation: Stops and removes the containers defined in docker-compose.yml, along with the networks it created. Add the -v flag to also remove named volumes.

4. Docker Networking

Docker networking allows containers to communicate with each other and with the outside world.

4.1. Create a Network


docker network create my-network

Explanation: Creates a new network named my-network. Containers connected to this network can communicate with each other.

4.2. Run Containers on a Custom Network


docker run -d --name db --network my-network mongo
docker run -d --name app --network my-network my-python-app

Explanation: Runs mongo and my-python-app containers on the my-network network, allowing them to communicate directly.
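
Containers on a user-defined network can reach each other by name through Docker’s built-in DNS. For example, since the app container here is Python-based, you can resolve the db container’s address from inside it:


docker exec app python -c "import socket; print(socket.gethostbyname('db'))"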

4.3. Inspect a Network


docker network inspect my-network

Explanation: Shows detailed information about the my-network network, including connected containers and configuration.

5. Docker Volumes

Volumes are used to persist data across container restarts and to share data between containers.

5.1. Create a Volume


docker volume create my-volume

Explanation: Creates a new Docker volume named my-volume. Volumes are stored in a part of the host filesystem managed by Docker.

5.2. Run a Container with a Volume


docker run -d -v my-volume:/data --name my-container nginx

Explanation: Runs a container with the volume my-volume mounted to /data in the container. This allows data to persist and be shared.
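
To see the persistence in action, write a file through one container, remove the container, and read the file back from a fresh one:


docker exec my-container sh -c 'echo hello > /data/test.txt'
docker rm -f my-container
docker run --rm -v my-volume:/data nginx cat /data/test.txt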

5.3. List Volumes


docker volume ls

Explanation: Lists all Docker volumes on the system.

5.4. Inspect a Volume


docker volume inspect my-volume

Explanation: Provides detailed information about the my-volume volume, including its mount point and usage.

5.5. Remove a Volume


docker volume rm my-volume

Explanation: Deletes the my-volume volume. It can only be removed if no containers are using it.

6. Advanced Docker Commands

6.1. Build an Image with Build Arguments


docker build --build-arg MY_ARG=value -t my-image .

Explanation: Passes build arguments to the Dockerfile. You can use MY_ARG in the Dockerfile with ARG directive.
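
For this to work, the Dockerfile must declare the argument with the ARG directive before it is used. A minimal excerpt (the names are illustrative):


# Declare a build argument with a default value
ARG MY_ARG=default
# Bake it into the image environment so it is visible at runtime
ENV MY_ARG=${MY_ARG}
RUN echo "Building with MY_ARG=${MY_ARG}"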

6.2. Tag an Image


docker tag my-image my-repo/my-image:latest

Explanation: Tags an image (my-image) with a new name and tag (my-repo/my-image:latest). This helps in organizing and managing images.

6.3. Push an Image to Docker Hub


docker push my-repo/my-image:latest

Explanation: Uploads the tagged image to Docker Hub or another Docker registry.

6.4. Pull an Image from a Private Repository


docker login
docker pull my-repo/my-image:latest

Explanation: Logs into a Docker registry and pulls an image from it. Authentication is required for private repositories.

6.5. Create and Manage Docker Swarm


docker swarm init
docker service create --name my-service -p 80:80 nginx
docker service ls
docker service ps my-service
docker service rm my-service
docker swarm leave --force

Explanation:

  • docker swarm init: Initializes a Docker Swarm cluster.
  • docker service create --name my-service -p 80:80 nginx: Creates a new service named my-service using the nginx image.
  • docker service ls: Lists all services in the swarm.
  • docker service ps my-service: Lists tasks (containers) of the specified service.
  • docker service rm my-service: Removes the specified service.
  • docker swarm leave --force: Forces the current node to leave the swarm.
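
Once a service is running, you can also scale it to more replicas:


docker service scale my-service=3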

7. Debugging and Logs

7.1. View Container Logs


docker logs my-container

Explanation: Displays the logs of the specified container, useful for debugging issues.
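
To stream new log output continuously, add the -f (follow) flag:


docker logs -f my-container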

7.2. Attach to a Running Container


docker attach my-container

Explanation: Attaches your terminal to the running container’s process, allowing you to interact with it directly.

7.3. Exec into a Running Container


docker exec -it my-container /bin/bash

Explanation: Opens an interactive terminal session (-it) inside the container, allowing you to run commands directly.
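
If the image does not include bash (many minimal images do not), use /bin/sh instead:


docker exec -it my-container /bin/sh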

This guide covers a broad range of Docker commands and concepts, giving you a solid foundation to work with Docker in various scenarios. 

About Prometheus

GitHub link: https://github.com/Naveenjayachandran/Kubernetes_Prometheus

What is Prometheus?

Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Since its inception in 2012, many companies and organizations have adopted Prometheus, and the project has a very active developer and user community. It is now a standalone open source project and maintained independently of any company. To emphasize this, and to clarify the project's governance structure, Prometheus joined the Cloud Native Computing Foundation in 2016 as the second hosted project, after Kubernetes.

Prometheus collects and stores its metrics as time series data, i.e. metrics information is stored with the timestamp at which it was recorded, alongside optional key-value pairs called labels.

For more elaborate overviews of Prometheus, see the resources linked from the media section.

Features

Prometheus's main features are:

  • a multi-dimensional data model with time series data identified by metric name and key/value pairs
  • PromQL, a flexible query language to leverage this dimensionality (see the example query after this list)
  • no reliance on distributed storage; single server nodes are autonomous
  • time series collection happens via a pull model over HTTP
  • pushing time series is supported via an intermediary gateway
  • targets are discovered via service discovery or static configuration
  • multiple modes of graphing and dashboarding support
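
As a small illustration of PromQL, the following query computes the per-second rate of HTTP requests over the last five minutes, filtered by a label (the metric and label names are hypothetical and depend on how your targets are instrumented):


rate(http_requests_total{job="api-server"}[5m])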

What are metrics?

In layperson terms, metrics are numerical measurements. The term time series refers to the recording of changes over time. What users want to measure differs from application to application. For a web server, it could be request times; for a database, it could be the number of active connections or active queries, and so on.

Metrics play an important role in understanding why your application is working in a certain way. Let's assume you are running a web application and discover that it is slow. To learn what is happening with your application, you will need some information. For example, when the number of requests is high, the application may become slow. If you have the request count metric, you can determine the cause and increase the number of servers to handle the load.

Components

The Prometheus ecosystem consists of multiple components, many of which are optional:

  • the main Prometheus server, which scrapes and stores time series data
  • client libraries for instrumenting application code
  • a push gateway for supporting short-lived jobs
  • special-purpose exporters for services like HAProxy, StatsD, Graphite, etc.
  • an alertmanager to handle alerts
  • various support tools

Most Prometheus components are written in Go, making them easy to build and deploy as static binaries.

Architecture

This diagram illustrates the architecture of Prometheus and some of its ecosystem components:

[Diagram: Prometheus architecture]

Prometheus scrapes metrics from instrumented jobs, either directly or via an intermediary push gateway for short-lived jobs. It stores all scraped samples locally and runs rules over this data to either aggregate and record new time series from existing data or generate alerts. Grafana or other API consumers can be used to visualize the collected data.

When does it fit?

Prometheus works well for recording any purely numeric time series. It fits both machine-centric monitoring as well as monitoring of highly dynamic service-oriented architectures. In a world of microservices, its support for multi-dimensional data collection and querying is a particular strength.

Prometheus is designed for reliability, to be the system you go to during an outage to allow you to quickly diagnose problems. Each Prometheus server is standalone, not depending on network storage or other remote services. You can rely on it when other parts of your infrastructure are broken, and you do not need to set up extensive infrastructure to use it.

When does it not fit?

Prometheus values reliability. You can always view what statistics are available about your system, even under failure conditions. If you need 100% accuracy, such as for per-request billing, Prometheus is not a good choice as the collected data will likely not be detailed and complete enough. In such a case you would be best off using some other system to collect and analyze the data for billing, and Prometheus for the rest of your monitoring.