Thursday, October 31, 2024

Ubuntu Server as a VPN Gateway

 To connect multiple Ubuntu devices (clients) to one central Ubuntu server and share the connection securely over a VPN, here’s a detailed, step-by-step guide.


Step 1: Set Up the Ubuntu Server as a VPN Gateway

This server will act as the central point, allowing other devices to connect to it.

1.1 Install OpenVPN on the Server

  1. Log into your central Ubuntu server.
  2. Update package lists:

    sudo apt update
  3. Install OpenVPN:

    sudo apt install openvpn -y

1.2 Set Up Easy-RSA for Key and Certificate Management

OpenVPN requires certificates and keys for secure connections.

  1. Install easy-rsa to help with certificate creation:

    sudo apt install easy-rsa -y
  2. Create a new directory for the PKI (Public Key Infrastructure):

    make-cadir ~/openvpn-ca
    cd ~/openvpn-ca
  3. Initialize the PKI:

    ./easyrsa init-pki
  4. Build the CA (Certificate Authority) and follow the prompts:

    ./easyrsa build-ca
  5. Generate the server certificate and key:

    ./easyrsa gen-req server nopass
  6. Sign the server certificate:

    ./easyrsa sign-req server server
  7. Generate Diffie-Hellman parameters:

    ./easyrsa gen-dh
  8. Copy the keys and certificates to OpenVPN’s directory:

    sudo cp pki/ca.crt pki/private/server.key pki/issued/server.crt /etc/openvpn/
    sudo cp pki/dh.pem /etc/openvpn/dh2048.pem

1.3 Configure the OpenVPN Server

  1. Create a configuration file for the server:

    sudo nano /etc/openvpn/server.conf
  2. Paste the following configuration into server.conf:

    port 1194
    proto udp
    dev tun
    ca ca.crt
    cert server.crt
    key server.key
    dh dh2048.pem
    server 10.8.0.0 255.255.255.0
    ifconfig-pool-persist ipp.txt
    push "redirect-gateway def1 bypass-dhcp"
    push "dhcp-option DNS 8.8.8.8"
    keepalive 10 120
    cipher AES-256-CBC
    user nobody
    group nogroup
    persist-key
    persist-tun
    status openvpn-status.log
    verb 3

1.4 Enable IP Forwarding for Internet Sharing

  1. Open /etc/sysctl.conf:

    sudo nano /etc/sysctl.conf
  2. Find or add the line below to enable IP forwarding:

    net.ipv4.ip_forward = 1
  3. Apply the change immediately:

    sudo sysctl -p
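You can confirm the setting took effect by reading the kernel flag directly (a quick sanity check; this path is standard on Linux):

```shell
# prints 1 when IP forwarding is enabled, 0 when it is not
cat /proc/sys/net/ipv4/ip_forward
```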

1.5 Set Up Firewall Rules for OpenVPN

  1. Allow OpenVPN traffic through the firewall:

    sudo ufw allow 1194/udp
  2. Enable NAT (Network Address Translation) to allow VPN clients to reach the internet through the server:

    sudo iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
    Replace eth0 with your server’s network interface if it differs.
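Note that this MASQUERADE rule is lost on reboot. Since ufw is already in use here, one common way to persist it (a sketch, assuming eth0 and a default ufw layout) is to add a *nat table at the top of /etc/ufw/before.rules, above the existing *filter section:

```
# /etc/ufw/before.rules  (insert above the *filter section)
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
COMMIT
```

You will also need DEFAULT_FORWARD_POLICY="ACCEPT" in /etc/default/ufw, then apply the change with sudo ufw reload.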

1.6 Start and Enable the OpenVPN Service

  1. Start the OpenVPN service:

    sudo systemctl start openvpn@server
  2. Enable it to start at boot:

    sudo systemctl enable openvpn@server

Step 2: Set Up VPN Clients (Each of the 10 Ubuntu Devices)

Each client needs its own certificate and configuration to connect securely to the VPN server.

2.1 Create a Certificate for Each Client

On the server:

  1. Go back to the ~/openvpn-ca directory:

    cd ~/openvpn-ca
  2. Generate a certificate and key for each client (e.g., client1, client2, etc.):

    ./easyrsa gen-req client1 nopass
    ./easyrsa sign-req client client1
  3. Copy the client’s certificates and keys to a separate directory to transfer them:

    mkdir -p ~/client1
    cp pki/ca.crt pki/issued/client1.crt pki/private/client1.key ~/client1
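With 10 clients, typing these commands per client gets tedious. One approach (a sketch; review the output before running it from ~/openvpn-ca on the server) is to generate the command list with a loop:

```shell
# print the easy-rsa and copy commands for client1..client10;
# paste the output (or pipe it through sh) once it looks right
for i in $(seq 1 10); do
  name="client$i"
  echo "./easyrsa gen-req $name nopass"
  echo "./easyrsa sign-req client $name"
  echo "mkdir -p ~/$name"
  echo "cp pki/ca.crt pki/issued/$name.crt pki/private/$name.key ~/$name/"
done
```

Each sign-req will still prompt for confirmation interactively unless easy-rsa is configured for batch mode.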

2.2 Create Client Configuration File

  1. On the server, create a client configuration file for each client (e.g., client1.ovpn):

    nano ~/client1/client1.ovpn
  2. Add this configuration, replacing your_server_ip with the server's public IP address:

    client
    dev tun
    proto udp
    remote your_server_ip 1194
    resolv-retry infinite
    nobind
    persist-key
    persist-tun
    remote-cert-tls server
    cipher AES-256-CBC
    verb 3
    <ca>
    # Paste contents of ca.crt here
    </ca>
    <cert>
    # Paste contents of client1.crt here
    </cert>
    <key>
    # Paste contents of client1.key here
    </key>

2.3 Install OpenVPN on Each Client Device

On each Ubuntu client:

  1. Install OpenVPN:

    sudo apt update
    sudo apt install openvpn -y
  2. Copy the client1.ovpn configuration file from the server to each client.

2.4 Connect Each Client to the VPN

On each client device, use the configuration file to connect:


sudo openvpn --config /path/to/client1.ovpn

To run this automatically on boot, copy the configuration to /etc/openvpn/client/ as client.conf and enable the OpenVPN service:


sudo cp /path/to/client1.ovpn /etc/openvpn/client/client.conf
sudo systemctl enable openvpn-client@client

Step 3: Testing and Sharing Data Across Clients

  1. Verify VPN Connectivity: From each client, ping the VPN server to confirm the tunnel is up.

    ping 10.8.0.1
  2. Enable File Sharing (Optional): Use SSH/SCP or set up an NFS shared folder on the VPN server to allow clients to access shared data.

By following these steps, you will connect 10 Ubuntu devices through a VPN to a central Ubuntu server, securely sharing resources and internet access across the network.

Monday, October 14, 2024

Git Commands

 Initialize a repository:

# initialize an existing directory as a Git repository
$ git init

# retrieve an entire repository from a hosted location via URL
$ git clone [url]

 

Stage your files:

# Show modified files in working directory, staged for your next commit
$ git status


# Add a file as it looks now to your next commit (stage)
$ git add [file path]


# Stage ALL modified files at once
$ git add .


# Unstage a file while retaining the changes in working directory
$ git reset [file]


# Difference of what is changed but not staged
$ git diff


# Difference of what is staged but not yet committed
$ git diff --staged


# Commit your staged content as a new commit snapshot
$ git commit -m "descriptive message"


# Stage all modified tracked files and commit them in one step
$ git commit -am "descriptive message"

 

Manage branch & merge:

# list your branches. a * will appear next to the currently active branch
$ git branch


# create a new branch at the current commit
$ git branch [branch-name]


# switch to another branch and check it out into your working directory
$ git checkout [branch-name]


# create a new branch and switch to it in one command
$ git checkout -b [branch-name]


# merge the specified branch’s history into the current one
$ git merge [branch]


# show all commits in the current branch’s history
$ git log


# rename the current branch
$ git branch -m [new-branch-name]


# Delete branch
$ git branch -d [branch name]
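The branch and merge commands above can be strung together into a complete session. The sketch below builds a throwaway repository (paths, branch names, and messages are illustrative):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git checkout -qb main                 # fix the branch name regardless of git defaults
git config user.email demo@example.com
git config user.name "Demo User"
echo "v1" > app.txt
git add app.txt
git commit -qm "initial commit"

git checkout -qb feature/greeting     # create a branch and switch to it in one step
echo "hello" >> app.txt
git commit -qam "add greeting"

git checkout -q main
git merge -q feature/greeting         # fast-forward merge of the feature branch
git branch -d feature/greeting        # delete the merged branch
git log --oneline                     # both commits are now on main
```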

 

Inspect branch & compare

# Show the commit history for the currently active branch
$ git log


# Show the commits on branchA that are not on branchB
$ git log branchB..branchA


# Show the commits that changed file, even across renames
$ git log --follow [file]


# Show the diff of what is in branchA that is not in branchB
$ git diff branchB...branchA


# Show any object in Git in human-readable format
$ git show [SHA]
$ git show [commit]

# create a tag pointing at the specified commit
$ git tag [tag-name] [commitID]

 

Share & Update:

# add a git URL as an alias
$ git remote add [alias] [url]


# fetch down all the branches from that Git remote
$ git fetch [alias]


# merge a remote branch into your current branch to bring it up to date
$ git merge [alias]/[branch]


# Transmit local branch commits to the remote repository branch
$ git push [alias] [branch]


# Push all local branches to the remote repository
$ git push --all [alias]


# fetch and merge any commits from the tracking remote branch
$ git pull

 

Tracking path changes

# delete the file from project and stage the removal for commit
$ git rm [file]


# change an existing file path and stage the move
$ git mv [existing-path] [new-path]


# show all commit logs with indication of any paths that moved
$ git log --stat -M

 

Rewrite history

# apply any commits of current branch ahead of specified one
$ git rebase [branch]


# clear staging area, rewrite working tree from specified commit
$ git reset --hard [commit]

 

Temporary Commits

# Save modified and staged changes
$ git stash


# list stack-order of stashed file changes
$ git stash list


# apply the changes from the top of the stash stack and remove them from the stash
$ git stash pop


# discard the changes from top of stash stack
$ git stash drop
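A minimal round-trip with these stash commands, in a throwaway repository (names are illustrative):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo User"
echo "base" > notes.txt
git add notes.txt
git commit -qm "initial commit"

echo "wip" >> notes.txt               # an uncommitted change
git stash                             # working tree is clean again
git diff --quiet                      # no unstaged changes while stashed
git stash pop                         # the change comes back
grep "wip" notes.txt
```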

 

 Ignoring patterns

# system-wide ignore pattern for all local repositories
$ git config --global core.excludesfile [file]

 

Tuesday, October 8, 2024

About Azure Boards

 What is Azure Boards:  

Azure Boards is a service within Azure DevOps that helps teams plan, track, and manage software development projects. Key features include: 

  • Work Item Tracking: Manage user stories, tasks, and bugs. 

  • Agile Tools: Supports Scrum and Kanban methodologies. 

  • Boards and Backlogs: Visualize and manage tasks using Kanban boards. 

  • Queries and Reporting: Create custom queries and track project progress. 

  • CI/CD Integration: Links with Azure Repos and Pipelines for seamless workflows. 

  • Customization: Tailor fields, workflows, and processes to fit team needs. 

  • Collaboration: Enhance team communication with comments and notifications. 

Overall, Azure Boards improves project management and collaboration in software development. 

Azure Boards hubs:  

Azure Boards features several hubs that provide specific functionalities to help teams manage their projects effectively. Here’s a brief overview of each hub: 

  • Work Items: Central hub for creating, viewing, and managing work items like user stories, tasks, bugs, and features. It allows users to track the status and details of each item. 

  • Boards: Visual hub that displays work items in a Kanban board format. Teams can move items across columns to reflect their current status and progress. 

  • Backlogs: A prioritized list of work items organized by iteration or area. It helps teams manage their product backlog and plan sprints effectively. 

  • Sprints: Focused on managing and tracking work during specific time frames. Teams can view sprint progress, burndown charts, and allocate tasks for upcoming sprints. 

  • Queries: A hub for creating and managing custom queries to filter and view work items based on specific criteria. It helps teams track work and generate reports. 

  • Dashboards: Provides customizable dashboards that display key metrics and project insights through various widgets, helping teams monitor progress and performance at a glance. 

  • Delivery Plans: Visualize and manage work items across teams and iterations, providing a timeline view of project delivery. 

These hubs collectively enhance project visibility, collaboration, and management, allowing teams to streamline their software development processes. 

Wednesday, June 5, 2024

Security in DevOps

Security in DevOps, often referred to as DevSecOps, integrates security practices into the DevOps process, ensuring that security is built into every phase of the software development lifecycle (SDLC). Here’s a breakdown of key security practices in DevOps:

1. Shift-Left Security

  • What it is: Security is integrated early in the development process (in the design and coding phases).
  • Practices:
    • Perform threat modeling and risk assessments at the start.
    • Implement secure coding standards.
    • Use static application security testing (SAST) to scan code for vulnerabilities.

2. Continuous Security Testing

  • What it is: Automated security tests run continuously throughout the CI/CD pipeline.
  • Practices:
    • Integrate tools for dynamic application security testing (DAST) and interactive application security testing (IAST) to catch vulnerabilities during and after code deployment.
    • Run security checks for every pull request and automated builds.

3. Automation and Infrastructure as Code (IaC) Security

  • What it is: Security configurations are enforced through automated scripts and templates.
  • Practices:
    • Use tools like Terraform, CloudFormation, or Ansible to define secure configurations for infrastructure.
    • Use security validation tools (e.g., TFLint, Checkov) to verify security compliance in infrastructure code.
    • Automate patch management for servers and containers.

4. Container and Kubernetes Security

  • What it is: Secure the containerized applications and Kubernetes environments.
  • Practices:
    • Use vulnerability scanning tools (e.g., Aqua, Clair) for Docker images.
    • Ensure that containers run with the least privilege principle.
    • Secure Kubernetes clusters by applying role-based access control (RBAC), network policies, and secret management.

5. Security Monitoring and Logging

  • What it is: Continuous monitoring and analysis of system logs to detect security anomalies.
  • Practices:
    • Implement log monitoring tools (e.g., Splunk, ELK Stack, Datadog) for real-time security alerts.
    • Set up centralized logging for all services, containers, and cloud infrastructure.
    • Use security information and event management (SIEM) tools for threat detection and response.

6. Secrets Management

  • What it is: Securely manage sensitive data such as API keys, passwords, and encryption keys.
  • Practices:
    • Use secret management tools (e.g., HashiCorp Vault, AWS Secrets Manager, Azure Key Vault) to securely store and retrieve secrets.
    • Avoid hardcoding secrets in code or configuration files.
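As a small illustration of the "no hardcoded secrets" rule, a script can insist that its secret arrive via the environment and abort otherwise (a sketch; API_KEY and the demo value are illustrative, and in real use the value is injected by a vault or CI secret store, never stored in the source file):

```shell
#!/bin/sh
# In production this is injected by the CI system or a vault agent;
# it is set inline here only so the example is self-contained.
export API_KEY="demo-value"

# ${VAR:?message} aborts with the message if VAR is unset or empty,
# so a missing secret fails fast instead of sending an empty credential
: "${API_KEY:?API_KEY must be set in the environment}"

echo "secret is available to the process without appearing in the codebase"
```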

7. Secure Software Dependencies

  • What it is: Ensure that third-party libraries and dependencies used in the application are secure.
  • Practices:
    • Use tools like OWASP Dependency-Check or Snyk to scan and update vulnerable dependencies.
    • Regularly update libraries to the latest versions with known security patches.

8. Network Security

  • What it is: Secure network traffic and access control for DevOps environments.
  • Practices:
    • Implement firewalls, virtual private networks (VPNs), and private subnets in cloud environments.
    • Use zero-trust network architecture (ZTNA) principles to restrict access to resources based on identity.

9. Access Control and Identity Management

  • What it is: Manage access to systems and environments securely.
  • Practices:
    • Enforce multi-factor authentication (MFA) for all privileged users.
    • Implement role-based access control (RBAC) to limit user permissions.
    • Use identity management solutions (e.g., AWS IAM, Azure Active Directory, Okta) to manage user identities and permissions.

10. Compliance and Auditing

  • What it is: Ensure that the DevOps pipeline adheres to industry standards and regulations.
  • Practices:
    • Automate compliance checks (e.g., CIS Benchmark assessments) in the CI/CD pipeline.
    • Conduct regular audits and logging to ensure all actions and configurations are compliant with standards (e.g., GDPR, HIPAA, PCI-DSS).

Integrating these security practices makes security an integral part of DevOps without hindering agility and speed. Adopting DevSecOps not only streamlines the security process but also strengthens an organization's overall security posture across cloud and infrastructure operations.

Thursday, May 2, 2024

Kubernetes learning Approach

 

To learn Kubernetes effectively, you should focus on a structured approach that covers both foundational concepts and hands-on experience. Below is a breakdown of the key areas and topics to focus on:

1. Basic Concepts of Containers and Orchestration

  • Containers: Understand Docker and containerization. Learn how containers are created, how images are built, and how they differ from traditional VMs.
  • Container Orchestration: Learn why orchestration is necessary and how Kubernetes solves problems like scalability, high availability, and automated management of containerized applications.

2. Kubernetes Architecture

  • Nodes and Clusters: Learn how Kubernetes clusters are organized into nodes (worker nodes and master nodes).
  • Control Plane: Understand the components of the control plane (API server, scheduler, etcd, controller manager).
  • Worker Node Components: Learn about kubelet, kube-proxy, and container runtime.

3. Core Kubernetes Components

  • Pods: The smallest deployable units in Kubernetes.
  • Services: Exposing your application to other services or external traffic (ClusterIP, NodePort, LoadBalancer).
  • Deployments: Handling application updates and scaling.
  • ReplicaSets: Ensuring the desired number of pod replicas are running.
  • Namespaces: Logical isolation of Kubernetes resources.
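As a concrete anchor for these core objects, a minimal manifest (a sketch; names, image, and namespace are illustrative) shows how a Deployment and a Service fit together:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: demo        # Namespaces give logical isolation
spec:
  replicas: 3            # the Deployment's ReplicaSet keeps 3 Pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: demo
spec:
  type: ClusterIP        # cluster-internal; NodePort/LoadBalancer expose externally
  selector:
    app: web             # matches the Pod labels set by the Deployment template
  ports:
    - port: 80
      targetPort: 80
```

Applying this file with kubectl apply -f creates both objects; the Service finds the Pods through the shared app: web label.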

4. Networking in Kubernetes

  • Cluster Networking: Understand how containers communicate inside the cluster using CNI (Container Network Interface).
  • Service Discovery: Learn how services use DNS to find each other.
  • Ingress: Exposing HTTP and HTTPS routes outside the cluster with an ingress controller.

5. Storage and Volumes

  • Persistent Volumes (PVs): Managing storage that exists beyond the lifecycle of pods.
  • Persistent Volume Claims (PVCs): Requesting storage resources dynamically.
  • Storage Classes: Different storage provisioning types and policies.

6. Managing Configurations and Secrets

  • ConfigMaps: Manage environment-specific configuration.
  • Secrets: Store sensitive information securely.

7. Scaling and Self-healing

  • Horizontal Pod Autoscaling (HPA): Automatically scale the number of pods based on CPU or custom metrics.
  • Vertical Pod Autoscaling (VPA): Automatically adjust the CPU and memory requests for containers.
  • Self-healing: How Kubernetes automatically restarts failed containers and replaces unresponsive nodes.

8. Kubernetes Security

  • RBAC (Role-Based Access Control): Fine-grained access control.
  • Service Accounts: Handling authentication within pods.
  • Network Policies: Control traffic between different pods.

9. Helm and Kubernetes Package Management

  • Learn Helm for managing Kubernetes applications with charts (preconfigured Kubernetes resources).
  • Understand how Helm simplifies the deployment, upgrade, and rollback of applications.

10. Monitoring and Logging

  • Monitoring: Tools like Prometheus for real-time monitoring of the cluster.
  • Logging: Tools like Fluentd or ELK Stack (Elasticsearch, Logstash, Kibana) for logging and aggregation.

11. Kubernetes Workflows and CI/CD

  • Learn how to integrate Kubernetes with CI/CD pipelines (using tools like Jenkins, GitLab, or ArgoCD).
  • Automated testing, deployment, and rollback strategies.

12. Kubernetes Operators and Custom Resource Definitions (CRDs)

  • Operators: Extend Kubernetes functionalities by automating complex tasks.
  • Custom Resource Definitions: Define custom APIs for Kubernetes to manage.

13. Hands-On Practice

  • Minikube: Set up a local Kubernetes cluster.
  • kubectl: Learn the CLI tool to interact with the cluster (get pods, services, deploy apps).
  • Cloud Providers: Experiment with managed Kubernetes services like Google Kubernetes Engine (GKE), Amazon EKS, or Azure AKS.

Learning Resources:

  • Official Kubernetes Documentation: Great for in-depth and up-to-date knowledge.
  • Kubernetes Tutorials: Websites like Katacoda, Kubernetes the Hard Way (by Kelsey Hightower), and Labs from cloud providers.
  • Books: "Kubernetes Up & Running" and "The Kubernetes Book".
  • Courses: Platforms like Coursera, Udemy, and Pluralsight offer Kubernetes courses.

By following these steps and building projects along the way, you’ll develop a solid understanding of Kubernetes.

Friday, March 8, 2024

Gtk-Message: 21:23:41.751: Not loading module

 Error message: The message you're seeing:

Gtk-Message: 21:23:41.751: Not loading module "atk-bridge": The functionality is provided by GTK natively. Please try to not load it.

FIX:- 

indicates that the atk-bridge module is no longer necessary for your version of GTK, as the functionality it provides is now built into GTK itself. This is more of an informational or warning message rather than an error, and your application should still run fine without any issues.

However, if you'd like to suppress this message or resolve it for a cleaner output, here are some approaches:

1. Ensure Dependencies Are Up-to-Date

Make sure you have the latest versions of GTK and its related packages:

 sudo apt update
sudo apt upgrade

You can also specifically update GTK and ATK packages (on Ubuntu/Debian):

sudo apt install --reinstall libgtk-3-0 at-spi2-core libatk-adaptor
 

2. Unset GTK Modules Environment Variable (Suppress Message)

The message might be triggered because the GTK_MODULES environment variable includes atk-bridge. You can suppress this by unsetting the variable.

Run the following command in your terminal before launching your application:

unset GTK_MODULES
 

To make this change permanent, you can add the command to your .bashrc or .bash_profile:

echo "unset GTK_MODULES" >> ~/.bashrc
source ~/.bashrc
 

3. Check for Old Configurations

Some applications or configurations may explicitly load unnecessary modules. Look for any GTK or atk-bridge settings that might be outdated in the following locations:

  • ~/.config/gtk-3.0/settings.ini
  • /etc/gtk-3.0/settings.ini

You may not find this file, but if you do, ensure there’s no manual loading of atk-bridge.

4. Install Accessibility Bridge (Optional)

If you still want to install the atk-bridge module (even though it's not necessary), you can do so with:

sudo apt install at-spi2-core
 

5. Suppress the Warning in Output (Advanced)

If you're running a script or an application that logs GTK messages and you want to suppress this specific message, you can redirect the output using grep or sed.

Example:

your-application 2>&1 | grep -v 'Not loading module "atk-bridge"'
 

These steps should help either resolve or suppress the atk-bridge message depending on your preference. If the message is just cosmetic and not affecting functionality, you can safely ignore it.

 

 

 

Friday, March 1, 2024

Install Prometheus in Minikube using Helm

To install Prometheus in Minikube using Helm, follow these step-by-step instructions. This process assumes that you already have Minikube and Helm installed.

Prerequisites:

  1. Minikube installed on your machine. Minikube Installation Guide
  2. kubectl installed and configured. kubectl Installation Guide
  3. Helm installed on your machine. Helm Installation Guide

Step-by-Step Installation

Step 1: Start Minikube

Start your Minikube cluster:

 minikube start


Wait for Minikube to start, and check the status:

 minikube status

Step 2: Add Helm Repository for Prometheus

Helm provides a stable repository that contains Prometheus charts. First, add the prometheus-community repository:

 helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

Update your Helm repository to make sure everything is up-to-date:

 helm repo update

Step 3: Create a Namespace for Monitoring

Create a dedicated namespace for Prometheus (e.g., monitoring):

 kubectl create namespace monitoring

Step 4: Install Prometheus Using Helm

Now, use Helm to install Prometheus. You will use the Prometheus chart from the prometheus-community repository.

helm install prometheus prometheus-community/prometheus --namespace monitoring
 

This command will:

  • Install the Prometheus chart from the prometheus-community Helm repo.
  • Use the namespace monitoring for the Prometheus components.

Step 5: Verify the Installation

Check the resources created in the monitoring namespace:

kubectl get all -n monitoring
 

You should see several resources such as pods, services, deployments, statefulsets, etc.

Step 6: Access the Prometheus UI

To access the Prometheus UI, we will use Minikube’s service tunneling feature. Run the following command to get the service URL:

minikube service prometheus-server -n monitoring
 

This will launch a browser window to access Prometheus.

If you want to expose the Prometheus UI via port forwarding instead, you can run:

kubectl port-forward -n monitoring svc/prometheus-server 9090:80
 

Then open http://localhost:9090 in your browser.

Step 7: Clean Up (Optional)

To uninstall Prometheus when you no longer need it:

helm uninstall prometheus --namespace monitoring
 

You can also delete the monitoring namespace if you no longer need it:

kubectl delete namespace monitoring
kubectl delete namespace monitoring