Wednesday, January 1, 2025

Cloud Computing Vocabulary

 

Virtualization

Definition: Virtualization is the process of creating a virtual version of physical resources such as servers, storage, or networks.
Example: Instead of using three physical servers, a company can use one powerful server with virtualization software like VMware or Hyper-V to create three virtual machines, each acting as an independent server.


Virtual Machine (VM)

Definition: A Virtual Machine is a software-based emulation of a physical computer that runs its own operating system and applications independently.
Example: A developer can run Windows and Linux simultaneously on a Mac using VMs created with VirtualBox or VMware.


API (Application Programming Interface)

Definition: An API is a set of rules and protocols that enables different software applications to communicate and interact with each other.
Example: A weather website uses the OpenWeatherMap API to fetch live weather data and display it to users.
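
In practice, an API call is often just an HTTP request. A minimal sketch using curl against OpenWeatherMap's current-weather endpoint (the API key and city here are placeholders, not working credentials):

```shell
# Build a request to OpenWeatherMap's current-weather endpoint.
# API_KEY is a placeholder; a real key from openweathermap.org is required.
API_KEY="your_api_key_here"
CITY="London"
URL="https://api.openweathermap.org/data/2.5/weather?q=${CITY}&appid=${API_KEY}"

# Uncomment to perform the real request:
# curl -s "$URL"
echo "$URL"
```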


Regions

Definition: Regions are physical locations around the world where cloud providers like AWS, Azure, or GCP have data centers.
Example: Microsoft Azure has a "UK South" region in London and a "West Europe" region in the Netherlands.


Availability Zones

Definition: Availability Zones are isolated data centers within a region, each with separate power, cooling, and networking, to ensure high availability.
Example: AWS’s "us-east-1" region (Virginia) has multiple availability zones like us-east-1a, us-east-1b, etc., to spread resources and reduce the risk of downtime.


Scalability

Definition: Scalability is the ability of a system to grow and handle increased demand by adding resources.
Example: An e-commerce site can scale its infrastructure during Black Friday by adding more servers to handle the spike in traffic.


Elasticity

Definition: Elasticity refers to the automatic scaling of resources up or down based on real-time demand.
Example: A cloud-based video streaming service like Netflix adds more servers during peak hours and reduces them when traffic decreases.


Agility

Definition: Agility is the ability to quickly adapt and respond to changes or deploy new features rapidly.
Example: A startup can use Azure DevOps to deploy new versions of its app weekly instead of waiting months, thanks to the agility provided by cloud tools.


High Availability (HA)

Definition: High Availability ensures that systems remain operational with minimal downtime, typically 99.9% uptime or higher.
Example: Hosting a web application in multiple availability zones helps ensure it remains online even if one zone fails.


Fault Tolerance

Definition: Fault Tolerance is the ability of a system to continue functioning even when some components fail.
Example: A banking system using a redundant database cluster can still serve customers even if one database server crashes.


Disaster Recovery (DR)

Definition: Disaster Recovery includes processes and technologies used to restore systems and data after catastrophic events.
Example: A company backs up its data to Azure Backup, so it can restore files and virtual machines if a ransomware attack corrupts the primary data.


Load Balancing

Definition: Load Balancing distributes incoming network traffic across multiple servers to ensure no single server is overwhelmed.
Example: A web application uses an AWS Elastic Load Balancer to route requests to multiple EC2 instances, keeping the application fast and responsive.

Tuesday, December 31, 2024

Basics of Cloud Computing

 

What is the Cloud?

Think of the cloud like a powerful computer you can use over the internet.
It’s a place where you can store files, run apps, and use services—without needing to own or manage the actual hardware.

What is Cloud Computing?

Cloud computing means using the internet to get access to computing services like storage, software, and servers.
Instead of buying and managing your own computers, you use resources provided by others (like Google, Amazon, or your own company).

These services are run from data centers all over the world, and you can access them from anywhere with an internet connection.


Types of Cloud

Public Cloud

  • Who Uses It: Anyone – individuals, companies, or organizations.

  • What It's Like: A shared online space that anyone can use.

  • Example: Google Drive, Microsoft Azure, Amazon Web Services (AWS).

Private Cloud

  • Who Uses It: Only one organization or company.

  • What It's Like: A private digital space only you and your team can access.

  • Example: A company using its own servers in a secure network.

Hybrid Cloud

  • Who Uses It: Businesses that need both private and public options.

  • What It's Like: A mix – you use your private space for sensitive stuff, and the public cloud when you need more space or power.

  • Example: Storing important data in a private cloud but using public cloud services for emails or backups.


In Short:

  • Public Cloud: Shared space for everyone.

  • Private Cloud: Private space just for you.

  • Hybrid Cloud: A mix of both, depending on what you need.

Thursday, October 31, 2024

Ubuntu Server as a VPN Gateway

 To connect multiple Ubuntu devices (clients) to one central Ubuntu server and share the connection securely over a VPN, here’s a detailed, step-by-step guide.


Step 1: Set Up the Ubuntu Server as a VPN Gateway

This server will act as the central point, allowing other devices to connect to it.

1.1 Install OpenVPN on the Server

  1. Log into your central Ubuntu server.
  2. Update package lists:

    sudo apt update
  3. Install OpenVPN:

    sudo apt install openvpn -y

1.2 Set Up Easy-RSA for Key and Certificate Management

OpenVPN requires certificates and keys for secure connections.

  1. Install easy-rsa to help with certificate creation:

    sudo apt install easy-rsa -y
  2. Create a new directory for the PKI (Public Key Infrastructure):

    make-cadir ~/openvpn-ca
    cd ~/openvpn-ca
  3. Initialize the PKI:

    ./easyrsa init-pki
  4. Build the CA (Certificate Authority) and follow the prompts:

    ./easyrsa build-ca
  5. Generate the server certificate and key:

    ./easyrsa gen-req server nopass
  6. Sign the server certificate:

    ./easyrsa sign-req server server
  7. Generate Diffie-Hellman parameters:

    ./easyrsa gen-dh
  8. Copy the keys and certificates to OpenVPN’s directory:

    sudo cp pki/ca.crt pki/private/server.key pki/issued/server.crt /etc/openvpn/
    sudo cp pki/dh.pem /etc/openvpn/dh2048.pem

1.3 Configure the OpenVPN Server

  1. Create a configuration file for the server:

    sudo nano /etc/openvpn/server.conf
  2. Paste the following configuration into server.conf:

    port 1194
    proto udp
    dev tun
    ca ca.crt
    cert server.crt
    key server.key
    dh dh2048.pem
    server 10.8.0.0 255.255.255.0
    ifconfig-pool-persist ipp.txt
    push "redirect-gateway def1 bypass-dhcp"
    push "dhcp-option DNS 8.8.8.8"
    keepalive 10 120
    cipher AES-256-CBC
    user nobody
    group nogroup
    persist-key
    persist-tun
    status openvpn-status.log
    verb 3

1.4 Enable IP Forwarding for Internet Sharing

  1. Open /etc/sysctl.conf:

    sudo nano /etc/sysctl.conf
  2. Find or add the line below to enable IP forwarding:

    net.ipv4.ip_forward = 1
  3. Apply the change immediately:

    sudo sysctl -p

1.5 Set Up Firewall Rules for OpenVPN

  1. Allow OpenVPN traffic through the firewall:

    sudo ufw allow 1194/udp
  2. Enable NAT (Network Address Translation) to allow VPN clients to reach the internet through the server:

    sudo iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
    Replace eth0 with your server’s network interface if it differs.

1.6 Start and Enable the OpenVPN Service

  1. Start the OpenVPN service:

    sudo systemctl start openvpn@server
  2. Enable it to start at boot:

    sudo systemctl enable openvpn@server

Step 2: Set Up VPN Clients (Each of the 10 Ubuntu Devices)

Each client needs its own certificate and configuration to connect securely to the VPN server.

2.1 Create a Certificate for Each Client

On the server:

  1. Go back to the ~/openvpn-ca directory:

    cd ~/openvpn-ca
  2. Generate a certificate and key for each client (e.g., client1, client2, etc.):

    ./easyrsa gen-req client1 nopass
    ./easyrsa sign-req client client1
  3. Copy the client’s certificates and keys to a separate directory to transfer them:

    cp pki/ca.crt pki/issued/client1.crt pki/private/client1.key ~/client1
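
Since the guide targets ten clients, the per-client steps above can be scripted in a loop. A sketch (the easy-rsa and copy commands are commented out so the loop only reports what it would do; run it from ~/openvpn-ca on the real server with the comments removed):

```shell
# Generate and collect certificates for client1..client10.
CLIENTS=""
for i in $(seq 1 10); do
  client="client$i"
  CLIENTS="$CLIENTS $client"
  # On the real server, uncomment:
  # ./easyrsa gen-req "$client" nopass
  # ./easyrsa sign-req client "$client"
  # mkdir -p ~/"$client"
  # cp pki/ca.crt "pki/issued/$client.crt" "pki/private/$client.key" ~/"$client"
  echo "prepared: $client"
done
```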

2.2 Create Client Configuration File

  1. On the server, create a client configuration file for each client (e.g., client1.ovpn):

    nano ~/client1/client1.ovpn
  2. Add this configuration, replacing your_server_ip with the server's public IP address:

    client
    dev tun
    proto udp
    remote your_server_ip 1194
    resolv-retry infinite
    nobind
    persist-key
    persist-tun
    remote-cert-tls server
    cipher AES-256-CBC
    verb 3
    <ca>
    # Paste contents of ca.crt here
    </ca>
    <cert>
    # Paste contents of client1.crt here
    </cert>
    <key>
    # Paste contents of client1.key here
    </key>

2.3 Install OpenVPN on Each Client Device

On each Ubuntu client:

  1. Install OpenVPN:

    sudo apt update
    sudo apt install openvpn -y
  2. Copy the client1.ovpn configuration file from the server to each client.

2.4 Connect Each Client to the VPN

On each client device, use the configuration file to connect:


sudo openvpn --config /path/to/client1.ovpn

To run this automatically on boot, copy the configuration to /etc/openvpn/client/ as client.conf and enable the OpenVPN service:


sudo cp /path/to/client1.ovpn /etc/openvpn/client/client.conf
sudo systemctl enable openvpn-client@client

Step 3: Testing and Sharing Data Across Clients

  1. Verify VPN Connectivity: From each client, ping the VPN server to verify the connection is working.

    ping 10.8.0.1
  2. Enable File Sharing (Optional): Use SSH/SCP or set up an NFS shared folder on the VPN server to allow clients to access shared data.
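
The checks above can be wrapped in a small script run on each client. A sketch (the interface name tun0 and gateway address 10.8.0.1 come from the server configuration in step 1.3):

```shell
# Report VPN status: is tun0 up, and does the gateway answer pings?
check_vpn() {
  if ip addr show tun0 >/dev/null 2>&1; then
    echo "tun0 is up"
    ping -c 3 -W 2 10.8.0.1 >/dev/null 2>&1 && echo "gateway 10.8.0.1 reachable"
  else
    echo "tun0 missing: VPN is not connected"
  fi
}
RESULT=$(check_vpn)
echo "$RESULT"
```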

By following these steps, you will connect 10 Ubuntu devices through a VPN to a central Ubuntu server, securely sharing resources and internet access across the network.

Monday, October 14, 2024

Git Commands

Initialize a repository:

# initialize an existing directory as a Git repository
$ git init

# retrieve an entire repository from a hosted location via URL
$ git clone [url]

 

Stage your files:

# Show modified files in working directory, staged for your next commit
$ git status


# Add a file as it looks now to your next commit (stage)
$ git add [file path]


# If you need to add ALL the modified files at once
$ git add .


# Unstage a file while retaining the changes in working directory
$ git reset [file]


# Difference of what is changed but not staged
$ git diff


# Difference of what is staged but not yet committed
$ git diff --staged


# Commit your staged content as a new commit snapshot
$ git commit -m "descriptive message"


# Stage all modified tracked files and commit them in one step
$ git commit -am "descriptive message"

 

Manage branch & merge:

# list your branches. a * will appear next to the currently active branch
$ git branch


# create a new branch at the current commit
$ git branch [branch-name]


# switch to another branch and check it out into your working directory
$ git checkout [branch-name]


# create a new branch and switch to it in one command
$ git checkout -b [branch-name]


# merge the specified branch’s history into the current one
$ git merge [branch]


# show all commits in the current branch’s history
$ git log


# rename the current branch
$ git branch -m [new-branch-name]


# delete the specified branch
$ git branch -d [branch-name]

 

Inspect branch & compare

# Show the commit history for the currently active branch
$ git log


# Show the commits on branchA that are not on branchB
$ git log branchB..branchA


# Show the commits that changed file, even across renames
$ git log --follow [file]


# Show the diff of what is in branchA that is not in branchB
$ git diff branchB...branchA


# Show any object in Git in human-readable format
$ git show [SHA]
$ git show [commit]

# create a tag pointing at the specified commit
$ git tag [tag-name] [commitID]

 

Share & Update:

# add a git URL as an alias
$ git remote add [alias] [url]


# fetch down all the branches from that Git remote
$ git fetch [alias]


# merge a remote branch into your current branch to bring it up to date
$ git merge [alias]/[branch]


# Transmit local branch commits to the remote repository branch
$ git push [alias] [branch]


# Push commits to all branches on the remote
$ git push --all [alias]


# fetch and merge any commits from the tracking remote branch
$ git pull

 

Tracking path changes

# delete the file from project and stage the removal for commit
$ git rm [file]


# change an existing file path and stage the move
$ git mv [existing-path] [new-path]


# show all commit logs with indication of any paths that moved
$ git log --stat -M

 

Rewrite history

# apply any commits of current branch ahead of specified one
$ git rebase [branch]


# clear staging area, rewrite working tree from specified commit
$ git reset --hard [commit]

 

Temporary Commits

# Save modified and staged changes
$ git stash


# list stack-order of stashed file changes
$ git stash list


# apply the changes from the top of the stash stack and remove them from the stash
$ git stash pop


# discard the changes from top of stash stack
$ git stash drop

 

 Ignoring patterns

# system-wide ignore pattern for all local repositories
$ git config --global core.excludesfile [file]
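
A concrete example: create a global ignore file and register it with Git (the file name and patterns are just illustrative choices):

```shell
# Create a global ignore file with a few common patterns.
cat > ~/.gitignore_global <<'EOF'
*.log
.DS_Store
node_modules/
EOF

# Tell Git to use it for every local repository.
git config --global core.excludesfile ~/.gitignore_global
```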

 

Tuesday, October 8, 2024

About Azure Boards

What is Azure Boards?

Azure Boards is a service within Azure DevOps that helps teams plan, track, and manage software development projects. Key features include: 

  • Work Item Tracking: Manage user stories, tasks, and bugs. 

  • Agile Tools: Supports Scrum and Kanban methodologies. 

  • Boards and Backlogs: Visualize and manage tasks using Kanban boards. 

  • Queries and Reporting: Create custom queries and track project progress. 

  • CI/CD Integration: Links with Azure Repos and Pipelines for seamless workflows. 

  • Customization: Tailor fields, workflows, and processes to fit team needs. 

  • Collaboration: Enhance team communication with comments and notifications. 

Overall, Azure Boards improves project management and collaboration in software development. 

Azure Boards hubs:  

Azure Boards features several hubs that provide specific functionalities to help teams manage their projects effectively. Here’s a brief overview of each hub: 

  • Work Items: Central hub for creating, viewing, and managing work items like user stories, tasks, bugs, and features. It allows users to track the status and details of each item. 

  • Boards: Visual hub that displays work items in a Kanban board format. Teams can move items across columns to reflect their current status and progress. 

  • Backlogs: A prioritized list of work items organized by iteration or area. It helps teams manage their product backlog and plan sprints effectively. 

  • Sprints: Focused on managing and tracking work during specific time frames. Teams can view sprint progress, burndown charts, and allocate tasks for upcoming sprints. 

  • Queries: A hub for creating and managing custom queries to filter and view work items based on specific criteria. It helps teams track work and generate reports. 

  • Dashboards: Provides customizable dashboards that display key metrics and project insights through various widgets, helping teams monitor progress and performance at a glance. 

  • Delivery Plans: Visualize and manage work items across teams and iterations, providing a timeline view of project delivery. 

These hubs collectively enhance project visibility, collaboration, and management, allowing teams to streamline their software development processes. 

Wednesday, June 5, 2024

Security in DevOps

Security in DevOps, often referred to as DevSecOps, integrates security practices into the DevOps process, ensuring that security is built into every phase of the software development lifecycle (SDLC). Here’s a breakdown of key security practices in DevOps:

1. Shift-Left Security

  • What it is: Security is integrated early in the development process (in the design and coding phases).
  • Practices:
    • Perform threat modeling and risk assessments at the start.
    • Implement secure coding standards.
    • Use static application security testing (SAST) to scan code for vulnerabilities.

2. Continuous Security Testing

  • What it is: Automated security tests run continuously throughout the CI/CD pipeline.
  • Practices:
    • Integrate tools for dynamic application security testing (DAST) and interactive application security testing (IAST) to catch vulnerabilities during and after code deployment.
    • Run security checks for every pull request and automated builds.

3. Automation and Infrastructure as Code (IaC) Security

  • What it is: Security configurations are enforced through automated scripts and templates.
  • Practices:
    • Use tools like Terraform, CloudFormation, or Ansible to define secure configurations for infrastructure.
    • Use security validation tools (e.g., TFLint, Checkov) to verify security compliance in infrastructure code.
    • Automate patch management for servers and containers.
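
To illustrate the idea behind scanners like Checkov, the following toy check flags a world-open security group rule in a Terraform file (the file and rule are fabricated for the example; real scanners parse the configuration properly rather than grepping):

```shell
# Write a deliberately insecure example Terraform snippet.
cat > /tmp/example.tf <<'EOF'
resource "aws_security_group_rule" "ssh" {
  type        = "ingress"
  from_port   = 22
  to_port     = 22
  cidr_blocks = ["0.0.0.0/0"]
}
EOF

# Toy policy check: fail if any rule is open to the whole internet.
if grep -q '0.0.0.0/0' /tmp/example.tf; then
  RESULT="FAIL: world-open ingress rule found"
else
  RESULT="PASS"
fi
echo "$RESULT"
```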

4. Container and Kubernetes Security

  • What it is: Secure the containerized applications and Kubernetes environments.
  • Practices:
    • Use vulnerability scanning tools (e.g., Aqua, Clair) for Docker images.
    • Ensure that containers run with the least privilege principle.
    • Secure Kubernetes clusters by applying role-based access control (RBAC), network policies, and secret management.

5. Security Monitoring and Logging

  • What it is: Continuous monitoring and analysis of system logs to detect security anomalies.
  • Practices:
    • Implement log monitoring tools (e.g., Splunk, ELK Stack, Datadog) for real-time security alerts.
    • Set up centralized logging for all services, containers, and cloud infrastructure.
    • Use security information and event management (SIEM) tools for threat detection and response.

6. Secrets Management

  • What it is: Securely manage sensitive data such as API keys, passwords, and encryption keys.
  • Practices:
    • Use secret management tools (e.g., HashiCorp Vault, AWS Secrets Manager, Azure Key Vault) to securely store and retrieve secrets.
    • Avoid hardcoding secrets in code or configuration files.
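
The "no hardcoded secrets" rule usually reduces to reading credentials from the environment at startup, with a secret manager injecting them. A sketch (DB_PASSWORD is a hypothetical variable name):

```shell
# Read the database password from the environment; fail fast if absent.
# In production, a tool such as Vault Agent or a Key Vault reference sets this.
if [ -n "${DB_PASSWORD:-}" ]; then
  STATUS="ok"
else
  STATUS="missing"
  echo "DB_PASSWORD not set; refusing to start" >&2
fi
echo "secret check: $STATUS"
```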

7. Secure Software Dependencies

  • What it is: Ensure that third-party libraries and dependencies used in the application are secure.
  • Practices:
    • Use tools like OWASP Dependency-Check or Snyk to scan and update vulnerable dependencies.
    • Regularly update libraries to the latest versions with known security patches.

8. Network Security

  • What it is: Secure network traffic and access control for DevOps environments.
  • Practices:
    • Implement firewalls, virtual private networks (VPNs), and private subnets in cloud environments.
    • Use zero-trust network architecture (ZTNA) principles to restrict access to resources based on identity.

9. Access Control and Identity Management

  • What it is: Manage access to systems and environments securely.
  • Practices:
    • Enforce multi-factor authentication (MFA) for all privileged users.
    • Implement role-based access control (RBAC) to limit user permissions.
    • Use identity management solutions (e.g., AWS IAM, Azure Active Directory, Okta) to manage user identities and permissions.

10. Compliance and Auditing

  • What it is: Ensure that the DevOps pipeline adheres to industry standards and regulations.
  • Practices:
    • Automate compliance checks (e.g., CIS Benchmark assessments) in the CI/CD pipeline.
    • Conduct regular audits and logging to ensure all actions and configurations are compliant with standards (e.g., GDPR, HIPAA, PCI-DSS).

Integrating these practices makes security an integral part of DevOps without hindering agility and speed. Adopting DevSecOps streamlines the security process and strengthens an organization's overall security posture across cloud and infrastructure operations.