Overview: Extending my home network to the cloud

As a frequent traveller, I found it impractical to maintain a physical system infrastructure, so I relocated my home infrastructure to the cloud.

Establishing a VPN Connection

To begin, I set up a VPN connection from my OpenWRT router to the cloud provider using WireGuard. I created two VPCs in the cloud provider – one public and one private – to mimic the “WAN-LAN” scenario of at-home routers.
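
For illustration, here is a minimal sketch of what the home-router side of that tunnel might look like in OpenWRT's UCI configuration (/etc/config/network); the interface name, keys, addresses and endpoint are placeholders, not my actual values:

# WireGuard interface towards the cloud router
config interface 'wg_cloud'
        option proto 'wireguard'
        option private_key '<router-private-key>'
        list addresses '10.100.0.2/24'

# Peer definition for the cloud-side endpoint
config wireguard_wg_cloud
        option description 'cloud-router'
        option public_key '<cloud-router-public-key>'
        option endpoint_host '<cloud-router-public-ip>'
        option endpoint_port '51820'
        option persistent_keepalive '25'
        option route_allowed_ips '1'
        list allowed_ips '10.100.0.0/24'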

This setup provides isolation similar to a home network: resources on the private network can only be accessed by other resources on the same network, while still being able to communicate with the outside world.

The intention is for the private network to act as an extension of my “home”, wherever that happens to be at any given time.

Deploying a Cloud Router

I deployed a virtual machine that will act as a router spanning both networks. This needs to be across both networks as I need an endpoint to connect to (which requires an internet-exposed network) while still being able to access private resources.

I chose VyOS as the cloud router’s operating system because it is configuration-driven, allowing for an Infrastructure-as-Code (IaC) approach for easy re-deployment on any cloud provider.

Utilizing Object Storage for Plex Media Server

I adopted object storage to take advantage of the “unlimited” data offered by the cloud provider, and configured s3fs to mount the object storage bucket on a specific node. With this, Plex can read media directly from the bucket without significant configuration changes or additional plugins.
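
As a rough sketch (the bucket name, mount point and endpoint URL below are placeholders, and the exact options depend on the provider), mounting the bucket with s3fs looks something like this:

# Credentials file in the ACCESS_KEY:SECRET_KEY format expected by s3fs
echo "ACCESS_KEY:SECRET_KEY" > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs

# Mount the bucket where Plex expects to find its media library
s3fs media-bucket /mnt/media \
  -o passwd_file=/etc/passwd-s3fs \
  -o url=https://<object-storage-endpoint> \
  -o use_path_request_style \
  -o allow_other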

The VPN connection allows me to access the Plex server securely, as if it were local, on both my PS5 and laptop. This setup keeps the Plex interface inaccessible to the public and bypasses the bandwidth limit imposed when proxying via the official Plex servers.

Securely Pushing Metrics from In-House Devices

By using the VPN connection, I can push metrics directly from my in-house devices, such as weather sensors, without exposing my Prometheus instance to the public internet.

The VPN’s security layer wraps around all traffic, eliminating the need to implement a CA chain for Prometheus, as would be required on platforms such as AWS IoT or Grafana Cloud (where devices are expected to communicate with a public HTTPS endpoint).
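
I won't go into the exact push mechanism here, but one common pattern over a VPN like this is the Prometheus Pushgateway; a hypothetical sketch from a sensor-side script (the Pushgateway address, job and metric names are made up):

# Push a single gauge to a Pushgateway reachable over the VPN
echo "weather_temperature_celsius 21.4" | \
  curl --data-binary @- http://10.100.0.10:9091/metrics/job/weather_sensor/instance/garden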

Automating At-Home Devices with HomeAssistant

I use HomeAssistant within the cloud provider to automate my at-home devices without worrying about downtime or maintaining a device inside my home. HomeAssistant is scriptable, easily re-deployable, and can bridge a wide range of IoT devices under a single platform, such as HomeKit and Hue.

I can now utilize my old infrastructure without worrying about maintaining hardware, and plan to deploy many more services to the private cloud. Keep an eye out for a deeper breakdown of how I deployed and configured each element of my private cloud.

Building a PKI using Terraform

As part of building a hybrid infrastructure, I explored different technologies for achieving a stable VPN connection from on-premises to the AWS infrastructure, and found AWS Client VPN, the client-to-site feature nested within AWS VPC. I explored this prior to AWS Site-to-Site VPN as I didn’t have the right setup for handling IPSec/L2TP tunnels at the time, and already had OpenVPN handy on my MacBook.

Since I would be using OpenVPN (as that’s what AWS Client VPN uses), I require TLS certificates for authentication and encryption. While AWS provides certificate management features, they come at a cost, making them less suitable for my testing requirements.

I’ve opted to use Terraform to create a custom PKI solution locally, and to prepare it for re-use in larger infrastructure projects.

Working Environment

  • Machine
    • MacBook Pro M2
  • Technologies
    • Terraform

Terraform Module Breakdown

terraform {
  required_providers {
    tls = {
      source = "hashicorp/tls"
    }
    pkcs12 = {
      source = "chilicat/pkcs12"
    }
  }
}

I’m making use of the following providers within my Terraform project:

  • hashicorp/tls
    • For generating the private keys, certificate requests and certificates themselves
  • chilicat/pkcs12
    • For combining the private key and certificate, a requirement for using the OpenVPN client without embedding the data inside the *.ovpn configuration file (which didn’t come out-of-the-box from AWS)
/**
 * Private key for use by the self-signed certificate, used for
 * future generation of child certificates. As long as the state
 * remains unchanged, the private key and certificate should not
 * re-update at every re-run unless any variable is changed.
 */
resource "tls_private_key" "pem_ca" {
  algorithm = var.algorithm
}

I’ve made the key algorithm controllable from a global variable, since customer requirements may call for a different algorithm or key strength. This resource returns a PEM-formatted key.
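
The variable itself isn't shown above; a minimal sketch of what it could look like (the default value here is just an example):

variable "algorithm" {
  type        = string
  description = "key algorithm used for the CA and all child keys (e.g. RSA, ECDSA or ED25519)"
  default     = "RSA"
}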

/**
 * Generation of the CA Certificate, which is in turn used by
 * the client.tf and server.tf submodules to generate child
 * certificates
 */
resource "tls_self_signed_cert" "ca" {
  private_key_pem = tls_private_key.pem_ca.private_key_pem
  is_ca_certificate = true

  subject {
    country             = var.ca_country
    province            = var.ca_province
    locality            = var.ca_locality
    common_name         = var.ca_cn
    organization        = var.ca_org
    organizational_unit = var.ca_org_name
  }

  validity_period_hours = var.ca_validity

  allowed_uses = [
    "digital_signature",
    "cert_signing",
    "crl_signing",
  ]
}

I then used the tls_self_signed_cert resource to generate the CA certificate itself, providing the previously generated private key via the private_key_pem attribute. Again, by exposing global variables for the CA subject and validity, I’m able to re-run the same Terraform module for multiple clients under different workspaces (or by referencing this from larger modules).

The subject fields I chose to expose describe exactly what the TLS certificate belongs to, and where, without needing to dive back into the module.

Adding cert_signing and crl_signing to the allowed_uses list permits the certificate to sign child certificates (and revocation lists). This is essential, as I still need to generate the certificates for the OpenVPN server and the client.

This resource returns a PEM-formatted certificate.

/**
 * Return the certificate itself. It's the responsibility of
 * the user of this module to determine whether the certificate should
 * be stored locally, transferred or submitted directly to a cloud
 * service
 */
output "ca_certificate" {
  value = tls_self_signed_cert.ca.cert_pem
  sensitive = true
  description = "generated ca certificate"
}

Finally, I return the CA certificate and its key from the module, for the user to place wherever it needs to be. For example:

To a local file

resource "local_file" "ca_key" {
  content_base64 = module.pki.ca_private_key
  filename = "${path.module}/certs/ca.key"
}
resource "local_file" "ca" {
  content_base64 = module.pki.ca_certificate
  filename = "${path.module}/certs/ca.crt"
}

To the AWS Certificate Manager

resource "aws_acm_certificate" "ca" {
  private_key = module.pki.ca_private_key
  certificate_body = module.pki.ca_certificate
}

Server & Client Certificates

resource "tls_cert_request" "csr" {
  for_each = var.clients # or var.servers
  private_key_pem = tls_private_key.pem_clients[each.key].private_key_pem
    # or pem_servers[each.key]
  dns_names = [each.key]

  subject {
    country = try(each.value.country, try(var.default_client_subject.country, var.default_subject.country))
    province = try(each.value.province, try(var.default_client_subject.province, var.default_subject.province))
    locality = try(each.value.locality, try(var.default_client_subject.locality, var.default_subject.locality))
    common_name = try(each.value.cn, try(var.default_client_subject.cn, var.default_subject.cn))
    organization = try(each.value.org, try(var.default_client_subject.org, var.default_subject.org))
    organizational_unit = try(each.value.ou, try(var.default_client_subject.ou, var.default_subject.ou))
  }
}

Regardless of whether I’m generating a server or a client TLS certificate, both need to go through the ‘certificate request’ process (a sketch of the per-client key resource follows the list below):

  1. Generate a private key for the server or client
  2. Generate a certificate signing request based on the private key
  3. Use the CSR to obtain a CA-signed certificate
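
The per-client private keys from step 1 (referenced above as tls_private_key.pem_clients) aren't shown in full; a minimal sketch of how they could be generated:

/**
 * One private key per client, keyed by machine name so that the CSR
 * and signing resources can look them up via each.key
 */
resource "tls_private_key" "pem_clients" {
  for_each  = var.clients
  algorithm = var.algorithm
}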

In this example, I made use of the try() function to achieve a value priority in the following order:

  1. Resource-level
    • Do I have a value specific to the server or client?
  2. Class-level
    • Do I have a value specific to the target type?
  3. Module-level
    • Do I have a global default?

Each entry is a key/value pair whose shape is identical for clients and servers: the key is the machine name and the value is the subject data. Here is a sample of the *.tfvars.json file that drives this behaviour.

{
  "clients": {
    "mbp": {
      "country": "GB",
      "locality": "GB",
      "org": "ZAI",
      "org_name": "ZAI",
      "province": "GB"
    }
  }
}

In an ideal (and secure) scenario, the private keys should never be transmitted over the wire; instead, you generate a CSR on the target machine and transmit that. Since this is aimed at test environments, security is not a major concern for me. Should I want to do the generation securely, I’ve exposed the following variable as a way to override the CSR generation (a sketch of the wiring follows the variable block).

variable "client_csrs" {
  type = map
  description = "csrs to use instead of generating them within this module"
  default = {}
}
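
A possible way to wire that override in (this is an illustrative sketch, not necessarily how my module does it) is to prefer an externally supplied CSR and fall back to the generated one:

locals {
  # Prefer an externally supplied CSR, otherwise use the one generated above
  client_csr_pems = {
    for name, subject in var.clients :
    name => lookup(var.client_csrs, name, tls_cert_request.csr[name].cert_request_pem)
  }
}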

Getting the signed certificate

resource "tls_locally_signed_cert" "client" {
  for_each = var.clients
  cert_request_pem = tls_cert_request.csr_client[each.key].cert_request_pem
  ca_private_key_pem = tls_private_key.pem_ca.private_key_pem
  ca_cert_pem = tls_self_signed_cert.ca.cert_pem

  validity_period_hours = var.client_certificate_validity

  allowed_uses = [
    "digital_signature",
    "key_encipherment",
    "server_auth", # for server-side
    "client_auth", # for client-side
  ]
}

Once the *.csr is generated (or provided), I’m able to use the tls_locally_signed_cert resource type to sign the request with the CA certificate and its private key. The cert_request_pem, ca_private_key_pem and ca_cert_pem inputs allow me to do so using the raw PEM data, without needing to save anything to disk before passing it in.

Relying on the data within the terraform state file allows me to also rule out any “external influence” when troubleshooting, as there will be only a single source of truth.

Adding either server_auth or client_auth (depending on use-case) to allowed_uses permits the use of the signed certificate for authentication, as required by OpenVPN.

Converting from *.pem to PKCS12

resource "pkcs12_from_pem" "client" {
  for_each = var.clients
  ca_pem          = tls_self_signed_cert.ca.cert_pem
  cert_pem        = tls_locally_signed_cert.client[each.key].cert_pem
  private_key_pem = tls_private_key.pem_client[each.key].private_key_pem
  password = "123" # Testing purposes
  encoding = "legacyRC2"
}

Using the pkcs12_from_pem resource type from chilicat makes this process simple, as long as I have access to the private key in addition to the certificate and CA.

For compatibility with the OpenVPN Connect application, I needed to enforce the encoding of legacyRC2, rather than the modern encryption that’s offered by easy-rsa.
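
If I recall the chilicat provider correctly, the resulting bundle is exposed as base64 through the result attribute, so writing it to disk could look something like this (the paths are illustrative):

resource "local_file" "client_p12" {
  for_each       = var.clients
  content_base64 = pkcs12_from_pem.client[each.key].result
  filename       = "${path.module}/certs/${each.key}.p12"
}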

Returning the certificates

output "client_certificates" {
  value = [ for cert in tls_locally_signed_cert.client : cert.cert_pem ]
  description = "generated client certificates in ordered list form"
  sensitive = true
}

Finally, I return the generated certificates and their *.p12 equivalent from the module. I mark this data as sensitive due to the inclusion of private keys.

For the value, I needed to iterate over the map of resources (as I had used for_each earlier with a key/value map) and rebuild a single list from the results.

As mentioned above, it is then the responsibility of the user to determine what to do with the generated certificates, be it storing them locally or pushing them to AWS.

Authenticating DigitalOcean for Terraform OSS

Scenario

Why?

I’m diving into Terraform as part of my adventure into the DevOps world, an area I’ve taken an interest in over the past few months.

  • I use 2 workstations with DigitalOcean
    • MacBook; for when I’m out and about
    • ArchLinux; for when I’m at home

Generating the API Tokens

Under API, located within the dashboard’s menu (on the left-hand side), I’m presented with the option to Generate New Token.

This is followed by an interface to define:

  • Name
    • I typically name this token zai.dev or personal, as it will be shared across my devices. While this approach isn’t the most secure (ideally, I should have one token per machine), I’m opting for the convenience of a single token for my user profile.
  • Expiry date
    • Since I’m sharing the token across workstations (including my laptop, which may be prone to theft), I set the expiration to the lowest possible value of 30 days.
  • Write permissions
    • Since I’ll be using Terraform, and its main purpose is to ‘sculpt’ infrastructure, the token it uses to connect to DigitalOcean needs write permissions.

Authenticating DigitalOcean Spaces

As the Terraform provider allows the creation of Spaces, DigitalOcean’s equivalent of AWS S3 buckets, I should also create keys for it. By navigating to the “Spaces Keys” tab under the API option, I can repeat the same steps as above.

Installing the Tokens

Continuing from the setup of environment variables in my Synchronizing environment variables across Workstations post, I need to add 3 environment variables for connecting to DigitalOcean.

  • DIGITALOCEAN_TOKEN
    • This is the value that is given to you after hitting “Generate Token” on the Tokens tab
  • SPACES_ACCESS_KEY_ID
    • This is the value that is given to you after hitting “Generate Token” on the Spaces Tokens tab
  • SPACES_SECRET_ACCESS_KEY
    • This is the one-time value that is given to you alongside the SPACES_ACCESS_KEY_ID value
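
With these set, the provider block itself needs no credentials hard-coded; a minimal sketch (assuming the standard digitalocean/digitalocean provider):

terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

provider "digitalocean" {
  # No arguments needed here: the provider picks up DIGITALOCEAN_TOKEN,
  # SPACES_ACCESS_KEY_ID and SPACES_SECRET_ACCESS_KEY from the environment
}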

Whilst I’m at it, I’m going to add the following environment variables so that I can use any S3-compatible tooling to communicate with my object storage, such as aws s3 cp to push build artifacts (see the example after the list).

  • AWS_ACCESS_KEY_ID=${SPACES_ACCESS_KEY_ID}
  • AWS_SECRET_ACCESS_KEY=${SPACES_SECRET_ACCESS_KEY}
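
For example (the bucket name and region are placeholders), pushing an artifact with the AWS CLI pointed at the Spaces endpoint:

# Spaces endpoints follow the https://<region>.digitaloceanspaces.com pattern
aws s3 cp ./dist/app.tar.gz s3://my-space/artifacts/app.tar.gz \
  --endpoint-url https://ams3.digitaloceanspaces.com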

To keep things tidy, I created a separate environment file for DigitalOcean, under ~/.config/zai/env/digitalocean.sh:

export DIGITALOCEAN_TOKEN="<DO_TOKEN>"
export SPACES_ACCESS_KEY_ID="<SPACES_KEY>"
export SPACES_SECRET_ACCESS_KEY="<SPACES_SECRET>"
export AWS_ACCESS_KEY_ID=${SPACES_ACCESS_KEY_ID}
export AWS_SECRET_ACCESS_KEY=${SPACES_SECRET_ACCESS_KEY}

Synchronizing environment variables across Workstations

I need to have the configuration for my applications and APIs synchronized across multiple machines.

What’s my situation?

  • I use at least two workstations, plus a server
    • MacBook Pro; for use when out and about
    • ArchLinux Desktop; for use when at home
    • Ubuntu Server; for hosting permanent services

What does this mean?

As I’m working across multiple devices, I need to make sure that the equivalent configuration is available on all of them, immediately. I use SyncThing to keep my personal configuration, such as environment variables, synchronized across all devices. I don’t use Git because it adds the extra step of manually pulling down the configuration each time, and because I don’t always have access to my local Git repository.

macOS and Linux are both UNIX-like platforms, so I can keep my configuration files uniform. I use Bash scripts to define the environment variables needed for any APIs that I use.

How did I achieve it?

Directory structure & files needed

I use ~/.config/zai as my configuration directory, set SyncThing to watch it, and point the other workstations at the same path. A file named rc.sh lives inside it to centralize anything I want to run upon loading the terminal.
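
The resulting layout looks roughly like this (the files under env/ are whatever per-service scripts I happen to have, such as the digitalocean.sh file mentioned earlier):

~/.config/zai/
├── rc.sh           # sourced from the shell initialization script
└── env/
    ├── digitalocean.sh
    └── ...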

Installing SyncThing

Installing on Linux

Luckily, SyncThing is already available in the default repositories of most Linux distributions.

pacman -S syncthing # Arch Linux
apt install syncthing # Debian / Ubuntu

# Enable & Start Syncthing
systemctl enable --now syncthing@<username>
Installing on macOS

On macOS it’s slightly more manual, but the instructions are provided within the downloadable ZIP file for macOS.

Sourcing the rc.sh from the shell

The following snippet needs to be placed in a shell initialization script, which differs depending on the platform. The source command tells Bash to read (and execute) the file that follows it.

source ~/.config/zai/rc.sh
macOS

macOS will execute the ~/.bash_profile script upon opening a new Bash shell. I switch between zsh and bash from time to time, so either I manually execute /usr/bin/bash to take me to the Bash environment, or I’d just change the default shell under the Terminal properties.

Linux

Most Linux platforms will execute ~/.bashrc upon opening a new shell, assuming that Bash is the default shell.

rc.sh

I keep this file simple: it loops through all the Bash files inside the env/ subdirectory and sources each one. This saves me from having a single file with numerous lines.

for file in ~/.config/zai/env/*.sh; do
    source "$file"
done

What’s next?

I’m diving into the world of DevOps, and will need to configure my local systems to:

  • Hold the API Credentials for the cloud service(s) of my choice
  • Hold the API Credentials for an S3 bucket location of my choice