Update Docker containers via CLI

There are several ways to update Docker containers. You can use Portainer or Watchtower, but if you only have a few containers running, you can also do it via the CLI.

These steps assume that a (docker-)compose.yml file is present in the /docker directory and that the Docker Compose plugin is installed.

Log in to the Docker host

ssh user@host

Navigate to the directory

cd /docker/

Stop running containers

docker compose down

Pull the latest images

docker compose pull

Prune unused images

docker image prune

Confirm with Y and press Enter

Start the containers

docker compose up -d

Check running containers

docker ps
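
If you do this regularly, the steps above can be combined into a small script. A minimal sketch, assuming the compose project lives in /docker as described above:

#!/bin/sh
# Update the containers defined in /docker
cd /docker/ || exit 1
docker compose down      # stop and remove the running containers
docker compose pull      # pull the latest images
docker image prune -f    # remove dangling images without prompting
docker compose up -d     # recreate and start the containers in the background
docker ps                # verify that everything is running again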

Back up Proxmox containers and virtual machines to a portable USB drive

I use a Samsung Portable SSD T7 for offline backups of containers and VMs. It is formatted with the exFAT filesystem, making it readable by multiple operating systems.

This guide requires basic Linux knowledge and is intended for occasional backups, as there are better solutions than backing up to portable SSDs.

Mounting the portable USB drive in Debian

  1. Open the shell via the Proxmox web interface or log in via SSH
  2. List the currently attached SCSI, SATA, or USB drives:
    ls -l /dev/ | grep sd
  3. If the command output is not empty, it will list the connected drives and partitions:
    sda
    sda1
    sda2
    sda3
  4. Attach the portable SSD to the computer on which Proxmox is running
  5. Check if the portable SSD is correctly detected by Proxmox:
    lsusb
  6. The portable SSD should be visible in the results:
    Bus 002 Device 002: ID 04e8:61fb Samsung Electronics Co., Ltd PSSD T7 Shield
  7. Create a local directory which will be used to mount the portable SSD:
    mkdir -p /root/ssd/
  8. List the attached disks again to verify which disk holds the partition we want to mount into the local folder:
    ls -l /dev/ | grep sd
  9. If the command output is not empty, it will list the connected drives and partitions:
    sda
    sda1
    sda2
    sda3
    sdb < Samsung Portable SSD
    sdb1 < exFAT partition
  10. Mount the exFAT partition sdb1 on the local folder /root/ssd/:
    mount /dev/sdb1 /root/ssd/
  11. Create a folder called proxmox_backup:
    mkdir -p /root/ssd/proxmox_backup/
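
Before continuing, it can be useful to verify that the partition is really mounted. Two quick checks, assuming the partition is /dev/sdb1 as above:

mount | grep sdb1
df -h /root/ssd/

If the mount command fails with an unknown-filesystem error, exFAT support may be missing; on older Proxmox/Debian releases, installing exfatprogs (or exfat-fuse on even older releases) usually solves this.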

Add the portable USB drive as a backup target in Proxmox

  1. In the Proxmox web interface, click Datacenter > Storage > Add > Directory
  2. Enter the following information:
    ID: SamsungT7
    Path: /root/ssd/proxmox_backup/
    Content: select Disk image and VZDump backup file
  3. Click Add
  4. A new entry should be added with the ID you entered
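
For reference, the same storage can also be added from the Proxmox shell with pvesm. A sketch of the equivalent command, using the ID and path from above:

pvesm add dir SamsungT7 --path /root/ssd/proxmox_backup/ --content images,backup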

Create a backup of a container or virtual machine

I prefer to back up containers and virtual machines that are shut down to ensure no changes are made during the backup process.

Repeat the steps for every container or virtual machine you want to back up.

  1. Select the container or virtual machine you want to back up
  2. Shut down the container or virtual machine
  3. Click Backup in the vertical toolbar
  4. Click Backup now
  5. Make sure the settings are:
    Storage: SamsungT7
    Mode: Stop
  6. Click Backup
  7. When the backup is successful, start the container or virtual machine again
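
The same backup can also be made from the command line with vzdump. A hedged example, assuming the container or virtual machine has ID 100 and using the storage ID from above:

vzdump 100 --storage SamsungT7 --mode stop --compress zstd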

Remove the portable USB drive as a backup target in Proxmox

Ensure all backups are finished before proceeding with these steps:

  1. In the Proxmox web interface, click Datacenter > Storage > SamsungT7 > Remove
  2. Confirm with Yes
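
Alternatively, the storage definition can be removed from the shell; this only removes the storage entry, not the backup files on the SSD:

pvesm remove SamsungT7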

Unmount the portable USB drive in Debian

  1. Open the shell via the Proxmox web interface or log in via SSH
  2. Go to the root of the filesystem:
    cd /
  3. Unmount the portable USB drive:
    umount /root/ssd/
  4. Check if the partition sdb1 is really unmounted – the result should be empty:
    df -h | grep sdb
  5. Remove the portable SSD from the computer on which Proxmox is running.
  6. Store the portable SSD somewhere safe

Flash Zigbee dongle with Z-Stack-firmware using Ubuntu Live

This guide is based on my experience flashing the firmware of an Electrolama ZZH! dongle. It should work with every Zigbee dongle that is supported by the Z-Stack-firmware.

These steps are based on information and tools from the sources referenced below.

Ubuntu Live

We use a live install of Ubuntu Desktop on a USB drive. Download the ISO and use a tool like Rufus or BalenaEtcher to flash the ISO onto the USB drive. Boot the device from the USB drive, make sure you are on the desktop, and confirm you have a working internet connection before proceeding.

Preparations

Before we can flash the new firmware onto the dongle, we first need to download the new firmware and install some Python modules. This way, we can back up the current configuration, flash the firmware, and then restore the backup onto the new firmware.

Download new firmware

First, consult this page to find which firmware you need for your Zigbee dongle: https://github.com/Koenkk/Z-Stack-firmware/blob/master/coordinator/Z-Stack_3.x.0/bin/README.md

After finding the correct version, download the compatible Z-Stack firmware from KoenKK’s GitHub: https://github.com/Koenkk/Z-Stack-firmware/tree/master/coordinator/Z-Stack_3.x.0/bin

Extract the correct .hex file from the ZIP file into /home/ubuntu/

Download cc2538-bsl

Open the terminal and make sure you are in /home/ubuntu/. Then type the following command:

wget -O cc2538-bsl.zip https://codeload.github.com/JelmerT/cc2538-bsl/zip/master && unzip cc2538-bsl.zip

Install python3-pip

The next steps are all executed in the terminal in the folder /home/ubuntu/ as root:

Open the terminal and insert the following:

sudo -s

You should now see root@ubuntu:/home/ubuntu#

We need pip to install the other packages, so we install it first:

apt install python3-pip

The live environment runs a recent Ubuntu release, which marks the system Python as externally managed (PEP 668). The following message will therefore appear when trying to install new Python modules with pip:

error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.

    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.

    If you wish to install a non-Debian packaged Python application,
    it may be easiest to use pipx install xyz, which will manage a
    virtual environment for you. Make sure you have pipx installed.

    See /usr/share/doc/python3.11/README.venv for more information.
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.

We still need these packages, and since this is a disposable live environment we can safely override this protection:

pip install zigpy-znp intelhex pyserial --break-system-packages
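
The backup and flash commands below need the serial device path of the dongle in place of /dev/serial/by-id/... With the dongle plugged in, the exact path can be found with:

ls -l /dev/serial/by-id/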

When these packages are installed, it’s time to back up the current configuration:

python3 -m zigpy_znp.tools.network_backup /dev/serial/by-id/... -o zigbee_backup.json

Next, it’s time to update the firmware:

python3 cc2538-bsl.py -p /dev/serial/by-id/... -evw NEW_FIRMWARE.hex

When this is done, it’s time to restore the backup into the new firmware:

python3 -m zigpy_znp.tools.network_restore /dev/serial/by-id/... -i zigbee_backup.json

If everything went well, your Zigbee dongle should now be running the new firmware, making your network more stable, responsive, and secure.

Add Synology NFSv4.1 datastore to VMware ESXi 8

I have a Synology DS214 NAS in my network. A folder on the NAS is shared via NFS. Connecting to this shared folder with NFS version 3 worked fine in VMware ESXi 8.0, but I was unable to access the folder using NFS version 4.1. Newer models may support NFS version 4.1 out of the box; check your NAS settings.

While searching for a solution, I found that in my case the configuration of NFS had to be changed on the Synology DS214 NAS. By default it allows NFS version 4, but not NFS version 4.1.

By following these steps, it is possible to add the Synology NFS datastore to VMware ESXi 8 using NFS version 4.1:

  1. Enable SSH in Synology DSM
  2. Login to SSH with your admin credentials
  3. Type sudo -i to become root
  4. Navigate to /etc/nfs/:
    cd /etc/nfs/
  5. Edit the file syno_nfs_conf:
    vi syno_nfs_conf
  6. Press Insert on your keyboard and change the value of nfs_minor_ver_enable from 0 to 1
  7. Press Escape, type :wq! and then press Enter
  8. Restart the NFS service:
    systemctl restart nfs-server
  9. Log out of SSH by typing exit twice
  10. Mount the NFS Datastore in VMware ESXi

You should now be able to add the NFS share as a datastore to your VMware ESXi server.
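
If you prefer the ESXi command line over the web UI, an NFS 4.1 datastore can also be mounted with esxcli. A sketch in which the NAS address (192.168.2.50), export path (/volume1/datastore) and datastore name are placeholders to replace with your own values:

esxcli storage nfs41 add -H 192.168.2.50 -s /volume1/datastore -v SynologyNFS41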

Based on these instructions: VMware ESXi mount a NFS share on a Synology Diskstation

KPN IPv6 with Ubiquiti USG (without iTV)

Because of a new internet subscription, KPN migrated me from the Telfort network to the KPN network as of 31 October 2020. Among other things, this meant switching from DHCP to PPPoE in order to connect to the internet.

My challenge: getting IPv6 working without too much effort.

Note: I do not use KPN iTV, so it is left out of scope in this post.

Background information

Before I go into my challenge, some background first. KPN uses VLANs to offer its services over a single connection:

  • vlan4: IPTV
  • vlan6: Internet
  • vlan7: VoIP

Normally, the Experiabox supplied by KPN handles access to these services for the devices connected to it. Since the Experiabox is made for users who do not want to look after it any further, there is little that can be changed on the device. It just works.

My network and the accompanying challenge

My network

I use Ubiquiti equipment in my network:

The Unifi Security Gateway can be configured through the Unifi Network Management Controller, but not all of the settings needed to take over the functions of the Experiabox can be applied there. That requires more work.

There are plenty of guides that explain how to take over most of the Experiabox functions by means of a config.gateway.json file.

My challenge

I want IPv6 without having to create a config.gateway.json file. The Experiabox does offer internet access over IPv6 out of the box, so I want that too. In the Unifi Network Management Controller, the WAN offers three options for IPv6:

  • Disabled
  • DHCPv6 (with Prefix Delegation)
  • Static IP

I chose DHCPv6 with a Prefix Delegation size of 48, but without result: no IPv6 address on the WAN.

While searching for a solution, I came across this article:

And it turned out to be the solution!

My settings

Below are my settings in the Unifi Network Management Controller:

  1. Navigate to Settings > Networks > LAN > Configure IPv6 Network
  2. After entering these settings, click Save.
  3. Then go to Devices > USG > Config tab > Manage Device > Force Provision > Provision. This re-provisions the Unifi Security Gateway and also retrieves the IPv6 address with the accompanying information.
  4. Open PowerShell / Command Prompt and enter: ipconfig
  5. If everything went well, the output of ipconfig will show IPv6-related information.
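
To verify actual IPv6 connectivity, and not just that an address was assigned, a quick check that works in both PowerShell and a Linux shell:

ping -6 google.com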

SPF record for kpnxchange.com

To be able to send mail from your own mail server from within the KPN network, the following must be added to the SPF record of the sending domain:

include:spf.ews.kpnxchange.com

This authorizes the mail servers of the kpnxchange.com domain to send mail on behalf of the sending domain.
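
As an illustration, a complete SPF record with this include could look like the sketch below; the mx mechanism and the ~all policy are assumptions that depend on your own setup:

v=spf1 mx include:spf.ews.kpnxchange.com ~all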

DNS over HTTPS with nginx, dnsdist and Pi-hole

When I was looking for something new to build I ended up building a DNS over HTTPS server. This way I can use my Pi-hole server wherever I am, without exposing port 53. I let nginx handle the encryption of the HTTPS connection, send the information to dnsdist for translation to DNS, and let Pi-hole filter the queries using my blocklists.

The following is assumed:

  • You have nginx up and running
  • You have a subdomain (doh.domain.tld or dns.domain.tld) with valid certificates (Let's Encrypt or commercial)
  • You have installed dnsdist, but not yet configured it
  • You have a Pi-hole server up and running, configured to your wishes

These instructions are based on this tutorial from nginx.com, which I could not get to work:

https://www.nginx.com/blog/using-nginx-as-dot-doh-gateway

That is why I adapted their configuration to use dnsdist instead of their njs scripting language.

nginx

The configuration of nginx (saved as dns.domain.nl):

# Proxy Cache storage - so we can cache the DoH response from the upstream
proxy_cache_path /var/run/doh_cache levels=1:2 keys_zone=doh_cache:10m;

server {
    listen 80;
    server_name dns.domain.nl;
    return 301 https://dns.domain.nl$request_uri;
}

# This virtual server accepts HTTP/2 over HTTPS
server {
    listen 443 ssl http2;
    server_name dns.domain.nl;

    access_log /var/log/nginx/doh.access;
    error_log /var/log/nginx/doh.error error;

    ssl_certificate /etc/letsencrypt/live/dns.domain.nl/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/dns.domain.nl/privkey.pem;

    # DoH may use GET or POST requests, Cache both
    proxy_cache_methods GET POST;

    # Return 404 to all responses, except for those using our published DoH URI
    location / {
        try_files $uri $uri/ =404;
    }

    # This is our published DoH URI
    location /dns-query {

      # Proxy HTTP/1.1, clear the connection header to enable Keep-Alive
      proxy_http_version 1.1;
      proxy_set_header Connection "";

      # Enable Cache, and set the cache_key to include the request_body
      proxy_cache doh_cache;
      proxy_cache_key $scheme$proxy_host$uri$is_args$args$request_body;

      # proxy pass to dnsdist
      proxy_pass http://127.0.0.1:5300;
    }
}

nginx returns a 404 error when you visit https://dns.domain.nl/ directly. It only proxies to 127.0.0.1:5300 when data is sent to https://dns.domain.nl/dns-query.

Check the configuration of nginx

nginx -t

It should give the following output:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Restart nginx to load the new configuration:

systemctl restart nginx.service

dnsdist

The (minimal working) configuration of dnsdist (saved as dnsdist.conf):

Note: this is a minimal configuration. No measures have been taken with regard to security or abuse. Consult the documentation for more information.

-- dnsdist configuration file, an example can be found in /usr/share/doc/dnsdist/examples/

-- disable security status polling via DNS
-- setSecurityPollSuffix("")

-- fix up possibly badly truncated answers from pdns 2.9.22
-- truncateTC(true)

-- Answer to only clients from this subnet
setACL("127.0.0.1/8")

-- Define upstream DNS server (Pi-hole)
newServer({address="192.168.2.100", name="Pi-hole", checkName="dc01.domain.nl.", checkInterval=60, mustResolve=true})

-- Create local DOH server listener in DNS over HTTP mode, otherwise the information coming from nginx won't be processed well
addDOHLocal("127.0.0.1:5300", nil, nil, "/dns-query", { reusePort=true })

A few things are important here:

  • I’ve set an ACL that allows dnsdist to only answer to queries from the subnet 127.0.0.1/8.
  • I’ve added an upstream (downstream according to dnsdist) DNS server with the IP address 192.168.2.100. It’s configured with a custom checkName and checkInterval. Normally, dnsdist sends a query to a.root-servers.net every second(!). With this configuration, it checks another server – my domain controller – every 60 seconds.
  • I’ve added a DOH listener on the loopback address 127.0.0.1:5300. This is configured as DNS over HTTP, because nginx takes care of the decryption of the connection.

Check the configuration of dnsdist:

dnsdist --check-config

It should give the following output:

No certificate provided for DoH endpoint 127.0.0.1:5300, running in DNS over HTTP mode instead of DNS over HTTPS
Configuration '/etc/dnsdist/dnsdist.conf' OK!

Restart dnsdist to load the new configuration:

systemctl restart dnsdist.service

To check if dnsdist is listening to 127.0.0.1:5300:

netstat -tapn | grep 5300

It should give the following output:

tcp 0 0 127.0.0.1:5300 0.0.0.0:* LISTEN 4435/dnsdist

Configuring the browser

Now it’s time to configure your browser to use your new DNS over HTTPS server. This website explains how to configure your web browser to use DNS over HTTPS:

https://developers.cloudflare.com/1.1.1.1/dns-over-https/web-browser/
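
Before changing the browser settings, the endpoint can also be tested from the command line. curl (7.62 or newer) can resolve a hostname through the new DoH server; a quick sanity check:

curl -v --doh-url https://dns.domain.nl/dns-query https://example.com/

If the resolution succeeds, the request shows up in the nginx access log discussed below.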

Final inspection

To make sure it’s working properly, we need to inspect the logs. nginx keeps a log of access and error messages. We will look at those logs to see if the information is passed on correctly to dnsdist.

Take a look at the access logs of nginx:

cat /var/log/nginx/doh.access

It should give the following output:

192.168.2.1 - - [09/Aug/2020:11:55:05 +0200] "POST /dns-query HTTP/2.0" 200 107 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:55:05 +0200] "POST /dns-query HTTP/2.0" 200 107 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:55:05 +0200] "POST /dns-query HTTP/2.0" 200 122 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:55:05 +0200] "POST /dns-query HTTP/2.0" 200 102 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:55:05 +0200] "POST /dns-query HTTP/2.0" 200 125 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:55:06 +0200] "POST /dns-query HTTP/2.0" 200 102 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:55:06 +0200] "POST /dns-query HTTP/2.0" 200 122 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:55:06 +0200] "POST /dns-query HTTP/2.0" 200 125 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:55:08 +0200] "POST /dns-query HTTP/2.0" 200 112 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:55:08 +0200] "POST /dns-query HTTP/2.0" 200 112 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:55:19 +0200] "POST /dns-query HTTP/2.0" 200 140 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:55:19 +0200] "POST /dns-query HTTP/2.0" 200 152 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:55:20 +0200] "POST /dns-query HTTP/2.0" 200 175 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:55:20 +0200] "POST /dns-query HTTP/2.0" 200 137 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:56:15 +0200] "POST /dns-query HTTP/2.0" 200 64 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:56:15 +0200] "POST /dns-query HTTP/2.0" 200 64 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:56:15 +0200] "POST /dns-query HTTP/2.0" 200 64 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:56:15 +0200] "POST /dns-query HTTP/2.0" 200 64 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:56:21 +0200] "POST /dns-query HTTP/2.0" 200 59 "-" "-"
192.168.2.1 - - [09/Aug/2020:11:56:30 +0200] "POST /dns-query HTTP/2.0" 200 55 "-" "-"

It’s important that you see 200 (after HTTP/2.0) in the logs. This means that nginx was able to pass the request on to dnsdist. Anything else indicates that something has gone wrong.

If something has gone wrong, it will show up in the error log:

cat /var/log/nginx/doh.error

It should (hopefully not) give the following output:

2020/08/09 11:15:26 [error] 946#946: *511 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.2.1, server: dns.domain.nl, request: "POST /dns-query HTTP/2.0", upstream: "http://127.0.0.1:5300/dns-query", host: "dns.domain.nl"

Install pi_mqtt_gpio on Raspbian

I want to receive a notification when someone rings the doorbell at my front door. I could buy all kinds of expensive (wireless) doorbells for this, but I want to use the existing wiring. Because I use Home Assistant for all kinds of automation at home, I also want to integrate this into Home Assistant. Since there is already a Raspberry Pi in the meter cupboard for reading my smart meter, this seemed like a logical addition.

For this I found this repository: https://github.com/flyte/pi-mqtt-gpio

This Python application does exactly what I want:

  1. It reads the GPIO pins from the Raspberry Pi Board
  2. It allows me to configure which pins have to be read
  3. It allows the Pi to send the information to an MQTT broker

This allows me to use my Home Assistant installation to notify me when someone rings my doorbell, so I am always informed when someone is at my front door, even when I'm not at home. It also makes it possible to automatically turn on a light in the hallway when it's dark.

It was a bit of a headache to make it work on my Raspberry Pi 3B with Raspbian, so I wrote down the instructions.

All commands are executed as the root user.

Virtual Env

First, create a Python 3 virtual environment in /home/pi:

/home/pi# python3 -m venv pi_mqtt_gpio

Next, activate the virtual environment:

/home/pi# . pi_mqtt_gpio/bin/activate

Your shell should now show this:

(pi_mqtt_gpio) root@rpi3:

Install the following packages with pip3 (if pip3 is not installed, use apt install python3-pip):

  • pi_mqtt_gpio
  • rpi.gpio

(pi_mqtt_gpio) root@rpi3: pip3 install rpi.gpio pi_mqtt_gpio

In addition, a number of dependencies are installed as well (enum34, PyYAML, cerberus, paho-mqtt).

Configuration

In the configuration file you define which MQTT broker the data should be sent to and which GPIO pins should be monitored.

Read more about the configuration of pi_mqtt_gpio on its GitHub page.

Note: this tutorial assumes you save the file pi-mqtt-gpio-config.yaml in the folder /home/pi
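
For reference, a rough sketch of what such a pi-mqtt-gpio-config.yaml could look like for a doorbell on GPIO pin 17. The broker address, topic prefix and pin number are assumptions, and the key names should be checked against the project documentation linked above:

mqtt:
  host: 192.168.2.5          # IP address of your MQTT broker (assumption)
  port: 1883
  topic_prefix: home/doorbell

gpio_modules:
  - name: raspberrypi
    module: raspberrypi

digital_inputs:
  - name: doorbell
    module: raspberrypi
    pin: 17                  # BCM pin the doorbell is wired to (assumption)
    on_payload: "ON"
    off_payload: "OFF"
    pullup: no
    pulldown: yes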

Supervisor

Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.

Install Supervisor

apt install supervisor

Open the Supervisor configuration folder

cd /etc/supervisor/conf.d/

Create a new file with the filename pi_mqtt_gpio.conf

nano pi_mqtt_gpio.conf

Add the following lines:

[program:pi_mqtt_gpio]
command = /home/pi/pi_mqtt_gpio/bin/python -m pi_mqtt_gpio.server /home/pi/pi-mqtt-gpio-config.yaml
; autorestart is disabled to avoid a restart loop
autorestart = false
directory = /home/pi
redirect_stderr = true
stdout_logfile = /var/log/pi-mqtt-gpio.log

Update Supervisor so that it picks up the new program configuration:

supervisorctl update

This should give the following output:

pi_mqtt_gpio: updated process group

Now it’s time to start pi_mqtt_gpio with Supervisor:

supervisorctl start pi_mqtt_gpio

This should give the following output:

pi_mqtt_gpio: started

Check the logfile for the correct operation of the program:

tail -f /var/log/pi-mqtt-gpio.log

An example of a working configuration:

2020-04-28 12:06:28,550 mqtt_gpio (INFO): Startup
2020-04-28 12:06:29,039 mqtt_gpio (INFO): Connected to the MQTT broker with protocol v3.1.1.
2020-04-28 12:06:29,049 mqtt_gpio (INFO): Polling: Input 'doorbell' state changed to False

Now you can connect a doorbell to your Raspberry Pi. When you ring the doorbell, a message should be sent to your MQTT broker.
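
To verify that the messages actually arrive, you can subscribe to the broker from any machine with the Mosquitto clients installed. The broker address and topic prefix below are assumptions and must match your configuration:

mosquitto_sub -h 192.168.2.5 -t 'home/doorbell/#' -v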

AsusWRT: block Google DNS with iptables

By default, every Google device uses the following preconfigured DNS servers:

  • 8.8.8.8
  • 8.8.4.4

But I don’t want guests on my WiFi to let Google phone home and reveal who is visiting my network.

I use iptables to block those DNS requests. The firewall rejects every DNS request destined for Google, so clients have no option but to use the DNS server advertised by my DHCP server.

User scripts

The AsusWRT (and asuswrt-merlin) firmware allows me to add user scripts. The following two rules are loaded once the firewall (iptables) has started.

iptables -I FORWARD --destination 8.8.8.8 -j REJECT
iptables -I FORWARD --destination 8.8.4.4 -j REJECT

Save this code in the folder /jffs/scripts/ with the filename firewall-start.
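
A sketch of the complete /jffs/scripts/firewall-start script; note that custom JFFS scripts must be enabled in the firmware and the file must be executable (chmod a+rx /jffs/scripts/firewall-start):

#!/bin/sh
# Reject forwarded DNS traffic to Google's public resolvers
iptables -I FORWARD --destination 8.8.8.8 -j REJECT
iptables -I FORWARD --destination 8.8.4.4 -j REJECT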

Once your router has (re)booted and the rules have been loaded into iptables, every DNS request to Google will be rejected. When testing this with a ping to 8.8.8.8 (or 8.8.4.4), the result should be:

Pinging 8.8.8.8 with 32 bytes of data:
Reply from 192.168.2.1: Destination port unreachable.
Reply from 192.168.2.1: Destination port unreachable.
Reply from 192.168.2.1: Destination port unreachable.
Reply from 192.168.2.1: Destination port unreachable.

Handy PowerShell commands for myself

Adding long DKIM keys with dnscmd

The problem with PowerShell: it cannot handle strings longer than 255 characters. My DKIM key is longer than that, and despite various attempts I did not manage to add it to the DNS zone with PowerShell.

Fortunately, dnscmd still works from PowerShell, so with the command below I can still create this type of TXT record.

dnscmd /RecordAdd maartenvandekamp.nl 2019._domainkey TXT "v=DKIM1; t=s;" "p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAsNounMXAATZ5rc8NzV3HlB31GT6/fRrbACDCyEXMDrwD84Q79uIFUgflvPbO6BHlmfY73IQVpuV+DAgyTebEjaTD9iatRII/z+5hjv8a4pzuVlnFycKjd0A4btIw3NnCI+rnbUpSNrrFp88bQM/yznxPniAGHzF3Y9Fi3CznoSwnZyvZ0JF81nkwN7R9A7fy86nCAg0bzlE4kh" "IKZ1rIMl4vGvP/ntgh3AxaizrPAzXro9HPeIyAbOaspug1NuW90uTAYy9L/IygX+IjC9nouBo4wqM9uzc/GPNipPjcHGzTECQX+loOkr2rJJ5blGEEciB4djLMDc2r7OECB5f/3QIDAQAB"

The important part is that a space is placed between the quoted strings; the text is then placed on a new line in the DNS zone.

Adding a reservation

I run Pi-hole with an extensive blocklist, but as a result a number of services are blocked that I do want to use on my Philips Smart TV.

To allow the TV to be assigned a different DNS server, I first create a reservation for it:

Add-DhcpServerv4Reservation -ScopeId 192.168.2.0 -IPAddress 192.168.2.20 -ClientId "1C-5A-6B-9E-E8-67" -Description "Philips TV"

Changing a DHCP option on a reservation

After creating the reservation, I change DHCP option 6 to change the assigned DNS server:

Set-DhcpServerv4OptionValue -ComputerName "core19" -ReservedIP 192.168.2.20 -OptionId 6 -Value "192.168.2.8"