TrueNAS Scale: online expand disks

Say you’re running TrueNAS Scale as a VM for some reason instead of on bare metal, and your zpool sits on a virtual disk that’s getting full. You could add another virtual disk to the pool, with data striped across all vdevs – but that gets messy if you need to expand in increments, because you end up with a bunch of vdevs you can never remove from the pool again (as of 2025 at least; that might become possible at some point in the future). So you just increase the size of the virtual disk and are done with it, right? Sadly, no – TrueNAS Scale’s autoexpand doesn’t quite work like that and needs a little help from the CLI. Here’s how you can still get it done without having to reboot or take your system offline:

  1. Look at your pool to find out which disks are in use: zpool list -v
  2. Look in /dev/disk/by-partuuid to check which disk the partition in use sits on – for example, if the symlink points to vdb1, you’ll want to work on vdb in the next step
  3. parted /dev/vdb resizepart 1 100% – you’ll want to run that twice: the first time it asks you to fix the GPT to use all the space that’s now available, the second time it actually grows the partition
  4. If your pool already has autoexpand=on set, you probably won’t have to do anything else and the pool is already expanded to the new size of the virtual disk. Otherwise you might have to manually online the device with the expand flag set: zpool online -e yourzpoolname yourpartuuid
  5. Check your pool with another zpool list to see that everything worked as expected

That should be it, your pool increased in size without any interruption.
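The steps above, condensed into one session – the pool name tank is a stand-in for your actual pool, vdb/vdb1 are the example devices from step 2, and the partuuid placeholder is whatever your symlink is named:

```shell
# 1. find out which disks back the pool
zpool list -v

# 2. map the partuuid shown there to its parent disk:
#    if the symlink points to vdb1, the disk is vdb
ls -l /dev/disk/by-partuuid

# 3. grow partition 1 to use the whole disk – run it twice:
#    the first run fixes the GPT, the second actually resizes the partition
parted /dev/vdb resizepart 1 100%
parted /dev/vdb resizepart 1 100%

# 4. only needed with autoexpand=off: online the device with the expand flag
zpool online -e tank <yourpartuuid>

# 5. verify the new pool size
zpool list -v
```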

Workgroup Bridge with older Cisco Access Points and new Mobility Express Releases

Since I just spent the better part of a weekend finally getting this working, here’s a nasty little bug that took forever to track down: starting with release 8.10.150.0, if you want some older Cisco access points to connect to your network in WGB mode, you need to tweak your security settings a little bit, otherwise they just won’t connect. “cannot associate: EAP authentication failed” is one of the various not exactly helpful error messages you’re probably very familiar with by now if you found this post…

Cisco actually points this out in their documentation, but of course that was the last place I looked: config wlan security wpa akm psk pmkid {enable | disable} wlan_id

That is, if you want a WGB to connect to your WLAN 3:

config wlan disable 3
config wlan security wpa akm psk enable 3
config wlan security wpa akm psk pmkid enable 3
config wlan enable 3

Once that’s set, just configure your WGB like you usually would – for example:

dot11 ssid yourssid
   authentication open
   authentication key-management wpa version 2
   guest-mode
   wpa-psk ascii yourpresharedkey
!
interface Dot11Radio0
 no ip address
 no ip route-cache
 encryption mode ciphers aes-ccm
 ssid yourssid
 station-role workgroup-bridge
 bridge-group 1
 bridge-group 1 subscriber-loop-control
 bridge-group 1 spanning-disabled
 bridge-group 1 block-unknown-source
 no bridge-group 1 source-learning
 no bridge-group 1 unicast-flooding

With PMKID enabled, a 2600 Series or even something as old as a 1131AG will connect like it’s supposed to.

vcsu ERROR: Invalid IP address

If you run into the less-than-helpful message “ERROR: Invalid IP address” when trying to use the Virtual Connect Support Utility (vcsu for short) – because you haven’t had to use that tool in a while, or ever: make sure you’re using the IP address and credentials of your blade enclosure’s Onboard Administrator (OA), not those of the VC device you want to update/check/whatever.

Turns out vcsu logs into your OA first, checks which VC interconnects are installed, and goes from there. You’ll have to enter your VC login credentials at a later stage once the initial assessment is done.
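A typical run therefore points at the OA first. The flags below are from memory of the VCSU CLI and everything in angle brackets is a placeholder, so double-check against vcsu -h before running anything:

```shell
# -i/-u/-p address the Onboard Administrator, not the VC module itself
vcsu -a healthcheck -i <oa-ip> -u <oa-admin> -p <oa-password>
vcsu -a update -i <oa-ip> -u <oa-admin> -p <oa-password> -l <firmware-package>
```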

Intel SSD Update ISO fails to boot

In case you’re trying to update the firmware on your Intel SSDs using their handy-dandy issdfut ISO (version 3.0.7 or 3.0.8), only to be greeted by an ISOLINUX error like

Failed to load ldlinux.c32
Boot failed: press a key to retry…

Chances are their ISO is broken when booting in legacy BIOS mode. Switch to UEFI and it will most likely work the way it’s supposed to.
When you’re done, don’t forget to switch back to legacy BIOS if you installed your OS that way, otherwise it probably won’t be able to boot.

oVirt node-ng update fix

Having problems updating some of your oVirt nodes? Chances are you have a leftover var_crash volume you need to remove manually, and/or you ran out of space on your PV/VG if your node runs on rather small disks.

For a quick fix, SSH into your node and run these:

# the volume group and image-layer names are placeholders –
# run lvs to see what they are called on your node
lvremove /dev/<vg_name>/<old_image_layer>.0
lvremove /dev/<vg_name>/<old_image_layer>.0+1
lvremove /dev/<vg_name>/var_crash
fstrim -av

Then, to upgrade your image, run

yum update ovirt-node-ng-image-update

Reboot your node and run the upgrade from the engine web interface once again to make sure you’re now on the latest release.
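Assuming a reasonably recent node-ng image, you can sanity-check the layer layout before and after the cleanup with oVirt’s nodectl tool:

```shell
nodectl info    # lists the installed image layers and which one is current
nodectl check   # basic health check of the node's layout and storage
```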

Noisy Focusrite Scarlett and how to fix it

Having noise issues with your external USB audio interface when it’s connected to a desktop PC and powered studio monitors? Chances are you have a grounding problem… Make sure you are using balanced cables – three conductors (hot and cold signal lines plus ground) – to connect your speakers to the outputs of your interface.

I recently spent quite a few hours chasing down an extremely annoying crackling hum on my audio setup because of that. No ground loop, all devices connected to the same circuit – I even hooked everything up to an online UPS to get a perfect sine wave, but to no avail. When connected to my main workstation there was a noticeable and extremely annoying background noise on my monitors that wasn’t there when I plugged the Focusrite Scarlett 2i2 into my notebook or my mobile phone via a USB OTG adapter. Heck, it wasn’t even there when I plugged it into an HP Z600 workstation that is sitting right beside my main computer, all connected to the same UPS, ethernet switches and whatnot…

Turns out I had used unbalanced cables from the Scarlett to the monitors, and apparently the MSI X99S mainboard (or maybe the power supply?) in my primary desktop computer puts quite a bit of noise on the ground of the USB bus for some reason. Or something like that – I didn’t bother getting an oscilloscope to verify.

Swapping the cables from TS to proper TRS (tip-sleeve “mono” to tip-ring-sleeve “stereo” aka balanced) fixed it and everything sounds perfect, no matter what computer the interface is connected to.

Dockerized Ubuntu mirror via nginx proxy

Recently one of my older servers died and I decided to move its data and services to other, newer systems and get rid of the old, power-hungry hardware. One of those services was my local Ubuntu mirror for all the other servers in that colo. Accelerating package updates is nice, but storing hundreds of GB of data that is rarely if ever used isn’t. Better to replace it with a small reverse proxy – basically mirroring the mirrors I regularly use and storing only the stuff that’s actually requested…

Squid seems a bit much and way too complex for this; Varnish works, but would need another small web server alongside it if I also want to serve some local files (for example misc. ISO images), so… let’s go with nginx: a very lightweight, fast HTTP server and proxy in one package. And let’s run it as a docker service, so I can quickly deploy it wherever I want in the future by copying just two files:

docker-compose.yml

mirror:
  image: nginx
  volumes:
    - ./mirror.conf:/etc/nginx/conf.d/default.conf
    - ./index.html:/var/www/index.html
    - ./repo-ubuntu:/var/repo_mirror/
    - ./cdimages:/var/www/cdimages/
  ports:
    - "80:80"
  command: /bin/bash -c "nginx -g 'daemon off;'"

And the mirror.conf with the nginx configuration:

upstream ubuntu {
# choose your nearest mirror  
  server de.archive.ubuntu.com;
  server uk.archive.ubuntu.com;
  server us.archive.ubuntu.com;
  server archive.ubuntu.com backup;
}

tcp_nopush on;
tcp_nodelay on;
types_hash_max_size 2048;

# where the cache is located on disk – to keep the data persistent,
# make it a docker volume
# levels     defines the cache path hierarchy
# keys_zone  defines name and size of the zone where all cache keys
#            and cache metadata are stashed
# inactive   data access timeout – don't cache packages for more
#            than two weeks
# max_size   cache size limit
proxy_cache_path /var/repo_mirror levels=1:2 keys_zone=repository_cache:50m inactive=14d max_size=10g;

server {

  listen 80;

  root /var/www/;

  # some additional ISO files on the mirror, added via docker volume
  location /cdimages/ {
    autoindex on;
  }

  # don't log in production mode, way too much info
  access_log off;

  # Location directive for the /ubuntu path
  location /ubuntu {
    
    # cache root, see above
    root /var/repo_mirror/index_data;

    # look for files in the following order
    try_files $uri @ubuntu;
  }

  # directive for the location defined above
  location @ubuntu {

    # map to upstream
    proxy_pass http://ubuntu;

    # two weeks of caching for http code 200 response content
    # 15 minutes for 301 and 302
    # one minute for everything else

    proxy_cache_valid 200 14d;
    proxy_cache_valid 301 302 15m;
    proxy_cache_valid any 1m;

    # set "repository_cache" zone defined above
    proxy_cache repository_cache;

    # Use stale data in those error events
    proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;

    # go to the next (or backup) server on those error events
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;

    # lock parallel requests and fetch from backend only once
    proxy_cache_lock on;

    # pass the original request info on to the upstream
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # set some debug headers, just in case
    add_header X-Mirror-Upstream-Status $upstream_status;
    add_header X-Mirror-Upstream-Response-Time $upstream_response_time;
    add_header X-Mirror-Status $upstream_cache_status;
  }
}

Adjust to your liking – this currently works for me.
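Once the container is up (docker-compose up -d), point your clients at the proxy. The hostname mirror.example.lan and the release name jammy below are placeholders – use whatever DNS name or IP the container is reachable at and whatever release your servers actually run:

```
# /etc/apt/sources.list on the client machines
deb http://mirror.example.lan/ubuntu jammy main restricted universe multiverse
deb http://mirror.example.lan/ubuntu jammy-updates main restricted universe multiverse
deb http://mirror.example.lan/ubuntu jammy-security main restricted universe multiverse
```

After an apt update on a client, the X-Mirror-Status response header shows HIT or MISS, so you can tell whether a package was served from the cache or fetched upstream.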