Windows 10 on the Acer Aspire X3990

I have a couple of older PCs that I use on my electronics workbench: an Acer Aspire X3990 and an Acer Aspire X3995, both running Windows 10 version 1607 (originally they ran Windows 7 Home Premium, upgraded in place to Windows 10).

Any attempt to upgrade these systems to a newer Windows 10 version failed: the screen just went black and the system hung, with no diagnostics whatsoever. After many frustrating attempts I decided to do a “clean” install.

I extracted the Product Key from the running OS using the free ProduKey utility. Then, I downloaded the Windows 10 Home installation media and created a bootable USB stick.

This time, the installer threw an error before hanging: “clock watchdog timeout”. Google finally came up with a few relevant links (tenforums.com and hardforum.com) that pointed me to the WiFi add-on card.

I never used WiFi on these PCs, so I pulled the card out and, lo and behold, the Windows 10 installation finally completed without problems!

My experience with Ubiquiti UniFi wireless

In late 2016, I replaced my existing Apple / AVM Fritz!Box mix of wireless networking gear with a set of Ubiquiti UniFi AP-AC Pro access points.

Overall, I’m very happy with them. Things I like:

  • Handoff between access points Just Works [tm]
  • They can perform rolling firmware upgrades, one AP at a time.
  • The APs support Power over Ethernet (PoE). This cuts down on cabling. I use them with Netgear GS110TP PoE switches.
  • Ubiquiti gear offers “Single Pane of Glass” management through the (free) Controller software. As an alternative, you can purchase a “Cloud Key”; I haven’t gone that route myself.

Things I don’t like as much:

  • Initially I had a lot of issues with some iPads dropping off the network. After a lot of Googling, I found a post that suggested disabling the “connectivity monitor and wireless uplink”. Since all my APs are wired to the network, I disabled the Uplink Connectivity Monitor under Settings > Services and the problem disappeared.
  • To detach the access point from the mounting ring, you need a small “key”. It’s cumbersome; it would have been nicer to have a slightly larger opening so I could use a small flat-blade screwdriver.

Ubiquiti UniFi controller settings

I’ll be adding a UniFi Security Gateway (USG) soon, to get better insight into the traffic on my wireless networks.

Tip: mark your wireless mouse and USB dongle

We have several identical wireless rodents (Logitech M525; they are nice). This means we also have several identical USB receivers.

To prevent mixups, I’ve color-coded the USB receivers as well as the CE markings on the mice, using different permanent markers.

Synology network bonding with LACP

These are my notes for configuring my HomeLab NAS for LACP (“Link Aggregation”, “network bonding”, etc.) to increase bandwidth.

My home lab consists of a couple of Intel NUCs running the free edition of VMware vSphere 6.0 U2, each with 16GB of RAM and a 256GB SSD. For additional storage, I use a Synology DS1815+ NAS.

As the NUCs have only one 1Gbps network interface, I configured that interface as a ‘trunk’ carrying all VLANs. The Synology NAS has multiple network interfaces; I started out with a single connection.
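For reference, this is roughly what that looks like from the ESXi shell: the physical switch port carries all VLANs tagged, and each portgroup on the standard vSwitch gets its own VLAN ID. The portgroup name and VLAN ID below are just placeholders, not my actual settings:

# create a portgroup on the standard vSwitch and tag it with a VLAN ID
esxcli network vswitch standard portgroup add --portgroup-name "VLAN10" --vswitch-name vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name "VLAN10" --vlan-id 10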

To improve NAS bandwidth (and for my own amusement), I decided to upgrade the single 1Gbps connection to 2x 1Gbps using LACP (the Link Aggregation Control Protocol). I use Netgear smart switches with VLAN and LACP support, so this should be easy…

HomeLab setup

  • Step 1: enable bonding on the Synology; log on to the web admin panel and go to Control Panel > Network > Network Interface > Create > Create Bond. Choose LACP and select the interfaces to bond (I use a static IP address).
  • Step 2: log into the Netgear switch and create a Link Aggregation Group (LAG) consisting of both ports. I used LAG1, with ports 7 and 8.
  • Step 3: connect both network cables and check that everything works (see the verification sketch below).
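Once the bond is up, you can sanity-check the LACP negotiation from an SSH session on the NAS. A minimal check, assuming DSM names the bond device bond0:

# the output should report "Bonding Mode: IEEE 802.3ad Dynamic link aggregation"
# and list both slave interfaces with their link status
cat /proc/net/bonding/bond0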

At this point, I ran into trouble: I couldn’t reach the NAS anymore. It turns out I had made a couple of mistakes in my Netgear configuration. The fixes were:

  • Specify Jumbo Frames (9216 bytes) on the LAG, not (just) on the physical interfaces.
  • Specify the VLAN settings on the LAG; check the VLAN membership as well as the PVID settings.

The Netgear interface doesn’t show LAG settings by default; you need to explicitly select “LAG” or “All”. I overlooked this at first:

[Screenshot: Netgear hidden LAG settings]

Incorrect VLAN settings caused the NAS to drop off the network: LAG traffic wasn’t tagged, even though both physical interfaces were configured correctly. It took me a while to realize this and fix it.

So, here are a couple of screenshots:

Step 1: Synology – Create bond, set IP and Jumbo Frames on the bond

[Screenshot: Synology bond settings]


Step 2: Netgear – Create LAG, set VLAN and Jumbo Frames on the LAG

Create LAG, select members:

[Screenshot: Netgear LAG members]

Configure VLAN membership and PVID for the LAG as well:

[Screenshot: Netgear VLAN membership]


Step 3: Connect and enjoy 2Gbps network bandwidth

I tried copying a couple of large files from the NAS to two different vSphere hosts – bandwidth clearly exceeds 1Gbps now.

[Screenshot: Synology bond result]
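Keep in mind that LACP balances traffic per flow, so a single copy still tops out at 1Gbps; it takes at least two simultaneous streams to see the aggregate. For a more controlled measurement than file copies, something like iperf3 also works, assuming you can get it onto the NAS (e.g. via Docker or a community package) and onto the clients. The hostname and ports below are placeholders:

# on the NAS: one iperf3 server instance per client, on separate ports
iperf3 -s -p 5201 &
iperf3 -s -p 5202 &

# on each client, started at the same time against the NAS
iperf3 -c nas.local -p 5201    # client 1
iperf3 -c nas.local -p 5202    # client 2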

Synology DSM 6 – rebalancing BTRFS

I recently upgraded my Synology from 4x 3TB to 4x 6TB disks (WD Red).

I could have simply installed the new disks and created a BTRFS volume (which would have become /volume2), but I decided to take a different route:

  1. Install 2x 6TB disks and format them as RAID-0 (mounted on /volume2).
  2. Copy all relevant data from the old disks to the new disks, and verify (see the rsync sketch below). If anything went wrong, I would still have my data on the old disks.
  3. Remove the old 4x 3TB /volume1 disks for safekeeping, insert the remaining 2x 6TB disks, and create a new /volume1 using BTRFS.
  4. Copy all data from /volume2 (RAID-0) to /volume1 (BTRFS), and verify.
  5. Destroy /volume2 and add its two disks to /volume1.
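For the copy-and-verify passes in steps 2 and 4, something like rsync, run from an SSH session on the NAS, does the job. A minimal sketch for step 2 (reverse the source and destination for step 4); the excludes skip DSM’s own metadata directories, and the exact paths depend on your share layout:

# copy, preserving attributes and hard links
rsync -aH --progress --exclude='@eaDir' --exclude='#recycle' /volume1/ /volume2/

# verify: checksum-based dry run; no itemized output means source and copy match
rsync -aHcni --exclude='@eaDir' --exclude='#recycle' /volume1/ /volume2/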

Now, all my data is effectively stored on the first 2 disks in the new volume. BTRFS can rebalance the data chunks across all spindles. Open an SSH connection to the Synology NAS, and issue the following command:

btrfs balance start /volume1

This operation took several hours to complete – it had to rebalance about 6 terabytes of data… I opened a second SSH session to monitor progress:

btrfs balance status -v /volume1
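Since the balance runs for hours, it’s useful to know that it can be interrupted cleanly. A running balance can be paused, resumed, or cancelled:

btrfs balance pause /volume1     # stop after the chunk currently being processed
btrfs balance resume /volume1    # continue a paused balance
btrfs balance cancel /volume1    # abort the balance entirely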