Synology network bonding with LACP

These are my notes on configuring my HomeLab NAS for LACP (“Link Aggregation Control Protocol”, also known as “network bonding”) to increase bandwidth.

My home lab consists of a couple of Intel NUCs running the free edition of VMware vSphere 6.0 U2, each with 16GB RAM and a 256GB SSD. For additional storage, I use a Synology DS1815+ NAS.

As the NUCs have only one 1Gbps network interface each, I configured their switch ports as ‘trunks’, carrying all VLANs. The Synology NAS has multiple network interfaces; I started out with a single connection.

To improve NAS bandwidth (and for my own amusement), I decided to upgrade the single 1Gbps connection to 2x 1Gbps using LACP (Link Aggregation Control Protocol). I use NetGear smart switches with VLAN and LACP support, so this should be easy…

HomeLab setup

  • Step 1: enable bonding on the Synology; log on to the web admin panel and go to Control Panel, Network, Network Interface, Create -> Create Bond. Choose LACP (IEEE 802.3ad dynamic link aggregation), select the interfaces to bond, and configure the IP settings (I use a static IP address).
  • Step 2: log in to the NetGear switch and create a Link Aggregation Group (LAG) consisting of both ports. I used LAG1, with ports 7 and 8.
  • Step 3: connect both network cables and verify that everything works (a quick check from the NAS side is sketched below).
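A quick way to double-check the bond from the NAS side (independent of the switch GUI) is the Linux bonding driver’s status file – on DSM the first bond is normally called bond0, so a plain cat /proc/net/bonding/bond0 over SSH already tells you a lot. The small Python sketch below just summarizes that file; it assumes SSH is enabled and a Python interpreter is available on the NAS (both are assumptions on my part, not part of the official setup):

#!/usr/bin/env python3
# Minimal sketch: summarize the LACP bond status on the NAS itself.
# Assumes the bond interface is named bond0 (the DSM default for the first bond).

BOND_STATUS = "/proc/net/bonding/bond0"  # exposed by the Linux bonding driver

def summarize(path=BOND_STATUS):
    slave = None
    with open(path) as status:
        for line in status:
            line = line.strip()
            if line.startswith("Bonding Mode:"):
                # Expect "IEEE 802.3ad Dynamic link aggregation" for an LACP bond
                print(line)
            elif line.startswith("Slave Interface:"):
                slave = line.split(":", 1)[1].strip()
            elif line.startswith("MII Status:") and slave:
                # Each member link should report "up"
                print(slave + ": " + line)
                slave = None

if __name__ == "__main__":
    summarize()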

At this point, I ran into trouble: I couldn’t reach the NAS anymore. It turned out I had made a couple of mistakes in my NetGear configuration. The fix was:

  • Specify Jumbo Frames (9216) on the LAG, not (just) on the physical interfaces.
  • Specify VLAN settings on the LAG; check the VLAN membership as well as the PVID settings.

The NetGear interface doesn’t show LAG settings by default – you need to explicitly select “LAG” or “All” settings. I overlooked this at first:

Netgear-hidden-settings

Incorrect VLAN settings caused the NAS to drop off the network: LAG traffic wasn’t tagged, even though both physical interfaces were properly configured. It took me a while to realize this and fix it.

So, here are a couple of screenshots:

Step 1: Synology – Create bond, set IP and Jumbo Frames on the bond

Synology bond settings


Step 2: Netgear – Create LAG, set VLAN and Jumbo Frames on the LAG

Create LAG, select members:

Netgear-lag-members

Configure VLAN membership and PVID for the LAG as well:

Netgear-vlan-membership


Step 3: Connect and enjoy 2Gbps network bandwidth

I tried copying a couple of large files from the NAS to two different vSphere hosts at the same time – the combined bandwidth now clearly exceeds 1Gbps.

Synology-bond-result
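For a slightly more repeatable number than watching the resource monitor, I sometimes time a large sequential read from the NAS share. The sketch below is my own addition (the mount point and file name are hypothetical); note that LACP balances traffic per flow, so a single client or stream still tops out at roughly 1Gbps – the aggregate only shows up when two hosts (or two flows hashed onto different links) read at the same time.

#!/usr/bin/env python3
# Rough single-flow read throughput from a mounted NAS share (sketch).
# Run it from two different hosts simultaneously to see the aggregate
# exceed a single 1Gbps link. Use a file larger than RAM (or drop the
# page cache between runs) to avoid measuring the local cache instead.
import time

PATH = "/mnt/nas/iso/large-test-file.iso"   # hypothetical: any multi-GB file on the NAS share
CHUNK = 4 * 1024 * 1024                     # read in 4 MiB chunks

def measure(path):
    total = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    elapsed = time.monotonic() - start
    mib_s = total / elapsed / (1024 * 1024)
    mbit_s = total * 8 / elapsed / 1e6
    print("read %.0f MiB in %.1fs -> %.0f MiB/s (about %.0f Mbit/s)"
          % (total / (1024 * 1024), elapsed, mib_s, mbit_s))

if __name__ == "__main__":
    measure(PATH)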

Running the Archiveteam Warrior on ESXi

The Archiveteam Warrior is available for download as an OVA virtual appliance for use with VirtualBox, VMware Workstation/Player etc.

To use this virtual appliance on VMware ESXi 5.1, you need to make some changes related to unsupported virtual hardware.

The instructions below are for Windows – the VMware vSphere Client doesn’t run on my Mac or Linux boxes, so I keep a Windows VM around just to run the vSphere client.

Download the .OVA file and extract its contents

An OVA file is just a TAR archive. You can use 7-Zip to unpack the OVA file (to your Desktop). After unpacking, you should see three new files:

archiveteam-warrior-v2-20121008.ovf
archiveteam-warrior-v2-20121008-disk1.vmdk
archiveteam-warrior-v2-20121008-disk2.vmdk
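If you prefer to script the extraction (or don’t have 7-Zip handy), Python’s standard tarfile module does the same job, since an OVA really is just a TAR archive. A minimal sketch, assuming the download is named after the appliance version above:

#!/usr/bin/env python3
# Unpack the OVA (a plain TAR archive) without 7-Zip -- minimal sketch.
import tarfile

OVA = "archiveteam-warrior-v2-20121008.ova"   # adjust to whatever file name you downloaded

with tarfile.open(OVA) as tar:
    print("\n".join(tar.getnames()))          # should list the .ovf and the two .vmdk disks
    tar.extractall(".")                       # unpack into the current directory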

Modify the .OVF file to make it compatible with ESXi

Open the .OVF file in a text editor (Notepad or Notepad++ will do). It is an XML-formatted file describing the virtual appliance.

First, change the Virtual Machine type (line 38):

<vssd:VirtualSystemType>virtualbox-2.2</vssd:VirtualSystemType>

into:

<vssd:VirtualSystemType>vmx-07</vssd:VirtualSystemType>

Next, locate the virtual SATA storage controller (starting at line 75):

<Item>
 <rasd:Address>0</rasd:Address>
 <rasd:Caption>sataController0</rasd:Caption>
 <rasd:Description>SATA Controller</rasd:Description>
 <rasd:ElementName>sataController0</rasd:ElementName>
 <rasd:InstanceID>5</rasd:InstanceID>
 <rasd:ResourceSubType>AHCI</rasd:ResourceSubType>
 <rasd:ResourceType>20</rasd:ResourceType>
</Item>

This virtual SATA controller is not supported by ESXi 5.1, so replace the item with the following:

<Item>
 <rasd:Address>0</rasd:Address>
 <rasd:Caption>SCSIController</rasd:Caption>
 <rasd:Description>SCSI Controller</rasd:Description>
 <rasd:ElementName>SCSIController</rasd:ElementName>
 <rasd:InstanceID>5</rasd:InstanceID>
 <rasd:ResourceSubType>lsilogic</rasd:ResourceSubType>
 <rasd:ResourceType>6</rasd:ResourceType>
</Item>

Save the OVF file.
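If you need to do this more than once, both edits can also be scripted. The sketch below is my own addition, not part of the original instructions: it does plain string substitution, assumes each of the strings shown above occurs exactly once in the OVF, and writes the patched descriptor to a new file (the output name is my choice) so the original stays untouched. Deploy the patched copy in the next step.

#!/usr/bin/env python3
# Apply the two ESXi compatibility edits to the OVF descriptor -- sketch.
# Plain string replacement only; each original string must occur exactly once.

SRC = "archiveteam-warrior-v2-20121008.ovf"
DST = "archiveteam-warrior-v2-20121008-esxi.ovf"   # patched copy (name is arbitrary)

# (old, new) pairs: VirtualBox system type -> vmx-07, SATA/AHCI controller -> LSI Logic SCSI.
# The <rasd:InstanceID>5</rasd:InstanceID> line stays the same, so it is not listed here.
EDITS = [
    ("<vssd:VirtualSystemType>virtualbox-2.2</vssd:VirtualSystemType>",
     "<vssd:VirtualSystemType>vmx-07</vssd:VirtualSystemType>"),
    ("<rasd:Caption>sataController0</rasd:Caption>",
     "<rasd:Caption>SCSIController</rasd:Caption>"),
    ("<rasd:Description>SATA Controller</rasd:Description>",
     "<rasd:Description>SCSI Controller</rasd:Description>"),
    ("<rasd:ElementName>sataController0</rasd:ElementName>",
     "<rasd:ElementName>SCSIController</rasd:ElementName>"),
    ("<rasd:ResourceSubType>AHCI</rasd:ResourceSubType>",
     "<rasd:ResourceSubType>lsilogic</rasd:ResourceSubType>"),
    ("<rasd:ResourceType>20</rasd:ResourceType>",
     "<rasd:ResourceType>6</rasd:ResourceType>"),
]

with open(SRC, encoding="utf-8") as f:
    ovf = f.read()

for old, new in EDITS:
    assert ovf.count(old) == 1, "expected exactly one occurrence of: " + old
    ovf = ovf.replace(old, new)

with open(DST, "w", encoding="utf-8") as f:
    f.write(ovf)

print("wrote patched descriptor to " + DST)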

Import the Virtual Appliance

  1. Start the vSphere Client and select File > Deploy OVF Template.
  2. Browse to the .OVF file (on your Desktop) and click Next.
  3. The vSphere Client will display a warning that the “Debian” OS type is unknown and has been remapped to “Other (32-bit)”. You can ignore this warning. Deployment should complete successfully.

Power on the Virtual Machine and follow the instructions on the Console window – happy Archiving!