Setting Up the Host for Single Root I/O Virtualization (SR-IOV)

Single Root I/O Virtualization (SR-IOV) allows a single PCIe physical device under a single root port to appear to be multiple physical devices to the hypervisor.

The following tasks are required to set up the host for SR-IOV.

1. Verify IOMMU support.

Use the virt-host-validate Linux command to check IOMMU (input/output memory management unit) support. If either IOMMU check does not report PASS, enable the IOMMU in the BIOS (Intel VT-d or AMD-Vi) and in the kernel boot parameters.

The output should resemble the example below.

[arista@solution]$ virt-host-validate
QEMU: Checking for device assignment IOMMU support : PASS
QEMU: Checking if IOMMU is enabled by kernel : PASS
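
If the kernel check fails, enable the IOMMU with a kernel boot parameter. A minimal sketch for an Intel host on a GRUB2-based distribution (the file paths and the intel_iommu flag are assumptions; AMD hosts use amd_iommu=on instead, and other bootloaders differ):

# Add intel_iommu=on to GRUB_CMDLINE_LINUX in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX="... intel_iommu=on"
# Then regenerate the GRUB configuration and reboot:
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot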

2. Verify the drivers are supported.

Ensure the PCI device with SR-IOV capabilities is detected. In the example below, an Intel 82599ES network interface card, which supports SR-IOV, is detected.

Note the 82599ES ports and their NIC IDs (82:00.0, 82:00.1, 83:00.0, and 83:00.1) in the lspci | grep Ethernet Linux command output below.

# lspci | grep Ethernet
01:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
01:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
01:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
01:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
81:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
81:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
81:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
81:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
82:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
82:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
83:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
83:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
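
Detection in lspci alone does not confirm that the device exposes SR-IOV; the capability can be checked in the device's extended PCIe capabilities (a sketch using the 82:00.0 address from the output above; requires root):

# lspci -s 82:00.0 -vvv | grep -i "single root"

A line similar to "Capabilities: [...] Single Root I/O Virtualization (SR-IOV)" in the output confirms support.
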
3. Verify the kernel driver is active.

After confirming device support, the kernel should load the driver module automatically. To verify the driver is active, use the lsmod | grep igb Linux command. (For the 82599ES ports, the PF driver is ixgbe and the VF driver is ixgbevf; lsmod | grep ixgbe checks those directly.)

[root@kvmsolution]# lsmod | grep igb
igb                   197328  0
ptp                    19231  2 igb,ixgbe
dca                    15130  2 igb,ixgbe
i2c_algo_bit           13413  2 ast,igb
i2c_core               40756  6 ast,drm,igb,i2c_i801,drm_kms_helper,i2c_algo_bit

4. Activate Virtual Functions (VFs).

The maximum number of supported virtual functions depends on the type of card. The example below shows that the PF identifier 82:00.0 supports a total of 63 VFs.

[arista@localhost]$ cat /sys/bus/pci/devices/0000\:82\:00.0/sriov_totalvfs
63

To activate seven VFs per PF and make them persistent across reboots, add the line options ixgbe max_vfs=7 to the ixgbe.conf and sriov.conf files in /etc/modprobe.d.

Use the rmmod ixgbe and modprobe ixgbe Linux commands to unload and reload the module.
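
Both files can be created from the shell; a minimal sketch, run as root (note that recent ixgbe drivers may ignore max_vfs, in which case the sysfs interface shown last is the alternative):

# Persist seven VFs per PF via module options, then reload the driver:
echo "options ixgbe max_vfs=7" > /etc/modprobe.d/ixgbe.conf
echo "options ixgbe max_vfs=7" > /etc/modprobe.d/sriov.conf
rmmod ixgbe
modprobe ixgbe
# Alternative: set the VF count for one PF immediately (not persistent):
echo 7 > /sys/bus/pci/devices/0000\:82\:00.0/sriov_numvfs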

5. Verify the VFs are detected.

Verify the VFs are detected by using the lspci | grep Ethernet Linux command. For the two identifiers 82:00.0 and 82:00.1, 14 VFs (seven per PF) are detected.

# lspci | grep Ethernet
82:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
82:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
82:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
82:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
82:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
82:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
82:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
82:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
82:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
82:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
82:11.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
82:11.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
82:11.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
82:11.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
82:11.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
82:11.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
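
Each VF is also linked to its parent PF in sysfs, which helps map VFs to PFs when several PFs have VFs enabled (a sketch; the virtfn symlinks follow the standard Linux sysfs layout for SR-IOV devices):

$ ls /sys/bus/pci/devices/0000\:82\:00.0/ | grep virtfn
virtfn0
virtfn1
...
virtfn6

Each virtfnN entry is a symlink to the PCI address of one VF.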

6. Locate the serial numbers for the PFs and VFs.

Locate the serial numbers for the PFs and VFs. The Linux virsh nodedev-list | grep 82 command below displays the serial numbers for identifiers 82:00.0 and 82:00.1. The first two entries are the serial numbers for the PFs, and the remaining entries are the serial numbers for the VFs.

# virsh nodedev-list | grep 82
pci_0000_82_00_0
pci_0000_82_00_1
pci_0000_82_10_0
pci_0000_82_10_1
pci_0000_82_10_2
pci_0000_82_10_3
pci_0000_82_10_4
pci_0000_82_10_5
pci_0000_82_10_6
pci_0000_82_10_7
pci_0000_82_11_0
pci_0000_82_11_1
pci_0000_82_11_2
pci_0000_82_11_3
pci_0000_82_11_4
pci_0000_82_11_5

7. Select the serial number of the VF.

Select the serial number of the VF that will attach to the VM (vEOS). Using the Linux virsh nodedev-dumpxml <serial number> command, locate the bus, slot, and function parameters. For example, the serial number pci_0000_82_11_1 displays the following details.

# virsh nodedev-dumpxml pci_0000_82_11_1
<device>
  <name>pci_0000_82_11_1</name>
  <path>/sys/devices/pci0000:80/0000:80:02.0/0000:82:11.1</path>
  <parent>computer</parent>
  <driver>
    <name>ixgbevf</name>
  </driver>
  <capability type='pci'>
    <domain>0</domain>
    <bus>130</bus>
    <slot>17</slot>
    <function>1</function>
    <product id='0x10ed'>82599 Ethernet Controller Virtual Function</product>
    <vendor id='0x8086'>Intel Corporation</vendor>
    <capability type='phys_function'>
      <address domain='0x0000' bus='0x82' slot='0x00' function='0x1'/>
    </capability>
    <iommuGroup number='71'>
      <address domain='0x0000' bus='0x82' slot='0x11' function='0x1'/>
    </iommuGroup>
    <numa node='1'/>
    <pci-express>
      <link validity='cap' port='0' width='0'/>
      <link validity='sta' width='0'/>
    </pci-express>
  </capability>
</device>
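
The <bus>, <slot>, and <function> values above are decimal and must be converted to hexadecimal for step 8; a quick shell sketch using the values from this example:

$ printf "bus=0x%02x slot=0x%02x function=0x%x\n" 130 17 1
bus=0x82 slot=0x11 function=0x1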

8. Create a new interface.

Shut down the vEOS VM if it is already running. Open the XML file for the specific vEOS VM for editing using the Linux command virsh edit <vm-name>. In the interface section, create a new interface by adding the details as shown below. The bus, slot, and function values are the hexadecimal equivalents of the decimal values found in step 7 (bus 130 = 0x82, slot 17 = 0x11, function 1 = 0x1).

<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x82' slot='0x11' function='0x1'/>
  </source>
</interface>
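
With managed='yes', libvirt automatically detaches the VF from its host driver when the VM starts and reattaches it when the VM stops. As an alternative to editing the VM definition in place, the same <interface> block can be saved to a standalone file and attached with virsh (a sketch; the file name vf-interface.xml and <vm-name> are placeholders):

# virsh attach-device <vm-name> vf-interface.xml --config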

9. Start the vEOS VM and verify there is an added interface on the VM. Use the ethtool -i et9 command to verify that the driver for the added interface is ixgbevf.

switch(config)#show interface status
Port       Name   Status       Vlan     Duplex  Speed   Type          Flags
Et9               notconnect   routed   unconf  unconf  10/100/1000
Ma1               connected    routed   a-full  a-1G    10/100/1000

[admin@vEOS]$ ethtool -i et9
driver: ixgbevf
version: 2.12.1-k
firmware-version:
bus-info: 0000:00:0c.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no
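
Note that bus-info reports the PCI address the guest assigned to the VF (0000:00:0c.0 here), not the host-side address 82:11.1; the ixgbevf driver name is what confirms the SR-IOV attachment.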

Launching SR-IOV

vEOS can also use PCIe SR-IOV I/O interfaces. Each SR-IOV VF is passed through to the VM so that network I/O does not hit the hypervisor. In this model, the hypervisor and multiple VMs can share the same NIC card.

SR-IOV has the following advantages over LinuxBridge:

  • Higher performance (roughly 2x).
  • Better latency and jitter characteristics.
  • vEOS directly receives physical port state indications from the virtual device.
  • SR-IOV virtualizes the NIC itself.
  • The NIC's built-in bridge performs basic bridging.
  • Packets avoid software handling in the host kernel.
Figure 1. Linux SR-IOV PCI Passthrough-based Deployment