
Commit f6775a28 authored by David S. Miller

Merge branch 'netvsc-transparent-VF-support'



Stephen Hemminger says:

====================
netvsc: transparent VF support

This patch set changes how SR-IOV Virtual Function devices are managed
in the Hyper-V network driver. This version is rebased onto current net-next.

Background

In Hyper-V, SR-IOV can be enabled (and disabled) by changing guest settings
on the host. When SR-IOV is enabled, a matching PCI device is hot plugged
and becomes visible in the guest. The VF device is an add-on to an existing
netvsc device and has the same MAC address.

How is this different?

The original VF support relied on using the bonding driver in
active-standby mode to handle the VF device.

With the new netvsc VF logic, the Linux Hyper-V network
virtual driver directly manages the link to the SR-IOV VF device.
When a VF device is detected (hot plugged), it is automatically made a
slave device of the netvsc device. The VF device state reflects
the state of the netvsc device: if the netvsc device is set down, the
VF is set down; if the netvsc device is set up, the VF is brought up.
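For illustration, assuming the device names used throughout this series (eth0 for the netvsc device, enP2p3s0 for the VF), the state mirroring behaves like this:

```
# Bringing the netvsc device down also takes the VF down.
ip link set dev eth0 down
ip link show dev enP2p3s0     # now reports state DOWN

# Bringing the netvsc device back up brings the VF up as well.
ip link set dev eth0 up
ip link show dev enP2p3s0     # now reports state UP
```

No administrative action is needed on enP2p3s0 itself; the driver propagates the state change.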

Packet flow is independent of VF status; all packets are sent and
received as if they were associated with the netvsc device. If the VF is
removed or its link is down, the synthetic VMBus path is used.

What was wrong with using bonding script?

A lot of work went into getting the bonding script to work on all
distributions, but it was a major struggle. Linux network devices
can be configured in many, many ways, and there is no single userspace
solution that makes them all work. What is really hard is when
configuration is attached to the synthetic device during boot (eth0) and
the same addresses and firewall rules then need to keep working later
when bonding is set up. The new code gets around all of this.

How does VF work during initialization?

Since all packets are sent and received through the logical netvsc
device, initialization is much easier. Just configure the regular
netvsc Ethernet device; when/if SR-IOV is enabled, it just works.
Provisioning and cloud-init only need to worry about setting up the
netvsc device (eth0). If SR-IOV is enabled (even as a later step), the
addresses and rules stay the same.
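As a sketch (the address and gateway here are examples, not values from this series), provisioning only ever touches the synthetic device:

```
# Configure only the synthetic netvsc device; nothing references the VF.
ip addr add 10.0.0.5/24 dev eth0
ip route add default via 10.0.0.1

# If SR-IOV is enabled later, traffic transparently moves to the VF;
# the address and route above keep working unchanged.
```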

What devices show up?

Both the netvsc and PCI devices are visible in the system. The netvsc
device is active and named in the usual manner (eth0). The PCI device is
visible to Linux and is renamed by udev to a persistent name
(enP2p3s0). The PCI device's name is irrelevant.

The logic also sets the SLAVE flag on the PCI VF network device so
that network tools can see the relationship if they are smart enough
to understand how layered devices work.
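In an illustrative (abbreviated) listing, the relationship shows up in the link flags; actual interface numbers and names will differ:

```
ip link show
# 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> ...
#    (netvsc device; carries the addresses and rules)
# 3: enP2p3s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> ... master eth0
#    (VF; note the SLAVE flag and "master eth0")
```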

This is a lot like how I see Windows working.
The VF device is visible in Device Manager, but is not configured.

Is there any performance impact?

There is no visible change in performance. The bonding and netvsc
driver approaches both involve equivalent steps.

Is it compatible with old bonding script?

It turns out that if you use the old bonding script, everything
still works, but in a sub-optimal manner. What happens is that bonding
is unable to steal the VF from the netvsc device, so it creates a
one-legged bond.  Packet flow is then:
	bond0 <--> eth0 <--> VF (enP2p3s0)
In other words, if you get it wrong it still works, just awkwardly
and more slowly.

What if I add address or firewall rule onto the VF?

The same problems occur now as already occur with bonding, bridging,
and teaming on Linux when a user incorrectly applies configuration to
an underlying slave device. It will sort of work: packets will come in
and out, but the Linux kernel gets confused and things like ARP don't
work right.  There is no way to block manipulation of the slave
device, and I am sure someone will find some special use case where
they want it.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
parents 638ce0fc 12aa7469
+63 −0
Hyper-V network driver
======================

Compatibility
=============

This driver is compatible with Windows Server 2012 R2, 2016 and
Windows 10.

Features
========

  Checksum offload
  ----------------
  The netvsc driver supports checksum offload as long as the
  Hyper-V host version does. Windows Server 2016 and Azure
  support checksum offload for TCP and UDP for both IPv4 and
  IPv6. Windows Server 2012 only supports checksum offload for TCP.
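The negotiated offloads can be inspected from the guest; the output below is illustrative and depends on the host version:

```
# Show checksum-related offload state for the netvsc device.
ethtool -k eth0 | grep checksum
# rx-checksumming: on
# tx-checksumming: on
```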

  Receive Side Scaling
  --------------------
  Hyper-V supports receive side scaling. For TCP, packets are
  distributed among available queues based on IP address and port
  number. Current versions of the Hyper-V host only distribute UDP
  packets based on the IP source and destination addresses.
  The port number is not used as part of the hash value for UDP.
  Fragmented IP packets are not distributed between queues;
  all fragmented packets arrive on the first channel.
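The queue layout and hash configuration can be observed with standard tooling; the exact output depends on the host:

```
# Number of receive queues (channels) the device exposes.
ethtool -l eth0

# RX flow hash indirection table, showing how flows map to queues.
ethtool -x eth0
```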

  Generic Receive Offload, aka GRO
  --------------------------------
  The driver supports GRO and it is enabled by default. GRO coalesces
  like packets and significantly reduces CPU usage under heavy Rx
  load.
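GRO can be checked and toggled per device with ethtool, for example when debugging:

```
# Confirm GRO is enabled (the default).
ethtool -k eth0 | grep generic-receive-offload
# generic-receive-offload: on

# Disable it temporarily, e.g. for packet-capture debugging.
ethtool -K eth0 gro off
```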

  SR-IOV support
  --------------
  Hyper-V supports SR-IOV as a hardware acceleration option. If SR-IOV
  is enabled in both the vSwitch and the guest configuration, then the
  Virtual Function (VF) device is passed to the guest as a PCI
  device. In this case, both a synthetic (netvsc) and VF device are
  visible in the guest OS and both NICs have the same MAC address.

  The VF is enslaved by the netvsc device.  The netvsc driver will
  transparently switch the data path to the VF when it is available and up.
  Network state (addresses, firewall rules, etc.) should be applied only to
  the netvsc device; the slave device should not be accessed directly in
  most cases.  The exception is when a special queue discipline or
  flow direction is desired; these should be applied directly to the
  VF slave device.
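  As an example of that exception (device and qdisc choice are
  illustrative), a queue discipline meant to act on the hardware path
  would be attached to the VF, not to eth0:

```
# Addresses and firewall rules stay on eth0, but a qdisc intended for
# the VF's hardware queues is attached to the VF slave device directly.
tc qdisc add dev enP2p3s0 root fq
```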

  Receive Buffer
  --------------
  Packets are received into a receive area which is created when the
  device is probed. The receive area is broken into MTU-sized chunks,
  each of which may contain one or more packets. The number of receive
  sections may be changed via ethtool Rx ring parameters.
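  For example (the value 2048 is only illustrative; supported ranges
  are reported by the first command):

```
ethtool -g eth0          # show current and maximum ring parameters
ethtool -G eth0 rx 2048  # request a different receive-section count
```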

  There is a similar send buffer which is used to aggregate packets for
  sending. The send area is broken into chunks of 6144 bytes, and each
  section may contain one or more packets. The send buffer is an
  optimization; the driver will fall back to a slower method to handle
  very large packets or when the send buffer area is exhausted.
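  Assuming a 1 MiB send buffer (an assumption for illustration; the
  actual size is fixed in the driver), the number of 6144-byte send
  sections works out as:

```shell
SEND_BUFFER_SIZE=$((1024 * 1024))   # assumed 1 MiB send buffer
SECTION_SIZE=6144                   # chunk size stated above
echo $((SEND_BUFFER_SIZE / SECTION_SIZE))   # prints 170
```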
+1 −0
@@ -6258,6 +6258,7 @@ M: Haiyang Zhang <haiyangz@microsoft.com>
 M:	Stephen Hemminger <sthemmin@microsoft.com>
 L:	devel@linuxdriverproject.org
 S:	Maintained
+F:	Documentation/networking/netvsc.txt
 F:	arch/x86/include/asm/mshyperv.h
 F:	arch/x86/include/uapi/asm/hyperv.h
 F:	arch/x86/kernel/cpu/mshyperv.c
+12 −0
@@ -680,6 +680,15 @@ struct netvsc_ethtool_stats {
 	unsigned long tx_busy;
 };

+struct netvsc_vf_pcpu_stats {
+	u64     rx_packets;
+	u64     rx_bytes;
+	u64     tx_packets;
+	u64     tx_bytes;
+	struct u64_stats_sync   syncp;
+	u32	tx_dropped;
+};
+
 struct netvsc_reconfig {
 	struct list_head list;
 	u32 event;
@@ -713,6 +722,9 @@ struct net_device_context {

 	/* State to manage the associated VF interface. */
 	struct net_device __rcu *vf_netdev;
+	struct netvsc_vf_pcpu_stats __percpu *vf_stats;
+	struct work_struct vf_takeover;
+	struct work_struct vf_notify;

 	/* 1: allocated, serial number is valid. 0: not allocated */
 	u32 vf_alloc;
+330 −89

File changed.


tools/hv/bondvf.sh

deleted 100755 → 0
+0 −255

File deleted.
