Commit a5050c61 authored by Stephen Hemminger, committed by David S. Miller

netvsc: add documentation

Add some background documentation on netvsc device options
and limitations.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 0c195567
Hyper-V network driver
======================
Compatibility
=============
This driver is compatible with Windows Server 2012 R2, 2016 and
Windows 10.
Features
========
Checksum offload
----------------
The netvsc driver supports checksum offload as long as the
Hyper-V host version does. Windows Server 2016 and Azure
support checksum offload for TCP and UDP for both IPv4 and
IPv6. Windows Server 2012 only supports checksum offload for TCP.
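
From inside the guest, the current offload state can be checked with
"ethtool -k" or programmatically through the ethtool ioctl interface.
The sketch below is only an illustration and is not part of the
driver; "eth0" is a placeholder interface name and error handling is
trimmed:

    /* query-csum.c - read TX/RX checksum offload state of an interface.
     * Sketch only; "eth0" is a placeholder, error handling is trimmed.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    static int ethtool_get_flag(int fd, const char *ifname, __u32 cmd, __u32 *out)
    {
        struct ethtool_value eval = { .cmd = cmd };
        struct ifreq ifr;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&eval;
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
            return -1;
        *out = eval.data;
        return 0;
    }

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        __u32 tx = 0, rx = 0;

        if (fd < 0)
            return 1;
        if (!ethtool_get_flag(fd, "eth0", ETHTOOL_GTXCSUM, &tx))
            printf("tx-checksumming: %s\n", tx ? "on" : "off");
        if (!ethtool_get_flag(fd, "eth0", ETHTOOL_GRXCSUM, &rx))
            printf("rx-checksumming: %s\n", rx ? "on" : "off");
        close(fd);
        return 0;
    }
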
Receive Side Scaling
--------------------
Hyper-V supports receive side scaling. For TCP, packets are
distributed among the available queues based on the source and
destination IP addresses and port numbers. Current versions of the
Hyper-V host only distribute UDP packets based on the source and
destination IP addresses; the port numbers are not used as part of
the hash value for UDP.
Fragmented IP packets are not distributed between queues;
all fragmented packets arrive on the first channel.
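
The practical effect of the above is that TCP flows are spread using
the full 4-tuple while all UDP traffic between a given pair of hosts
shares a single queue. The toy program below only illustrates which
header fields feed the queue selection; the real hash is computed by
the host and is not this simple mixing function:

    /* rss-inputs.c - illustration of which fields select the Rx queue.
     * The mixing function here is NOT the real hash (that is computed
     * by the Hyper-V host); it only shows the difference in inputs
     * between TCP and UDP described above.
     */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t mix(uint32_t h, uint32_t v)
    {
        return (h ^ v) * 2654435761u;   /* illustrative multiplicative mix */
    }

    /* TCP: source/destination addresses and ports all contribute. */
    static uint32_t tcp_queue(uint32_t saddr, uint32_t daddr,
                              uint16_t sport, uint16_t dport, uint32_t nq)
    {
        uint32_t h = mix(mix(mix(0, saddr), daddr),
                         ((uint32_t)sport << 16) | dport);
        return h % nq;
    }

    /* UDP: only the addresses are hashed, so every UDP flow between
     * the same pair of hosts lands on the same queue.
     */
    static uint32_t udp_queue(uint32_t saddr, uint32_t daddr, uint32_t nq)
    {
        return mix(mix(0, saddr), daddr) % nq;
    }

    int main(void)
    {
        /* 10.0.0.1 -> 10.0.0.2, 8 queues; addresses/ports are made up. */
        printf("tcp :40000 -> :443  queue %u\n",
               tcp_queue(0x0a000001, 0x0a000002, 40000, 443, 8));
        printf("tcp :40001 -> :443  queue %u\n",
               tcp_queue(0x0a000001, 0x0a000002, 40001, 443, 8));
        printf("udp  any  ->  any   queue %u\n",
               udp_queue(0x0a000001, 0x0a000002, 8));
        return 0;
    }
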
Generic Receive Offload, aka GRO
--------------------------------
The driver supports GRO, which is enabled by default. GRO coalesces
packets belonging to the same flow and significantly reduces CPU
usage under heavy Rx load.
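
If GRO needs to be ruled out while debugging, it can be turned off at
runtime with "ethtool -K <dev> gro off" or via the legacy
ETHTOOL_GGRO/ETHTOOL_SGRO ioctls, which the kernel's ethtool layer
handles generically for all drivers. A minimal sketch ("eth0" is a
placeholder; CAP_NET_ADMIN is required for the set operation):

    /* gro-off.c - read the GRO flag and then clear it.  Sketch only;
     * "eth0" is a placeholder and error handling is trimmed.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
        struct ethtool_value eval = { .cmd = ETHTOOL_GGRO };
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&eval;

        if (fd < 0 || ioctl(fd, SIOCETHTOOL, &ifr) < 0)
            return 1;
        printf("gro was %s\n", eval.data ? "on" : "off");

        eval.cmd = ETHTOOL_SGRO;    /* equivalent of "ethtool -K eth0 gro off" */
        eval.data = 0;
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
            return 1;
        close(fd);
        return 0;
    }
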
SR-IOV support
--------------
Hyper-V supports SR-IOV as a hardware acceleration option. If SR-IOV
is enabled in both the vSwitch and the guest configuration, then the
Virtual Function (VF) device is passed to the guest as a PCI
device. In this case, both a synthetic (netvsc) and VF device are
visible in the guest OS and both NICs have the same MAC address.
The VF is enslaved by the netvsc device. The netvsc driver
transparently switches the data path to the VF when it is available
and up.
Network state (addresses, firewall rules, etc.) should be applied only
to the netvsc device; in most cases the slave device should not be
accessed directly. The exception is when a special queue discipline
or flow direction is desired; such settings should be applied directly
to the VF slave device.
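
Because the synthetic device and the VF expose the same MAC address,
management tooling can pair them up simply by grouping interfaces by
their /sys/class/net/<iface>/address attribute. The sketch below is
illustrative only and is not part of the driver:

    /* vf-pair.c - list interfaces that share a MAC address, which is
     * how a netvsc device and its enslaved VF appear in the guest.
     * Sketch only.
     */
    #include <stdio.h>
    #include <string.h>
    #include <dirent.h>

    #define MAX_IF 64

    int main(void)
    {
        char name[MAX_IF][64], mac[MAX_IF][32];
        int n = 0;
        DIR *d = opendir("/sys/class/net");
        struct dirent *e;

        if (!d)
            return 1;
        while ((e = readdir(d)) && n < MAX_IF) {
            char path[256];
            FILE *f;

            if (e->d_name[0] == '.')
                continue;
            snprintf(path, sizeof(path), "/sys/class/net/%s/address",
                     e->d_name);
            f = fopen(path, "r");
            if (!f)
                continue;
            if (fscanf(f, "%31s", mac[n]) == 1) {
                snprintf(name[n], sizeof(name[n]), "%s", e->d_name);
                n++;
            }
            fclose(f);
        }
        closedir(d);

        /* Report any two interfaces with the same MAC: one is the
         * netvsc master, the other the VF slave.
         */
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (!strcmp(mac[i], mac[j]))
                    printf("%s and %s share MAC %s\n",
                           name[i], name[j], mac[i]);
        return 0;
    }
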
Receive Buffer
--------------
Packets are received into a receive area which is created when the
device is probed. The receive area is broken into MTU-sized chunks,
each of which may contain one or more packets. The number of receive
sections may be changed via ethtool Rx ring parameters.

There is a similar send buffer which is used to aggregate packets for
sending. The send area is broken into chunks of 6144 bytes, and each
section may contain one or more packets. The send buffer is an
optimization; the driver will fall back to a slower method to handle
very large packets or when the send buffer area is exhausted.
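
The current and maximum ring sizes can be read back with "ethtool -g"
(and changed with "ethtool -G"), or programmatically via the
ETHTOOL_GRINGPARAM/ETHTOOL_SRINGPARAM ioctls. A minimal sketch;
"eth0" is a placeholder and error handling is trimmed:

    /* ringparam.c - read the Rx/Tx ring sizes, the equivalent of
     * "ethtool -g eth0".  Sketch only; "eth0" is a placeholder.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
        struct ethtool_ringparam ring = { .cmd = ETHTOOL_GRINGPARAM };
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&ring;

        if (fd < 0 || ioctl(fd, SIOCETHTOOL, &ifr) < 0)
            return 1;
        printf("rx: %u (max %u)\n", ring.rx_pending, ring.rx_max_pending);
        printf("tx: %u (max %u)\n", ring.tx_pending, ring.tx_max_pending);
        close(fd);
        return 0;
    }
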
@@ -6258,6 +6258,7 @@ M:	Haiyang Zhang <haiyangz@microsoft.com>
 M:	Stephen Hemminger <sthemmin@microsoft.com>
 L:	devel@linuxdriverproject.org
 S:	Maintained
+F:	Documentation/networking/netvsc.txt
 F:	arch/x86/include/asm/mshyperv.h
 F:	arch/x86/include/uapi/asm/hyperv.h
 F:	arch/x86/kernel/cpu/mshyperv.c