.mailmap  +2 −0

@@ -68,6 +68,8 @@
 Jacob Shin <Jacob.Shin@amd.com>
 James Bottomley <jejb@mulgrave.(none)>
 James Bottomley <jejb@titanic.il.steeleye.com>
 James E Wilson <wilson@specifix.com>
+James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
+James Hogan <jhogan@kernel.org> <james@albanarts.com>
 James Ketrenos <jketreno@io.(none)>
 Javi Merino <javi.merino@kernel.org> <javi.merino@arm.com> <javier@osg.samsung.com> <javier.martinez@collabora.co.uk>

Documentation/ABI/testing/sysfs-bus-thunderbolt  +48 −0

@@ -110,3 +110,51 @@ Description: When new NVM image is written to the non-active NVM
 		is directly the status value from the DMA configuration
 		based mailbox before the device is power cycled. Writing
 		0 here clears the status.
+
+What:		/sys/bus/thunderbolt/devices/<xdomain>.<service>/key
+Date:		Jan 2018
+KernelVersion:	4.15
+Contact:	thunderbolt-software@lists.01.org
+Description:	This contains name of the property directory the XDomain
+		service exposes. This entry describes the protocol in
+		question. Following directories are already reserved by
+		the Apple XDomain specification:
+
+		network:  IP/ethernet over Thunderbolt
+		targetdm: Target disk mode protocol over Thunderbolt
+		extdisp:  External display mode protocol over Thunderbolt
+
+What:		/sys/bus/thunderbolt/devices/<xdomain>.<service>/modalias
+Date:		Jan 2018
+KernelVersion:	4.15
+Contact:	thunderbolt-software@lists.01.org
+Description:	Stores the same MODALIAS value emitted by uevent for the
+		XDomain service. Format: tbtsvc:kSpNvNrN
+
+What:		/sys/bus/thunderbolt/devices/<xdomain>.<service>/prtcid
+Date:		Jan 2018
+KernelVersion:	4.15
+Contact:	thunderbolt-software@lists.01.org
+Description:	This contains XDomain protocol identifier the XDomain
+		service supports.
+
+What:		/sys/bus/thunderbolt/devices/<xdomain>.<service>/prtcvers
+Date:		Jan 2018
+KernelVersion:	4.15
+Contact:	thunderbolt-software@lists.01.org
+Description:	This contains XDomain protocol version the XDomain
+		service supports.
+
+What:		/sys/bus/thunderbolt/devices/<xdomain>.<service>/prtcrevs
+Date:		Jan 2018
+KernelVersion:	4.15
+Contact:	thunderbolt-software@lists.01.org
+Description:	This contains XDomain software version the XDomain
+		service supports.
+
+What:		/sys/bus/thunderbolt/devices/<xdomain>.<service>/prtcstns
+Date:		Jan 2018
+KernelVersion:	4.15
+Contact:	thunderbolt-software@lists.01.org
+Description:	This contains XDomain service specific settings as
+		bitmask. Format: %x

Documentation/ABI/testing/sysfs-power  +1 −1

@@ -127,7 +127,7 @@ Description:
 What;		/sys/power/pm_trace_dev_match
 Date:		October 2010
-Contact:	James Hogan <james@albanarts.com>
+Contact:	James Hogan <jhogan@kernel.org>
 Description:
 	The /sys/power/pm_trace_dev_match file contains the name of the
 	device associated with the last PM event point saved in the RTC

Documentation/admin-guide/thunderbolt.rst  +24 −0

@@ -197,3 +197,27 @@ information is missing. To recover from this mode, one needs to flash
 a valid NVM image to the host host controller in the same way it is
 done in the previous chapter.
+
+Networking over Thunderbolt cable
+---------------------------------
+Thunderbolt technology allows software communication across two hosts
+connected by a Thunderbolt cable. It is possible to tunnel any kind of
+traffic over Thunderbolt link but currently we only support Apple
+ThunderboltIP protocol.
+
+If the other host is running Windows or macOS only thing you need to
+do is to connect Thunderbolt cable between the two hosts, the
+``thunderbolt-net`` is loaded automatically.
+If the other host is also Linux you should load ``thunderbolt-net``
+manually on one host (it does not matter which one)::
+
+  # modprobe thunderbolt-net
+
+This triggers module load on the other host automatically. If the
+driver is built-in to the kernel image, there is no need to do
+anything.
+
+The driver will create one virtual ethernet interface per Thunderbolt
+port which are named like ``thunderbolt0`` and so on. From this point
+you can either use standard userspace tools like ``ifconfig`` to
+configure the interface or let your GUI to handle it automatically.

Documentation/core-api/workqueue.rst  +6 −6

@@ -39,8 +39,8 @@ up.
 Although MT wq wasted a lot of resource, the level of concurrency
 provided was unsatisfactory. The limitation was common to both ST and
 MT wq albeit less severe on MT. Each wq maintained its own separate
-worker pool. A MT wq could provide only one execution context per CPU
-while a ST wq one for the whole system. Work items had to compete for
+worker pool. An MT wq could provide only one execution context per CPU
+while an ST wq one for the whole system. Work items had to compete for
 those very limited execution contexts leading to various problems
 including proneness to deadlocks around the single execution context.

@@ -151,7 +151,7 @@ Application Programming Interface (API)
 ``alloc_workqueue()`` allocates a wq. The original
 ``create_*workqueue()`` functions are deprecated and scheduled for
-removal. ``alloc_workqueue()`` takes three arguments - @``name``,
+removal. ``alloc_workqueue()`` takes three arguments - ``@name``,
 ``@flags`` and ``@max_active``. ``@name`` is the name of the wq and
 also used as the name of the rescuer thread if there is one.

@@ -197,7 +197,7 @@ resources, scheduled and executed.
 served by worker threads with elevated nice level.

 Note that normal and highpri worker-pools don't interact with
-each other. Each maintain its separate pool of workers and
+each other. Each maintains its separate pool of workers and
 implements concurrency management among its workers.

 ``WQ_CPU_INTENSIVE``

@@ -249,8 +249,8 @@ unbound worker-pools and only one work item could be active at any
 given time thus achieving the same ordering property as ST wq. In the
 current implementation the above configuration only guarantees
-ST behavior within a given NUMA node. Instead alloc_ordered_queue
-should be used to achieve system wide ST behavior.
+ST behavior within a given NUMA node. Instead ``alloc_ordered_queue()``
+should be used to achieve system-wide ST behavior.

 Example Execution Scenarios
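The ordering guarantee discussed in the workqueue.rst hunk above — an ordered workqueue runs at most one work item at a time, in submission order, system-wide — can be illustrated with a small userspace sketch. This is a toy model only, not the kernel implementation: a single worker thread draining a FIFO, which is the essence of the "ST behavior" the text describes.

```python
import queue
import threading

class OrderedWorkqueue:
    """Toy model of an ordered workqueue: one worker thread drains a
    FIFO, so at most one work item executes at any time and items run
    in the order they were queued."""

    def __init__(self):
        self._q = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        while True:
            work = self._q.get()
            if work is None:        # sentinel: shut the queue down
                break
            work()

    def queue_work(self, work):
        self._q.put(work)

    def destroy(self):
        """Flush remaining work, then stop the worker."""
        self._q.put(None)
        self._worker.join()

# Items always execute one at a time, in submission order.
results = []
wq = OrderedWorkqueue()
for i in range(5):
    wq.queue_work(lambda i=i: results.append(i))
wq.destroy()
print(results)  # [0, 1, 2, 3, 4]
```

A multi-threaded pool (the `@max_active` > 1 case) would drop exactly this property: with several workers draining the same queue, completion order is no longer guaranteed.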
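The modalias attribute added in the sysfs-bus-thunderbolt hunk documents only the shape ``tbtsvc:kSpNvNrN`` (a key string followed by protocol id, version and revision). A userspace consumer could split such a value as sketched below; the fixed 8-hex-digit width of the numeric fields and the sample value are assumptions for illustration, not taken from the hunk.

```python
import re

# Assumed layout: "tbtsvc:k<key>p<8 hex>v<8 hex>r<8 hex>". The field
# widths are an assumption; the ABI hunk only gives tbtsvc:kSpNvNrN.
_TBTSVC_RE = re.compile(
    r"tbtsvc:"
    r"k(?P<key>.+)"                   # property directory name (key)
    r"p(?P<prtcid>[0-9A-Fa-f]{8})"    # protocol identifier
    r"v(?P<prtcvers>[0-9A-Fa-f]{8})"  # protocol version
    r"r(?P<prtcrevs>[0-9A-Fa-f]{8})"  # software revision
)

def parse_tbtsvc_modalias(value: str) -> dict:
    """Split a tbtsvc modalias string into its key and numeric fields."""
    m = _TBTSVC_RE.fullmatch(value.strip())
    if m is None:
        raise ValueError(f"not a tbtsvc modalias: {value!r}")
    return {
        "key": m["key"],
        "prtcid": int(m["prtcid"], 16),
        "prtcvers": int(m["prtcvers"], 16),
        "prtcrevs": int(m["prtcrevs"], 16),
    }

# Hypothetical sample value for the "network" service:
fields = parse_tbtsvc_modalias("tbtsvc:knetworkp00000001v00000001r00000001")
print(fields)  # {'key': 'network', 'prtcid': 1, 'prtcvers': 1, 'prtcrevs': 1}
```

The greedy ``key`` group is deliberate: a key such as ``extdisp`` contains the letter ``p``, so the parser anchors on the fixed-width numeric tail rather than on the first ``p`` it sees.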