.mailmap (+1 −0)
@@ -107,6 +107,7 @@
 Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@ascom.ch>
 Maciej W. Rozycki <macro@mips.com> <macro@imgtec.com>
 Marcin Nowakowski <marcin.nowakowski@mips.com> <marcin.nowakowski@imgtec.com>
+Mark Brown <broonie@sirena.org.uk>
 Mark Yao <markyao0591@gmail.com> <mark.yao@rock-chips.com>
 Martin Kepplinger <martink@posteo.de> <martin.kepplinger@theobroma-systems.com>
 Martin Kepplinger <martink@posteo.de> <martin.kepplinger@ginzinger.com>
 Matthieu CASTET <castet.matthieu@free.fr>

Documentation/ABI/testing/sysfs-bus-iio-dfsdm-adc-stm32 (new file, 0 → 100644, +16 −0)
+What:		/sys/bus/iio/devices/iio:deviceX/in_voltage_spi_clk_freq
+KernelVersion:	4.14
+Contact:	arnaud.pouliquen@st.com
+Description:
+		For audio purposes only. Used by the audio driver to set/get
+		the SPI input frequency. This is mandatory if the DFSDM is a
+		slave on the SPI bus, to provide information on the SPI clock
+		frequency during runtime.
+		Note that the SPI frequency should be a multiple of the
+		sample frequency to ensure precision.
+		If the DFSDM input is SPI master:
+			Reading returns the SPI clkout frequency;
+			writing returns an error.
+		If the DFSDM input is SPI slave:
+			Reading returns the value previously set.
+			Write the value before starting conversions.
\ No newline at end of file

Documentation/ABI/testing/sysfs-devices-system-cpu (+16 −0)
@@ -375,3 +375,19 @@
 Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
 Description:	information about CPUs heterogeneity.
 		cpu_capacity: capacity of cpu#.
+What:		/sys/devices/system/cpu/vulnerabilities
+		/sys/devices/system/cpu/vulnerabilities/meltdown
+		/sys/devices/system/cpu/vulnerabilities/spectre_v1
+		/sys/devices/system/cpu/vulnerabilities/spectre_v2
+Date:		January 2018
+Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+Description:	Information about CPU vulnerabilities
+
+		The files are named after the code names of CPU
+		vulnerabilities. The output of those files reflects the
+		state of the CPUs in the system. Possible output values:
+
+		"Not affected"	  CPU is not affected by the vulnerability
+		"Vulnerable"	  CPU is affected and no mitigation in effect
+		"Mitigation: $M"  CPU is affected and mitigation $M is in effect

Documentation/IRQ-domain.txt (+2 −34)
@@ -265,37 +265,5 @@
 support other architectures, such as ARM, ARM64 etc.

 === Debugging ===

-If you switch on CONFIG_IRQ_DOMAIN_DEBUG (which depends on
-CONFIG_IRQ_DOMAIN and CONFIG_DEBUG_FS), you will find a new file in
-your debugfs mount point, called irq_domain_mapping. This file
-contains a live snapshot of all the IRQ domains in the system:
-
- name              mapped  linear-max  direct-max  devtree-node
- pl061                  8           8           0  /smb/gpio@e0080000
- pl061                  8           8           0  /smb/gpio@e1050000
- pMSI                   0           0           0  /interrupt-controller@e1101000/v2m@e0080000
- MSI                   37           0           0  /interrupt-controller@e1101000/v2m@e0080000
- GICv2m                37           0           0  /interrupt-controller@e1101000/v2m@e0080000
- GICv2                448         448           0  /interrupt-controller@e1101000
-
-it also iterates over the interrupts to display their mapping in the
-domains, and makes the domain stacking visible:
-
-irq    hwirq    chip name        chip data           active  type     domain
-    1  0x00019  GICv2            0xffff00000916bfd8     *    LINEAR   GICv2
-    2  0x0001d  GICv2            0xffff00000916bfd8          LINEAR   GICv2
-    3  0x0001e  GICv2            0xffff00000916bfd8     *    LINEAR   GICv2
-    4  0x0001b  GICv2            0xffff00000916bfd8     *    LINEAR   GICv2
-    5  0x0001a  GICv2            0xffff00000916bfd8          LINEAR   GICv2
-[...]
-   96  0x81808  MSI              0x          (null)        RADIX    MSI
-  96+  0x00063  GICv2m           0xffff8003ee116980        RADIX    GICv2m
-  96+  0x00063  GICv2            0xffff00000916bfd8        LINEAR   GICv2
-   97  0x08800  MSI              0x          (null)   *    RADIX    MSI
-  97+  0x00064  GICv2m           0xffff8003ee116980    *   RADIX    GICv2m
-  97+  0x00064  GICv2            0xffff00000916bfd8    *   LINEAR   GICv2
-
-Here, interrupts 1-5 are only using a single domain, while 96 and 97
-are build out of a stack of three domain, each level performing a
-particular function.
+Most of the internals of the IRQ subsystem are exposed in debugfs by
+turning CONFIG_GENERIC_IRQ_DEBUGFS on.

Documentation/RCU/Design/Data-Structures/Data-Structures.html (+34 −15)
@@ -1097,7 +1097,8 @@
 will cause the CPU to disregard the values of its counters on
 its next exit from idle.
 Finally, the <tt>rcu_qs_ctr_snap</tt> field is used to detect
 cases where a given operation has resulted in a quiescent state
-for all flavors of RCU, for example, <tt>cond_resched_rcu_qs()</tt>.
+for all flavors of RCU, for example, <tt>cond_resched()</tt>
+when RCU has indicated a need for quiescent states.

 <h5>RCU Callback Handling</h5>

@@ -1182,8 +1183,8 @@
 CPU (and from tracing) unless otherwise stated.
 Its fields are as follows:

 <pre>
-  1   int dynticks_nesting;
-  2   int dynticks_nmi_nesting;
+  1   long dynticks_nesting;
+  2   long dynticks_nmi_nesting;
   3   atomic_t dynticks;
   4   bool rcu_need_heavy_qs;
   5   unsigned long rcu_qs_ctr;

@@ -1191,15 +1192,31 @@
 </pre>

 <p>The <tt>->dynticks_nesting</tt> field counts the
-nesting depth of normal interrupts.
-In addition, this counter is incremented when exiting dyntick-idle
-mode and decremented when entering it.
+nesting depth of process execution, so that in normal circumstances
+this counter has value zero or one.
+NMIs, irqs, and tracers are counted by the
+<tt>->dynticks_nmi_nesting</tt> field.
+Because NMIs cannot be masked, changes to this variable have to be
+undertaken carefully using an algorithm provided by Andy Lutomirski.
+The initial transition from idle adds one, and nested transitions
+add two, so that a nesting level of five is represented by a
+<tt>->dynticks_nmi_nesting</tt> value of nine.
 This counter can therefore be thought of as counting the number
 of reasons why this CPU cannot be permitted to enter dyntick-idle
-mode, aside from non-maskable interrupts (NMIs).
-NMIs are counted by the <tt>->dynticks_nmi_nesting</tt> field,
-except that NMIs that interrupt non-dyntick-idle execution are not
-counted.
+mode, aside from process-level transitions.
+
+<p>However, it turns out that when running in non-idle kernel
+context, the Linux kernel is fully capable of entering interrupt
+handlers that never exit and perhaps also vice versa.
+Therefore, whenever the <tt>->dynticks_nesting</tt> field is
+incremented up from zero, the <tt>->dynticks_nmi_nesting</tt> field
+is set to a large positive number, and whenever the
+<tt>->dynticks_nesting</tt> field is decremented down to zero,
+the <tt>->dynticks_nmi_nesting</tt> field is set to zero.
+Assuming that the number of misnested interrupts is not sufficient
+to overflow the counter, this approach corrects the
+<tt>->dynticks_nmi_nesting</tt> field every time the corresponding
+CPU enters the idle loop from process context.

 </p><p>The <tt>->dynticks</tt> field counts the corresponding
 CPU's transitions to and from dyntick-idle mode, so that this counter

@@ -1231,14 +1248,16 @@
 in response.

 <tr><th>&nbsp;</th></tr>
 <tr><th align="left">Quick Quiz:</th></tr>
 <tr><td>
-	Why not just count all NMIs?
-	Wouldn't that be simpler and less error prone?
+	Why not simply combine the <tt>->dynticks_nesting</tt>
+	and <tt>->dynticks_nmi_nesting</tt> counters into a
+	single counter that just counts the number of reasons that
+	the corresponding CPU is non-idle?
 </td></tr>
 <tr><th align="left">Answer:</th></tr>
 <tr><td bgcolor="#ffffff"><font color="ffffff">
-	It seems simpler only until you think hard about how to
-	go about updating the <tt>rcu_dynticks</tt> structure's
-	<tt>->dynticks</tt> field.
+	Because this would fail in the presence of interrupts whose
+	handlers never return and of handlers that manage to return
+	from a made-up interrupt.
 </font></td></tr>
 <tr><td>&nbsp;</td></tr>
 </table>