Documentation/DocBook/mcabook.tmpl (+1 −1)

@@ -96,7 +96,7 @@
 <chapter id="pubfunctions">
      <title>Public Functions Provided</title>
-!Earch/i386/kernel/mca.c
+!Edrivers/mca/mca-legacy.c
 </chapter>

 <chapter id="dmafunctions">

Documentation/IPMI.txt (+7 −6)

@@ -605,12 +605,13 @@
 is in the ipmi_poweroff module.  When the system requests a powerdown,
 it will send the proper IPMI commands to do this.  This is supported on
 several platforms.

-There is a module parameter named "poweroff_control" that may either be
-zero (do a power down) or 2 (do a power cycle, power the system off,
-then power it on in a few seconds).  Setting
-ipmi_poweroff.poweroff_control=x will do the same thing on the kernel
-command line.  The parameter is also available via the proc filesystem
-in /proc/ipmi/poweroff_control.  Note that if the system does not
-support power cycling, it will always to the power off.
+There is a module parameter named "poweroff_powercycle" that may either
+be zero (do a power down) or non-zero (do a power cycle, power the
+system off, then power it on in a few seconds).  Setting
+ipmi_poweroff.poweroff_powercycle=x will do the same thing on the
+kernel command line.  The parameter is also available via the proc
+filesystem in /proc/sys/dev/ipmi/poweroff_powercycle.  Note that if the
+system does not support power cycling, it will always do the power off.

 Note that if you have ACPI enabled, the system will prefer using ACPI
 to power off.

Documentation/RCU/NMI-RCU.txt (new file, +112 −0)

Using RCU to Protect Dynamic NMI Handlers


Although RCU is usually used to protect read-mostly data structures,
it is possible to use RCU to provide dynamic non-maskable interrupt
handlers, as well as dynamic irq handlers.
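(An aside on the IPMI.txt hunk above: a minimal usage sketch of the
renamed power-cycle control. The parameter name and proc path are taken
from the hunk; the module name ipmi_poweroff is from the surrounding
text, and the exact invocations below are illustrative, not mandated by
the patch.)

```shell
# Load the module with power cycling enabled (power off, then back on):
modprobe ipmi_poweroff poweroff_powercycle=1

# Or flip the behavior at runtime through the proc interface:
echo 1 > /proc/sys/dev/ipmi/poweroff_powercycle

# Or set it once on the kernel command line:
#   ipmi_poweroff.poweroff_powercycle=1
```

If the platform does not support power cycling, the module falls back to
a plain power off regardless of this setting.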
This document describes how to do this, drawing loosely from Zwane
Mwaikambo's NMI-timer work in "arch/i386/oprofile/nmi_timer_int.c"
and in "arch/i386/kernel/traps.c".

The relevant pieces of code are listed below, each followed by a
brief explanation.

	static int dummy_nmi_callback(struct pt_regs *regs, int cpu)
	{
		return 0;
	}

The dummy_nmi_callback() function is a "dummy" NMI handler that does
nothing, but returns zero, thus saying that it did nothing, allowing
the NMI handler to take the default machine-specific action.

	static nmi_callback_t nmi_callback = dummy_nmi_callback;

This nmi_callback variable is a global function pointer to the current
NMI handler.

	fastcall void do_nmi(struct pt_regs *regs, long error_code)
	{
		int cpu;

		nmi_enter();

		cpu = smp_processor_id();
		++nmi_count(cpu);

		if (!rcu_dereference(nmi_callback)(regs, cpu))
			default_do_nmi(regs);

		nmi_exit();
	}

The do_nmi() function processes each NMI.  It first disables preemption
in the same way that a hardware irq would, then increments the per-CPU
count of NMIs.  It then invokes the NMI handler stored in the
nmi_callback function pointer.  If this handler returns zero, do_nmi()
invokes the default_do_nmi() function to handle a machine-specific NMI.
Finally, preemption is restored.

Strictly speaking, rcu_dereference() is not needed, since this code
runs only on i386, which does not need rcu_dereference() anyway.
However, it is a good documentation aid, particularly for anyone
attempting to do something similar on Alpha.

Quick Quiz:  Why might the rcu_dereference() be necessary on Alpha,
	     given that the code referenced by the pointer is read-only?

Back to the discussion of NMI and RCU...

	void set_nmi_callback(nmi_callback_t callback)
	{
		rcu_assign_pointer(nmi_callback, callback);
	}

The set_nmi_callback() function registers an NMI handler.  Note that
any data that is to be used by the callback must be initialized
-before- the call to set_nmi_callback().  On architectures that do not
order writes, the rcu_assign_pointer() ensures that the NMI handler
sees the initialized values.

	void unset_nmi_callback(void)
	{
		rcu_assign_pointer(nmi_callback, dummy_nmi_callback);
	}

This function unregisters an NMI handler, restoring the original
dummy_nmi_callback().  However, there may well be an NMI handler
currently executing on some other CPU.  We therefore cannot free up any
data structures used by the old NMI handler until execution of it
completes on all other CPUs.

One way to accomplish this is via synchronize_sched(), perhaps as
follows:

	unset_nmi_callback();
	synchronize_sched();
	kfree(my_nmi_data);

This works because synchronize_sched() blocks until all CPUs complete
any preemption-disabled segments of code that they were executing.
Since NMI handlers disable preemption, synchronize_sched() is
guaranteed not to return until all ongoing NMI handlers exit.  It is
therefore safe to free up the handler's data as soon as
synchronize_sched() returns.


Answer to Quick Quiz

	Why might the rcu_dereference() be necessary on Alpha, given
	that the code referenced by the pointer is read-only?

	Answer:  The caller to set_nmi_callback() might well have
		initialized some data that is to be used by the new NMI
		handler.  In this case, the rcu_dereference() would be
		needed, because otherwise a CPU that received an NMI
		just after the new handler was set might see the
		pointer to the new NMI handler, but the old
		pre-initialized version of the handler's data.

		More important, the rcu_dereference() makes it clear to
		someone reading the code that the pointer is being
		protected by RCU.

Documentation/cdrom/sonycd535 (+2 −1)

@@ -68,7 +68,8 @@
 it a better device citizen.
 Further thanks to Joel Katz, Porfiri Claudio
 <C.Porfiri@nisms.tei.ericsson.se> for patches to make the driver work
 with the older CDU-510/515 series, and Heiko Eissfeldt
 <heiko@colossus.escape.de> for pointing out that
-the verify_area() checks were ignoring the results of said checks.
+the verify_area() checks were ignoring the results of said checks
+(note: verify_area() has since been replaced by access_ok()).

 (Acknowledgments from Ron Jeppesen in the 0.3 release:)
 Thanks to Corey Minyard who wrote the original CDU-31A driver on which

Documentation/cpusets.txt (+12 −0)

@@ -60,6 +60,18 @@
 all of the cpus in the system.  This removes any overhead due to
 load balancing code trying to pull tasks outside of the cpu exclusive
 cpuset only to be prevented by the tasks' cpus_allowed mask.

+A cpuset that is mem_exclusive restricts kernel allocations for page,
+buffer and other data commonly shared by the kernel across multiple
+users.  All cpusets, whether mem_exclusive or not, restrict allocations
+of memory for user space.  This enables configuring a system so that
+several independent jobs can share common kernel data, such as file
+system pages, while isolating each job's user allocation in its own
+cpuset.  To do this, construct a large mem_exclusive cpuset to hold
+all the jobs, and construct child, non-mem_exclusive cpusets for each
+individual job.  Only a small amount of typical kernel memory, such as
+requests from interrupt handlers, is allowed to be taken outside even a
+mem_exclusive cpuset.
+
 User level code may create and destroy cpusets by name in the cpuset
 virtual file system, manage the attributes and permissions of these
 cpusets and which CPUs and Memory Nodes are assigned to each cpuset,
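The parent/child layout described in the cpusets.txt hunk can be
sketched from user space against the cpuset virtual file system. This
is a hedged sketch: the mount point /dev/cpuset and the cpu/node
numbers are arbitrary choices, and the file names (cpus, mems,
mem_exclusive) are the standard cpuset interface files rather than
anything introduced by this patch.

```shell
# Mount the cpuset virtual file system (mount point is our choice):
mount -t cpuset none /dev/cpuset

# Large mem_exclusive parent cpuset holding all the jobs:
mkdir /dev/cpuset/jobs
echo 0-3 > /dev/cpuset/jobs/cpus       # example CPUs
echo 0   > /dev/cpuset/jobs/mems       # example memory node
echo 1   > /dev/cpuset/jobs/mem_exclusive

# Child, non-mem_exclusive cpusets, one per job.  The jobs isolate
# their user-space allocations from each other, but can still share
# common kernel data (e.g. file system pages) within "jobs":
mkdir /dev/cpuset/jobs/job1
mkdir /dev/cpuset/jobs/job2
```

Tasks would then be attached by writing their PIDs into each child
cpuset's tasks file.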