
Commit e1955342 authored by Paul Mundt

Merge branch 'sh/stable-updates'



Conflicts:
	arch/sh/kernel/dwarf.c
	drivers/dma/shdma.c

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
parents 35f6cd4a 83515bc7
+13 −0
@@ -16,6 +16,15 @@
      </address>
     </affiliation>
    </author>
+   <author>
+    <firstname>William</firstname>
+    <surname>Cohen</surname>
+    <affiliation>
+     <address>
+      <email>wcohen@redhat.com</email>
+     </address>
+    </affiliation>
+   </author>
   </authorgroup>
 
  <legalnotice>
@@ -91,4 +100,8 @@
 !Iinclude/trace/events/signal.h
   </chapter>
 
+  <chapter id="block">
+   <title>Block IO</title>
+!Iinclude/trace/events/block.h
+  </chapter>
 </book>
+1 −1
@@ -234,7 +234,7 @@ process is as follows:
    Linus, usually the patches that have already been included in the
    -next kernel for a few weeks.  The preferred way to submit big changes
    is using git (the kernel's source management tool, more information
-   can be found at http://git.or.cz/) but plain patches are also just
+   can be found at http://git-scm.com/) but plain patches are also just
    fine.
  - After two weeks a -rc1 kernel is released it is now possible to push
    only patches that do not include new features that could affect the
+22 −17
@@ -34,7 +34,7 @@ NMI handler.
 		cpu = smp_processor_id();
 		++nmi_count(cpu);
 
-		if (!rcu_dereference(nmi_callback)(regs, cpu))
+		if (!rcu_dereference_sched(nmi_callback)(regs, cpu))
 			default_do_nmi(regs);
 
 		nmi_exit();
@@ -47,12 +47,13 @@ function pointer. If this handler returns zero, do_nmi() invokes the
 default_do_nmi() function to handle a machine-specific NMI.  Finally,
 preemption is restored.
 
-Strictly speaking, rcu_dereference() is not needed, since this code runs
-only on i386, which does not need rcu_dereference() anyway.  However,
-it is a good documentation aid, particularly for anyone attempting to
-do something similar on Alpha.
+In theory, rcu_dereference_sched() is not needed, since this code runs
+only on i386, which in theory does not need rcu_dereference_sched()
+anyway.  However, in practice it is a good documentation aid, particularly
+for anyone attempting to do something similar on Alpha or on systems
+with aggressive optimizing compilers.
 
-Quick Quiz:  Why might the rcu_dereference() be necessary on Alpha,
+Quick Quiz:  Why might the rcu_dereference_sched() be necessary on Alpha,
 	     given that the code referenced by the pointer is read-only?
 

@@ -99,17 +100,21 @@ invoke irq_enter() and irq_exit() on NMI entry and exit, respectively.

 Answer to Quick Quiz
 
-	Why might the rcu_dereference() be necessary on Alpha, given
+	Why might the rcu_dereference_sched() be necessary on Alpha, given
 	that the code referenced by the pointer is read-only?
 
 	Answer: The caller to set_nmi_callback() might well have
-		initialized some data that is to be used by the
-		new NMI handler.  In this case, the rcu_dereference()
-		would be needed, because otherwise a CPU that received
-		an NMI just after the new handler was set might see
-		the pointer to the new NMI handler, but the old
-		pre-initialized version of the handler's data.
-
-		More important, the rcu_dereference() makes it clear
-		to someone reading the code that the pointer is being
-		protected by RCU.
+		initialized some data that is to be used by the new NMI
+		handler.  In this case, the rcu_dereference_sched() would
+		be needed, because otherwise a CPU that received an NMI
+		just after the new handler was set might see the pointer
+		to the new NMI handler, but the old pre-initialized
+		version of the handler's data.
+
+		This same sad story can happen on other CPUs when using
+		a compiler with aggressive pointer-value speculation
+		optimizations.
+
+		More important, the rcu_dereference_sched() makes it
+		clear to someone reading the code that the pointer is
+		being protected by RCU-sched.
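The answer above hinges on the publish/subscribe guarantee: data initialized before a pointer is published must be visible to any CPU that sees the new pointer. Below is a minimal userspace sketch of that guarantee using C11 release/acquire atomics as stand-ins for rcu_assign_pointer() and rcu_dereference_sched(); the names set_callback_data() and callback_sees_initialized_data() are illustrative inventions, not the kernel API:

```c
/* Userspace sketch of RCU's publish/subscribe ordering.  The atomics
 * here are stand-ins for the kernel's rcu_assign_pointer() (release
 * store) and rcu_dereference_sched() (dependency-ordered load), not
 * the real implementation. */
#include <stdatomic.h>
#include <stddef.h>

struct nmi_data {
	int initialized;
};

static struct nmi_data handler_data;			/* starts zeroed */
static _Atomic(struct nmi_data *) nmi_callback_data = NULL;

/* Publisher: initialize the data *before* publishing the pointer.
 * The release store orders the initialization ahead of publication,
 * as rcu_assign_pointer() does. */
void set_callback_data(void)
{
	handler_data.initialized = 1;
	atomic_store_explicit(&nmi_callback_data, &handler_data,
			      memory_order_release);
}

/* Subscriber: the acquire load guarantees that if we observe the new
 * pointer, we also observe the initialization performed before it was
 * published -- never the old, pre-initialized data. */
int callback_sees_initialized_data(void)
{
	struct nmi_data *p = atomic_load_explicit(&nmi_callback_data,
						  memory_order_acquire);
	if (p == NULL)
		return -1;		/* no handler registered yet */
	return p->initialized;		/* 1 once published, never stale 0 */
}
```

Without the release/acquire pairing, a weakly ordered CPU (or a compiler doing pointer-value speculation, as the Alpha discussion above notes) could legally observe the pointer before the stores that initialized the data it points to.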
+4 −3
@@ -260,7 +260,8 @@ over a rather long period of time, but improvements are always welcome!
 	The reason that it is permissible to use RCU list-traversal
 	primitives when the update-side lock is held is that doing so
 	can be quite helpful in reducing code bloat when common code is
-	shared between readers and updaters.
+	shared between readers and updaters.  Additional primitives
+	are provided for this case, as discussed in lockdep.txt.
 
 10.	Conversely, if you are in an RCU read-side critical section,
 	and you don't hold the appropriate update-side lock, you -must-
@@ -344,8 +345,8 @@ over a rather long period of time, but improvements are always welcome!
 	requiring SRCU's read-side deadlock immunity or low read-side
 	realtime latency.
 
-	Note that, rcu_assign_pointer() and rcu_dereference() relate to
-	SRCU just as they do to other forms of RCU.
+	Note that rcu_assign_pointer() relates to SRCU just as it does
+	to other forms of RCU.
 
 15.	The whole point of call_rcu(), synchronize_rcu(), and friends
 	is to wait until all pre-existing readers have finished before
+26 −2
@@ -32,9 +32,20 @@ checking of rcu_dereference() primitives:
 	srcu_dereference(p, sp):
 		Check for SRCU read-side critical section.
 	rcu_dereference_check(p, c):
-		Use explicit check expression "c".
+		Use explicit check expression "c".  This is useful in
+		code that is invoked by both readers and updaters.
 	rcu_dereference_raw(p)
 		Don't check.  (Use sparingly, if at all.)
+	rcu_dereference_protected(p, c):
+		Use explicit check expression "c", and omit all barriers
+		and compiler constraints.  This is useful when the data
+		structure cannot change, for example, in code that is
+		invoked only by updaters.
+	rcu_access_pointer(p):
+		Return the value of the pointer and omit all barriers,
+		but retain the compiler constraints that prevent duplicating
+		or coalescing.  This is useful when testing the
+		value of the pointer itself, for example, against NULL.

The rcu_dereference_check() check expression can be any boolean
expression, but would normally include one of the rcu_read_lock_held()
@@ -59,7 +70,20 @@ In case (1), the pointer is picked up in an RCU-safe manner for vanilla
 RCU read-side critical sections, in case (2) the ->file_lock prevents
 any change from taking place, and finally, in case (3) the current task
 is the only task accessing the file_struct, again preventing any change
-from taking place.
+from taking place.  If the above statement was invoked only from updater
+code, it could instead be written as follows:
+
+	file = rcu_dereference_protected(fdt->fd[fd],
+					 lockdep_is_held(&files->file_lock) ||
+					 atomic_read(&files->count) == 1);
+
+This would verify cases #2 and #3 above, and furthermore lockdep would
+complain if this was used in an RCU read-side critical section unless one
+of these two cases held.  Because rcu_dereference_protected() omits all
+barriers and compiler constraints, it generates better code than do the
+other flavors of rcu_dereference().  On the other hand, it is illegal
+to use rcu_dereference_protected() if either the RCU-protected pointer
+or the RCU-protected data that it points to can change concurrently.
 
 There are currently only "universal" versions of the rcu_assign_pointer()
 and RCU list-/tree-traversal primitives, which do not (yet) check for
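The hunk above describes dereferencing under an explicit protection condition that lockdep can verify. The toy userspace sketch below captures that idea; lock_is_held() and deref_protected() are hypothetical stand-ins for lockdep_is_held() and rcu_dereference_protected(), and the file/refcount fields only mimic the files_struct example:

```c
/* Toy sketch of the rcu_dereference_protected(p, c) idea: a plain
 * dereference guarded by an asserted protection condition "c".
 * lock_is_held() stands in for lockdep_is_held(); the real kernel
 * machinery is far more elaborate. */
#include <assert.h>
#include <stddef.h>

struct file {
	int fd;
};

static int file_lock_held;	/* toy stand-in for lockdep lock state */
static int refcount = 1;	/* toy stand-in for files->count */

static int lock_is_held(void)
{
	return file_lock_held;
}

/* Dereference p only if the caller-supplied condition holds; in the
 * kernel, lockdep would complain here instead of assert() firing. */
static struct file *deref_protected(struct file *p, int condition)
{
	assert(condition);
	return p;		/* no barriers needed: nothing can change */
}

static struct file the_file = { .fd = 42 };

/* Updater-side access: holding the "lock" (or being the sole owner)
 * makes the unadorned dereference safe. */
int read_fd_as_updater(void)
{
	file_lock_held = 1;	/* pretend we acquired ->file_lock */
	struct file *f = deref_protected(&the_file,
					 lock_is_held() || refcount == 1);
	file_lock_held = 0;
	return f->fd;
}
```

The design point mirrors the text: because the condition guarantees the pointer cannot change, the accessor can skip the barriers and compiler constraints that the read-side rcu_dereference() flavors must emit.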