
Commit 9b06e818 authored by Paul E. McKenney, committed by Linus Torvalds

[PATCH] Deprecate synchronize_kernel, GPL replacement



The synchronize_kernel() primitive is used for quite a few different purposes:
waiting for RCU readers, waiting for NMIs, waiting for interrupts, and so on.
This makes RCU code harder to read, since synchronize_kernel() might or might
not have matching rcu_read_lock()s.  This patch creates a new
synchronize_rcu() that is to be used for RCU readers and a new
synchronize_sched() that is used for the rest.  These two new primitives
currently have the same implementation, but this might well change with
additional real-time support.  Both new primitives are GPL-only; the old
primitive is deprecated.
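
To illustrate the split, here is a minimal, hypothetical sketch (struct foo,
global_ptr, and both updater functions are invented for illustration and are
not part of this patch): synchronize_rcu() pairs with readers that take
rcu_read_lock(), while synchronize_sched() pairs with readers that merely run
non-preemptively.

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct foo { int data; };		/* hypothetical payload */
	static struct foo *global_ptr;		/* hypothetical shared pointer */

	/* Read side takes rcu_read_lock(): wait with synchronize_rcu(). */
	static void update_for_rcu_readers(struct foo *new_p)
	{
		struct foo *old_p = global_ptr;

		rcu_assign_pointer(global_ptr, new_p);
		synchronize_rcu();	/* all rcu_read_lock() sections have exited */
		kfree(old_p);
	}

	/* Read side only relies on preempt_disable(), hard irqs, or NMIs:
	 * wait with synchronize_sched(). */
	static void update_for_nonpreemptive_readers(struct foo *new_p)
	{
		struct foo *old_p = global_ptr;

		rcu_assign_pointer(global_ptr, new_p);
		synchronize_sched();	/* all non-preemptive sections have exited */
		kfree(old_p);
	}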

Signed-off-by: Paul E. McKenney <paulmck@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 512345be
include/linux/rcupdate.h  (+20 −3)
@@ -157,9 +157,9 @@ static inline int rcu_pending(int cpu)
 /**
  * rcu_read_lock - mark the beginning of an RCU read-side critical section.
  *
- * When synchronize_kernel() is invoked on one CPU while other CPUs
+ * When synchronize_rcu() is invoked on one CPU while other CPUs
  * are within RCU read-side critical sections, then the
- * synchronize_kernel() is guaranteed to block until after all the other
+ * synchronize_rcu() is guaranteed to block until after all the other
  * CPUs exit their critical sections.  Similarly, if call_rcu() is invoked
  * on one CPU while other CPUs are within RCU read-side critical
  * sections, invocation of the corresponding RCU callback is deferred
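
The comment above also mentions call_rcu(); as a point of comparison, here is
a hedged sketch of that non-blocking style (struct foo, foo_reclaim(), and
retire_foo() are invented, not taken from this patch):

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct foo {
		int data;
		struct rcu_head rcu;	/* gives call_rcu() somewhere to queue */
	};

	static void foo_reclaim(struct rcu_head *head)
	{
		/* Runs only after every reader that might still see the old
		 * structure has left its read-side critical section. */
		kfree(container_of(head, struct foo, rcu));
	}

	static void retire_foo(struct foo *old_p)
	{
		call_rcu(&old_p->rcu, foo_reclaim);	/* deferred, never blocks */
	}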
@@ -256,6 +256,21 @@ static inline int rcu_pending(int cpu)
 						(p) = (v); \
 					})
 
+/**
+ * synchronize_sched - block until all CPUs have exited any non-preemptive
+ * kernel code sequences.
+ *
+ * This means that all preempt_disable code sequences, including NMI and
+ * hardware-interrupt handlers, in progress on entry will have completed
+ * before this primitive returns.  However, this does not guarantee that
+ * softirq handlers will have completed, since in some kernels
+ *
+ * This primitive provides the guarantees made by the (deprecated)
+ * synchronize_kernel() API.  In contrast, synchronize_rcu() only
+ * guarantees that rcu_read_lock() sections will have completed.
+ */
+#define synchronize_sched() synchronize_rcu()
+
 extern void rcu_init(void);
 extern void rcu_check_callbacks(int cpu, int user);
 extern void rcu_restart_cpu(int cpu);
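
The kind of read side synchronize_sched() is meant for never calls
rcu_read_lock() at all; a hypothetical sketch (saved_table and consume() are
invented):

	static int *saved_table;	/* hypothetical, read from atomic context */

	static void nonpreemptive_reader(void)
	{
		int *t;

		preempt_disable();	/* the implicit read-side critical section */
		t = saved_table;
		if (t)
			consume(t);	/* hypothetical */
		preempt_enable();
	}

Because rcu_read_lock() maps onto preempt_disable() in the non-preemptible
kernels of this era, the two primitives can share one implementation, which is
why the #define above is currently sufficient.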
@@ -265,7 +280,9 @@ extern void FASTCALL(call_rcu(struct rcu_head *head,
 				void (*func)(struct rcu_head *head)));
 extern void FASTCALL(call_rcu_bh(struct rcu_head *head,
 				void (*func)(struct rcu_head *head)));
-extern void synchronize_kernel(void);
+extern __deprecated_for_modules void synchronize_kernel(void);
+extern void synchronize_rcu(void);
+void synchronize_idle(void);
 
 #endif /* __KERNEL__ */
 #endif /* __LINUX_RCUPDATE_H */
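
The __deprecated_for_modules annotation warns only when the caller is being
built as a module; its definition lives in include/linux/compiler.h and looks
roughly like this (reconstructed from memory, so treat it as a sketch, not a
quotation):

	/* include/linux/compiler.h (approximate) */
	#ifdef MODULE
	#define __deprecated_for_modules __deprecated	/* __attribute__((deprecated)) */
	#else
	#define __deprecated_for_modules		/* built-in code: no warning */
	#endif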
kernel/rcupdate.c  (+14 −2)
@@ -444,15 +444,18 @@ static void wakeme_after_rcu(struct rcu_head *head)
 }
 
 /**
- * synchronize_kernel - wait until a grace period has elapsed.
+ * synchronize_rcu - wait until a grace period has elapsed.
  *
  * Control will return to the caller some time after a full grace
  * period has elapsed, in other words after all currently executing RCU
  * read-side critical sections have completed.  RCU read-side critical
  * sections are delimited by rcu_read_lock() and rcu_read_unlock(),
  * and may be nested.
+ *
+ * If your read-side code is not protected by rcu_read_lock(), do -not-
+ * use synchronize_rcu().
  */
-void synchronize_kernel(void)
+void synchronize_rcu(void)
 {
 	struct rcu_synchronize rcu;
 
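The diff elides the middle of synchronize_rcu(); reconstructed from the
context lines above and below (so read it as a sketch), the completion-based
pattern it wraps is:

	/* kernel/rcupdate.c, reconstructed for context */
	struct rcu_synchronize {
		struct rcu_head head;
		struct completion completion;
	};

	/* Runs from the RCU callback once a grace period has elapsed. */
	static void wakeme_after_rcu(struct rcu_head *head)
	{
		struct rcu_synchronize *rcu;

		rcu = container_of(head, struct rcu_synchronize, head);
		complete(&rcu->completion);
	}

	void synchronize_rcu(void)
	{
		struct rcu_synchronize rcu;

		init_completion(&rcu.completion);
		call_rcu(&rcu.head, wakeme_after_rcu);	/* wake us after a grace period */
		wait_for_completion(&rcu.completion);	/* block until the callback fires */
	}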
@@ -464,7 +467,16 @@ void synchronize_kernel(void)
 	wait_for_completion(&rcu.completion);
 }
 
+/*
+ * Deprecated, use synchronize_rcu() or synchronize_sched() instead.
+ */
+void synchronize_kernel(void)
+{
+	synchronize_rcu();
+}
+
 module_param(maxbatch, int, 0);
 EXPORT_SYMBOL(call_rcu);  /* WARNING: GPL-only in April 2006. */
 EXPORT_SYMBOL(call_rcu_bh);  /* WARNING: GPL-only in April 2006. */
+EXPORT_SYMBOL_GPL(synchronize_rcu);
 EXPORT_SYMBOL(synchronize_kernel);  /* WARNING: GPL-only in April 2006. */
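
For module authors the migration is mechanical, but note the licensing
consequence of EXPORT_SYMBOL_GPL(); a hypothetical sketch (drop_old_config()
and struct cfg are invented):

	#include <linux/module.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	MODULE_LICENSE("GPL");	/* required: synchronize_rcu() is EXPORT_SYMBOL_GPL */

	struct cfg { int value; };	/* hypothetical */

	static void drop_old_config(struct cfg *old)
	{
		/* Was synchronize_kernel(), which now triggers a deprecation
		 * warning when built as a module.  Readers of the config use
		 * rcu_read_lock(), so synchronize_rcu() is the right choice. */
		synchronize_rcu();
		kfree(old);
	}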