
Commit c5905afb authored by Ingo Molnar

static keys: Introduce 'struct static_key', static_key_true()/false() and static_key_slow_[inc|dec]()

So here's a boot-tested patch on top of Jason's series that does
all the cleanups I talked about and turns jump labels into a
more intuitive-to-use facility. It should also address the
various misconceptions and confusions that surround jump labels.

Typical usage scenarios:

        #include <linux/static_key.h>

        struct static_key key = STATIC_KEY_INIT_TRUE;

        if (static_key_false(&key))
                do unlikely code
        else
                do likely code

Or:

        if (static_key_true(&key))
                do likely code
        else
                do unlikely code

The static key is modified via:

        static_key_slow_inc(&key);
        ...
        static_key_slow_dec(&key);

The 'slow' prefix makes it abundantly clear that this is an
expensive operation.

I've updated all in-kernel code to use the new API everywhere. Note
that I have intentionally not pushed the rename blindly down to the
lowest levels: the arch-level jump-label patching facility should
keep the jump-label name, so we want to decouple jump labels from
the static-key facility a bit.

On architectures without jump-label support, static keys default to
plain likely()/unlikely() branches.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Jason Baron <jbaron@redhat.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: a.p.zijlstra@chello.nl
Cc: mathieu.desnoyers@efficios.com
Cc: davem@davemloft.net
Cc: ddaney.cavm@gmail.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20120222085809.GA26397@elte.hu


Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent 1cfa60dc
+20 −9
@@ -47,18 +47,29 @@ config KPROBES
 	  If in doubt, say "N".
 
 config JUMP_LABEL
-       bool "Optimize trace point call sites"
+       bool "Optimize very unlikely/likely branches"
        depends on HAVE_ARCH_JUMP_LABEL
        help
+         This option enables a transparent branch optimization that
+	 makes certain almost-always-true or almost-always-false branch
+	 conditions even cheaper to execute within the kernel.
+
+	 Certain performance-sensitive kernel code, such as trace points,
+	 scheduler functionality, networking code and KVM have such
+	 branches and include support for this optimization technique.
+
          If it is detected that the compiler has support for "asm goto",
-	 the kernel will compile trace point locations with just a
-	 nop instruction. When trace points are enabled, the nop will
-	 be converted to a jump to the trace function. This technique
-	 lowers overhead and stress on the branch prediction of the
-	 processor.
-
-	 On i386, options added to the compiler flags may increase
-	 the size of the kernel slightly.
+	 the kernel will compile such branches with just a nop
+	 instruction. When the condition flag is toggled to true, the
+	 nop will be converted to a jump instruction to execute the
+	 conditional block of instructions.
+
+	 This technique lowers overhead and stress on the branch prediction
+	 of the processor and generally makes the kernel faster. The update
+	 of the condition is slower, but those are always very rare.
+
+	 ( On 32-bit x86, the necessary options added to the compiler
+	   flags may increase the size of the kernel slightly. )
 
 config OPTPROBES
 	def_bool y
+3 −3
@@ -281,9 +281,9 @@ paravirt_init_missing_ticks_accounting(int cpu)
 		pv_time_ops.init_missing_ticks_accounting(cpu);
 }
 
-struct jump_label_key;
-extern struct jump_label_key paravirt_steal_enabled;
-extern struct jump_label_key paravirt_steal_rq_enabled;
+struct static_key;
+extern struct static_key paravirt_steal_enabled;
+extern struct static_key paravirt_steal_rq_enabled;
 
 static inline int
 paravirt_do_steal_accounting(unsigned long *new_itm)
+2 −2
@@ -634,8 +634,8 @@ struct pv_irq_ops pv_irq_ops = {
  * pv_time_ops
  * time operations
  */
-struct jump_label_key paravirt_steal_enabled;
-struct jump_label_key paravirt_steal_rq_enabled;
+struct static_key paravirt_steal_enabled;
+struct static_key paravirt_steal_rq_enabled;
 
 static int
 ia64_native_do_steal_accounting(unsigned long *new_itm)
+1 −1
@@ -20,7 +20,7 @@
 #define WORD_INSN ".word"
 #endif
 
-static __always_inline bool arch_static_branch(struct jump_label_key *key)
+static __always_inline bool arch_static_branch(struct static_key *key)
 {
 	asm goto("1:\tnop\n\t"
 		"nop\n\t"
+1 −1
@@ -17,7 +17,7 @@
 #define JUMP_ENTRY_TYPE		stringify_in_c(FTR_ENTRY_LONG)
 #define JUMP_LABEL_NOP_SIZE	4
 
-static __always_inline bool arch_static_branch(struct jump_label_key *key)
+static __always_inline bool arch_static_branch(struct static_key *key)
 {
 	asm goto("1:\n\t"
 		 "nop\n\t"