
Commit a9ba9a3b authored by Arjan van de Ven, committed by Linus Torvalds

[PATCH] x86_64: prefetch the mmap_sem in the fault path



In a micro-benchmark that stresses the pagefault path, the down_read_trylock
on the mmap_sem showed up quite high on the profile. Turns out this lock is
bouncing between cpus quite a bit and thus is cache-cold a lot. This patch
prefetches the lock (for write) as early as possible (and before some other
somewhat expensive operations). With this patch, the down_read_trylock
basically fell out of the top of the profile.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 4bc32c4d
+4 −2
@@ -314,11 +314,13 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
 	unsigned long flags;
 	siginfo_t info;
 
+	tsk = current;
+	mm = tsk->mm;
+	prefetchw(&mm->mmap_sem);
+
 	/* get the address */
 	__asm__("movq %%cr2,%0":"=r" (address));
 
-	tsk = current;
-	mm = tsk->mm;
 	info.si_code = SEGV_MAPERR;