
Commit 3a13c4d7 authored by Johannes Weiner, committed by Linus Torvalds

x86: finish user fault error path with fatal signal



The x86 fault handler bails in the middle of error handling when the
task has a fatal signal pending.  For a subsequent patch this is a
problem in OOM situations because it relies on pagefault_out_of_memory()
being called even when the task has been killed, to perform proper
per-task OOM state unwinding.

Shortcutting the fault like this is a rather minor optimization that
saves a few instructions in rare cases.  Just remove it for
user-triggered faults.

Use the opportunity to split the fault retry handling from actual fault
errors and add locking documentation that reads surprisingly similar to
ARM's.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 759496ba
@@ -842,23 +842,15 @@ do_sigbus(struct pt_regs *regs, unsigned long error_code, unsigned long address,
 	force_sig_info_fault(SIGBUS, code, address, tsk, fault);
 }
 
-static noinline int
+static noinline void
 mm_fault_error(struct pt_regs *regs, unsigned long error_code,
 	       unsigned long address, unsigned int fault)
 {
-	/*
-	 * Pagefault was interrupted by SIGKILL. We have no reason to
-	 * continue pagefault.
-	 */
-	if (fatal_signal_pending(current)) {
-		if (!(fault & VM_FAULT_RETRY))
-			up_read(&current->mm->mmap_sem);
-		if (!(error_code & PF_USER))
-			no_context(regs, error_code, address, 0, 0);
-		return 1;
+	if (fatal_signal_pending(current) && !(error_code & PF_USER)) {
+		up_read(&current->mm->mmap_sem);
+		no_context(regs, error_code, address, 0, 0);
+		return;
 	}
-	if (!(fault & VM_FAULT_ERROR))
-		return 0;
 
 	if (fault & VM_FAULT_OOM) {
 		/* Kernel mode? Handle exceptions or die: */
@@ -866,7 +858,7 @@ mm_fault_error(struct pt_regs *regs, unsigned long error_code,
 			up_read(&current->mm->mmap_sem);
 			no_context(regs, error_code, address,
 				   SIGSEGV, SEGV_MAPERR);
-			return 1;
+			return;
 		}
 
 		up_read(&current->mm->mmap_sem);
@@ -884,7 +876,6 @@ mm_fault_error(struct pt_regs *regs, unsigned long error_code,
 		else
 			BUG();
 	}
-	return 1;
 }
 
 static int spurious_fault_check(unsigned long error_code, pte_t *pte)
@@ -1189,8 +1180,16 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code)
 	 */
 	fault = handle_mm_fault(mm, vma, address, flags);
 
-	if (unlikely(fault & (VM_FAULT_RETRY|VM_FAULT_ERROR))) {
-		if (mm_fault_error(regs, error_code, address, fault))
-			return;
-	}
+	/*
+	 * If we need to retry but a fatal signal is pending, handle the
+	 * signal first. We do not need to release the mmap_sem because it
+	 * would already be released in __lock_page_or_retry in mm/filemap.c.
+	 */
+	if (unlikely((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)))
+		return;
+
+	if (unlikely(fault & VM_FAULT_ERROR)) {
+		mm_fault_error(regs, error_code, address, fault);
+		return;
+	}