
Commit c69307d5 authored by Peter Zijlstra, committed by Ingo Molnar

sched/numa: Fix comments



Fix a 80 column violation and a PTE vs PMD reference.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/1381141781-10992-4-git-send-email-mgorman@suse.de


Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 10fc05d0
+4 −4
@@ -988,10 +988,10 @@ void task_numa_work(struct callback_head *work)
 
 out:
 	/*
-	 * It is possible to reach the end of the VMA list but the last few VMAs are
-	 * not guaranteed to the vma_migratable. If they are not, we would find the
-	 * !migratable VMA on the next scan but not reset the scanner to the start
-	 * so check it now.
+	 * It is possible to reach the end of the VMA list but the last few
+	 * VMAs are not guaranteed to the vma_migratable. If they are not, we
+	 * would find the !migratable VMA on the next scan but not reset the
+	 * scanner to the start so check it now.
 	 */
 	if (vma)
 		mm->numa_scan_offset = start;
+1 −1
@@ -1305,7 +1305,7 @@ int do_huge_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	spin_unlock(&mm->page_table_lock);
 	lock_page(page);
 
-	/* Confirm the PTE did not while locked */
+	/* Confirm the PMD did not change while page_table_lock was released */
 	spin_lock(&mm->page_table_lock);
 	if (unlikely(!pmd_same(pmd, *pmdp))) {
 		unlock_page(page);