
Commit 073e587e authored by KAMEZAWA Hiroyuki, committed by Linus Torvalds

memcg: move charge swapin under lock



While a page-cache page's charge/uncharge is done under its page lock, a swap-cache page's is not.  (An anonymous page is charged when it is newly allocated.)

This patch moves do_swap_page()'s mem_cgroup_charge() call under the page lock.  I don't see any actual problem *now*, but this fix will be good for the future, avoiding an unnecessarily racy state.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 47c59803
+6 −5
@@ -2326,16 +2326,17 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		count_vm_event(PGMAJFAULT);
 	}
 
-	if (mem_cgroup_charge(page, mm, GFP_KERNEL)) {
+	mark_page_accessed(page);
+
+	lock_page(page);
 	delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
+
+	if (mem_cgroup_charge(page, mm, GFP_KERNEL)) {
 		ret = VM_FAULT_OOM;
+		unlock_page(page);
 		goto out;
 	}
 
-	mark_page_accessed(page);
-	lock_page(page);
-	delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
-
 	/*
 	 * Back out if somebody else already faulted in this pte.
 	 */