
Commit 3359b54c authored by Rohit Seth, committed by Linus Torvalds

[PATCH] Handle spurious page fault for hugetlb region



Hugetlb pages are currently pre-faulted: at mmap time for hugepages we populate the new PTEs. It is possible that the hardware has already cached some of the unused PTEs internally. These stale entries never get a chance to be purged in the existing control flow.

This patch extends the check in the page fault code for hugepages. Check whether the faulting address falls within the size of the hugetlb file backing it. We return VM_FAULT_MINOR for these cases (assuming that the arch-specific page-faulting code purges the stale entry on the archs that need it).

Signed-off-by: Rohit Seth <rohit.seth@intel.com>

[ This is apparently arguably an ia64 port bug. But the code won't
  hurt, and for now it fixes a real problem on some ia64 machines ]

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent bb7e257e
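
For readers tracing the logic, here is a rough user-space sketch of the offset check described in the commit message (the real helper, valid_hugetlb_file_off(), is added in the hugetlb.h hunk below). It only mirrors the arithmetic: translate the faulting address into an offset within the backing hugetlbfs file and compare it against the file size. The plain scalar parameters stand in for the kernel's vma->vm_start, vma->vm_pgoff and inode->i_size, and the example values are made up.

/* Illustrative sketch only -- not kernel code. Scalars stand in for the
 * vm_area_struct and inode fields used by the real helper. */
#include <stdio.h>

#define PAGE_SHIFT 12			/* assume 4 KB base pages */

static int valid_file_off(unsigned long address, unsigned long vm_start,
			  unsigned long vm_pgoff, long long i_size)
{
	/* Distance into the mapping plus the mapping's offset into the file. */
	long long file_off = (long long)(address - vm_start)
			     + ((long long)vm_pgoff << PAGE_SHIFT);

	return file_off < i_size;
}

int main(void)
{
	/* Hypothetical 4 MB hugetlbfs file mapped at 0x60000000, vm_pgoff 0:
	 * a fault at 0x60200000 lies inside the file (spurious fault, handled
	 * as VM_FAULT_MINOR); a fault at 0x60400000 is past i_size (SIGBUS). */
	printf("%d\n", valid_file_off(0x60200000UL, 0x60000000UL, 0, 4LL << 20));
	printf("%d\n", valid_file_off(0x60400000UL, 0x60000000UL, 0, 4LL << 20));
	return 0;
}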
include/linux/hugetlb.h  +13 −0
@@ -155,11 +155,24 @@ static inline void set_file_hugepages(struct file *file)
 {
 	file->f_op = &hugetlbfs_file_operations;
 }
+
+static inline int valid_hugetlb_file_off(struct vm_area_struct *vma,
+					  unsigned long address)
+{
+	struct inode *inode = vma->vm_file->f_dentry->d_inode;
+	loff_t file_off = address - vma->vm_start;
+
+	file_off += (vma->vm_pgoff << PAGE_SHIFT);
+
+	return (file_off < inode->i_size);
+}
+
 #else /* !CONFIG_HUGETLBFS */
 
 #define is_file_hugepages(file)		0
 #define set_file_hugepages(file)	BUG()
 #define hugetlb_zero_setup(size)	ERR_PTR(-ENOSYS)
+#define valid_hugetlb_file_off(vma, address)	0
 
 #endif /* !CONFIG_HUGETLBFS */

mm/memory.c  +12 −2
@@ -2045,8 +2045,18 @@ int __handle_mm_fault(struct mm_struct *mm, struct vm_area_struct * vma,
 
 	inc_page_state(pgfault);
 
-	if (is_vm_hugetlb_page(vma))
-		return VM_FAULT_SIGBUS;	/* mapping truncation does this. */
+	if (unlikely(is_vm_hugetlb_page(vma))) {
+		if (valid_hugetlb_file_off(vma, address))
+			/* We get here only if there was a stale (zero) TLB entry
+			 * (because of HW prefetching).
+			 * Low-level arch code (if needed) should have already
+			 * purged the stale entry as part of this fault handling.
+			 * Here we just return.
+			 */
+			return VM_FAULT_MINOR;
+		else
+			return VM_FAULT_SIGBUS;	/* mapping truncation does this. */
+	}
 
 	/*
 	 * We need the page table lock to synchronize with kswapd