
Commit 7a3a4d76 authored by Alexey Kardashevskiy, committed by Michael Ellerman

powerpc/mm_iommu: Allow pinning large regions



When called with vmas_arg==NULL, get_user_pages_longterm() allocates
an internal array of nr_pages * sizeof(struct vm_area_struct *) bytes
(nr_pages*8 on 64-bit), which can easily exceed the maximum allocation
order; for example, registering memory for a 256GB guest does this and
fails in __alloc_pages_nodemask().

This adds a loop that pins the memory in chunks of entries small enough
for each allocation to fit within the max order limit.

Fixes: 678e174c ("powerpc/mm/iommu: allow migration of cma allocated pages during mm_iommu_do_alloc", 2019-03-05)
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
parent eb9d7a62
+20 −4
@@ -98,6 +98,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
 	struct mm_iommu_table_group_mem_t *mem, *mem2;
 	long i, ret, locked_entries = 0, pinned = 0;
 	unsigned int pageshift;
+	unsigned long entry, chunk;
 
 	if (dev_hpa == MM_IOMMU_TABLE_INVALID_HPA) {
 		ret = mm_iommu_adjust_locked_vm(mm, entries, true);
@@ -134,10 +135,25 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
 	}
 
 	down_read(&mm->mmap_sem);
-	ret = get_user_pages_longterm(ua, entries, FOLL_WRITE, mem->hpages, NULL);
+	chunk = (1UL << (PAGE_SHIFT + MAX_ORDER - 1)) /
+			sizeof(struct vm_area_struct *);
+	chunk = min(chunk, entries);
+	for (entry = 0; entry < entries; entry += chunk) {
+		unsigned long n = min(entries - entry, chunk);
+
+		ret = get_user_pages_longterm(ua + (entry << PAGE_SHIFT), n,
+				FOLL_WRITE, mem->hpages + entry, NULL);
+		if (ret == n) {
+			pinned += n;
+			continue;
+		}
+		if (ret > 0)
+			pinned += ret;
+		break;
+	}
 	up_read(&mm->mmap_sem);
-	pinned = ret > 0 ? ret : 0;
-	if (ret != entries) {
+	if (pinned != entries) {
 		if (!ret)
 			ret = -EFAULT;
 		goto free_exit;
 	}