
Commit 5d8b3e75 authored by Alexey Kardashevskiy, committed by Greg Kroah-Hartman

vfio/spapr: Postpone allocation of userspace version of TCE table



[ Upstream commit 39701e56f5f16ea0cf8fc9e8472e645f8de91d23 ]

The iommu_table struct manages a hardware TCE table and a vmalloc'd
table with corresponding userspace addresses. Both are allocated when
the default DMA window is created and this happens when the very first
group is attached to a container.

As we are going to allow the userspace to configure a container in one
memory context and pass the container fd to another, we have to postpone
such allocations till a container fd is passed to the destination
user process so we would account the locked memory limit against the actual
container user constraints.

This postpones the it_userspace array allocation till it is used for the
first time for mapping. The unmapping path already checks if the array is
allocated.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 3c0cbb47
+7 −13
@@ -509,6 +509,12 @@ static long tce_iommu_build_v2(struct tce_container *container,
 	unsigned long hpa;
 	enum dma_data_direction dirtmp;
 
+	if (!tbl->it_userspace) {
+		ret = tce_iommu_userspace_view_alloc(tbl);
+		if (ret)
+			return ret;
+	}
+
 	for (i = 0; i < pages; ++i) {
 		struct mm_iommu_table_group_mem_t *mem = NULL;
 		unsigned long *pua = IOMMU_TABLE_USERSPACE_ENTRY(tbl,
@@ -582,15 +588,6 @@ static long tce_iommu_create_table(struct tce_container *container,
 	WARN_ON(!ret && !(*ptbl)->it_ops->free);
 	WARN_ON(!ret && ((*ptbl)->it_allocated_size != table_size));
 
-	if (!ret && container->v2) {
-		ret = tce_iommu_userspace_view_alloc(*ptbl);
-		if (ret)
-			(*ptbl)->it_ops->free(*ptbl);
-	}
-
 	if (ret)
 		decrement_locked_vm(table_size >> PAGE_SHIFT);
 
 	return ret;
 }

@@ -1062,10 +1059,7 @@ static int tce_iommu_take_ownership(struct tce_container *container,
 		if (!tbl || !tbl->it_map)
 			continue;
 
-		rc = tce_iommu_userspace_view_alloc(tbl);
-		if (!rc)
-			rc = iommu_take_ownership(tbl);
-
+		rc = iommu_take_ownership(tbl);
 		if (rc) {
 			for (j = 0; j < i; ++j)
 				iommu_release_ownership(