
Commit f145f05e authored by Patrick Daly

iommu/io-pgtable-arm: Use optimized unmap path properly



The original intention was to free consecutive third-level page table
entries via a single call to memset, and issue a single cache flush.

Consider the case of unmapping 4M, using 4K iommu page mappings, with
an initial iova of 1M.
1M to 2M:
The optimized logic would not run, since 'remaining < SZ_2M' is false.
This causes a cache flush to be issued for every 4K page.

2M to 4M:
This is a standard block size.

4M to 5M:
The optimized logic detects that 'remaining < SZ_2M' is true, and
runs properly.

Fix the logic to consider the first case.

Change-Id: I441b414e2d60e78958c2b26864b08b7b89edfa86
Signed-off-by: Patrick Daly <pdaly@codeaurora.org>
parent 5f92f324
+6 −4
@@ -627,10 +627,12 @@ static size_t arm_lpae_unmap(struct io_pgtable_ops *ops, unsigned long iova,
 		size_t ret, size_to_unmap, remaining;
 
 		remaining = (size - unmapped);
-		size_to_unmap = remaining < SZ_2M
-			? remaining
-			: iommu_pgsize(data->iop.cfg.pgsize_bitmap, iova,
-						remaining);
+		size_to_unmap = iommu_pgsize(data->iop.cfg.pgsize_bitmap, iova,
+						remaining);
+		size_to_unmap = size_to_unmap >= SZ_2M ?
+				size_to_unmap :
+				min_t(unsigned long, remaining,
+					(ALIGN(iova + 1, SZ_2M) - iova));
 		ret = __arm_lpae_unmap(data, iova, size_to_unmap, lvl, ptep);
 		if (ret == 0)
 			break;