
Commit a9ed40b1 authored by Mitchel Humpherys
iommu: msm_iommu_sec: fix some overeager cache maintenance



In general, we must do some cache maintenance when sharing buffers with
TZ since our caches are not shared with TZ.  Specifically:

    (1) If we put some data into a buffer then share it with TZ for
        reading, the buffer must be flushed before sending it to TZ.

    (2) If TZ puts some data in a buffer for us to read, the buffer must
        be invalidated after receiving it from TZ.

In msm_iommu_sec_ptbl_map we are currently doing some cache maintenance
incorrectly.  We are invalidating a buffer that we shared with TZ even
though we're not reading from the buffer.  We're also flushing way more
than we need to.  We're only sharing a single buffer of size
sizeof(phys_addr_t) with TZ, but we're flushing the size of the IOMMU
mapping (which could be anywhere up to SIZE_MAX bytes).

Remove the unnecessary invalidate and only flush the size of the buffer
being shared, nothing more.

Change-Id: I46d2d95319364d197ae530001851ab819a6eb6fa
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
parent cb0b4937
+5 −9
@@ -511,8 +511,7 @@ static int msm_iommu_sec_ptbl_map(struct msm_iommu_drvdata *iommu_drvdata,
 			unsigned long va, phys_addr_t pa, size_t len)
 {
 	struct msm_scm_map2_req map;
-	void *flush_va;
-	phys_addr_t flush_pa;
+	void *flush_va, *flush_va_end;
 	int ret = 0;
 
 	map.plist.list = virt_to_phys(&pa);
@@ -524,20 +523,18 @@ static int msm_iommu_sec_ptbl_map(struct msm_iommu_drvdata *iommu_drvdata,
 	map.info.size = len;
 
 	flush_va = &pa;
-	flush_pa = virt_to_phys(&pa);
+	flush_va_end = (void *)
+		(((unsigned long) flush_va) + sizeof(phys_addr_t));
 
 	/*
 	 * Ensure that the buffer is in RAM by the time it gets to TZ
 	 */
-	dmac_clean_range(flush_va, flush_va + len);
+	dmac_clean_range(flush_va, flush_va_end);
 
 	ret = msm_iommu_sec_map2(&map);
 	if (ret)
 		return -EINVAL;
 
-	/* Invalidate cache since TZ touched this address range */
-	dmac_inv_range(flush_va, flush_va + len);
-
 	return 0;
 }
 
@@ -616,8 +613,7 @@ static int msm_iommu_sec_ptbl_map_range(struct msm_iommu_drvdata *iommu_drvdata,
 	/*
 	 * Ensure that the buffer is in RAM by the time it gets to TZ
 	 */
-	dmac_clean_range(flush_va,
-		flush_va + sizeof(unsigned long) * map.plist.list_size);
+	dmac_clean_range(flush_va, flush_va + map.plist.list_size);
 
 	ret = msm_iommu_sec_map2(&map);
 	kfree(pa_list);