
Commit 58c4a95f authored by Sebastian Andrzej Siewior, committed by Joerg Roedel

iommu/vt-d: Don't disable preemption while accessing deferred_flush()



get_cpu() disables preemption and returns the current CPU number. The
CPU number is only used once, while retrieving the address of the local
CPU's deferred_flush data.
We can instead use raw_cpu_ptr() and remain preemptible. The worst
thing that can happen is that flush_unmaps_timeout() is invoked multiple
times: once by taskA after it sees HIGH_WATER_MARK, and again by taskB,
which sees HIGH_WATER_MARK on the same CPU after taskA was preempted and
migrated to another CPU. It is also possible that ->size drops from
HIGH_WATER_MARK to 0 right after it is read, because another CPU invoked
flush_unmaps_timeout() for this CPU.
The access to flush_data is protected by a spinlock, so even if we get
preempted or migrated to another CPU, the data structure stays protected.

While at it, I marked deferred_flush static since I can't find a
reference to it outside of this file.
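
For illustration, a minimal sketch of the before/after pattern (the
demo_flush_data structure and both function names are invented for this
example; get_cpu()/put_cpu(), per_cpu_ptr() and raw_cpu_ptr() are the
real per-CPU accessors):

	#include <linux/percpu.h>
	#include <linux/smp.h>
	#include <linux/spinlock.h>

	/* Invented stand-in for the driver's deferred_flush_data. */
	struct demo_flush_data {
		spinlock_t lock;
		unsigned int size;
	};

	static DEFINE_PER_CPU(struct demo_flush_data, demo_flush);

	/* Before: get_cpu() disables preemption for the whole section. */
	static void add_entry_pinned(void)
	{
		struct demo_flush_data *fd;
		unsigned long flags;
		unsigned int cpu;

		cpu = get_cpu();		/* disables preemption */
		fd = per_cpu_ptr(&demo_flush, cpu);
		spin_lock_irqsave(&fd->lock, flags);
		fd->size++;
		spin_unlock_irqrestore(&fd->lock, flags);
		put_cpu();			/* enables preemption again */
	}

	/*
	 * After: stay preemptible. The spinlock alone protects the data,
	 * even if the task migrates afterwards and fd still points at the
	 * previous CPU's instance.
	 */
	static void add_entry_preemptible(void)
	{
		struct demo_flush_data *fd;
		unsigned long flags;

		fd = raw_cpu_ptr(&demo_flush);	/* no preempt_disable() */
		spin_lock_irqsave(&fd->lock, flags);
		fd->size++;
		spin_unlock_irqrestore(&fd->lock, flags);
	}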

Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: iommu@lists.linux-foundation.org
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
parent 71bb620d
+2 −6
@@ -481,7 +481,7 @@ struct deferred_flush_data {
 	struct deferred_flush_table *tables;
 };
 
-DEFINE_PER_CPU(struct deferred_flush_data, deferred_flush);
+static DEFINE_PER_CPU(struct deferred_flush_data, deferred_flush);
 
 /* bitmap for indexing intel_iommus */
 static int g_num_of_iommus;
@@ -3710,10 +3710,8 @@ static void add_unmap(struct dmar_domain *dom, unsigned long iova_pfn,
 	struct intel_iommu *iommu;
 	struct deferred_flush_entry *entry;
 	struct deferred_flush_data *flush_data;
-	unsigned int cpuid;
 
-	cpuid = get_cpu();
-	flush_data = per_cpu_ptr(&deferred_flush, cpuid);
+	flush_data = raw_cpu_ptr(&deferred_flush);
 
 	/* Flush all CPUs' entries to avoid deferring too much.  If
 	 * this becomes a bottleneck, can just flush us, and rely on
@@ -3746,8 +3744,6 @@ static void add_unmap(struct dmar_domain *dom, unsigned long iova_pfn,
 	}
 	flush_data->size++;
 	spin_unlock_irqrestore(&flush_data->lock, flags);
-
-	put_cpu();
 }
 
 static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)