
Commit be9b79e8 authored by Parav Pandit, committed by Greg Kroah-Hartman

virtio: Improve vq->broken access to avoid any compiler optimization



[ Upstream commit 60f0779862e4ab943810187752c462e85f5fa371 ]

Currently the vq->broken field is read by virtqueue_is_broken() in a
busy loop in one context, by virtnet_send_command().

vq->broken is set to true in another process context by
virtio_break_device(). Reader and writer access it without any
synchronization. This may allow a compiler optimization in which
vq->broken is read only once.

Hence, force reading vq->broken on each invocation of
virtqueue_is_broken(), and also force the write, so that the
update is visible to the readers.

This is a theoretical fix for a problem that has not yet been
encountered in the field.

Signed-off-by: Parav Pandit <parav@nvidia.com>
Link: https://lore.kernel.org/r/20210721142648.1525924-2-parav@nvidia.com
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
parent 0d4ba693
drivers/virtio/virtio_ring.c: +4 −2
@@ -2268,7 +2268,7 @@ bool virtqueue_is_broken(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
-	return vq->broken;
+	return READ_ONCE(vq->broken);
 }
 EXPORT_SYMBOL_GPL(virtqueue_is_broken);

@@ -2283,7 +2283,9 @@ void virtio_break_device(struct virtio_device *dev)
 	spin_lock(&dev->vqs_list_lock);
 	list_for_each_entry(_vq, &dev->vqs, list) {
 		struct vring_virtqueue *vq = to_vvq(_vq);
-		vq->broken = true;
+
+		/* Pairs with READ_ONCE() in virtqueue_is_broken(). */
+		WRITE_ONCE(vq->broken, true);
 	}
 	spin_unlock(&dev->vqs_list_lock);
 }