
Commit 349f2ccf authored by Chris Wilson

drm/i915: Move the mb() following release-mmap into release-mmap



As paranoia, we want to ensure that the CPU's PTEs have been revoked for
the object before we return from i915_gem_release_mmap(). This allows us
to rely on there being no outstanding memory accesses from userspace
and guarantees serialisation of the code against concurrent access just
by calling i915_gem_release_mmap().

v2: Reduce the mb() into a wmb() following the revoke.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: "Goel, Akash" <akash.goel@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1460565315-7748-13-git-send-email-chris@chris-wilson.co.uk
parent a687a43a
+16 −3
@@ -1936,11 +1936,27 @@ int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 void
 i915_gem_release_mmap(struct drm_i915_gem_object *obj)
 {
+	/* Serialisation between user GTT access and our code depends upon
+	 * revoking the CPU's PTE whilst the mutex is held. The next user
+	 * pagefault then has to wait until we release the mutex.
+	 */
+	lockdep_assert_held(&obj->base.dev->struct_mutex);
+
 	if (!obj->fault_mappable)
 		return;
 
 	drm_vma_node_unmap(&obj->base.vma_node,
 			   obj->base.dev->anon_inode->i_mapping);
 
+	/* Ensure that the CPU's PTE are revoked and there are not outstanding
+	 * memory transactions from userspace before we return. The TLB
+	 * flushing implied above by changing the PTE above *should* be
+	 * sufficient, an extra barrier here just provides us with a bit
+	 * of paranoid documentation about our requirement to serialise
+	 * memory writes before touching registers / GSM.
+	 */
+	wmb();
+
 	obj->fault_mappable = false;
 }
 
@@ -3324,9 +3340,6 @@ static void i915_gem_object_finish_gtt(struct drm_i915_gem_object *obj)
 	if ((obj->base.read_domains & I915_GEM_DOMAIN_GTT) == 0)
 		return;
 
-	/* Wait for any direct GTT access to complete */
-	mb();
-
 	old_read_domains = obj->base.read_domains;
 	old_write_domain = obj->base.write_domain;
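
For context on the barrier choice above: wmb() orders all prior writes (here, the PTE revoke performed via drm_vma_node_unmap()) before any subsequent writes, which is all the serialisation i915_gem_release_mmap() needs, whereas the heavier mb() removed from i915_gem_object_finish_gtt() also ordered reads. The following is a minimal, self-contained userspace sketch of the same store-ordering pattern, offered as an illustration only: it is not i915 code, the struct and function names are invented, and C11's atomic_thread_fence(memory_order_release) stands in for the kernel's wmb().

/* Illustrative sketch, NOT i915 code: a release fence orders the
 * "revoke" store before the bookkeeping store, mirroring the
 * revoke-then-wmb() pattern in i915_gem_release_mmap() above.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_obj {			/* hypothetical stand-in for the GEM object */
	int ptes_revoked;		/* stands in for the CPU's PTEs */
	atomic_bool fault_mappable;
};

static void release_mmap(struct fake_obj *obj)
{
	if (!atomic_load_explicit(&obj->fault_mappable, memory_order_acquire))
		return;

	obj->ptes_revoked = 1;		/* analogue of drm_vma_node_unmap() */

	/* Analogue of wmb(): any thread that later observes
	 * fault_mappable == false is also guaranteed to observe the
	 * revoke store above.
	 */
	atomic_thread_fence(memory_order_release);

	atomic_store_explicit(&obj->fault_mappable, false,
			      memory_order_relaxed);
}

int main(void)
{
	struct fake_obj obj = { .ptes_revoked = 0, .fault_mappable = true };

	release_mmap(&obj);
	printf("revoked=%d, mappable=%d\n",
	       obj.ptes_revoked, atomic_load(&obj.fault_mappable));
	return 0;
}

Built with e.g. cc -std=c11 sketch.c, an observer that sees fault_mappable == false is guaranteed to also see ptes_revoked == 1, which is the same shape of guarantee the commit documents for callers of i915_gem_release_mmap().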