
Commit 603e7729 authored by Jan Kara, committed by Roland Dreier

IB/qib: Convert qib_user_sdma_pin_pages() to use get_user_pages_fast()



qib_user_sdma_queue_pkts() gets called with mmap_sem held for
writing. Except for get_user_pages() deep down in
qib_user_sdma_pin_pages(), we don't seem to need mmap_sem at all.
More interestingly, qib_user_sdma_queue_pkts() (and also
qib_user_sdma_coalesce(), called somewhat later) call copy_from_user(),
which can hit a page fault, and we then deadlock trying to acquire
mmap_sem again while handling that fault.

So just make qib_user_sdma_pin_pages() use get_user_pages_fast() and
leave the mmap_sem locking to the mm layer.

This deadlock has actually been observed in the wild when the node
is under memory pressure.

Cc: <stable@vger.kernel.org>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Roland Dreier <roland@purestorage.com>
parent 4adcf7fb
+1 −5
@@ -594,8 +594,7 @@ static int qib_user_sdma_pin_pages(const struct qib_devdata *dd,
 		else
 			j = npages;
 
-		ret = get_user_pages(current, current->mm, addr,
-			     j, 0, 1, pages, NULL);
+		ret = get_user_pages_fast(addr, j, 0, pages);
 		if (ret != j) {
 			i = 0;
 			j = ret;
@@ -1294,11 +1293,8 @@ int qib_user_sdma_writev(struct qib_ctxtdata *rcd,
 		int mxp = 8;
 		int ndesc = 0;
 
-		down_write(&current->mm->mmap_sem);
 		ret = qib_user_sdma_queue_pkts(dd, ppd, pq,
 				iov, dim, &list, &mxp, &ndesc);
-		up_write(&current->mm->mmap_sem);
 
 		if (ret < 0)
 			goto done_unlock;
 		else {