
Commit 41d34865 authored by Shiraz Saleem, committed by Jason Gunthorpe

RDMA/mthca: Use correct sizing on buffers holding page DMA addresses



The buffer that holds the page DMA addresses is sized off umem->nmap.
This can potentially cause out-of-bounds accesses on the PBL array when
iterating the umem DMA-mapped SGL, because if umem pages are combined,
umem->nmap can be much lower than the number of system pages in umem.

Use ib_umem_num_pages() to size this buffer.
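
For context, a minimal sketch of why umem->nmap undercounts, assuming the
ib_umem layout of this kernel era (umem->sg_head, umem->nmap) and the
for_each_sg_dma_page() iterator added in the same series; fill_pbl() is a
hypothetical helper for illustration, not code from this driver:

/*
 * Sketch only: assumes umem->sg_head / umem->nmap and the
 * for_each_sg_dma_page() iterator; fill_pbl() is hypothetical.
 */
#include <linux/scatterlist.h>
#include <rdma/ib_umem.h>

static int fill_pbl(struct ib_umem *umem, u64 *pages)
{
	struct sg_dma_page_iter sg_iter;
	int n = 0;

	/*
	 * One address is recorded per PAGE_SIZE chunk of the mapping.
	 * When contiguous pages are combined into a single SG entry at
	 * DMA-map time, this loop runs more times than umem->nmap (the
	 * mapped SG entry count), so "pages" must be sized with
	 * ib_umem_num_pages(umem), not umem->nmap.
	 */
	for_each_sg_dma_page(umem->sg_head.sgl, &sg_iter, umem->nmap, 0)
		pages[n++] = sg_page_iter_dma_address(&sg_iter);

	return n;
}

Because the loop executes once per system page while umem->nmap counts
mapped scatterlist entries, ib_umem_num_pages() is the value that matches
the loop's trip count.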

Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
parent 5f818d67
+1 −1
@@ -914,7 +914,7 @@ static struct ib_mr *mthca_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 		goto err;
 	}
 
-	n = mr->umem->nmap;
+	n = ib_umem_num_pages(mr->umem);
 
 	mr->mtt = mthca_alloc_mtt(dev, n);
 	if (IS_ERR(mr->mtt)) {