
Commit e9308884 authored by Jérôme Glisse, committed by Dave Airlie

drm/ttm: improve uncached page deallocation.



Calls to set_memory_wb() incur a heavy TLB flush and IPI cost. To
minimize that cost, wait until the pool grows beyond the batch size
before draining it.
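
The pattern at work is plain hysteresis: keep accepting freed pages past
the cap, and only pay the set_memory_wb() cost once a full batch of
excess has built up. Below is a minimal userspace sketch of that idea,
not the TTM code itself: POOL_MAX and BATCH are hypothetical stand-ins
for _manager->options.max_size and NUM_PAGES_TO_ALLOC, and the expensive
set_memory_wb() pass is reduced to a call counter.

#include <stdio.h>

#define POOL_MAX 64	/* stand-in for _manager->options.max_size */
#define BATCH    16	/* stand-in for NUM_PAGES_TO_ALLOC */

/* Eager policy (old behaviour, simplified): trim the excess as soon as
 * the pool exceeds the cap, however small that excess is. */
static void drain_eager(unsigned int *nfree, unsigned int *calls)
{
	if (*nfree > POOL_MAX) {
		(*calls)++;		/* one set_memory_wb()-style pass */
		*nfree = POOL_MAX;
	}
}

/* Batched policy (this patch): wait until a full batch of excess has
 * built up, so each expensive pass covers at least BATCH pages. */
static void drain_batched(unsigned int *nfree, unsigned int *calls)
{
	if (*nfree >= POOL_MAX + BATCH) {
		(*calls)++;
		*nfree = POOL_MAX;
	}
}

int main(void)
{
	unsigned int eager_free = 0, eager_calls = 0;
	unsigned int batch_free = 0, batch_calls = 0;

	/* Return 200 pages one at a time and count how often each
	 * policy pays the TLB-flush/IPI price. */
	for (int i = 0; i < 200; i++) {
		eager_free++;
		drain_eager(&eager_free, &eager_calls);
		batch_free++;
		drain_batched(&batch_free, &batch_calls);
	}
	printf("eager:   %u expensive passes\n", eager_calls);
	printf("batched: %u expensive passes\n", batch_calls);
	return 0;
}

In this toy run the eager policy pays 136 expensive passes against 8
for the batched one, at the cost of temporarily holding up to one extra
batch of pages in the pool.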

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-and-Tested-by: Michel Dänzer <michel@daenzer.net>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
parent ef2b7317
drivers/gpu/drm/ttm/ttm_page_alloc_dma.c  +6 −6
@@ -963,13 +963,13 @@ void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev)
 	} else {
 		pool->npages_free += count;
 		list_splice(&ttm_dma->pages_list, &pool->free_list);
-		if (pool->npages_free > _manager->options.max_size) {
+		/*
+		 * Wait to have at at least NUM_PAGES_TO_ALLOC number of pages
+		 * to free in order to minimize calls to set_memory_wb().
+		 */
+		if (pool->npages_free >= (_manager->options.max_size +
+					  NUM_PAGES_TO_ALLOC))
 			npages = pool->npages_free - _manager->options.max_size;
-			/* free at least NUM_PAGES_TO_ALLOC number of pages
-			 * to reduce calls to set_memory_wb */
-			if (npages < NUM_PAGES_TO_ALLOC)
-				npages = NUM_PAGES_TO_ALLOC;
-		}
 	}
 	spin_unlock_irqrestore(&pool->lock, irq_flags);
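
The new test changes two things at once: the drain now fires only after
at least NUM_PAGES_TO_ALLOC pages of excess have accumulated, so each
set_memory_wb() pass is amortized over a full batch, and npages is set
to the excess alone, so the pool is never trimmed below
options.max_size the way the old clamp up to NUM_PAGES_TO_ALLOC could.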