
Commit 6484eb3e authored by Mel Gorman, committed by Linus Torvalds

page allocator: do not check NUMA node ID when the caller knows the node is valid



Callers of alloc_pages_node() can optionally specify -1 as a node to mean
"allocate from the current node".  However, a number of the callers in
fast paths know for a fact their node is valid.  To avoid a comparison and
branch, this patch adds alloc_pages_exact_node() that only checks the nid
with VM_BUG_ON().  Callers that know their node is valid are then
converted.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
Acked-by: Paul Mundt <lethal@linux-sh.org>	[for the SLOB NUMA bits]
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent b3c466ce
+1 −1
@@ -1131,7 +1131,7 @@ sba_alloc_coherent (struct device *dev, size_t size, dma_addr_t *dma_handle, gfp
#ifdef CONFIG_NUMA
	{
		struct page *page;
-		page = alloc_pages_node(ioc->node == MAX_NUMNODES ?
+		page = alloc_pages_exact_node(ioc->node == MAX_NUMNODES ?
		                        numa_node_id() : ioc->node, flags,
		                        get_order(size));

+1 −2
@@ -1829,8 +1829,7 @@ ia64_mca_cpu_init(void *cpu_data)
			data = mca_bootmem();
			first_time = 0;
		} else
-			data = page_address(alloc_pages_node(numa_node_id(),
-					GFP_KERNEL, get_order(sz)));
+			data = __get_free_pages(GFP_KERNEL, get_order(sz));
		if (!data)
			panic("Could not allocate MCA memory for cpu %d\n",
					cpu);
+2 −1
@@ -98,7 +98,8 @@ static int uncached_add_chunk(struct uncached_pool *uc_pool, int nid)

	/* attempt to allocate a granule's worth of cached memory pages */

-	page = alloc_pages_node(nid, GFP_KERNEL | __GFP_ZERO | GFP_THISNODE,
+	page = alloc_pages_exact_node(nid,
+				GFP_KERNEL | __GFP_ZERO | GFP_THISNODE,
				IA64_GRANULE_SHIFT-PAGE_SHIFT);
	if (!page) {
		mutex_unlock(&uc_pool->add_chunk_mutex);
+2 −1
@@ -90,7 +90,8 @@ static void *sn_dma_alloc_coherent(struct device *dev, size_t size,
	 */
	node = pcibus_to_node(pdev->bus);
	if (likely(node >=0)) {
-		struct page *p = alloc_pages_node(node, flags, get_order(size));
+		struct page *p = alloc_pages_exact_node(node,
+						flags, get_order(size));

		if (likely(p))
			cpuaddr = page_address(p);
+2 −2
@@ -122,7 +122,7 @@ static int __init cbe_ptcal_enable_on_node(int nid, int order)

	area->nid = nid;
	area->order = order;
-	area->pages = alloc_pages_node(area->nid, GFP_KERNEL | GFP_THISNODE,
+	area->pages = alloc_pages_exact_node(area->nid, GFP_KERNEL|GFP_THISNODE,
						area->order);

	if (!area->pages) {