
Commit 0ae15132 authored by Glauber Costa, committed by Linus Torvalds

mm: vmalloc search restart fix

Currently, vmalloc restarts its search for a free area when it cannot find one, because some areas are lazily freed and may be reclaimable by now.  However, the current implementation restarts the search from the last failing address, which is pretty much by definition at the end of the address space.  So, we fail.

This patch instead restarts the search from the beginning of the requested vstart address.  This fixes the regression in running KVM virtual machines for me, described in http://lkml.org/lkml/2008/10/28/349, caused by commit db64fe02.

Signed-off-by: Glauber Costa <glommer@redhat.com>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 496850e5
+2 −2
@@ -324,14 +324,14 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 
 	BUG_ON(size & ~PAGE_MASK);
 
-	addr = ALIGN(vstart, align);
-
 	va = kmalloc_node(sizeof(struct vmap_area),
 			gfp_mask & GFP_RECLAIM_MASK, node);
 	if (unlikely(!va))
 		return ERR_PTR(-ENOMEM);
 
 retry:
+	addr = ALIGN(vstart, align);
+
 	spin_lock(&vmap_area_lock);
 	/* XXX: could have a last_hole cache */
 	n = vmap_area_root.rb_node;