
Commit 0f265741 authored by Linus Torvalds

Merge branch 'hughd-fixes' (patches from Hugh Dickins)

Merge VM fixes from Hugh Dickins:
 "I get the impression that Andrew is away or busy at the moment, so I'm
  going to send you three independent uncontroversial little mm fixes
  directly - though none is strictly a 4.8 regression fix.

   - "shmem: fix tmpfs to handle the huge= option properly" from Toshi
     Kani is a one-liner to fix a major embarrassment in 4.8's hugepages
     on tmpfs feature: although Hillf pointed it out in June, somehow
     both Kirill and I repeatedly dropped the ball on this one.  You
     might wonder if the feature got tested at all with that bug in:
     yes, it did, but for wider testing coverage, Kirill and I had each
     relied too much on an override which bypasses that condition.

   - "huge tmpfs: fix Committed_AS leak" is just a run-of-the-mill
     accounting fix in the same feature.

   - "mm: delete unnecessary and unsafe init_tlb_ubc()" is an unrelated
     fix to 4.3's TLB flush batching in reclaim: the bug would be rare,
     and none of us will be shamed if this one misses 4.8; but it got
     such a quick ack from Mel today that I'm inclined to offer it along
     with the first two"

* emailed patches from Hugh Dickins <hughd@google.com>:
  mm: delete unnecessary and unsafe init_tlb_ubc()
  huge tmpfs: fix Committed_AS leak
  shmem: fix tmpfs to handle the huge= option properly
parents bd5dbcb4 b385d21f
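
Editor's note on the Committed_AS fix before the diffs below: shmem_charge() accounts blocks up front (that accounting is what feeds Committed_AS) and must undo it on its failure path, just as shmem_uncharge() must undo it when pages go away; both mm/shmem.c hunks add the missing shmem_unacct_blocks() call. A minimal userspace sketch of that invariant, using hypothetical names (committed, charge, uncharge) rather than the kernel's, not kernel code:

#include <stdio.h>
#include <stdbool.h>

static long committed;              /* stands in for Committed_AS */
static long used, max_blocks = 4;

static bool charge(long pages)
{
	committed += pages;         /* account first, as shmem_charge() does */
	if (used + pages > max_blocks) {
		committed -= pages; /* the fix: roll back accounting on failure */
		return false;
	}
	used += pages;
	return true;
}

static void uncharge(long pages)
{
	used -= pages;
	committed -= pages;         /* the fix: balance the earlier charge */
}

int main(void)
{
	charge(3);
	charge(3);                  /* fails; without the rollback, committed leaks */
	uncharge(3);
	printf("committed = %ld (must be 0)\n", committed);
	return 0;
}

Without the two rollback lines, every failed charge and every uncharge leaves the counter permanently inflated, which is exactly the leak the second patch plugs.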
mm/shmem.c  +3 −2
@@ -270,7 +270,7 @@ bool shmem_charge(struct inode *inode, long pages)
 		info->alloced -= pages;
 		shmem_recalc_inode(inode);
 		spin_unlock_irqrestore(&info->lock, flags);
-
+		shmem_unacct_blocks(info->flags, pages);
 		return false;
 	}
 	percpu_counter_add(&sbinfo->used_blocks, pages);
@@ -291,6 +291,7 @@ void shmem_uncharge(struct inode *inode, long pages)
 
 	if (sbinfo->max_blocks)
 		percpu_counter_sub(&sbinfo->used_blocks, pages);
+	shmem_unacct_blocks(info->flags, pages);
 }
 
 /*
@@ -1980,7 +1981,7 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 				return addr;
 			sb = shm_mnt->mnt_sb;
 		}
-		if (SHMEM_SB(sb)->huge != SHMEM_HUGE_NEVER)
+		if (SHMEM_SB(sb)->huge == SHMEM_HUGE_NEVER)
 			return addr;
 	}
 
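The huge= one-liner above is an inverted guard: shmem_get_unmapped_area() should return the regular address early when huge pages are disabled, and only go on to compute a huge-page-aligned address otherwise; with "!=" it did the opposite unless the override mentioned in the merge message was set. A standalone sketch of the corrected control flow, assuming hypothetical names (enum huge_mode, get_area) and a simplified 2MB align-down in place of the kernel's real placement logic:

#include <stdio.h>

enum huge_mode { HUGE_NEVER, HUGE_ALWAYS };

/* Sketch of the fixed flow: bail out with the plain address when
 * huge pages are off; otherwise fall through to alignment. */
static unsigned long get_area(enum huge_mode huge, unsigned long addr)
{
	if (huge == HUGE_NEVER)  /* was "!=", which skipped alignment
				    exactly when huge pages were enabled */
		return addr;
	return addr & ~((1UL << 21) - 1);  /* align down to a 2MB boundary */
}

int main(void)
{
	printf("%#lx\n", get_area(HUGE_NEVER,  0x7f0000201000UL));
	printf("%#lx\n", get_area(HUGE_ALWAYS, 0x7f0000201000UL));
	return 0;
}

With the inverted test, the second call would have returned the unaligned address, so mappings never qualified for huge pages in normal configurations.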
mm/vmscan.c  +0 −19
@@ -2303,23 +2303,6 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 	}
 }
 
-#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
-static void init_tlb_ubc(void)
-{
-	/*
-	 * This deliberately does not clear the cpumask as it's expensive
-	 * and unnecessary. If there happens to be data in there then the
-	 * first SWAP_CLUSTER_MAX pages will send an unnecessary IPI and
-	 * then will be cleared.
-	 */
-	current->tlb_ubc.flush_required = false;
-}
-#else
-static inline void init_tlb_ubc(void)
-{
-}
-#endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
-
 /*
  * This is a basic per-node page freer.  Used by both kswapd and direct reclaim.
  */
@@ -2355,8 +2338,6 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
 	scan_adjusted = (global_reclaim(sc) && !current_is_kswapd() &&
 			 sc->priority == DEF_PRIORITY);
 
-	init_tlb_ubc();
-
 	blk_start_plug(&plug);
 	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
 					nr[LRU_INACTIVE_FILE]) {
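
On why init_tlb_ubc() was unsafe to keep: tlb_ubc is per-task state, and flush_required may already be set by pages queued for batched TLB flushing earlier in the same reclaim; resetting it discards a pending flush and can leave stale TLB entries pointing at pages about to be freed. A hedged userspace sketch of that hazard, using hypothetical names (queue_flush, flush_pending), not the kernel API:

#include <stdio.h>
#include <stdbool.h>

static bool flush_required;     /* stands in for current->tlb_ubc.flush_required */
static int  queued;

static void queue_flush(void)   /* a page was unmapped; a TLB flush is now owed */
{
	queued++;
	flush_required = true;
}

static void flush_pending(void) /* stands in for try_to_unmap_flush() */
{
	if (!flush_required)
		return;         /* nothing owed -- or so we believe */
	printf("flushed %d pending entries\n", queued);
	queued = 0;
	flush_required = false;
}

int main(void)
{
	queue_flush();          /* earlier pass queues a flush */
	flush_required = false; /* what init_tlb_ubc() did: pending work lost */
	flush_pending();        /* silently skips the flush: stale TLB entries */
	printf("still queued: %d\n", queued);
	return 0;
}

Since the flag starts false at task initialization and is cleared again after every flush, the re-initialization bought nothing and only risked losing work, so deleting it is the safe option.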