
Commit 9452b2c2 authored by Greg Kroah-Hartman

Merge 4.9.51 into android-4.9



Changes in 4.9.51
	ipv6: accept 64k - 1 packet length in ip6_find_1stfragopt()
	ipv6: add rcu grace period before freeing fib6_node
	ipv6: fix sparse warning on rt6i_node
	macsec: add genl family module alias
	udp: on peeking bad csum, drop packets even if not at head
	fsl/man: Inherit parent device and of_node
	sctp: Avoid out-of-bounds reads from address storage
	qlge: avoid memcpy buffer overflow
	netvsc: fix deadlock between link status and removal
	cxgb4: Fix stack out-of-bounds read due to wrong size to t4_record_mbox()
	packet: Don't write vnet header beyond end of buffer
	kcm: do not attach PF_KCM sockets to avoid deadlock
	Revert "net: phy: Correctly process PHY_HALTED in phy_stop_machine()"
	tcp: initialize rcv_mss to TCP_MIN_MSS instead of 0
	mlxsw: spectrum: Forbid linking to devices that have uppers
	bridge: switchdev: Clear forward mark when transmitting packet
	Revert "net: use lib/percpu_counter API for fragmentation mem accounting"
	Revert "net: fix percpu memory leaks"
	gianfar: Fix Tx flow control deactivation
	vhost_net: correctly check tx avail during rx busy polling
	ip6_gre: update mtu properly in ip6gre_err
	ipv6: fix memory leak with multiple tables during netns destruction
	ipv6: fix typo in fib6_net_exit()
	sctp: fix missing wake ups in some situations
	ip_tunnel: fix setting ttl and tos value in collect_md mode
	f2fs: let fill_super handle roll-forward errors
	f2fs: check hot_data for roll-forward recovery
	x86/fsgsbase/64: Fully initialize FS and GS state in start_thread_common
	x86/fsgsbase/64: Report FSBASE and GSBASE correctly in core dumps
	x86/switch_to/64: Rewrite FS/GS switching yet again to fix AMD CPUs
	xfs: Move handling of missing page into one place in xfs_find_get_desired_pgoff()
	xfs: fix spurious spin_is_locked() assert failures on non-smp kernels
	xfs: push buffer of flush locked dquot to avoid quotacheck deadlock
	xfs: try to avoid blowing out the transaction reservation when bunmaping a shared extent
	xfs: release bli from transaction properly on fs shutdown
	xfs: remove bli from AIL before release on transaction abort
	xfs: don't allow bmap on rt files
	xfs: free uncommitted transactions during log recovery
	xfs: free cowblocks and retry on buffered write ENOSPC
	xfs: don't crash on unexpected holes in dir/attr btrees
	xfs: check _btree_check_block value
	xfs: set firstfsb to NULLFSBLOCK before feeding it to _bmapi_write
	xfs: check _alloc_read_agf buffer pointer before using
	xfs: fix quotacheck dquot id overflow infinite loop
	xfs: fix multi-AG deadlock in xfs_bunmapi
	xfs: Fix per-inode DAX flag inheritance
	xfs: fix inobt inode allocation search optimization
	xfs: clear MS_ACTIVE after finishing log recovery
	xfs: don't leak quotacheck dquots when cow recovery
	iomap: fix integer truncation issues in the zeroing and dirtying helpers
	xfs: write unmount record for ro mounts
	xfs: toggle readonly state around xfs_log_mount_finish
	xfs: remove xfs_trans_ail_delete_bulk
	xfs: Add infrastructure needed for error propagation during buffer IO failure
	xfs: Properly retry failed inode items in case of error during buffer writeback
	xfs: fix recovery failure when log record header wraps log end
	xfs: always verify the log tail during recovery
	xfs: fix log recovery corruption error due to tail overwrite
	xfs: handle -EFSCORRUPTED during head/tail verification
	xfs: add log recovery tracepoint for head/tail
	xfs: stop searching for free slots in an inode chunk when there are none
	xfs: evict all inodes involved with log redo item
	xfs: check for race with xfs_reclaim_inode() in xfs_ifree_cluster()
	xfs: open-code xfs_buf_item_dirty()
	xfs: remove unnecessary dirty bli format check for ordered bufs
	xfs: ordered buffer log items are never formatted
	xfs: refactor buffer logging into buffer dirtying helper
	xfs: don't log dirty ranges for ordered buffers
	xfs: skip bmbt block ino validation during owner change
	xfs: move bmbt owner change to last step of extent swap
	xfs: disallow marking previously dirty buffers as ordered
	xfs: relog dirty buffers during swapext bmbt owner change
	xfs: disable per-inode DAX flag
	xfs: fix incorrect log_flushed on fsync
	xfs: don't set v3 xflags for v2 inodes
	xfs: open code end_buffer_async_write in xfs_finish_page_writeback
	xfs: use kmem_free to free return value of kmem_zalloc
	md/raid5: release/flush io in raid5_do_work()
	xfs: fix compiler warnings
	ipv6: Fix may be used uninitialized warning in rt6_check
	Linux 4.9.51

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
parents faf1269d 089d7720
Makefile +1 −1
VERSION = 4
PATCHLEVEL = 9
-SUBLEVEL = 50
+SUBLEVEL = 51
EXTRAVERSION =
NAME = Roaring Lionus

arch/x86/include/asm/elf.h +3 −2
@@ -204,6 +204,7 @@ void set_personality_ia32(bool);

#define ELF_CORE_COPY_REGS(pr_reg, regs)			\
do {								\
+	unsigned long base;					\
	unsigned v;						\
	(pr_reg)[0] = (regs)->r15;				\
	(pr_reg)[1] = (regs)->r14;				\
@@ -226,8 +227,8 @@ do { \
	(pr_reg)[18] = (regs)->flags;				\
	(pr_reg)[19] = (regs)->sp;				\
	(pr_reg)[20] = (regs)->ss;				\
-	(pr_reg)[21] = current->thread.fsbase;			\
-	(pr_reg)[22] = current->thread.gsbase;			\
+	rdmsrl(MSR_FS_BASE, base); (pr_reg)[21] = base;		\
+	rdmsrl(MSR_KERNEL_GS_BASE, base); (pr_reg)[22] = base;	\
	asm("movl %%ds,%0" : "=r" (v)); (pr_reg)[23] = v;	\
	asm("movl %%es,%0" : "=r" (v)); (pr_reg)[24] = v;	\
	asm("movl %%fs,%0" : "=r" (v)); (pr_reg)[25] = v;	\
arch/x86/kernel/process_64.c +131 −105
@@ -136,6 +136,123 @@ void release_thread(struct task_struct *dead_task)
	}
}

+enum which_selector {
+	FS,
+	GS
+};
+
+/*
+ * Saves the FS or GS base for an outgoing thread if FSGSBASE extensions are
+ * not available.  The goal is to be reasonably fast on non-FSGSBASE systems.
+ * It's forcibly inlined because it'll generate better code and this function
+ * is hot.
+ */
+static __always_inline void save_base_legacy(struct task_struct *prev_p,
+					     unsigned short selector,
+					     enum which_selector which)
+{
+	if (likely(selector == 0)) {
+		/*
+		 * On Intel (without X86_BUG_NULL_SEG), the segment base could
+		 * be the pre-existing saved base or it could be zero.  On AMD
+		 * (with X86_BUG_NULL_SEG), the segment base could be almost
+		 * anything.
+		 *
+		 * This branch is very hot (it's hit twice on almost every
+		 * context switch between 64-bit programs), and avoiding
+		 * the RDMSR helps a lot, so we just assume that whatever
+		 * value is already saved is correct.  This matches historical
+		 * Linux behavior, so it won't break existing applications.
+		 *
+		 * To avoid leaking state, on non-X86_BUG_NULL_SEG CPUs, if we
+		 * report that the base is zero, it needs to actually be zero:
+		 * see the corresponding logic in load_seg_legacy.
+		 */
+	} else {
+		/*
+		 * If the selector is 1, 2, or 3, then the base is zero on
+		 * !X86_BUG_NULL_SEG CPUs and could be anything on
+		 * X86_BUG_NULL_SEG CPUs.  In the latter case, Linux
+		 * has never attempted to preserve the base across context
+		 * switches.
+		 *
+		 * If selector > 3, then it refers to a real segment, and
+		 * saving the base isn't necessary.
+		 */
+		if (which == FS)
+			prev_p->thread.fsbase = 0;
+		else
+			prev_p->thread.gsbase = 0;
+	}
+}
+
+static __always_inline void save_fsgs(struct task_struct *task)
+{
+	savesegment(fs, task->thread.fsindex);
+	savesegment(gs, task->thread.gsindex);
+	save_base_legacy(task, task->thread.fsindex, FS);
+	save_base_legacy(task, task->thread.gsindex, GS);
+}
+
+static __always_inline void loadseg(enum which_selector which,
+				    unsigned short sel)
+{
+	if (which == FS)
+		loadsegment(fs, sel);
+	else
+		load_gs_index(sel);
+}
+
+static __always_inline void load_seg_legacy(unsigned short prev_index,
+					    unsigned long prev_base,
+					    unsigned short next_index,
+					    unsigned long next_base,
+					    enum which_selector which)
+{
+	if (likely(next_index <= 3)) {
+		/*
+		 * The next task is using 64-bit TLS, is not using this
+		 * segment at all, or is having fun with arcane CPU features.
+		 */
+		if (next_base == 0) {
+			/*
+			 * Nasty case: on AMD CPUs, we need to forcibly zero
+			 * the base.
+			 */
+			if (static_cpu_has_bug(X86_BUG_NULL_SEG)) {
+				loadseg(which, __USER_DS);
+				loadseg(which, next_index);
+			} else {
+				/*
+				 * We could try to exhaustively detect cases
+				 * under which we can skip the segment load,
+				 * but there's really only one case that matters
+				 * for performance: if both the previous and
+				 * next states are fully zeroed, we can skip
+				 * the load.
+				 *
+				 * (This assumes that prev_base == 0 has no
+				 * false positives.  This is the case on
+				 * Intel-style CPUs.)
+				 */
+				if (likely(prev_index | next_index | prev_base))
+					loadseg(which, next_index);
+			}
+		} else {
+			if (prev_index != next_index)
+				loadseg(which, next_index);
+			wrmsrl(which == FS ? MSR_FS_BASE : MSR_KERNEL_GS_BASE,
+			       next_base);
+		}
+	} else {
+		/*
+		 * The next task is using a real segment.  Loading the selector
+		 * is sufficient.
+		 */
+		loadseg(which, next_index);
+	}
+}
+
int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
		unsigned long arg, struct task_struct *p, unsigned long tls)
{
@@ -216,10 +333,19 @@ start_thread_common(struct pt_regs *regs, unsigned long new_ip,
		    unsigned long new_sp,
		    unsigned int _cs, unsigned int _ss, unsigned int _ds)
{
+	WARN_ON_ONCE(regs != current_pt_regs());
+
+	if (static_cpu_has(X86_BUG_NULL_SEG)) {
+		/* Loading zero below won't clear the base. */
+		loadsegment(fs, __USER_DS);
+		load_gs_index(__USER_DS);
+	}
+
	loadsegment(fs, 0);
	loadsegment(es, _ds);
	loadsegment(ds, _ds);
	load_gs_index(0);

	regs->ip		= new_ip;
	regs->sp		= new_sp;
	regs->cs		= _cs;
@@ -264,7 +390,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
	struct fpu *next_fpu = &next->fpu;
	int cpu = smp_processor_id();
	struct tss_struct *tss = &per_cpu(cpu_tss, cpu);
-	unsigned prev_fsindex, prev_gsindex;
	fpu_switch_t fpu_switch;

	fpu_switch = switch_fpu_prepare(prev_fpu, next_fpu, cpu);
@@ -274,8 +399,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
	 *
	 * (e.g. xen_load_tls())
	 */
-	savesegment(fs, prev_fsindex);
-	savesegment(gs, prev_gsindex);
+	save_fsgs(prev_p);

	/*
	 * Load TLS before restoring any segments so that segment loads
@@ -314,108 +438,10 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
	if (unlikely(next->ds | prev->ds))
		loadsegment(ds, next->ds);

-	/*
-	 * Switch FS and GS.
-	 *
-	 * These are even more complicated than DS and ES: they have
-	 * 64-bit bases are that controlled by arch_prctl.  The bases
-	 * don't necessarily match the selectors, as user code can do
-	 * any number of things to cause them to be inconsistent.
-	 *
-	 * We don't promise to preserve the bases if the selectors are
-	 * nonzero.  We also don't promise to preserve the base if the
-	 * selector is zero and the base doesn't match whatever was
-	 * most recently passed to ARCH_SET_FS/GS.  (If/when the
-	 * FSGSBASE instructions are enabled, we'll need to offer
-	 * stronger guarantees.)
-	 *
-	 * As an invariant,
-	 * (fsbase != 0 && fsindex != 0) || (gsbase != 0 && gsindex != 0) is
-	 * impossible.
-	 */
-	if (next->fsindex) {
-		/* Loading a nonzero value into FS sets the index and base. */
-		loadsegment(fs, next->fsindex);
-	} else {
-		if (next->fsbase) {
-			/* Next index is zero but next base is nonzero. */
-			if (prev_fsindex)
-				loadsegment(fs, 0);
-			wrmsrl(MSR_FS_BASE, next->fsbase);
-		} else {
-			/* Next base and index are both zero. */
-			if (static_cpu_has_bug(X86_BUG_NULL_SEG)) {
-				/*
-				 * We don't know the previous base and can't
-				 * find out without RDMSR.  Forcibly clear it.
-				 */
-				loadsegment(fs, __USER_DS);
-				loadsegment(fs, 0);
-			} else {
-				/*
-				 * If the previous index is zero and ARCH_SET_FS
-				 * didn't change the base, then the base is
-				 * also zero and we don't need to do anything.
-				 */
-				if (prev->fsbase || prev_fsindex)
-					loadsegment(fs, 0);
-			}
-		}
-	}
-	/*
-	 * Save the old state and preserve the invariant.
-	 * NB: if prev_fsindex == 0, then we can't reliably learn the base
-	 * without RDMSR because Intel user code can zero it without telling
-	 * us and AMD user code can program any 32-bit value without telling
-	 * us.
-	 */
-	if (prev_fsindex)
-		prev->fsbase = 0;
-	prev->fsindex = prev_fsindex;
-
-	if (next->gsindex) {
-		/* Loading a nonzero value into GS sets the index and base. */
-		load_gs_index(next->gsindex);
-	} else {
-		if (next->gsbase) {
-			/* Next index is zero but next base is nonzero. */
-			if (prev_gsindex)
-				load_gs_index(0);
-			wrmsrl(MSR_KERNEL_GS_BASE, next->gsbase);
-		} else {
-			/* Next base and index are both zero. */
-			if (static_cpu_has_bug(X86_BUG_NULL_SEG)) {
-				/*
-				 * We don't know the previous base and can't
-				 * find out without RDMSR.  Forcibly clear it.
-				 *
-				 * This contains a pointless SWAPGS pair.
-				 * Fixing it would involve an explicit check
-				 * for Xen or a new pvop.
-				 */
-				load_gs_index(__USER_DS);
-				load_gs_index(0);
-			} else {
-				/*
-				 * If the previous index is zero and ARCH_SET_GS
-				 * didn't change the base, then the base is
-				 * also zero and we don't need to do anything.
-				 */
-				if (prev->gsbase || prev_gsindex)
-					load_gs_index(0);
-			}
-		}
-	}
-	/*
-	 * Save the old state and preserve the invariant.
-	 * NB: if prev_gsindex == 0, then we can't reliably learn the base
-	 * without RDMSR because Intel user code can zero it without telling
-	 * us and AMD user code can program any 32-bit value without telling
-	 * us.
-	 */
-	if (prev_gsindex)
-		prev->gsbase = 0;
-	prev->gsindex = prev_gsindex;
+	load_seg_legacy(prev->fsindex, prev->fsbase,
+			next->fsindex, next->fsbase, FS);
+	load_seg_legacy(prev->gsindex, prev->gsbase,
+			next->gsindex, next->gsbase, GS);

	switch_fpu_finish(next_fpu, fpu_switch);

drivers/md/raid5.c +2 −0
@@ -5844,6 +5844,8 @@ static void raid5_do_work(struct work_struct *work)

	spin_unlock_irq(&conf->device_lock);

+	r5l_flush_stripe_to_raid(conf->log);
+
	async_tx_issue_pending_all();
	blk_finish_plug(&plug);

drivers/net/ethernet/chelsio/cxgb4/t4_hw.c +3 −3
@@ -317,12 +317,12 @@ int t4_wr_mbox_meat_timeout(struct adapter *adap, int mbox, const void *cmd,

	if (v != MBOX_OWNER_DRV) {
		ret = (v == MBOX_OWNER_FW) ? -EBUSY : -ETIMEDOUT;
-		t4_record_mbox(adap, cmd, MBOX_LEN, access, ret);
+		t4_record_mbox(adap, cmd, size, access, ret);
		return ret;
	}

	/* Copy in the new mailbox command and send it on its way ... */
-	t4_record_mbox(adap, cmd, MBOX_LEN, access, 0);
+	t4_record_mbox(adap, cmd, size, access, 0);
	for (i = 0; i < size; i += 8)
		t4_write_reg64(adap, data_reg + i, be64_to_cpu(*p++));

@@ -371,7 +371,7 @@ int t4_wr_mbox_meat_timeout(struct adapter *adap, int mbox, const void *cmd,
	}

	ret = (pcie_fw & PCIE_FW_ERR_F) ? -ENXIO : -ETIMEDOUT;
-	t4_record_mbox(adap, cmd, MBOX_LEN, access, ret);
+	t4_record_mbox(adap, cmd, size, access, ret);
	dev_err(adap->pdev_dev, "command %#x in mailbox %d timed out\n",
		*(const u8 *)cmd, mbox);
	t4_report_fw_error(adap);