
Commit 35000870 authored by Anton Blanchard, committed by Benjamin Herrenschmidt

powerpc: Optimise enable_kernel_altivec

Add two optimisations to enable_kernel_altivec:

- enable_kernel_altivec has already determined whether we need to
save the previous task's state, but we call giveup_altivec in
both cases, which requires an extra branch inside giveup_altivec.
Create giveup_altivec_notask, which only turns on the VMX bit in
the MSR.

- We write the VMX MSR bit each time we call enable_kernel_altivec,
even if it was already set. Check the bit and branch out if we have
already set it (see the C-level sketch after this list). The classic
case for this is vectored IO, where we have to copy multiple buffers
to or from userspace.
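
At the C level, the SMP path ends up looking roughly like the sketch
below (simplified from the enable_kernel_altivec() hunk further down;
surrounding code and the non-SMP branch are omitted):

void enable_kernel_altivec(void)
{
	if (current->thread.regs && (current->thread.regs->msr & MSR_VEC))
		/* The previous user's VMX state must be saved first. */
		giveup_altivec(current);
	else
		/* Nothing to save: just set MSR_VEC, returning immediately
		 * if it is already set (the vectored-IO fast path). */
		giveup_altivec_notask();
}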

The following testcase was used to confirm this patch improves
performance:

http://ozlabs.org/~anton/junkcode/copy_to_user.c



Since the current breakpoint for using VMX in copy_tofrom_user is
4096 bytes, I'm using buffers of 4096 + 1 cacheline (4224) bytes.
A benchmark of 16 entry readvs (-s 16):

time copy_to_user -l 4224 -s 16 -i 1000000

completes 5.2% faster on a POWER7 PS700.
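
Anton's copy_to_user.c is not reproduced here. A rough stand-in with
the same shape — 16 iovec entries of 4224 bytes read back from a file
one million times — is sketched below; the flag names above belong to
the original tool, and whether this stand-in exercises exactly the same
kernel copy path is an assumption.

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/uio.h>

#define BUF_SIZE   4224		/* 4096-byte VMX breakpoint plus one cacheline */
#define SEGMENTS   16		/* a 16 entry readv ("-s 16") */
#define ITERATIONS 1000000	/* "-i 1000000" */

int main(void)
{
	struct iovec iov[SEGMENTS];
	char tmpl[] = "/tmp/readv-bench-XXXXXX";
	char *src;
	long iter;
	int fd, i;

	/* Source file whose contents the kernel copies back to userspace. */
	fd = mkstemp(tmpl);
	if (fd < 0) {
		perror("mkstemp");
		return 1;
	}
	unlink(tmpl);

	src = calloc(SEGMENTS, BUF_SIZE);
	if (!src || write(fd, src, (size_t)SEGMENTS * BUF_SIZE) < 0) {
		perror("write");
		return 1;
	}

	for (i = 0; i < SEGMENTS; i++) {
		iov[i].iov_base = malloc(BUF_SIZE);
		iov[i].iov_len = BUF_SIZE;
	}

	/* Each readv copies 16 buffers of 4224 bytes to userspace; every
	 * segment is over the VMX breakpoint, so the kernel enables VMX
	 * repeatedly and later segments can take the new early-exit path
	 * instead of rewriting the MSR. */
	for (iter = 0; iter < ITERATIONS; iter++) {
		if (lseek(fd, 0, SEEK_SET) < 0 ||
		    readv(fd, iov, SEGMENTS) < 0) {
			perror("readv");
			return 1;
		}
	}

	close(fd);
	return 0;
}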

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
parent 8cd3c23d
+1 −0
@@ -40,6 +40,7 @@ static inline void discard_lazy_cpu_state(void)
 #ifdef CONFIG_ALTIVEC
 extern void flush_altivec_to_thread(struct task_struct *);
 extern void giveup_altivec(struct task_struct *);
+extern void giveup_altivec_notask(void);
 #else
 static inline void flush_altivec_to_thread(struct task_struct *t)
 {
+1 −1
@@ -124,7 +124,7 @@ void enable_kernel_altivec(void)
 	if (current->thread.regs && (current->thread.regs->msr & MSR_VEC))
 		giveup_altivec(current);
 	else
-		giveup_altivec(NULL);	/* just enable AltiVec for kernel - force */
+		giveup_altivec_notask();
 #else
 	giveup_altivec(last_task_used_altivec);
 #endif /* CONFIG_SMP */
+10 −0
@@ -89,6 +89,16 @@ _GLOBAL(load_up_altivec)
 	/* restore registers and return */
 	blr
 
+_GLOBAL(giveup_altivec_notask)
+	mfmsr	r3
+	andis.	r4,r3,MSR_VEC@h
+	bnelr				/* Already enabled? */
+	oris	r3,r3,MSR_VEC@h
+	SYNC
+	MTMSRD(r3)			/* enable use of VMX now */
+	isync
+	blr
+
 /*
  * giveup_altivec(tsk)
  * Disable VMX for the task given as the argument,
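
For readers not fluent in PowerPC assembly, the new routine above
amounts to the following (an illustrative C rendering only, not the
kernel implementation; mfmsr(), mtmsr() and isync() here stand in for
the mfmsr, SYNC/MTMSRD and isync instructions in the asm):

void giveup_altivec_notask(void)
{
	unsigned long msr = mfmsr();	/* read the current MSR */

	if (msr & MSR_VEC)		/* VMX bit already set: branch out (bnelr) */
		return;

	mtmsr(msr | MSR_VEC);		/* turn on the VMX bit */
	isync();			/* synchronise before any VMX instruction */
}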