
Commit 7cff8cf6 authored by Ingo Molnar

sched: refine negative nice level granularity



Refine the granularity of negative nice level tasks: let them
reschedule more often to offset the effect of consuming their
wait_runtime proportionately more slowly. (This makes nice-0
task scheduling smoother in the presence of negatively
reniced tasks.)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent a69edb55
+10 −6
@@ -222,21 +222,25 @@ niced_granularity(struct sched_entity *curr, unsigned long granularity)
 {
 	u64 tmp;
 
+	if (likely(curr->load.weight == NICE_0_LOAD))
+		return granularity;
 	/*
-	 * Negative nice levels get the same granularity as nice-0:
+	 * Positive nice levels get the same granularity as nice-0:
 	 */
-	if (likely(curr->load.weight >= NICE_0_LOAD))
-		return granularity;
+	if (likely(curr->load.weight < NICE_0_LOAD)) {
+		tmp = curr->load.weight * (u64)granularity;
+		return (long) (tmp >> NICE_0_SHIFT);
+	}
 	/*
-	 * Positive nice level tasks get linearly finer
+	 * Negative nice level tasks get linearly finer
 	 * granularity:
 	 */
-	tmp = curr->load.weight * (u64)granularity;
+	tmp = curr->load.inv_weight * (u64)granularity;
 
 	/*
 	 * It will always fit into 'long':
 	 */
-	return (long) (tmp >> NICE_0_SHIFT);
+	return (long) (tmp >> WMULT_SHIFT);
 }
 
 static inline void
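
To make the effect of the new scaling concrete, below is a minimal user-space sketch of the patched niced_granularity() logic. The constants (NICE_0_LOAD, NICE_0_SHIFT, WMULT_SHIFT), the example weights, and the approximation inv_weight = 2^WMULT_SHIFT / weight are illustrative assumptions rather than the kernel's real load tables; the point is only that a task whose weight is above NICE_0_LOAD (i.e. negatively reniced) comes out with a much smaller, finer granularity, so it is rescheduled more often, as the commit message describes.

#include <stdio.h>
#include <stdint.h>

#define NICE_0_SHIFT	10
#define NICE_0_LOAD	(1UL << NICE_0_SHIFT)	/* assumed weight of a nice-0 task */
#define WMULT_SHIFT	32			/* assumed shift paired with inv_weight */

struct load_weight {
	unsigned long weight;
	unsigned long inv_weight;	/* approximated here as 2^WMULT_SHIFT / weight */
};

/* Mirror of the patched niced_granularity() logic, on a bare load_weight. */
static long niced_granularity(const struct load_weight *load,
			      unsigned long granularity)
{
	uint64_t tmp;

	if (load->weight == NICE_0_LOAD)
		return granularity;

	/* weight below nice-0: granularity scales down by weight / NICE_0_LOAD */
	if (load->weight < NICE_0_LOAD) {
		tmp = (uint64_t)load->weight * granularity;
		return (long)(tmp >> NICE_0_SHIFT);
	}

	/* weight above nice-0 (negatively reniced): scale by the inverse weight */
	tmp = (uint64_t)load->inv_weight * granularity;
	return (long)(tmp >> WMULT_SHIFT);
}

static struct load_weight make_load(unsigned long weight)
{
	struct load_weight lw = {
		.weight = weight,
		.inv_weight = (unsigned long)((1ULL << WMULT_SHIFT) / weight),
	};
	return lw;
}

int main(void)
{
	unsigned long gran = 2000000;	/* 2 ms base granularity, illustrative */
	struct load_weight nice0 = make_load(NICE_0_LOAD);
	struct load_weight pos = make_load(NICE_0_LOAD / 2);	/* illustrative positively reniced task */
	struct load_weight neg = make_load(NICE_0_LOAD * 3);	/* illustrative negatively reniced task */

	printf("nice 0        -> %ld ns\n", niced_granularity(&nice0, gran));
	printf("positive nice -> %ld ns\n", niced_granularity(&pos, gran));
	printf("negative nice -> %ld ns\n", niced_granularity(&neg, gran));
	return 0;
}

For these assumed inputs the sketch prints 2000000 ns for the nice-0 task, 1000000 ns for the positively reniced one, and about 651 ns for the negatively reniced one: the heavier the task's weight, the finer its effective granularity and the more often it is rescheduled.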