
Commit 93f992cc authored by Srivatsa Vaddagiri, committed by Ingo Molnar

sched: group scheduling code cleanup



Minor cleanups:

- Fix coding style
- remove obsolete comment

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent 86faf39d
+3 −18
@@ -191,12 +191,12 @@ struct task_group init_task_group = {
 };
 
 #ifdef CONFIG_FAIR_USER_SCHED
-# define INIT_TASK_GRP_LOAD	2*NICE_0_LOAD
+# define INIT_TASK_GROUP_LOAD	2*NICE_0_LOAD
 #else
-# define INIT_TASK_GRP_LOAD	NICE_0_LOAD
+# define INIT_TASK_GROUP_LOAD	NICE_0_LOAD
 #endif
 
-static int init_task_group_load = INIT_TASK_GRP_LOAD;
+static int init_task_group_load = INIT_TASK_GROUP_LOAD;
 
 /* return group to which a task belongs */
 static inline struct task_group *task_group(struct task_struct *p)
@@ -881,21 +881,6 @@ static inline void cpuacct_charge(struct task_struct *tsk, u64 cputime) {}
 
 #define sched_class_highest (&rt_sched_class)
 
-/*
- * Update delta_exec, delta_fair fields for rq.
- *
- * delta_fair clock advances at a rate inversely proportional to
- * total load (rq->load.weight) on the runqueue, while
- * delta_exec advances at the same rate as wall-clock (provided
- * cpu is not idle).
- *
- * delta_exec / delta_fair is a measure of the (smoothened) load on this
- * runqueue over any given interval. This (smoothened) load is used
- * during load balance.
- *
- * This function is called /before/ updating rq->load
- * and when switching tasks.
- */
 static inline void inc_load(struct rq *rq, const struct task_struct *p)
 {
 	update_load_add(&rq->load, p->se.load.weight);