Commit 10e69627 authored by Ingo Molnar

Merge commit 'v3.0-rc5' into perf/core



Merge reason: Pick up the latest fixes.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
parents af07ce3e b0af8dfd
+56 −0
What:		/sys/class/backlight/<backlight>/<ambient light zone>_max
What:		/sys/class/backlight/<backlight>/l1_daylight_max
What:		/sys/class/backlight/<backlight>/l2_bright_max
What:		/sys/class/backlight/<backlight>/l3_office_max
What:		/sys/class/backlight/<backlight>/l4_indoor_max
What:		/sys/class/backlight/<backlight>/l5_dark_max
Date:		May 2011
KernelVersion:	2.6.40
Contact:	device-drivers-devel@blackfin.uclinux.org
Description:
		Control the maximum brightness for <ambient light zone>
		on this <backlight>. Values are between 0 and 127. This file
		will also show the brightness level stored for this
		<ambient light zone>.

What:		/sys/class/backlight/<backlight>/<ambient light zone>_dim
What:		/sys/class/backlight/<backlight>/l2_bright_dim
What:		/sys/class/backlight/<backlight>/l3_office_dim
What:		/sys/class/backlight/<backlight>/l4_indoor_dim
What:		/sys/class/backlight/<backlight>/l5_dark_dim
Date:		May 2011
KernelVersion:	2.6.40
Contact:	device-drivers-devel@blackfin.uclinux.org
Description:
		Control the dim brightness for <ambient light zone>
		on this <backlight>. Values are between 0 and 127, typically
		set to 0 (fully off when the backlight is disabled).
		This file will also show the dim brightness level stored for
		this <ambient light zone>.

What:		/sys/class/backlight/<backlight>/ambient_light_level
Date:		May 2011
KernelVersion:	2.6.40
Contact:	device-drivers-devel@blackfin.uclinux.org
Description:
		Get the conversion value of the light sensor.
		This value is updated every 80 ms (when the light sensor
		is enabled). Returns an integer between 0 (dark) and
		8000 (maximum ambient brightness).

What:		/sys/class/backlight/<backlight>/ambient_light_zone
Date:		May 2011
KernelVersion:	2.6.40
Contact:	device-drivers-devel@blackfin.uclinux.org
Description:
		Get/set the current ambient light zone. Reading returns
		an integer between 1 and 5 (1 = daylight, 2 = bright, ...,
		5 = dark). Writing a value between 1 and 5 forces the
		backlight controller to enter the corresponding ambient
		light zone.
		Writing 0 returns to normal/automatic ambient light level
		operation. The ambient light sensing feature on these devices
		is an extension to the API documented in
		Documentation/ABI/stable/sysfs-class-backlight.
		It can be enabled by writing the value stored in
		/sys/class/backlight/<backlight>/max_brightness to
		/sys/class/backlight/<backlight>/brightness.
 No newline at end of file
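
A usage sketch for these attributes; the <backlight> name is device-specific, and "adp8870" below is a hypothetical example, as are the values written:

	# cap the daylight zone at half brightness (scale is 0..127)
	echo 64 > /sys/class/backlight/adp8870/l1_daylight_max

	# read the ambient light sensor (0 = dark .. 8000 = max)
	cat /sys/class/backlight/adp8870/ambient_light_level

	# force the "office" zone, then return to automatic operation
	echo 3 > /sys/class/backlight/adp8870/ambient_light_zone
	echo 0 > /sys/class/backlight/adp8870/ambient_light_zone

	# enable ambient light sensing, as described above
	cat /sys/class/backlight/adp8870/max_brightness > \
		/sys/class/backlight/adp8870/brightness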
+2 −2
Original line number Diff line number Diff line
@@ -21,7 +21,7 @@ information will not be available.
To extract cgroup statistics a utility very similar to getdelays.c
has been developed, the sample output of the utility is shown below

-~/balbir/cgroupstats # ./getdelays  -C "/cgroup/a"
+~/balbir/cgroupstats # ./getdelays  -C "/sys/fs/cgroup/a"
sleeping 1, blocked 0, running 1, stopped 0, uninterruptible 0
-~/balbir/cgroupstats # ./getdelays  -C "/cgroup"
+~/balbir/cgroupstats # ./getdelays  -C "/sys/fs/cgroup"
sleeping 155, blocked 0, running 1, stopped 0, uninterruptible 2
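
A sketch of the full sequence, assuming a hierarchy mounted directly at /sys/fs/cgroup as the output above suggests (getdelays is built from Documentation/accounting/getdelays.c; the "cpu" subsystem is an arbitrary choice):

	# mount a cgroup hierarchy and create a child group "a"
	mount -t cgroup -o cpu none /sys/fs/cgroup
	mkdir /sys/fs/cgroup/a
	# move the current shell into "a", then query its stats
	echo $$ > /sys/fs/cgroup/a/tasks
	./getdelays -C "/sys/fs/cgroup/a"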
+17 −14
@@ -28,16 +28,19 @@ cgroups. Here is what you can do.
- Enable group scheduling in CFQ
	CONFIG_CFQ_GROUP_IOSCHED=y

-- Compile and boot into kernel and mount IO controller (blkio).
+- Compile and boot into kernel and mount IO controller (blkio); see
+  cgroups.txt, Why are cgroups needed?.

-	mount -t cgroup -o blkio none /cgroup
+	mount -t tmpfs cgroup_root /sys/fs/cgroup
+	mkdir /sys/fs/cgroup/blkio
+	mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

- Create two cgroups
-	mkdir -p /cgroup/test1/ /cgroup/test2
+	mkdir -p /sys/fs/cgroup/blkio/test1/ /sys/fs/cgroup/blkio/test2

- Set weights of group test1 and test2
-	echo 1000 > /cgroup/test1/blkio.weight
-	echo 500 > /cgroup/test2/blkio.weight
+	echo 1000 > /sys/fs/cgroup/blkio/test1/blkio.weight
+	echo 500 > /sys/fs/cgroup/blkio/test2/blkio.weight

- Create two same size files (say 512MB each) on same disk (file1, file2) and
  launch two dd threads in different cgroup to read those files.
@@ -46,12 +49,12 @@ cgroups. Here is what you can do.
	echo 3 > /proc/sys/vm/drop_caches

	dd if=/mnt/sdb/zerofile1 of=/dev/null &
-	echo $! > /cgroup/test1/tasks
-	cat /cgroup/test1/tasks
+	echo $! > /sys/fs/cgroup/blkio/test1/tasks
+	cat /sys/fs/cgroup/blkio/test1/tasks

	dd if=/mnt/sdb/zerofile2 of=/dev/null &
-	echo $! > /cgroup/test2/tasks
-	cat /cgroup/test2/tasks
+	echo $! > /sys/fs/cgroup/blkio/test2/tasks
+	cat /sys/fs/cgroup/blkio/test2/tasks

- At macro level, first dd should finish first. To get more precise data, keep
  on looking at (with the help of script), at blkio.disk_time and
@@ -68,13 +71,13 @@ Throttling/Upper Limit policy
- Enable throttling in block layer
	CONFIG_BLK_DEV_THROTTLING=y

-- Mount blkio controller
-        mount -t cgroup -o blkio none /cgroup/blkio
+- Mount blkio controller (see cgroups.txt, Why are cgroups needed?)
+        mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

- Specify a bandwidth rate on particular device for root group. The format
  for policy is "<major>:<minor>  <bytes_per_second>".

        echo "8:16  1048576" > /cgroup/blkio/blkio.read_bps_device
        echo "8:16  1048576" > /sys/fs/cgroup/blkio/blkio.read_bps_device

  Above will put a limit of 1MB/second on reads happening for root group
  on device having major/minor number 8:16.
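
The same format applies in the other direction; a sketch for limiting writes on the same 8:16 device via the companion blkio.write_bps_device file (the 2MB/second value is an arbitrary example):

        # limit writes for the root group on /dev/sdb (8:16) to 2MB/second
        echo "8:16  2097152" > /sys/fs/cgroup/blkio/blkio.write_bps_device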
@@ -108,7 +111,7 @@ Hierarchical Cgroups
  CFQ and throttling will practically treat all groups at same level.

				pivot
-			     /  |   \  \
+			     /  /   \  \
			root  test1 test2  test3

  Down the line we can implement hierarchical accounting/control support
@@ -149,7 +152,7 @@ Proportional weight policy files

	  Following is the format.

-	  #echo dev_maj:dev_minor weight > /path/to/cgroup/blkio.weight_device
+	  # echo dev_maj:dev_minor weight > blkio.weight_device
	  Configure weight=300 on /dev/sdb (8:16) in this cgroup
	  # echo 8:16 300 > blkio.weight_device
	  # cat blkio.weight_device
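
To undo a per-device override, writing a weight of 0 removes the entry, so the group falls back to its plain blkio.weight; a short sketch against the same 8:16 device:

	  # echo 8:16 0 > blkio.weight_device
	  # cat blkio.weight_device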
+36 −24
@@ -138,11 +138,11 @@ With the ability to classify tasks differently for different resources
the admin can easily set up a script which receives exec notifications
and depending on who is launching the browser he can

-       # echo browser_pid > /mnt/<restype>/<userclass>/tasks
+       # echo browser_pid > /sys/fs/cgroup/<restype>/<userclass>/tasks

With only a single hierarchy, he now would potentially have to create
a separate cgroup for every browser launched and associate it with
-approp network and other resource class.  This may lead to
+appropriate network and other resource class.  This may lead to
proliferation of such cgroups.

Also lets say that the administrator would like to give enhanced network
@@ -153,9 +153,9 @@ apps enhanced CPU power,
With ability to write pids directly to resource classes, it's just a
matter of :

-       # echo pid > /mnt/network/<new_class>/tasks
+       # echo pid > /sys/fs/cgroup/network/<new_class>/tasks
        (after some time)
-       # echo pid > /mnt/network/<orig_class>/tasks
+       # echo pid > /sys/fs/cgroup/network/<orig_class>/tasks

Without this ability, he would have to split the cgroup into
multiple separate ones and then associate the new cgroups with the
@@ -310,21 +310,24 @@ subsystem, this is the case for the cpuset.
To start a new job that is to be contained within a cgroup, using
the "cpuset" cgroup subsystem, the steps are something like:

- 1) mkdir /dev/cgroup
- 2) mount -t cgroup -ocpuset cpuset /dev/cgroup
- 3) Create the new cgroup by doing mkdir's and write's (or echo's) in
-    the /dev/cgroup virtual file system.
- 4) Start a task that will be the "founding father" of the new job.
- 5) Attach that task to the new cgroup by writing its pid to the
-    /dev/cgroup tasks file for that cgroup.
- 6) fork, exec or clone the job tasks from this founding father task.
+ 1) mount -t tmpfs cgroup_root /sys/fs/cgroup
+ 2) mkdir /sys/fs/cgroup/cpuset
+ 3) mount -t cgroup -ocpuset cpuset /sys/fs/cgroup/cpuset
+ 4) Create the new cgroup by doing mkdir's and write's (or echo's) in
+    the /sys/fs/cgroup virtual file system.
+ 5) Start a task that will be the "founding father" of the new job.
+ 6) Attach that task to the new cgroup by writing its pid to the
+    /sys/fs/cgroup/cpuset/tasks file for that cgroup.
+ 7) fork, exec or clone the job tasks from this founding father task.

For example, the following sequence of commands will setup a cgroup
named "Charlie", containing just CPUs 2 and 3, and Memory Node 1,
and then start a subshell 'sh' in that cgroup:

-  mount -t cgroup cpuset -ocpuset /dev/cgroup
-  cd /dev/cgroup
+  mount -t tmpfs cgroup_root /sys/fs/cgroup
+  mkdir /sys/fs/cgroup/cpuset
+  mount -t cgroup cpuset -ocpuset /sys/fs/cgroup/cpuset
+  cd /sys/fs/cgroup/cpuset
  mkdir Charlie
  cd Charlie
  /bin/echo 2-3 > cpuset.cpus
@@ -345,7 +348,7 @@ Creating, modifying, using the cgroups can be done through the cgroup
virtual filesystem.

To mount a cgroup hierarchy with all available subsystems, type:
-# mount -t cgroup xxx /dev/cgroup
+# mount -t cgroup xxx /sys/fs/cgroup

The "xxx" is not interpreted by the cgroup code, but will appear in
/proc/mounts so may be any useful identifying string that you like.
@@ -354,23 +357,32 @@ Note: Some subsystems do not work without some user input first. For instance,
if cpusets are enabled the user will have to populate the cpus and mems files
for each new cgroup created before that group can be used.

+As explained in section `1.2 Why are cgroups needed?' you should create
+different hierarchies of cgroups for each single resource or group of
+resources you want to control. Therefore, you should mount a tmpfs on
+/sys/fs/cgroup and create directories for each cgroup resource or resource
+group.
+
+# mount -t tmpfs cgroup_root /sys/fs/cgroup
+# mkdir /sys/fs/cgroup/rg1

To mount a cgroup hierarchy with just the cpuset and memory
subsystems, type:
-# mount -t cgroup -o cpuset,memory hier1 /dev/cgroup
+# mount -t cgroup -o cpuset,memory hier1 /sys/fs/cgroup/rg1

To change the set of subsystems bound to a mounted hierarchy, just
remount with different options:
-# mount -o remount,cpuset,blkio hier1 /dev/cgroup
+# mount -o remount,cpuset,blkio hier1 /sys/fs/cgroup/rg1

Now memory is removed from the hierarchy and blkio is added.

Note this will add blkio to the hierarchy but won't remove memory or
cpuset, because the new options are appended to the old ones:
-# mount -o remount,blkio /dev/cgroup
+# mount -o remount,blkio /sys/fs/cgroup/rg1

To Specify a hierarchy's release_agent:
# mount -t cgroup -o cpuset,release_agent="/sbin/cpuset_release_agent" \
-  xxx /dev/cgroup
+  xxx /sys/fs/cgroup/rg1

Note that specifying 'release_agent' more than once will return failure.

@@ -379,17 +391,17 @@ when the hierarchy consists of a single (root) cgroup. Supporting
the ability to arbitrarily bind/unbind subsystems from an existing
cgroup hierarchy is intended to be implemented in the future.

-Then under /dev/cgroup you can find a tree that corresponds to the
-tree of the cgroups in the system. For instance, /dev/cgroup
+Then under /sys/fs/cgroup/rg1 you can find a tree that corresponds to the
+tree of the cgroups in the system. For instance, /sys/fs/cgroup/rg1
is the cgroup that holds the whole system.

If you want to change the value of release_agent:
# echo "/sbin/new_release_agent" > /dev/cgroup/release_agent
# echo "/sbin/new_release_agent" > /sys/fs/cgroup/rg1/release_agent

It can also be changed via remount.

-If you want to create a new cgroup under /dev/cgroup:
-# cd /dev/cgroup
+If you want to create a new cgroup under /sys/fs/cgroup/rg1:
+# cd /sys/fs/cgroup/rg1
# mkdir my_cgroup

Now you want to do something with this cgroup.
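
Continuing the example, a minimal sketch of attaching the current shell to the new group and verifying the move (paths as created above):

# cd my_cgroup
# /bin/echo $$ > tasks
# cat tasks

The cat should now list the shell's pid among the group's tasks.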
+10 −11
@@ -10,26 +10,25 @@ directly present in its group.

Accounting groups can be created by first mounting the cgroup filesystem.

-# mkdir /cgroups
-# mount -t cgroup -ocpuacct none /cgroups
-
-With the above step, the initial or the parent accounting group
-becomes visible at /cgroups. At bootup, this group includes all the
-tasks in the system. /cgroups/tasks lists the tasks in this cgroup.
-/cgroups/cpuacct.usage gives the CPU time (in nanoseconds) obtained by
-this group which is essentially the CPU time obtained by all the tasks
+# mount -t cgroup -ocpuacct none /sys/fs/cgroup
+
+With the above step, the initial or the parent accounting group becomes
+visible at /sys/fs/cgroup. At bootup, this group includes all the tasks in
+the system. /sys/fs/cgroup/tasks lists the tasks in this cgroup.
+/sys/fs/cgroup/cpuacct.usage gives the CPU time (in nanoseconds) obtained
+by this group which is essentially the CPU time obtained by all the tasks
in the system.

-New accounting groups can be created under the parent group /cgroups.
+New accounting groups can be created under the parent group /sys/fs/cgroup.

-# cd /cgroups
+# cd /sys/fs/cgroup
# mkdir g1
# echo $$ > g1

The above steps create a new group g1 and move the current shell
process (bash) into it. CPU time consumed by this bash and its children
can be obtained from g1/cpuacct.usage and the same is accumulated in
-/cgroups/cpuacct.usage also.
+/sys/fs/cgroup/cpuacct.usage also.
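
A quick check of that accounting, as a sketch (paths from the steps above):

# cat /sys/fs/cgroup/g1/cpuacct.usage
# cat /sys/fs/cgroup/cpuacct.usage

The second value includes the first, since the parent group accumulates the usage of all its children.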

cpuacct.stat file lists a few statistics which further divide the
CPU time obtained by the cgroup into user and system times. Currently