
Commit a32750c2 authored by Arnd Bergmann

Merge branches 'imx/pata' and 'imx/sata' into next/driver

Conflicts:
	arch/arm/mach-mx5/clock-mx51-mx53.c
	arch/arm/mach-mx5/devices-imx53.h
parents 318007e9 d870ea1d
+2 −0
@@ -272,6 +272,8 @@ printk-formats.txt
	- how to get printk format specifiers right
prio_tree.txt
	- info on radix-priority-search-tree use for indexing vmas.
ramoops.txt
	- documentation of the ramoops oops/panic logging module.
rbtree.txt
	- info on what red-black trees are and what they are for.
robust-futex-ABI.txt
+44 −45
@@ -45,7 +45,7 @@ arrived in memory (this becomes more likely with devices behind PCI-PCI
bridges).  In order to ensure that all the data has arrived in memory,
the interrupt handler must read a register on the device which raised
the interrupt.  PCI transaction ordering rules require that all the data
arrive in memory before the value may be returned from the register.
Using MSIs avoids this problem as the interrupt-generating write cannot
pass the data writes, so by the time the interrupt is raised, the driver
knows that all the data has arrived in memory.
@@ -86,13 +86,13 @@ device.

int pci_enable_msi(struct pci_dev *dev)

A successful call allocates ONE interrupt to the device, regardless
of how many MSIs the device supports.  The device is switched from
pin-based interrupt mode to MSI mode.  The dev->irq number is changed
to a new number which represents the message signaled interrupt;
consequently, this function should be called before the driver calls
request_irq(), because an MSI is delivered via a vector that is
different from the vector of a pin-based interrupt.
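As a minimal illustration of this ordering, a probe-path sketch might look like the following. This is not part of the original text: foo_irq_handler, foo_priv and the "foo" name are hypothetical driver-side names, and error handling is abbreviated.

```c
#include <linux/pci.h>
#include <linux/interrupt.h>

struct foo_priv { int dummy; };		/* hypothetical driver state */

static irqreturn_t foo_irq_handler(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

static int foo_setup_irq(struct pci_dev *pdev, struct foo_priv *priv)
{
	int rc;

	rc = pci_enable_msi(pdev);	/* dev->irq now refers to the MSI */
	if (rc)
		return rc;		/* could fall back to pin-based INTx */

	/* request_irq() must come after pci_enable_msi(), per the text above */
	rc = request_irq(pdev->irq, foo_irq_handler, 0, "foo", priv);
	if (rc)
		pci_disable_msi(pdev);
	return rc;
}
```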

4.2.2 pci_enable_msi_block

@@ -111,20 +111,20 @@ the device are in the range dev->irq to dev->irq + count - 1.

If this function returns a negative number, it indicates an error and
the driver should not attempt to request any more MSI interrupts for
this device.  If this function returns a positive number, it is
less than 'count' and indicates the number of interrupts that could have
been allocated.  In neither case is the irq value updated or the device
switched into MSI mode.

The device driver must decide what action to take if
pci_enable_msi_block() returns a value less than the number requested.
For instance, the driver could still make use of fewer interrupts;
in this case the driver should call pci_enable_msi_block()
again.  Note that it is not guaranteed to succeed, even when the
'count' has been reduced to the value returned from a previous call to
pci_enable_msi_block().  This is because there are multiple constraints
on the number of vectors that can be allocated; pci_enable_msi_block()
returns as soon as it finds any constraint that doesn't allow the
call to succeed.
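The retry behaviour described above can be sketched as follows. foo_enable_msi_block and FOO_MIN_VECS are hypothetical names; the sketch assumes only the pci_enable_msi_block() semantics described in this section.

```c
#include <linux/pci.h>
#include <linux/errno.h>

#define FOO_MIN_VECS 2			/* hypothetical driver minimum */

static int foo_enable_msi_block(struct pci_dev *pdev, int nvec)
{
	int rc;

	while (nvec >= FOO_MIN_VECS) {
		rc = pci_enable_msi_block(pdev, nvec);
		if (rc == 0)
			return nvec;	/* dev->irq .. dev->irq + nvec - 1 */
		if (rc < 0)
			return rc;	/* error: give up */
		nvec = rc;		/* positive: retry with smaller count */
	}

	return -ENOSPC;
}
```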

4.2.3 pci_disable_msi
@@ -137,10 +137,10 @@ interrupt number and frees the previously allocated message signaled
interrupt(s).  The interrupt may subsequently be assigned to another
device, so drivers should not cache the value of dev->irq.

Before calling this function, a device driver must always call free_irq()
on any interrupt for which it previously called request_irq().
Failure to do so results in a BUG_ON(), leaving the device with
MSI enabled and thus leaking its vector.
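A teardown sketch respecting this ordering (foo_teardown_irq and foo_priv are hypothetical names):

```c
#include <linux/pci.h>
#include <linux/interrupt.h>

struct foo_priv;			/* hypothetical driver state */

static void foo_teardown_irq(struct pci_dev *pdev, struct foo_priv *priv)
{
	free_irq(pdev->irq, priv);	/* must come before pci_disable_msi() */
	pci_disable_msi(pdev);		/* dev->irq reverts to the pin-based IRQ */
}
```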

4.3 Using MSI-X

@@ -155,10 +155,10 @@ struct msix_entry {
};

This allows for the device to use these interrupts in a sparse fashion;
for example, it could use interrupts 3 and 1027 and yet allocate only a
two-element array.  The driver is expected to fill in the 'entry' value
in each element of the array to indicate for which entries the kernel
should assign interrupts; it is invalid to fill in two entries with the
same number.
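The sparse example above might be coded like this (the foo_ names are hypothetical):

```c
#include <linux/pci.h>

static struct msix_entry foo_msix[2];	/* two-element array, sparse entries */

static int foo_enable_msix_sparse(struct pci_dev *pdev)
{
	foo_msix[0].entry = 3;		/* MSI-X table entry we want */
	foo_msix[1].entry = 1027;	/* entries must be distinct */

	/* on success, foo_msix[i].vector holds the irq for request_irq() */
	return pci_enable_msix(pdev, foo_msix, 2);
}
```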

4.3.1 pci_enable_msix
@@ -168,10 +168,11 @@ int pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, int nvec)
Calling this function asks the PCI subsystem to allocate 'nvec' MSIs.
The 'entries' argument is a pointer to an array of msix_entry structs
which should be at least 'nvec' entries in size.  On success, the
device is switched into MSI-X mode and the function returns 0.
The 'vector' member in each entry is populated with the interrupt number;
the driver should then call request_irq() for each 'vector' that it
decides to use.  The device driver is responsible for keeping track of the
interrupts assigned to the MSI-X vectors so it can free them again later.

If this function returns a negative number, it indicates an error and
the driver should not attempt to allocate any more MSI-X interrupts for
@@ -181,16 +182,14 @@ below.

This function, in contrast with pci_enable_msi(), does not adjust
dev->irq.  The device will not generate interrupts for this interrupt
number once MSI-X is enabled.

Device drivers should normally call this function once per device
during the initialization phase.

It is ideal if drivers can cope with a variable number of MSI-X interrupts;
there are many reasons why the platform may not be able to provide the
exact number that a driver asks for.

A request loop to achieve that might look like:

@@ -212,15 +211,15 @@ static int foo_driver_enable_msix(struct foo_adapter *adapter, int nvec)
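The body of that loop falls outside this hunk; only the function signature survives in the hunk header above. A sketch consistent with the description of pci_enable_msix() might be the following, where FOO_DRIVER_MINIMUM_NVEC and struct foo_adapter are assumed driver-side names:

```c
#include <linux/pci.h>
#include <linux/errno.h>

#define FOO_DRIVER_MINIMUM_NVEC 2		/* assumed driver minimum */

struct foo_adapter {				/* assumed driver state */
	struct pci_dev *pdev;
	struct msix_entry msix_entries[8];
};

static int foo_driver_enable_msix(struct foo_adapter *adapter, int nvec)
{
	int rc;

	while (nvec >= FOO_DRIVER_MINIMUM_NVEC) {
		rc = pci_enable_msix(adapter->pdev,
				     adapter->msix_entries, nvec);
		if (rc > 0)
			nvec = rc;	/* retry with the achievable number */
		else
			return rc;	/* 0 on success, negative on error */
	}

	return -ENOSPC;
}
```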

void pci_disable_msix(struct pci_dev *dev)

This function should be used to undo the effect of pci_enable_msix().  It frees
the previously allocated message signaled interrupts.  The interrupts may
subsequently be assigned to another device, so drivers should not cache
the value of the 'vector' elements over a call to pci_disable_msix().

Before calling this function, a device driver must always call free_irq()
on any interrupt for which it previously called request_irq().
Failure to do so results in a BUG_ON(), leaving the device with
MSI-X enabled and thus leaking its vector.

4.3.3 The MSI-X Table

@@ -232,10 +231,10 @@ mask or unmask an interrupt, it should call disable_irq() / enable_irq().
4.4 Handling devices implementing both MSI and MSI-X capabilities

If a device implements both MSI and MSI-X capabilities, it can
run in either MSI mode or MSI-X mode, but not both simultaneously.
This is a requirement of the PCI spec, and it is enforced by the
PCI layer.  Calling pci_enable_msi() when MSI-X is already enabled or
pci_enable_msix() when MSI is already enabled results in an error.
If a device driver wishes to switch between MSI and MSI-X at runtime,
it must first quiesce the device, then switch it back to pin-interrupt
mode, before calling pci_enable_msi() or pci_enable_msix() and resuming
@@ -251,7 +250,7 @@ the MSI-X facilities in preference to the MSI facilities. As mentioned
above, MSI-X supports any number of interrupts between 1 and 2048.
In contrast, MSI is restricted to a maximum of 32 interrupts (and
must be a power of two).  In addition, the MSI interrupt vectors must
be allocated consecutively, so the system might not be able to allocate
as many vectors for MSI as it could for MSI-X.  On some platforms, MSI
interrupts must all be targeted at the same set of CPUs whereas MSI-X
interrupts can all be targeted at different CPUs.
@@ -281,7 +280,7 @@ disabled to enabled and back again.

Using 'lspci -v' (as root) may show some devices with "MSI", "Message
Signalled Interrupts" or "MSI-X" capabilities.  Each of these capabilities
has an 'Enable' flag which is followed with either "+" (enabled)
or "-" (disabled).


@@ -298,7 +297,7 @@ The PCI stack provides three ways to disable MSIs:

Some host chipsets simply don't support MSIs properly.  If we're
lucky, the manufacturer knows this and has indicated it in the ACPI
FADT table.  In this case, Linux automatically disables MSIs.
Some boards don't include this information in the table and so we have
to detect them ourselves.  The complete list of these is found near the
quirk_disable_all_msi() function in drivers/pci/quirks.c.
@@ -317,7 +316,7 @@ Some bridges allow you to enable MSIs by changing some bits in their
PCI configuration space (especially the Hypertransport chipsets such
as the nVidia nForce and Serverworks HT2000).  As with host chipsets,
Linux mostly knows about them and automatically enables MSIs if it can.
If you have a bridge unknown to Linux, you can enable
MSIs in configuration space using whatever method you know works, then
enable MSIs on that bridge by doing:

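The command itself is outside this hunk; pieced together from the msi_bus sysfs path mentioned later in this document and the "echo 0 instead of 1" note that follows, it is presumably along these lines (treat the exact path as an assumption):

```shell
# Presumed reconstruction: enable MSIs below the bridge by writing 1 to
# its msi_bus attribute; $bridge is the bridge's PCI address.
echo 1 > /sys/bus/pci/devices/$bridge/msi_bus
```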
@@ -327,7 +326,7 @@ where $bridge is the PCI address of the bridge you've enabled (eg
0000:00:0e.0).

To disable MSIs, echo 0 instead of 1.  Changing this value should be
done with caution as it could break interrupt handling for all devices
below this bridge.

Again, please notify linux-pci@vger.kernel.org of any bridges that need
@@ -336,7 +335,7 @@ special handling.
5.3. Disabling MSIs on a single device

Some devices are known to have faulty MSI implementations.  Usually this
is handled in the individual device driver, but occasionally it's necessary
to handle this with a quirk.  Some drivers have an option to disable use
of MSI.  While this is a convenient workaround for the driver author,
it is not good practice, and should not be emulated.
@@ -350,7 +349,7 @@ for your machine. You should also check your .config to be sure you
have enabled CONFIG_PCI_MSI.

Then, 'lspci -t' gives the list of bridges above a device.  Reading
/sys/bus/pci/devices/*/msi_bus will tell you whether MSIs are enabled (1)
or disabled (0).  If 0 is found in any of the msi_bus files belonging
to bridges between the PCI root and the device, MSIs are disabled.


+1 −1
@@ -130,7 +130,7 @@ Linux kernel master tree:
	ftp.??.kernel.org:/pub/linux/kernel/...
	?? == your country code, such as "us", "uk", "fr", etc.

	http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git

Linux kernel mailing list:
	linux-kernel@vger.kernel.org
+1 −1
@@ -303,7 +303,7 @@ patches that are being emailed around.

The sign-off is a simple line at the end of the explanation for the
patch, which certifies that you wrote it or otherwise have the right to
pass it on as an open-source patch.  The rules are pretty simple: if you
can certify the below:

        Developer's Certificate of Origin 1.1
+71 −0
@@ -43,3 +43,74 @@ If one sets slice_idle=0 and if storage supports NCQ, CFQ internally switches
to IOPS mode and starts providing fairness in terms of number of requests
dispatched. Note that this mode switching takes effect only for group
scheduling. For non-cgroup users nothing should change.

CFQ IO scheduler Idling Theory
==============================
Idling on a queue is primarily about waiting for the next request to come
on the same queue after completion of a request. In this process CFQ will not
dispatch requests from other cfq queues even if requests are pending there.

The rationale behind idling is that it can cut down on the number of seeks
on rotational media. For example, if a process is doing dependent
sequential reads (the next read comes only after completion of the previous
one), then not dispatching requests from other queues should help, as we
did not move the disk head and kept on dispatching sequential IO from
one queue.

CFQ has the following service trees, and the various queues are placed on these trees.

	sync-idle	sync-noidle	async

All cfq queues doing synchronous sequential IO go onto the sync-idle tree.
On this tree we idle on each queue individually.

All synchronous non-sequential queues go on the sync-noidle tree. Also, any
requests which are marked with REQ_NOIDLE go on this service tree. On this
tree we do not idle on individual queues; instead we idle on the whole group
of queues or the tree. So if there are 4 queues waiting for IO to dispatch,
we will idle only once the last queue has dispatched its IO and there is
no more IO on this service tree.

All async writes go on the async service tree. There is no idling on async
queues.

CFQ has some optimizations for SSDs: if it detects non-rotational
media which can support a higher queue depth (multiple requests in
flight at a time), then it cuts down on idling of individual queues,
all the queues move to the sync-noidle tree, and only tree idle remains. This
tree idling provides isolation with buffered write queues on the async tree.

FAQ
===
Q1. Why idle at all on queues marked with REQ_NOIDLE?

A1. We only do tree idle (all queues on the sync-noidle tree) on queues marked
    with REQ_NOIDLE. This helps in providing isolation from all the sync-idle
    queues. Otherwise, in the presence of many sequential readers, other
    synchronous IO might not get its fair share of the disk.

    For example, suppose there are 10 sequential readers doing IO and each
    gets 100ms. If a REQ_NOIDLE request comes in, it will be scheduled
    roughly after 1 second. If after completion of the REQ_NOIDLE request we
    do not idle, and after a couple of milliseconds another REQ_NOIDLE
    request comes in, it will again be scheduled after 1 second. Repeat this
    and notice how a workload can lose its disk share and suffer due to
    multiple sequential readers.

    fsync can generate dependent IO where a bunch of data is written in the
    context of fsync, and later some journaling data is written. Journaling
    data comes in only after fsync has finished its IO (at least for ext4
    that seemed to be the case). Now if one decides not to idle on the fsync
    thread due to REQ_NOIDLE, then the next journaling write will not get
    scheduled for another second. A process doing small fsyncs will suffer
    badly in the presence of multiple sequential readers.

    Hence doing tree idling on threads using the REQ_NOIDLE flag on requests
    provides isolation from multiple sequential readers while at the same
    time not idling on individual threads.

Q2. When to specify REQ_NOIDLE?
A2. I would think that whenever one is doing a synchronous write and not
    expecting more writes to be dispatched from the same context soon, one
    should be able to specify REQ_NOIDLE on writes, and that should probably
    work well for most cases.