CREDITS +3 −3
@@ -3382,7 +3382,7 @@ S: Germany
 N: Geert Uytterhoeven
 E: geert@linux-m68k.org
-W: http://home.tvd.be/cr26864/
+W: http://users.telenet.be/geertu/
 P: 1024/862678A6 C51D 361C 0BD1 4C90 B275 C553 6EEA 11BA 8626 78A6
 D: m68k/Amiga and PPC/CHRP Longtrail coordinator
 D: Frame buffer device and XF68_FBDev maintainer
@@ -3392,8 +3392,8 @@ D: Amiga Buddha and Catweasel chipset IDE
 D: Atari Falcon chipset IDE
 D: Amiga Gayle chipset IDE
 D: mipsel NEC DDB Vrc-5074
-S: Emiel Vlieberghlaan 2A/21
-S: B-3010 Kessel-Lo
+S: Haterbeekstraat 55B
+S: B-3200 Aarschot
 S: Belgium

 N: Chris Vance

Documentation/DMA-API.txt +36 −13
@@ -33,7 +33,9 @@ pci_alloc_consistent(struct pci_dev *dev, size_t size,
 Consistent memory is memory for which a write by either the device or
 the processor can immediately be read by the processor or device
 without having to worry about caching effects.
+(You may however need to make sure to flush the processor's write
+buffers before telling devices to read that memory.)

 This routine allocates a region of <size> bytes of consistent memory.
 it also returns a <dma_handle> which may be cast to an unsigned
@@ -305,8 +307,8 @@ could not be created and the driver should take appropriate action (eg
 reduce current DMA mapping usage or delay and try again later).

 int
-dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
-	enum dma_data_direction direction)
+dma_map_sg(struct device *dev, struct scatterlist *sg,
+	int nents, enum dma_data_direction direction)
 int
 pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
 	int nents, int direction)
@@ -327,9 +329,30 @@ critical that the driver do something, in the case of a block driver
 aborting the request or even oopsing is better than doing nothing and
 corrupting the filesystem.

+With scatterlists, you use the resulting mapping like this:
+
+	int i, count = dma_map_sg(dev, sglist, nents, direction);
+	struct scatterlist *sg;
+
+	for (i = 0, sg = sglist; i < count; i++, sg++) {
+		hw_address[i] = sg_dma_address(sg);
+		hw_len[i] = sg_dma_len(sg);
+	}
+
+where nents is the number of entries in the sglist.
+
+The implementation is free to merge several consecutive sglist entries
+into one (e.g. with an IOMMU, or if several pages just happen to be
+physically contiguous) and returns the actual number of sg entries it
+mapped them to. On failure 0 is returned.
+
+Then you should loop count times (note: this can be less than nents
+times) and use sg_dma_address() and sg_dma_len() macros where you
+previously accessed sg->address and sg->length as shown above.
+
 void
-dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
-	enum dma_data_direction direction)
+dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+	int nhwentries, enum dma_data_direction direction)
 void
 pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
 	int nents, int direction)

Documentation/DMA-mapping.txt +19 −7
@@ -58,11 +58,15 @@ translating each of those pages back to a kernel address using
 something like __va().  [ EDIT: Update this when we integrate
 Gerd Knorr's generic code which does this. ]

-This rule also means that you may not use kernel image addresses
-(ie. items in the kernel's data/text/bss segment, or your driver's)
-nor may you use kernel stack addresses for DMA.  Both of these items
-might be mapped somewhere entirely different than the rest of physical
-memory.
+This rule also means that you may use neither kernel image addresses
+(items in data/text/bss segments), nor module image addresses, nor
+stack addresses for DMA.  These could all be mapped somewhere entirely
+different than the rest of physical memory.  Even if those classes of
+memory could physically work with DMA, you'd need to ensure the I/O
+buffers were cacheline-aligned.  Without that, you'd see cacheline
+sharing problems (data corruption) on CPUs with DMA-incoherent caches.
+(The CPU could write to one word, DMA would write to a different one
+in the same cache line, and one of them could be overwritten.)

 Also, this means that you cannot take the return of a kmap()
 call and DMA to/from that.  This is similar to vmalloc().
@@ -194,7 +198,7 @@ document for how to handle this case.
 Finally, if your device can only drive the low 24-bits of
 address during PCI bus mastering you might do something like:

-	if (pci_set_dma_mask(pdev, 0x00ffffff)) {
+	if (pci_set_dma_mask(pdev, DMA_24BIT_MASK)) {
 		printk(KERN_WARNING
 		       "mydev: 24-bit DMA addressing not available.\n");
 		goto ignore_this_device;
@@ -284,6 +288,11 @@ There are two types of DMA mappings:
   in order to get correct behavior on all platforms.

+  Also, on some platforms your driver may need to flush CPU write
+  buffers in much the same way as it needs to flush write buffers
+  found in PCI bridges (such as by reading a register's value after
+  writing it).
+
 - Streaming DMA mappings which are usually mapped for one DMA transfer,
   unmapped right after it (unless you use pci_dma_sync_* below) and for which
   hardware can optimize for sequential accesses.
@@ -303,6 +312,9 @@ There are two types of DMA mappings:
 Neither type of DMA mapping has alignment restrictions that come
 from PCI, although some devices may have such restrictions.
+Also, systems with caches that aren't DMA-coherent will work better
+when the underlying buffers don't share cache lines with other data.

 Using Consistent DMA mappings.

Documentation/DocBook/Makefile +1 −1
@@ -2,7 +2,7 @@
 # This makefile is used to generate the kernel documentation,
 # primarily based on in-line comments in various source files.
 # See Documentation/kernel-doc-nano-HOWTO.txt for instruction in how
-# to ducument the SRC - and how to read it.
+# to document the SRC - and how to read it.
 # To add a new book the only step required is to add the book to the
 # list of DOCBOOKS.

Documentation/DocBook/kernel-api.tmpl +0 −1
@@ -322,7 +322,6 @@ X!Earch/i386/kernel/mca.c
   <chapter id="sysfs">
      <title>The Filesystem for Exporting Kernel Objects</title>
 !Efs/sysfs/file.c
-!Efs/sysfs/dir.c
 !Efs/sysfs/symlink.c
 !Efs/sysfs/bin.c
   </chapter>