
Commit 9089e986 authored by Rhyland Klein, committed by Rom Lemarchand

UPSTREAM: Staging: android: ion: fix typos in comments



s/comming/coming/ in drivers/staging/android/ion/ion.c
s/specfic/specific/ in drivers/staging/android/ion/ion.h
s/peformance/performance/ in drivers/staging/android/ion/ion_priv.h

Signed-off-by: Tristan Lelong <tristan@lelong.xyz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit bc47e7d97666ad32993abe0ea924ffa81a8356e7)

Change-Id: If2cb57073cec9311b93a0138a39a222ec9ccec30
Signed-off-by: Rhyland Klein <rklein@nvidia.com>
parent 9a7e2ac9
drivers/staging/android/ion/ion.c  +1 −1
@@ -250,7 +250,7 @@ static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
 	   our systems the only dma_address space is physical addresses.
 	   Additionally, we can't afford the overhead of invalidating every
 	   allocation via dma_map_sg. The implicit contract here is that
-	   memory comming from the heaps is ready for dma, ie if it has a
+	   memory coming from the heaps is ready for dma, ie if it has a
 	   cached mapping that mapping has been invalidated */
 	for_each_sg(buffer->sg_table->sgl, sg, buffer->sg_table->nents, i)
 		sg_dma_address(sg) = sg_phys(sg);
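
For context, the comment fixed in this hunk documents ION's contract that memory handed out by the heaps is already DMA-ready, which is why the buffer's dma addresses are filled in directly from physical addresses rather than via dma_map_sg(). A minimal sketch of that pattern, using a hypothetical helper name that is not part of this commit:

#include <linux/scatterlist.h>

/* Fill in dma addresses straight from physical addresses, relying on the
 * contract that heap memory already has any cached mapping invalidated.
 * Hypothetical helper for illustration only.
 */
static void example_fill_dma_addrs(struct sg_table *table)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(table->sgl, sg, table->nents, i)
		sg_dma_address(sg) = sg_phys(sg);
}
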
drivers/staging/android/ion/ion.h  +1 −1
@@ -76,7 +76,7 @@ struct ion_platform_data {
  *		size
  *
  * Calls memblock reserve to set aside memory for heaps that are
- * located at specific memory addresses or of specfic sizes not
+ * located at specific memory addresses or of specific sizes not
  * managed by the kernel
  */
 void ion_reserve(struct ion_platform_data *data);
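
The kernel-doc block touched here describes ion_reserve(), which memblock-reserves memory for heaps at fixed addresses or of fixed sizes. A hypothetical board-setup sketch of how it might be called; the struct field names follow the staging ION headers of this era and are assumptions, not taken from this commit:

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/sizes.h>
#include "ion.h"	/* drivers/staging/android/ion/ion.h; include path is tree-specific */

static struct ion_platform_heap example_heaps[] = {
	{
		.id	= 1,
		.type	= ION_HEAP_TYPE_CARVEOUT,
		.name	= "example-carveout",
		.base	= 0x80000000,	/* example fixed physical address */
		.size	= SZ_16M,
	},
};

static struct ion_platform_data example_ion_pdata = {
	.nr	= ARRAY_SIZE(example_heaps),
	.heaps	= example_heaps,
};

/* Called from early boot (e.g. a machine .reserve callback), before the
 * page allocator claims the memory, so the carveout stays unmanaged by
 * the kernel as the comment above describes. */
static void __init example_board_reserve(void)
{
	ion_reserve(&example_ion_pdata);
}
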
drivers/staging/android/ion/ion_priv.h  +2 −2
@@ -345,7 +345,7 @@ void ion_carveout_free(struct ion_heap *heap, ion_phys_addr_t addr,
  * functions for creating and destroying a heap pool -- allows you
  * to keep a pool of pre allocated memory to use from your heap.  Keeping
  * a pool of memory that is ready for dma, ie any cached mapping have been
- * invalidated from the cache, provides a significant peformance benefit on
+ * invalidated from the cache, provides a significant performance benefit on
  * many systems */
 
 /**
@@ -362,7 +362,7 @@ void ion_carveout_free(struct ion_heap *heap, ion_phys_addr_t addr,
  *
  * Allows you to keep a pool of pre allocated pages to use from your heap.
  * Keeping a pool of pages that is ready for dma, ie any cached mapping have
- * been invalidated from the cache, provides a significant peformance benefit
+ * been invalidated from the cache, provides a significant performance benefit
  * on many systems
  */
 struct ion_page_pool {
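
Both comments corrected in this file belong to the page-pool documentation in ion_priv.h. A hypothetical sketch of how a heap might drive that pool API; the prototypes vary between ION revisions, and ion_page_pool_alloc() returning a struct page * is an assumption, not something this commit confirms:

#include <linux/errno.h>
#include <linux/gfp.h>
#include "ion_priv.h"	/* include path is tree-specific */

static struct ion_page_pool *example_pool;

static int example_pool_init(void)
{
	/* Pool of order-0 pages, refilled with GFP_KERNEL allocations. */
	example_pool = ion_page_pool_create(GFP_KERNEL, 0);
	return example_pool ? 0 : -ENOMEM;
}

static struct page *example_get_dma_ready_page(void)
{
	/* Pages returned by the pool are kept DMA-ready, which is the
	 * performance benefit the fixed comments describe. */
	return ion_page_pool_alloc(example_pool);
}

static void example_put_page(struct page *page)
{
	ion_page_pool_free(example_pool, page);
}

static void example_pool_exit(void)
{
	ion_page_pool_destroy(example_pool);
}
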