
Commit 1ae105aa authored by Vinod Koul

Merge branch 'next' into for-linus-3.0

parents 02f8c6ae 5a42fb93
+164 −70
Below is a guide to device driver writers on how to use the Slave-DMA API of the
DMA Engine. This is applicable only for slave DMA usage.

The slave DMA usage consists of the following steps:
1. Allocate a DMA slave channel
2. Set slave and controller specific parameters
3. Get a descriptor for transaction
4. Submit the transaction
5. Issue pending requests and wait for callback notification

1. Allocate a DMA slave channel

   Channel allocation is slightly different in the slave DMA context;
   client drivers typically need a channel from a particular DMA
   controller only, and in some cases a specific channel is desired.
   To request a channel, the dma_request_channel() API is used.

   Interface:
	struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
			dma_filter_fn filter_fn, void *filter_param);
   where dma_filter_fn is defined as:
	typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);

   The 'filter_fn' parameter is optional, but highly recommended for
   slave and cyclic channels as they typically need to obtain a specific
   DMA channel.

   When the optional 'filter_fn' parameter is NULL, dma_request_channel()
   simply returns the first channel that satisfies the capability mask.

   Otherwise, the 'filter_fn' routine will be called once for each free
   channel which has a capability in 'mask'.  'filter_fn' is expected to
   return 'true' when the desired DMA channel is found.

   A channel allocated via this interface is exclusive to the caller,
   until dma_release_channel() is called.
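
   As an illustration, a minimal request sequence might look like the
   sketch below. The filter function and the 'my_filter_data' token are
   hypothetical; a real filter would match against controller-specific
   data agreed with the platform code.

	static bool my_filter(struct dma_chan *chan, void *filter_param)
	{
		/* Hypothetical check: accept only the channel whose
		 * private data matches the token we were handed. */
		return chan->private == filter_param;
	}

	...

	dma_cap_mask_t mask;
	struct dma_chan *chan;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);
	chan = dma_request_channel(mask, my_filter, my_filter_data);
	if (!chan)
		/* no matching channel available */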

2. Set slave and controller specific parameters

   The next step is always to pass some specific information to the DMA
   driver.  Most of the generic information which a slave DMA can use
   is in struct dma_slave_config.  This allows the clients to specify
   DMA direction, DMA addresses, bus widths, DMA burst lengths, etc.
   for the peripheral.

   If a DMA controller has more parameters to be set, it should embed
   struct dma_slave_config in its controller-specific structure.  That
   gives the client the flexibility to pass more parameters, if
   required.

   Interface:
	int dmaengine_slave_config(struct dma_chan *chan,
				  struct dma_slave_config *config)

   Please see the dma_slave_config structure definition in dmaengine.h
   for a detailed explanation of the struct members.  Please note
   that the 'direction' member will be going away as it duplicates the
   direction given in the prepare call.
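
   For example, a client driving a 32-bit write-only FIFO might
   configure the channel as below. The values are illustrative, and
   'dev_fifo_addr' is a hypothetical physical address of the
   peripheral's data register.

	struct dma_slave_config config;
	int ret;

	memset(&config, 0, sizeof(config));
	config.direction = DMA_TO_DEVICE;
	config.dst_addr = dev_fifo_addr;
	config.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
	config.dst_maxburst = 4;

	ret = dmaengine_slave_config(chan, &config);
	if (ret)
		/* channel configuration failed */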

3. Get a descriptor for transaction

   For slave usage the various modes of slave transfers supported by the
   DMA-engine are:

   slave_sg	- DMA a list of scatter gather buffers from/to a peripheral
   dma_cyclic	- Perform a cyclic DMA operation from/to a peripheral until
		  the operation is explicitly stopped.

   A non-NULL return of this transfer API represents a "descriptor" for
   the given transaction.

   Interface:
	struct dma_async_tx_descriptor *(*chan->device->device_prep_slave_sg)(
		struct dma_chan *chan, struct scatterlist *sgl,
		unsigned int sg_len, enum dma_data_direction direction,
		unsigned long flags);

	struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_cyclic)(
		struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
		size_t period_len, enum dma_data_direction direction);
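
   For instance, a driver streaming audio to a peripheral might prepare
   a cyclic transfer over a ring buffer as in this sketch (buffer and
   period sizes are illustrative):

	desc = chan->device->device_prep_dma_cyclic(chan, buf_addr,
			buf_len, period_len, DMA_TO_DEVICE);
	if (!desc)
		/* failed to prepare the cyclic transfer */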

   The peripheral driver is expected to have mapped the scatterlist for
   the DMA operation prior to calling device_prep_slave_sg, and must
   keep the scatterlist mapped until the DMA operation has completed.
   The scatterlist must be mapped using the DMA struct device.  So,
   normal setup should look like this:

	nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, direction);
	if (nr_sg == 0)
		/* error */

	desc = chan->device->device_prep_slave_sg(chan, sgl, nr_sg,
			direction, flags);

   Once a descriptor has been obtained, the callback information can be
   added and the descriptor must then be submitted.  Some DMA engine
   drivers may hold a spinlock between a successful preparation and
   submission so it is important that these two operations are closely
   paired.

   Note:
	Although the async_tx API specifies that completion callback
	routines cannot submit any new operations, this is not the
	case for slave/cyclic DMA.

	For slave DMA, the subsequent transaction may not be available
	for submission prior to callback function being invoked, so
	slave DMA callbacks are permitted to prepare and submit a new
	transaction.

	For cyclic DMA, a callback function may wish to terminate the
	DMA via dmaengine_terminate_all().

	Therefore, it is important that DMA engine drivers drop any
	locks before calling the callback function; holding a lock
	across the callback may cause a deadlock.

	Note that callbacks will always be invoked from the DMA
	engine's tasklet, never from interrupt context.

4. Submit the transaction

   Once the descriptor has been prepared and the callback information
   added, it must be placed on the DMA engine driver's pending queue.

   Interface:
	dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)

   This returns a cookie that can be used to check the progress of DMA
   engine activity via other DMA engine calls not covered in this
   document.

   dmaengine_submit() will not start the DMA operation, it merely adds
   it to the pending queue.  For this, see step 5, dma_async_issue_pending.
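
   Putting this together, a client might attach its completion handler
   and submit the descriptor as in the sketch below; 'my_dma_complete'
   and 'my_data' are hypothetical names.

	static void my_dma_complete(void *param)
	{
		/* Invoked from the DMA engine's tasklet once the
		 * transaction completes (see the note above). */
	}

	...

	dma_cookie_t cookie;

	desc->callback = my_dma_complete;
	desc->callback_param = my_data;
	cookie = dmaengine_submit(desc);
	if (dma_submit_error(cookie))
		/* submission failed */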

5. Issue pending DMA requests and wait for callback notification

   The transactions in the pending queue can be activated by calling the
   issue_pending API. If the channel is idle then the first transaction
   in the queue is started and subsequent ones are queued up.

   On completion of each DMA operation, the next in queue is started and
   a tasklet triggered. The tasklet will then call the client driver
   completion callback routine for notification, if set.

   Interface:
	void dma_async_issue_pending(struct dma_chan *chan);
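
   For reference, the complete call sequence for a single slave_sg
   transfer then looks roughly like this, reusing the hypothetical
   helpers from the earlier sketches and omitting error handling:

	chan = dma_request_channel(mask, my_filter, my_filter_data);
	dmaengine_slave_config(chan, &config);
	nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, direction);
	desc = chan->device->device_prep_slave_sg(chan, sgl, nr_sg,
			direction, flags);
	desc->callback = my_dma_complete;
	cookie = dmaengine_submit(desc);
	dma_async_issue_pending(chan);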

==============================================================================
Further APIs:

1. int dmaengine_terminate_all(struct dma_chan *chan)

   This causes all activity for the DMA channel to be stopped, and may
   discard data in the DMA FIFO which hasn't been fully transferred.
   No callback functions will be called for any incomplete transfers.

2. int dmaengine_pause(struct dma_chan *chan)

   This pauses activity on the DMA channel without data loss.

3. int dmaengine_resume(struct dma_chan *chan)

   Resume a previously paused DMA channel.  It is invalid to resume a
   channel which is not currently paused.

4. enum dma_status dma_async_is_tx_complete(struct dma_chan *chan,
        dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)

   This can be used to check the status of the channel.  Please see
   the documentation in include/linux/dmaengine.h for a more complete
   description of this API.

   This can be used in conjunction with dma_async_is_complete() and
   the cookie returned from 'descriptor->submit()' to check for
   completion of a specific DMA transaction.
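
   A sketch of checking a specific transaction, using the cookie
   returned by dmaengine_submit() in the earlier example:

	enum dma_status status;
	dma_cookie_t last, used;

	status = dma_async_is_tx_complete(chan, cookie, &last, &used);
	if (status == DMA_SUCCESS)
		/* the transaction has completed */

	/* the returned cookies can also be compared directly: */
	status = dma_async_is_complete(cookie, last, used);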

   Note:
	Not all DMA engine drivers can return reliable information for
	a running DMA channel.  It is recommended that DMA engine users
	pause or stop (via dmaengine_terminate_all) the channel before
	using this API.
+10 −0

The driver can also use DMA for the transfers. In this case ts72xx_spi_info
becomes:

static struct ep93xx_spi_info ts72xx_spi_info = {
	.num_chipselect	= ARRAY_SIZE(ts72xx_spi_devices),
	.use_dma	= true,
};

Note that CONFIG_EP93XX_DMA should be enabled as well.

Thanks to
=========
Martin Guy, H. Hartley Sweeten and others who helped me during development of
+3 −1
#
# Makefile for the linux kernel.
#
obj-y			:= core.o clock.o gpio.o
obj-m			:=
obj-n			:=
obj-			:=

obj-$(CONFIG_EP93XX_DMA)	+= dma.o

obj-$(CONFIG_MACH_ADSSPHERE)	+= adssphere.o
obj-$(CONFIG_MACH_EDB93XX)	+= edb93xx.o
obj-$(CONFIG_MACH_GESBC9312)	+= gesbc9312.o
+5 −1

static u64 ep93xx_spi_dma_mask = DMA_BIT_MASK(32);

static struct platform_device ep93xx_spi_device = {
	.name		= "ep93xx-spi",
	.id		= 0,
	.dev		= {
		.platform_data		= &ep93xx_spi_master_data,
		.coherent_dma_mask	= DMA_BIT_MASK(32),
		.dma_mask		= &ep93xx_spi_dma_mask,
	},
	.num_resources	= ARRAY_SIZE(ep93xx_spi_resources),
	.resource	= ep93xx_spi_resources,
};

arch/arm/mach-ep93xx/dma-m2p.c

deleted file mode 100644
+0 −411
/*
 * arch/arm/mach-ep93xx/dma-m2p.c
 * M2P DMA handling for Cirrus EP93xx chips.
 *
 * Copyright (C) 2006 Lennert Buytenhek <buytenh@wantstofly.org>
 * Copyright (C) 2006 Applied Data Systems
 *
 * Copyright (C) 2009 Ryan Mallon <ryan@bluewatersys.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or (at
 * your option) any later version.
 */

/*
 * On the EP93xx chip the following peripherals may be allocated to the 10
 * Memory to Internal Peripheral (M2P) channels (5 transmit + 5 receive).
 *
 *	I2S	contains 3 Tx and 3 Rx DMA Channels
 *	AAC	contains 3 Tx and 3 Rx DMA Channels
 *	UART1	contains 1 Tx and 1 Rx DMA Channels
 *	UART2	contains 1 Tx and 1 Rx DMA Channels
 *	UART3	contains 1 Tx and 1 Rx DMA Channels
 *	IrDA	contains 1 Tx and 1 Rx DMA Channels
 *
 * SSP and IDE use the Memory to Memory (M2M) channels and are not covered
 * with this implementation.
 */

#define pr_fmt(fmt) "ep93xx " KBUILD_MODNAME ": " fmt

#include <linux/kernel.h>
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/io.h>

#include <mach/dma.h>
#include <mach/hardware.h>

#define M2P_CONTROL			0x00
#define  M2P_CONTROL_STALL_IRQ_EN	(1 << 0)
#define  M2P_CONTROL_NFB_IRQ_EN		(1 << 1)
#define  M2P_CONTROL_ERROR_IRQ_EN	(1 << 3)
#define  M2P_CONTROL_ENABLE		(1 << 4)
#define M2P_INTERRUPT			0x04
#define  M2P_INTERRUPT_STALL		(1 << 0)
#define  M2P_INTERRUPT_NFB		(1 << 1)
#define  M2P_INTERRUPT_ERROR		(1 << 3)
#define M2P_PPALLOC			0x08
#define M2P_STATUS			0x0c
#define M2P_REMAIN			0x14
#define M2P_MAXCNT0			0x20
#define M2P_BASE0			0x24
#define M2P_MAXCNT1			0x30
#define M2P_BASE1			0x34

#define STATE_IDLE	0	/* Channel is inactive.  */
#define STATE_STALL	1	/* Channel is active, no buffers pending.  */
#define STATE_ON	2	/* Channel is active, one buffer pending.  */
#define STATE_NEXT	3	/* Channel is active, two buffers pending.  */

struct m2p_channel {
	char				*name;
	void __iomem			*base;
	int				irq;

	struct clk			*clk;
	spinlock_t			lock;

	void				*client;
	unsigned			next_slot:1;
	struct ep93xx_dma_buffer	*buffer_xfer;
	struct ep93xx_dma_buffer	*buffer_next;
	struct list_head		buffers_pending;
};

static struct m2p_channel m2p_rx[] = {
	{"m2p1", EP93XX_DMA_BASE + 0x0040, IRQ_EP93XX_DMAM2P1},
	{"m2p3", EP93XX_DMA_BASE + 0x00c0, IRQ_EP93XX_DMAM2P3},
	{"m2p5", EP93XX_DMA_BASE + 0x0200, IRQ_EP93XX_DMAM2P5},
	{"m2p7", EP93XX_DMA_BASE + 0x0280, IRQ_EP93XX_DMAM2P7},
	{"m2p9", EP93XX_DMA_BASE + 0x0300, IRQ_EP93XX_DMAM2P9},
	{NULL},
};

static struct m2p_channel m2p_tx[] = {
	{"m2p0", EP93XX_DMA_BASE + 0x0000, IRQ_EP93XX_DMAM2P0},
	{"m2p2", EP93XX_DMA_BASE + 0x0080, IRQ_EP93XX_DMAM2P2},
	{"m2p4", EP93XX_DMA_BASE + 0x0240, IRQ_EP93XX_DMAM2P4},
	{"m2p6", EP93XX_DMA_BASE + 0x02c0, IRQ_EP93XX_DMAM2P6},
	{"m2p8", EP93XX_DMA_BASE + 0x0340, IRQ_EP93XX_DMAM2P8},
	{NULL},
};

static void feed_buf(struct m2p_channel *ch, struct ep93xx_dma_buffer *buf)
{
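	/*
	 * Each M2P channel double-buffers via two descriptor slots
	 * (MAXCNT0/BASE0 and MAXCNT1/BASE1); next_slot tracks which
	 * slot the next buffer must be programmed into.
	 */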
	if (ch->next_slot == 0) {
		writel(buf->size, ch->base + M2P_MAXCNT0);
		writel(buf->bus_addr, ch->base + M2P_BASE0);
	} else {
		writel(buf->size, ch->base + M2P_MAXCNT1);
		writel(buf->bus_addr, ch->base + M2P_BASE1);
	}
	ch->next_slot ^= 1;
}

static void choose_buffer_xfer(struct m2p_channel *ch)
{
	struct ep93xx_dma_buffer *buf;

	ch->buffer_xfer = NULL;
	if (!list_empty(&ch->buffers_pending)) {
		buf = list_entry(ch->buffers_pending.next,
				 struct ep93xx_dma_buffer, list);
		list_del(&buf->list);
		feed_buf(ch, buf);
		ch->buffer_xfer = buf;
	}
}

static void choose_buffer_next(struct m2p_channel *ch)
{
	struct ep93xx_dma_buffer *buf;

	ch->buffer_next = NULL;
	if (!list_empty(&ch->buffers_pending)) {
		buf = list_entry(ch->buffers_pending.next,
				 struct ep93xx_dma_buffer, list);
		list_del(&buf->list);
		feed_buf(ch, buf);
		ch->buffer_next = buf;
	}
}

static inline void m2p_set_control(struct m2p_channel *ch, u32 v)
{
	/*
	 * The control register must be read immediately after being written so
	 * that the internal state machine is correctly updated. See the ep93xx
	 * users' guide for details.
	 */
	writel(v, ch->base + M2P_CONTROL);
	readl(ch->base + M2P_CONTROL);
}

static inline int m2p_channel_state(struct m2p_channel *ch)
{
	return (readl(ch->base + M2P_STATUS) >> 4) & 0x3;
}

static irqreturn_t m2p_irq(int irq, void *dev_id)
{
	struct m2p_channel *ch = dev_id;
	struct ep93xx_dma_m2p_client *cl;
	u32 irq_status, v;
	int error = 0;

	cl = ch->client;

	spin_lock(&ch->lock);
	irq_status = readl(ch->base + M2P_INTERRUPT);

	if (irq_status & M2P_INTERRUPT_ERROR) {
		writel(M2P_INTERRUPT_ERROR, ch->base + M2P_INTERRUPT);
		error = 1;
	}

	if ((irq_status & (M2P_INTERRUPT_STALL | M2P_INTERRUPT_NFB)) == 0) {
		spin_unlock(&ch->lock);
		return IRQ_NONE;
	}

	switch (m2p_channel_state(ch)) {
	case STATE_IDLE:
		pr_crit("dma interrupt without a dma buffer\n");
		BUG();
		break;

	case STATE_STALL:
		cl->buffer_finished(cl->cookie, ch->buffer_xfer, 0, error);
		if (ch->buffer_next != NULL) {
			cl->buffer_finished(cl->cookie, ch->buffer_next,
					    0, error);
		}
		choose_buffer_xfer(ch);
		choose_buffer_next(ch);
		if (ch->buffer_xfer != NULL)
			cl->buffer_started(cl->cookie, ch->buffer_xfer);
		break;

	case STATE_ON:
		cl->buffer_finished(cl->cookie, ch->buffer_xfer, 0, error);
		ch->buffer_xfer = ch->buffer_next;
		choose_buffer_next(ch);
		cl->buffer_started(cl->cookie, ch->buffer_xfer);
		break;

	case STATE_NEXT:
		pr_crit("dma interrupt while next\n");
		BUG();
		break;
	}

	v = readl(ch->base + M2P_CONTROL) & ~(M2P_CONTROL_STALL_IRQ_EN |
					      M2P_CONTROL_NFB_IRQ_EN);
	if (ch->buffer_xfer != NULL)
		v |= M2P_CONTROL_STALL_IRQ_EN;
	if (ch->buffer_next != NULL)
		v |= M2P_CONTROL_NFB_IRQ_EN;
	m2p_set_control(ch, v);

	spin_unlock(&ch->lock);
	return IRQ_HANDLED;
}

static struct m2p_channel *find_free_channel(struct ep93xx_dma_m2p_client *cl)
{
	struct m2p_channel *ch;
	int i;

	if (cl->flags & EP93XX_DMA_M2P_RX)
		ch = m2p_rx;
	else
		ch = m2p_tx;

	for (i = 0; ch[i].base; i++) {
		struct ep93xx_dma_m2p_client *client;

		client = ch[i].client;
		if (client != NULL) {
			int port;

			port = cl->flags & EP93XX_DMA_M2P_PORT_MASK;
			if (port == (client->flags &
				     EP93XX_DMA_M2P_PORT_MASK)) {
				pr_warning("DMA channel already used by %s\n",
					   cl->name ? : "unknown client");
				return ERR_PTR(-EBUSY);
			}
		}
	}

	for (i = 0; ch[i].base; i++) {
		if (ch[i].client == NULL)
			return ch + i;
	}

	pr_warning("No free DMA channel for %s\n",
		   cl->name ? : "unknown client");
	return ERR_PTR(-ENODEV);
}

static void channel_enable(struct m2p_channel *ch)
{
	struct ep93xx_dma_m2p_client *cl = ch->client;
	u32 v;

	clk_enable(ch->clk);

	v = cl->flags & EP93XX_DMA_M2P_PORT_MASK;
	writel(v, ch->base + M2P_PPALLOC);

	v = cl->flags & EP93XX_DMA_M2P_ERROR_MASK;
	v |= M2P_CONTROL_ENABLE | M2P_CONTROL_ERROR_IRQ_EN;
	m2p_set_control(ch, v);
}

static void channel_disable(struct m2p_channel *ch)
{
	u32 v;

	v = readl(ch->base + M2P_CONTROL);
	v &= ~(M2P_CONTROL_STALL_IRQ_EN | M2P_CONTROL_NFB_IRQ_EN);
	m2p_set_control(ch, v);

	while (m2p_channel_state(ch) >= STATE_ON)
		cpu_relax();

	m2p_set_control(ch, 0x0);

	while (m2p_channel_state(ch) == STATE_STALL)
		cpu_relax();

	clk_disable(ch->clk);
}

int ep93xx_dma_m2p_client_register(struct ep93xx_dma_m2p_client *cl)
{
	struct m2p_channel *ch;
	int err;

	ch = find_free_channel(cl);
	if (IS_ERR(ch))
		return PTR_ERR(ch);

	err = request_irq(ch->irq, m2p_irq, 0, cl->name ? : "dma-m2p", ch);
	if (err)
		return err;

	ch->client = cl;
	ch->next_slot = 0;
	ch->buffer_xfer = NULL;
	ch->buffer_next = NULL;
	INIT_LIST_HEAD(&ch->buffers_pending);

	cl->channel = ch;

	channel_enable(ch);

	return 0;
}
EXPORT_SYMBOL_GPL(ep93xx_dma_m2p_client_register);

void ep93xx_dma_m2p_client_unregister(struct ep93xx_dma_m2p_client *cl)
{
	struct m2p_channel *ch = cl->channel;

	channel_disable(ch);
	free_irq(ch->irq, ch);
	ch->client = NULL;
}
EXPORT_SYMBOL_GPL(ep93xx_dma_m2p_client_unregister);

void ep93xx_dma_m2p_submit(struct ep93xx_dma_m2p_client *cl,
			   struct ep93xx_dma_buffer *buf)
{
	struct m2p_channel *ch = cl->channel;
	unsigned long flags;
	u32 v;

	spin_lock_irqsave(&ch->lock, flags);
	v = readl(ch->base + M2P_CONTROL);
	if (ch->buffer_xfer == NULL) {
		ch->buffer_xfer = buf;
		feed_buf(ch, buf);
		cl->buffer_started(cl->cookie, buf);

		v |= M2P_CONTROL_STALL_IRQ_EN;
		m2p_set_control(ch, v);

	} else if (ch->buffer_next == NULL) {
		ch->buffer_next = buf;
		feed_buf(ch, buf);

		v |= M2P_CONTROL_NFB_IRQ_EN;
		m2p_set_control(ch, v);
	} else {
		list_add_tail(&buf->list, &ch->buffers_pending);
	}
	spin_unlock_irqrestore(&ch->lock, flags);
}
EXPORT_SYMBOL_GPL(ep93xx_dma_m2p_submit);

void ep93xx_dma_m2p_submit_recursive(struct ep93xx_dma_m2p_client *cl,
				     struct ep93xx_dma_buffer *buf)
{
	struct m2p_channel *ch = cl->channel;

	list_add_tail(&buf->list, &ch->buffers_pending);
}
EXPORT_SYMBOL_GPL(ep93xx_dma_m2p_submit_recursive);

void ep93xx_dma_m2p_flush(struct ep93xx_dma_m2p_client *cl)
{
	struct m2p_channel *ch = cl->channel;

	channel_disable(ch);
	ch->next_slot = 0;
	ch->buffer_xfer = NULL;
	ch->buffer_next = NULL;
	INIT_LIST_HEAD(&ch->buffers_pending);
	channel_enable(ch);
}
EXPORT_SYMBOL_GPL(ep93xx_dma_m2p_flush);

static int init_channel(struct m2p_channel *ch)
{
	ch->clk = clk_get(NULL, ch->name);
	if (IS_ERR(ch->clk))
		return PTR_ERR(ch->clk);

	spin_lock_init(&ch->lock);
	ch->client = NULL;

	return 0;
}

static int __init ep93xx_dma_m2p_init(void)
{
	int i;
	int ret;

	for (i = 0; m2p_rx[i].base; i++) {
		ret = init_channel(m2p_rx + i);
		if (ret)
			return ret;
	}

	for (i = 0; m2p_tx[i].base; i++) {
		ret = init_channel(m2p_tx + i);
		if (ret)
			return ret;
	}

	pr_info("M2P DMA subsystem initialized\n");
	return 0;
}
arch_initcall(ep93xx_dma_m2p_init);