
Commit e1fd3301 authored by Manu Gautam

USB: u_ether: Align TX buffers to improve DL throughput



The USB2 controller's DMA performance degrades when request buffers
are not aligned to 4 bytes. The driver already takes care of this for
RX buffers; add changes to align the TX skb as well when its data does
not start at an aligned address. Although this adds the penalty of a
memcpy, it still improves performance by removing the USB DMA
bottleneck. Buffers are already aligned when RNDIS aggregation is
used; the issue is seen only with ECM, or with RNDIS when aggregation
is disabled. RNDIS DL aggregation can be disabled using:
-> echo 0 > /sys/module/g_android/parameters/rndis_dl_max_pkt_per_xfer
While at it, also double the number of DL buffers from 10 to 20, as
the USB interrupt is enabled only for every 5th buffer. This often
causes underruns when the USB controller also has an interrupt latency
>= 1 msec with a data transfer rate >= 120 Mbps and a packet size of
1500 bytes.

Change-Id: Idfd960dff73359fda8c9bc66f7056bafe2dc3265
Signed-off-by: Manu Gautam <mgautam@codeaurora.org>
parent 3f8bcdf4
+16 −8
@@ -174,7 +174,7 @@ static void uether_debugfs_exit(struct eth_dev *dev);
  * of interconnect, data can be very bursty. tx_qmult is the
  * additional multipler on qmult.
  */
-static unsigned tx_qmult = 1;
+static unsigned tx_qmult = 2;
 module_param(tx_qmult, uint, S_IRUGO|S_IWUSR);
 MODULE_PARM_DESC(tx_qmult, "Additional queue length multiplier for tx");
 
@@ -1186,21 +1186,28 @@ static netdev_tx_t eth_start_xmit(struct sk_buff *skb,
 		dev->tx_skb_hold_count = 0;
 		spin_unlock_irqrestore(&dev->req_lock, flags);
 	} else {
+		bool do_align = false;
+
+		/* Check if TX buffer should be aligned before queuing to hw */
+		if (!gadget_is_dwc3(dev->gadget) &&
+		    !IS_ALIGNED((size_t)skb->data, 4))
+			do_align = true;
+
 		/*
 		 * Some UDC requires allocation of some extra bytes for
 		 * TX buffer due to hardware requirement. Check if extra
 		 * bytes are already there, otherwise allocate new buffer
-		 * with extra bytes and do memcpy.
+		 * with extra bytes and do memcpy to align skb as well.
 		 */
-		length = skb->len;
 		if (dev->gadget->extra_buf_alloc)
 			extra_alloc = EXTRA_ALLOCATION_SIZE_U_ETH;
 		tail_room = skb_tailroom(skb);
-		if (tail_room < extra_alloc) {
-			pr_debug("%s: tail_room  %d less than %d\n", __func__,
-					tail_room, extra_alloc);
-			new_skb = skb_copy_expand(skb, 0, extra_alloc -
-					tail_room, GFP_ATOMIC);
+		if (do_align || tail_room < extra_alloc) {
+			pr_debug("%s:align skb and update tail_room %d to %d\n",
+					__func__, tail_room, extra_alloc);
+			tail_room = extra_alloc;
+			new_skb = skb_copy_expand(skb, 0, tail_room,
+						  GFP_ATOMIC);
 			if (!new_skb)
 				return -ENOMEM;
 			dev_kfree_skb_any(skb);
@@ -1208,6 +1215,7 @@ static netdev_tx_t eth_start_xmit(struct sk_buff *skb,
 			dev->skb_expand_cnt++;
 		}
 
+		length = skb->len;
 		req->buf = skb->data;
 		req->context = skb;
 	}