
Commit a6b88814 authored by David S. Miller


Alexei Starovoitov says:

====================
pull-request: bpf 2018-02-02

The following pull-request contains BPF updates for your *net* tree.

The main changes are:

1) support XDP attach in libbpf, from Eric.

2) minor fixes, from Daniel, Jakub, Yonghong, Alexei.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
parents 35277995 09c0656d
+31 −0
@@ -516,4 +516,35 @@ A: LLVM has a -mcpu selector for the BPF back end in order to allow the
   By the way, the BPF kernel selftests run with -mcpu=probe for better
   test coverage.
 
+Q: In some cases the clang flag "-target bpf" is used, while in other
+   cases the default clang target, which matches the underlying
+   architecture, is used. What is the difference, and when should I use which?
+
+A: Although LLVM IR generation and optimization try to stay architecture
+   independent, "-target <arch>" still has some impact on generated code:
+
+     - A BPF program may recursively include header file(s) with file scope
+       inline assembly codes. The default target can handle this well,
+       while the bpf target may fail if the bpf backend assembler does not
+       understand these assembly codes, which is true in most cases.
+
+     - When compiled without -g, additional elf sections, e.g.,
+       .eh_frame and .rela.eh_frame, may be present in the object file
+       with the default target, but not with the bpf target.
+
+     - The default target may turn a C switch statement into a switch table
+       lookup and jump operation. Since the switch table is placed
+       in the global read-only section, the bpf program will fail to load.
+       The bpf target does not support switch table optimization, but
+       the clang option "-fno-jump-tables" can be used to disable
+       switch table generation.
+
+   You should use the default target when:
+
+     - Your program includes a header file, e.g., ptrace.h, which eventually
+       pulls in some header files containing file scope host assembly
+       codes, and
+     - You can add "-fno-jump-tables" to work around the switch table issue.
+
+   Otherwise, you can use the bpf target.
+
 Happy BPF hacking!
+2 −3
@@ -480,8 +480,7 @@ static int
 nsim_bpf_map_alloc(struct netdevsim *ns, struct bpf_offloaded_map *offmap)
 {
 	struct nsim_bpf_bound_map *nmap;
-	unsigned int i;
-	int err;
+	int i, err;
 
 	if (WARN_ON(offmap->map.map_type != BPF_MAP_TYPE_ARRAY &&
 		    offmap->map.map_type != BPF_MAP_TYPE_HASH))
@@ -518,7 +517,7 @@ nsim_bpf_map_alloc(struct netdevsim *ns, struct bpf_offloaded_map *offmap)
 	return 0;
 
 err_free:
-	while (--i) {
+	while (--i >= 0) {
 		kfree(nmap->entry[i].key);
 		kfree(nmap->entry[i].value);
 	}
+6 −0
@@ -3228,6 +3228,12 @@ static inline int netif_set_real_num_rx_queues(struct net_device *dev,
 }
 #endif
 
+static inline struct netdev_rx_queue *
+__netif_get_rx_queue(struct net_device *dev, unsigned int rxq)
+{
+	return dev->_rx + rxq;
+}
+
 #ifdef CONFIG_SYSFS
 static inline unsigned int get_netdev_rx_queue_index(
 		struct netdev_rx_queue *queue)
+24 −8
@@ -1576,25 +1576,41 @@ int bpf_prog_array_copy_to_user(struct bpf_prog_array __rcu *progs,
				__u32 __user *prog_ids, u32 cnt)
 {
 	struct bpf_prog **prog;
-	u32 i = 0, id;
+	unsigned long err = 0;
+	u32 i = 0, *ids;
+	bool nospc;
 
+	/* users of this function are doing:
+	 * cnt = bpf_prog_array_length();
+	 * if (cnt > 0)
+	 *     bpf_prog_array_copy_to_user(..., cnt);
+	 * so below kcalloc doesn't need extra cnt > 0 check, but
+	 * bpf_prog_array_length() releases rcu lock and
+	 * prog array could have been swapped with empty or larger array,
+	 * so always copy 'cnt' prog_ids to the user.
+	 * In a rare race the user will see zero prog_ids
+	 */
+	ids = kcalloc(cnt, sizeof(u32), GFP_USER);
+	if (!ids)
+		return -ENOMEM;
 	rcu_read_lock();
 	prog = rcu_dereference(progs)->progs;
 	for (; *prog; prog++) {
 		if (*prog == &dummy_bpf_prog.prog)
 			continue;
-		id = (*prog)->aux->id;
-		if (copy_to_user(prog_ids + i, &id, sizeof(id))) {
-			rcu_read_unlock();
-			return -EFAULT;
-		}
+		ids[i] = (*prog)->aux->id;
 		if (++i == cnt) {
 			prog++;
 			break;
 		}
 	}
+	nospc = !!(*prog);
 	rcu_read_unlock();
-	if (*prog)
+	err = copy_to_user(prog_ids, ids, cnt * sizeof(u32));
+	kfree(ids);
+	if (err)
+		return -EFAULT;
+	if (nospc)
 		return -ENOSPC;
 	return 0;
 }
+4 −0
@@ -151,6 +151,7 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
 {
 	u32 size = kattr->test.data_size_in;
 	u32 repeat = kattr->test.repeat;
+	struct netdev_rx_queue *rxqueue;
 	struct xdp_buff xdp = {};
 	u32 retval, duration;
 	void *data;
@@ -165,6 +166,9 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
 	xdp.data_meta = xdp.data;
 	xdp.data_end = xdp.data + size;
 
+	rxqueue = __netif_get_rx_queue(current->nsproxy->net_ns->loopback_dev, 0);
+	xdp.rxq = &rxqueue->xdp_rxq;
+
 	retval = bpf_test_run(prog, &xdp, repeat, &duration);
 	if (xdp.data != data + XDP_PACKET_HEADROOM + NET_IP_ALIGN)
 		size = xdp.data_end - xdp.data;