
Commit ca46abd6 authored by David S. Miller

Merge branch 'net-sched-allow-qdiscs-to-share-filter-block-instances'



Jiri Pirko says:

====================
net: sched: allow qdiscs to share filter block instances

Currently the filters added to qdiscs are independent. So, for example, if
you have 2 netdevices and you create an ingress qdisc on both and you want
to add identical filter rules to both, you need to add them twice. This
patchset makes this easier and mainly saves resources by allowing the whole
set of filters within a qdisc - I call it a "filter block" - to be shared
by multiple qdiscs. This also helps to save resources when we offload to
hardware, for example to an expensive TCAM.

So back to the example. First, we create 2 qdiscs. Both will share
block number 22. "22" is just an identifier:
$ tc qdisc add dev ens7 ingress_block 22 ingress
                        ^^^^^^^^^^^^^^^^
$ tc qdisc add dev ens8 ingress_block 22 ingress
                        ^^^^^^^^^^^^^^^^

If we don't specify a block option ("ingress_block" here) on the command
line, no shared block is created:
$ tc qdisc add dev ens9 ingress

Now if we list the qdiscs, we will see the block index in the output:

$ tc qdisc
qdisc ingress ffff: dev ens7 parent ffff:fff1 ingress_block 22
qdisc ingress ffff: dev ens8 parent ffff:fff1 ingress_block 22
qdisc ingress ffff: dev ens9 parent ffff:fff1

To make it more visual, the situation looks like this:

   ens7 ingress qdisc                 ens8 ingress qdisc
          |                                  |
          |                                  |
          +---------->  block 22  <----------+

An unlimited number of qdiscs may share the same block.
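
For example, the ingress qdisc of yet another device could be attached to
the already existing block (a sketch with a hypothetical third device,
ens11, using the same iproute2 syntax as above):

$ tc qdisc add dev ens11 ingress_block 22 ingress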

Note that this patchset also introduces block sharing support for the
clsact qdisc:
$ tc qdisc add dev ens10 ingress_block 23 egress_block 24 clsact
$ tc qdisc show dev ens10
qdisc clsact ffff: dev ens10 parent ffff:fff1 ingress_block 23 egress_block 24
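
Egress filters can then be manipulated through block 24 in the same way as
shown for block 22 below; as a sketch, with a hypothetical rule:

$ tc filter add block 24 protocol ip pref 10 flower src_ip 192.168.0.0/16 action drop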

We can add a filter using the block index:

$ tc filter add block 22 protocol ip pref 25 flower dst_ip 192.168.0.0/16 action drop
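
The same block index works for the other filter operations as well. As a
sketch (assuming an iproute2 build with the block support used above), the
rule just added could be deleted through the block instead of through any
particular device:

$ tc filter del block 22 protocol ip pref 25 flower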

Note that we cannot use the qdisc to manipulate filters on shared blocks:

$ tc filter add dev ens8 ingress protocol ip pref 1 flower dst_ip 192.168.100.2 action drop
Error: This filter block is shared. Please use the block index to manipulate the filters.

We will see the same output whether we list the filters for the ingress
qdisc of ens7 or ens8, or for block 22 itself:

$ tc filter show block 22
filter block 22 protocol ip pref 25 flower chain 0
filter block 22 protocol ip pref 25 flower chain 0 handle 0x1
...

$ tc filter show dev ens7 ingress
filter block 22 protocol ip pref 25 flower chain 0
filter block 22 protocol ip pref 25 flower chain 0 handle 0x1
...

$ tc filter show dev ens8 ingress
filter block 22 protocol ip pref 25 flower chain 0
filter block 22 protocol ip pref 25 flower chain 0 handle 0x1
...
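
A shared block is also not tied to any single qdisc that uses it. As a
sketch of that behaviour (hypothetical follow-up commands on the setup
above), deleting the ingress qdisc of ens7 should leave block 22 and its
filters in place, because ens8 still references the block:

$ tc qdisc del dev ens7 ingress
$ tc filter show block 22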

---
v10->v11:
- patch 2:
 - fixed error path when register_pernet_subsys fails, as pointed out by Cong
- patch 9:
 - rebased on top of the current net-next

v9->v10:
- patch 7:
 - fixed ifindex magic in the patch description
- userspace patches:
 - added manpages and patch descriptions

v8->v9:
- patch "net: sched: add rt netlink message type for block get" was
  removed, userspace check filter existence using qdisc dump

v7->v8:
- patch 7:
 - added comment to ifindex block magic
- patch 9:
 - new patch
- patch 10:
 - base this on the patch that introduces qdisc-generic block index
   attributes parsing/dumping
- patch 13:
 - rebased on top of current net-next

v6->v7:
- patch 1:
 - unsquashed shared block patch that was previously squashed by mistake
 - fixed error path in block create - freeing chain 0
- patch 2:
 - new patch - split from the previous one as it got accidentally
   squashed during an earlier rebase
 - converted to idr extended
 - removed auto-generating of block indexes. Callers have to explicitly
   indicate that the block is shared by passing a non-zero block index
 - fixed error path in block get ext - freeing chain 0
- patch 7:
 - changed extack message for block index handle as suggested by DaveA
 - added extack message when block index does not exist
 - the block ifindex magic is in define and change to 0xffffffff
   as suggested by Jamal
- patch 8:
 - new patch implementing RTM_GETBLOCK in order to query if the block
   with some index exists
- patch 9:
 - adjust to the core changes and check block index attributes for being 0

v5->v6:
- added patch 6 that introduces block handle

v4->v5:
- patch 5:
 - add tracking of bound devices that are unable to offload and check
   for that before the block callbacks are called.

v3->v4:
- patch 1:
 - rebased on top of the current net-next
 - added some extack strings
- patch 3:
 - rebased on top of the current net-next
- patch 5:
 - propagate netdev_ops->ndo_setup_tc error up to tcf_block_offload_bind
   caller
- patch 7:
 - rebased on top of the current net-next

v2->v3:
- removed original patch 1, removing tp->q cls_bpf dependency. Fixed by
  Jakub in the meantime.
- patch 1:
 - rebased on top of the current net-next
- patch 5:
 - new patch
- patch 8:
 - removed "p_" prefix from block index function args
- patch 10:
 - add tc offload feature handling
====================

Acked-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parents c9a82421 4b23258d
+152 −30
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -1747,72 +1747,186 @@ static int mlxsw_sp_setup_tc_cls_matchall(struct mlxsw_sp_port *mlxsw_sp_port,
 	}
 }
 
 static int
-mlxsw_sp_setup_tc_cls_flower(struct mlxsw_sp_port *mlxsw_sp_port,
-			     struct tc_cls_flower_offload *f,
-			     bool ingress)
+mlxsw_sp_setup_tc_cls_flower(struct mlxsw_sp_acl_block *acl_block,
+			     struct tc_cls_flower_offload *f)
 {
+	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_acl_block_mlxsw_sp(acl_block);
+
 	switch (f->command) {
 	case TC_CLSFLOWER_REPLACE:
-		return mlxsw_sp_flower_replace(mlxsw_sp_port, ingress, f);
+		return mlxsw_sp_flower_replace(mlxsw_sp, acl_block, f);
 	case TC_CLSFLOWER_DESTROY:
-		mlxsw_sp_flower_destroy(mlxsw_sp_port, ingress, f);
+		mlxsw_sp_flower_destroy(mlxsw_sp, acl_block, f);
 		return 0;
 	case TC_CLSFLOWER_STATS:
-		return mlxsw_sp_flower_stats(mlxsw_sp_port, ingress, f);
+		return mlxsw_sp_flower_stats(mlxsw_sp, acl_block, f);
 	default:
 		return -EOPNOTSUPP;
 	}
 }
 
-static int mlxsw_sp_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
-				      void *cb_priv, bool ingress)
+static int mlxsw_sp_setup_tc_block_cb_matchall(enum tc_setup_type type,
+					       void *type_data,
+					       void *cb_priv, bool ingress)
 {
 	struct mlxsw_sp_port *mlxsw_sp_port = cb_priv;
 
-	if (!tc_can_offload(mlxsw_sp_port->dev))
-		return -EOPNOTSUPP;
-
 	switch (type) {
 	case TC_SETUP_CLSMATCHALL:
+		if (!tc_can_offload(mlxsw_sp_port->dev))
+			return -EOPNOTSUPP;
+
 		return mlxsw_sp_setup_tc_cls_matchall(mlxsw_sp_port, type_data,
 						      ingress);
 	case TC_SETUP_CLSFLOWER:
-		return mlxsw_sp_setup_tc_cls_flower(mlxsw_sp_port, type_data,
-						    ingress);
+		return 0;
 	default:
 		return -EOPNOTSUPP;
 	}
 }
 
-static int mlxsw_sp_setup_tc_block_cb_ig(enum tc_setup_type type,
-					 void *type_data, void *cb_priv)
+static int mlxsw_sp_setup_tc_block_cb_matchall_ig(enum tc_setup_type type,
+						  void *type_data,
+						  void *cb_priv)
 {
-	return mlxsw_sp_setup_tc_block_cb(type, type_data, cb_priv, true);
+	return mlxsw_sp_setup_tc_block_cb_matchall(type, type_data,
+						   cb_priv, true);
 }
 
-static int mlxsw_sp_setup_tc_block_cb_eg(enum tc_setup_type type,
-					 void *type_data, void *cb_priv)
+static int mlxsw_sp_setup_tc_block_cb_matchall_eg(enum tc_setup_type type,
+						  void *type_data,
+						  void *cb_priv)
+{
+	return mlxsw_sp_setup_tc_block_cb_matchall(type, type_data,
+						   cb_priv, false);
+}
+
+static int mlxsw_sp_setup_tc_block_cb_flower(enum tc_setup_type type,
+					     void *type_data, void *cb_priv)
 {
-	return mlxsw_sp_setup_tc_block_cb(type, type_data, cb_priv, false);
+	struct mlxsw_sp_acl_block *acl_block = cb_priv;
+
+	switch (type) {
+	case TC_SETUP_CLSMATCHALL:
+		return 0;
+	case TC_SETUP_CLSFLOWER:
+		if (mlxsw_sp_acl_block_disabled(acl_block))
+			return -EOPNOTSUPP;
+
+		return mlxsw_sp_setup_tc_cls_flower(acl_block, type_data);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static int
+mlxsw_sp_setup_tc_block_flower_bind(struct mlxsw_sp_port *mlxsw_sp_port,
+				    struct tcf_block *block, bool ingress)
+{
+	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+	struct mlxsw_sp_acl_block *acl_block;
+	struct tcf_block_cb *block_cb;
+	int err;
+
+	block_cb = tcf_block_cb_lookup(block, mlxsw_sp_setup_tc_block_cb_flower,
+				       mlxsw_sp);
+	if (!block_cb) {
+		acl_block = mlxsw_sp_acl_block_create(mlxsw_sp, block->net);
+		if (!acl_block)
+			return -ENOMEM;
+		block_cb = __tcf_block_cb_register(block,
+						   mlxsw_sp_setup_tc_block_cb_flower,
+						   mlxsw_sp, acl_block);
+		if (IS_ERR(block_cb)) {
+			err = PTR_ERR(block_cb);
+			goto err_cb_register;
+		}
+	} else {
+		acl_block = tcf_block_cb_priv(block_cb);
+	}
+	tcf_block_cb_incref(block_cb);
+	err = mlxsw_sp_acl_block_bind(mlxsw_sp, acl_block,
+				      mlxsw_sp_port, ingress);
+	if (err)
+		goto err_block_bind;
+
+	if (ingress)
+		mlxsw_sp_port->ing_acl_block = acl_block;
+	else
+		mlxsw_sp_port->eg_acl_block = acl_block;
+
+	return 0;
+
+err_block_bind:
+	if (!tcf_block_cb_decref(block_cb)) {
+		__tcf_block_cb_unregister(block_cb);
+err_cb_register:
+		mlxsw_sp_acl_block_destroy(acl_block);
+	}
+	return err;
+}
+
+static void
+mlxsw_sp_setup_tc_block_flower_unbind(struct mlxsw_sp_port *mlxsw_sp_port,
+				      struct tcf_block *block, bool ingress)
+{
+	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+	struct mlxsw_sp_acl_block *acl_block;
+	struct tcf_block_cb *block_cb;
+	int err;
+
+	block_cb = tcf_block_cb_lookup(block, mlxsw_sp_setup_tc_block_cb_flower,
+				       mlxsw_sp);
+	if (!block_cb)
+		return;
+
+	if (ingress)
+		mlxsw_sp_port->ing_acl_block = NULL;
+	else
+		mlxsw_sp_port->eg_acl_block = NULL;
+
+	acl_block = tcf_block_cb_priv(block_cb);
+	err = mlxsw_sp_acl_block_unbind(mlxsw_sp, acl_block,
+					mlxsw_sp_port, ingress);
+	if (!err && !tcf_block_cb_decref(block_cb)) {
+		__tcf_block_cb_unregister(block_cb);
+		mlxsw_sp_acl_block_destroy(acl_block);
+	}
 }
 
 static int mlxsw_sp_setup_tc_block(struct mlxsw_sp_port *mlxsw_sp_port,
 				   struct tc_block_offload *f)
 {
 	tc_setup_cb_t *cb;
+	bool ingress;
+	int err;
 
-	if (f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
-		cb = mlxsw_sp_setup_tc_block_cb_ig;
-	else if (f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_EGRESS)
-		cb = mlxsw_sp_setup_tc_block_cb_eg;
-	else
+	if (f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS) {
+		cb = mlxsw_sp_setup_tc_block_cb_matchall_ig;
+		ingress = true;
+	} else if (f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_EGRESS) {
+		cb = mlxsw_sp_setup_tc_block_cb_matchall_eg;
+		ingress = false;
+	} else {
 		return -EOPNOTSUPP;
+	}
 
 	switch (f->command) {
 	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block, cb, mlxsw_sp_port,
-					     mlxsw_sp_port);
+		err = tcf_block_cb_register(f->block, cb, mlxsw_sp_port,
+					    mlxsw_sp_port);
+		if (err)
+			return err;
+		err = mlxsw_sp_setup_tc_block_flower_bind(mlxsw_sp_port,
+							  f->block, ingress);
+		if (err) {
+			tcf_block_cb_unregister(f->block, cb, mlxsw_sp_port);
+			return err;
+		}
+		return 0;
 	case TC_BLOCK_UNBIND:
+		mlxsw_sp_setup_tc_block_flower_unbind(mlxsw_sp_port,
+						      f->block, ingress);
 		tcf_block_cb_unregister(f->block, cb, mlxsw_sp_port);
 		return 0;
 	default:
@@ -1842,11 +1956,19 @@ static int mlxsw_sp_feature_hw_tc(struct net_device *dev, bool enable)
 {
 	struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
 
-	if (!enable && (mlxsw_sp_port->acl_rule_count ||
-			!list_empty(&mlxsw_sp_port->mall_tc_list))) {
-		netdev_err(dev, "Active offloaded tc filters, can't turn hw_tc_offload off\n");
-		return -EINVAL;
+	if (!enable) {
+		if (mlxsw_sp_acl_block_rule_count(mlxsw_sp_port->ing_acl_block) ||
+		    mlxsw_sp_acl_block_rule_count(mlxsw_sp_port->eg_acl_block) ||
+		    !list_empty(&mlxsw_sp_port->mall_tc_list)) {
+			netdev_err(dev, "Active offloaded tc filters, can't turn hw_tc_offload off\n");
+			return -EINVAL;
+		}
+		mlxsw_sp_acl_block_disable_inc(mlxsw_sp_port->ing_acl_block);
+		mlxsw_sp_acl_block_disable_inc(mlxsw_sp_port->eg_acl_block);
+	} else {
+		mlxsw_sp_acl_block_disable_dec(mlxsw_sp_port->ing_acl_block);
+		mlxsw_sp_acl_block_disable_dec(mlxsw_sp_port->eg_acl_block);
 	}
 	return 0;
 }
 
+34 −9
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
@@ -260,6 +260,8 @@ struct mlxsw_sp_port {
 	struct list_head vlans_list;
 	struct mlxsw_sp_qdisc *root_qdisc;
 	unsigned acl_rule_count;
+	struct mlxsw_sp_acl_block *ing_acl_block;
+	struct mlxsw_sp_acl_block *eg_acl_block;
 };
 
 static inline bool
@@ -468,8 +470,11 @@ struct mlxsw_sp_acl_profile_ops {
 			   void *priv, void *ruleset_priv);
 	void (*ruleset_del)(struct mlxsw_sp *mlxsw_sp, void *ruleset_priv);
 	int (*ruleset_bind)(struct mlxsw_sp *mlxsw_sp, void *ruleset_priv,
-			    struct net_device *dev, bool ingress);
-	void (*ruleset_unbind)(struct mlxsw_sp *mlxsw_sp, void *ruleset_priv);
+			    struct mlxsw_sp_port *mlxsw_sp_port,
+			    bool ingress);
+	void (*ruleset_unbind)(struct mlxsw_sp *mlxsw_sp, void *ruleset_priv,
+			       struct mlxsw_sp_port *mlxsw_sp_port,
+			       bool ingress);
 	u16 (*ruleset_group_id)(void *ruleset_priv);
 	size_t rule_priv_size;
 	int (*rule_add)(struct mlxsw_sp *mlxsw_sp,
@@ -489,17 +494,34 @@ struct mlxsw_sp_acl_ops {
 				       enum mlxsw_sp_acl_profile profile);
 };
 
+struct mlxsw_sp_acl_block;
 struct mlxsw_sp_acl_ruleset;
 
 /* spectrum_acl.c */
 struct mlxsw_afk *mlxsw_sp_acl_afk(struct mlxsw_sp_acl *acl);
+struct mlxsw_sp *mlxsw_sp_acl_block_mlxsw_sp(struct mlxsw_sp_acl_block *block);
+unsigned int mlxsw_sp_acl_block_rule_count(struct mlxsw_sp_acl_block *block);
+void mlxsw_sp_acl_block_disable_inc(struct mlxsw_sp_acl_block *block);
+void mlxsw_sp_acl_block_disable_dec(struct mlxsw_sp_acl_block *block);
+bool mlxsw_sp_acl_block_disabled(struct mlxsw_sp_acl_block *block);
+struct mlxsw_sp_acl_block *mlxsw_sp_acl_block_create(struct mlxsw_sp *mlxsw_sp,
+						     struct net *net);
+void mlxsw_sp_acl_block_destroy(struct mlxsw_sp_acl_block *block);
+int mlxsw_sp_acl_block_bind(struct mlxsw_sp *mlxsw_sp,
+			    struct mlxsw_sp_acl_block *block,
+			    struct mlxsw_sp_port *mlxsw_sp_port,
+			    bool ingress);
+int mlxsw_sp_acl_block_unbind(struct mlxsw_sp *mlxsw_sp,
+			      struct mlxsw_sp_acl_block *block,
+			      struct mlxsw_sp_port *mlxsw_sp_port,
+			      bool ingress);
 struct mlxsw_sp_acl_ruleset *
-mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp *mlxsw_sp, struct net_device *dev,
-			    bool ingress, u32 chain_index,
+mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp *mlxsw_sp,
+			    struct mlxsw_sp_acl_block *block, u32 chain_index,
 			    enum mlxsw_sp_acl_profile profile);
 struct mlxsw_sp_acl_ruleset *
-mlxsw_sp_acl_ruleset_get(struct mlxsw_sp *mlxsw_sp, struct net_device *dev,
-			 bool ingress, u32 chain_index,
+mlxsw_sp_acl_ruleset_get(struct mlxsw_sp *mlxsw_sp,
+			 struct mlxsw_sp_acl_block *block, u32 chain_index,
 			 enum mlxsw_sp_acl_profile profile);
 void mlxsw_sp_acl_ruleset_put(struct mlxsw_sp *mlxsw_sp,
 			      struct mlxsw_sp_acl_ruleset *ruleset);
@@ -566,11 +588,14 @@ void mlxsw_sp_acl_fini(struct mlxsw_sp *mlxsw_sp);
 extern const struct mlxsw_sp_acl_ops mlxsw_sp_acl_tcam_ops;
 
 /* spectrum_flower.c */
-int mlxsw_sp_flower_replace(struct mlxsw_sp_port *mlxsw_sp_port, bool ingress,
+int mlxsw_sp_flower_replace(struct mlxsw_sp *mlxsw_sp,
+			    struct mlxsw_sp_acl_block *block,
 			    struct tc_cls_flower_offload *f);
-void mlxsw_sp_flower_destroy(struct mlxsw_sp_port *mlxsw_sp_port, bool ingress,
+void mlxsw_sp_flower_destroy(struct mlxsw_sp *mlxsw_sp,
+			     struct mlxsw_sp_acl_block *block,
 			     struct tc_cls_flower_offload *f);
-int mlxsw_sp_flower_stats(struct mlxsw_sp_port *mlxsw_sp_port, bool ingress,
+int mlxsw_sp_flower_stats(struct mlxsw_sp *mlxsw_sp,
+			  struct mlxsw_sp_acl_block *block,
 			  struct tc_cls_flower_offload *f);
 
 /* spectrum_qdisc.c */
+232 −70
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
@@ -39,6 +39,7 @@
 #include <linux/string.h>
 #include <linux/rhashtable.h>
 #include <linux/netdevice.h>
+#include <net/net_namespace.h>
 #include <net/tc_act/tc_vlan.h>
 
 #include "reg.h"
@@ -70,9 +71,23 @@ struct mlxsw_afk *mlxsw_sp_acl_afk(struct mlxsw_sp_acl *acl)
 	return acl->afk;
 }
 
-struct mlxsw_sp_acl_ruleset_ht_key {
-	struct net_device *dev; /* dev this ruleset is bound to */
+struct mlxsw_sp_acl_block_binding {
+	struct list_head list;
+	struct net_device *dev;
+	struct mlxsw_sp_port *mlxsw_sp_port;
 	bool ingress;
+};
+
+struct mlxsw_sp_acl_block {
+	struct list_head binding_list;
+	struct mlxsw_sp_acl_ruleset *ruleset_zero;
+	struct mlxsw_sp *mlxsw_sp;
+	unsigned int rule_count;
+	unsigned int disable_count;
+};
+
+struct mlxsw_sp_acl_ruleset_ht_key {
+	struct mlxsw_sp_acl_block *block;
 	u32 chain_index;
 	const struct mlxsw_sp_acl_profile_ops *ops;
 };
@@ -118,8 +133,185 @@ struct mlxsw_sp_fid *mlxsw_sp_acl_dummy_fid(struct mlxsw_sp *mlxsw_sp)
 	return mlxsw_sp->acl->dummy_fid;
 }
 
+struct mlxsw_sp *mlxsw_sp_acl_block_mlxsw_sp(struct mlxsw_sp_acl_block *block)
+{
+	return block->mlxsw_sp;
+}
+
+unsigned int mlxsw_sp_acl_block_rule_count(struct mlxsw_sp_acl_block *block)
+{
+	return block ? block->rule_count : 0;
+}
+
+void mlxsw_sp_acl_block_disable_inc(struct mlxsw_sp_acl_block *block)
+{
+	if (block)
+		block->disable_count++;
+}
+
+void mlxsw_sp_acl_block_disable_dec(struct mlxsw_sp_acl_block *block)
+{
+	if (block)
+		block->disable_count--;
+}
+
+bool mlxsw_sp_acl_block_disabled(struct mlxsw_sp_acl_block *block)
+{
+	return block->disable_count;
+}
+
+static int
+mlxsw_sp_acl_ruleset_bind(struct mlxsw_sp *mlxsw_sp,
+			  struct mlxsw_sp_acl_block *block,
+			  struct mlxsw_sp_acl_block_binding *binding)
+{
+	struct mlxsw_sp_acl_ruleset *ruleset = block->ruleset_zero;
+	const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops;
+
+	return ops->ruleset_bind(mlxsw_sp, ruleset->priv,
+				 binding->mlxsw_sp_port, binding->ingress);
+}
+
+static void
+mlxsw_sp_acl_ruleset_unbind(struct mlxsw_sp *mlxsw_sp,
+			    struct mlxsw_sp_acl_block *block,
+			    struct mlxsw_sp_acl_block_binding *binding)
+{
+	struct mlxsw_sp_acl_ruleset *ruleset = block->ruleset_zero;
+	const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops;
+
+	ops->ruleset_unbind(mlxsw_sp, ruleset->priv,
+			    binding->mlxsw_sp_port, binding->ingress);
+}
+
+static bool mlxsw_sp_acl_ruleset_block_bound(struct mlxsw_sp_acl_block *block)
+{
+	return block->ruleset_zero;
+}
+
+static int
+mlxsw_sp_acl_ruleset_block_bind(struct mlxsw_sp *mlxsw_sp,
+				struct mlxsw_sp_acl_ruleset *ruleset,
+				struct mlxsw_sp_acl_block *block)
+{
+	struct mlxsw_sp_acl_block_binding *binding;
+	int err;
+
+	block->ruleset_zero = ruleset;
+	list_for_each_entry(binding, &block->binding_list, list) {
+		err = mlxsw_sp_acl_ruleset_bind(mlxsw_sp, block, binding);
+		if (err)
+			goto rollback;
+	}
+	return 0;
+
+rollback:
+	list_for_each_entry_continue_reverse(binding, &block->binding_list,
+					     list)
+		mlxsw_sp_acl_ruleset_unbind(mlxsw_sp, block, binding);
+	block->ruleset_zero = NULL;
+
+	return err;
+}
+
+static void
+mlxsw_sp_acl_ruleset_block_unbind(struct mlxsw_sp *mlxsw_sp,
+				  struct mlxsw_sp_acl_ruleset *ruleset,
+				  struct mlxsw_sp_acl_block *block)
+{
+	struct mlxsw_sp_acl_block_binding *binding;
+
+	list_for_each_entry(binding, &block->binding_list, list)
+		mlxsw_sp_acl_ruleset_unbind(mlxsw_sp, block, binding);
+	block->ruleset_zero = NULL;
+}
+
+struct mlxsw_sp_acl_block *mlxsw_sp_acl_block_create(struct mlxsw_sp *mlxsw_sp,
+						     struct net *net)
+{
+	struct mlxsw_sp_acl_block *block;
+
+	block = kzalloc(sizeof(*block), GFP_KERNEL);
+	if (!block)
+		return NULL;
+	INIT_LIST_HEAD(&block->binding_list);
+	block->mlxsw_sp = mlxsw_sp;
+	return block;
+}
+
+void mlxsw_sp_acl_block_destroy(struct mlxsw_sp_acl_block *block)
+{
+	WARN_ON(!list_empty(&block->binding_list));
+	kfree(block);
+}
+
+static struct mlxsw_sp_acl_block_binding *
+mlxsw_sp_acl_block_lookup(struct mlxsw_sp_acl_block *block,
+			  struct mlxsw_sp_port *mlxsw_sp_port, bool ingress)
+{
+	struct mlxsw_sp_acl_block_binding *binding;
+
+	list_for_each_entry(binding, &block->binding_list, list)
+		if (binding->mlxsw_sp_port == mlxsw_sp_port &&
+		    binding->ingress == ingress)
+			return binding;
+	return NULL;
+}
+
+int mlxsw_sp_acl_block_bind(struct mlxsw_sp *mlxsw_sp,
+			    struct mlxsw_sp_acl_block *block,
+			    struct mlxsw_sp_port *mlxsw_sp_port,
+			    bool ingress)
+{
+	struct mlxsw_sp_acl_block_binding *binding;
+	int err;
+
+	if (WARN_ON(mlxsw_sp_acl_block_lookup(block, mlxsw_sp_port, ingress)))
+		return -EEXIST;
+
+	binding = kzalloc(sizeof(*binding), GFP_KERNEL);
+	if (!binding)
+		return -ENOMEM;
+	binding->mlxsw_sp_port = mlxsw_sp_port;
+	binding->ingress = ingress;
+
+	if (mlxsw_sp_acl_ruleset_block_bound(block)) {
+		err = mlxsw_sp_acl_ruleset_bind(mlxsw_sp, block, binding);
+		if (err)
+			goto err_ruleset_bind;
+	}
+
+	list_add(&binding->list, &block->binding_list);
+	return 0;
+
+err_ruleset_bind:
+	kfree(binding);
+	return err;
+}
+
+int mlxsw_sp_acl_block_unbind(struct mlxsw_sp *mlxsw_sp,
+			      struct mlxsw_sp_acl_block *block,
+			      struct mlxsw_sp_port *mlxsw_sp_port,
+			      bool ingress)
+{
+	struct mlxsw_sp_acl_block_binding *binding;
+
+	binding = mlxsw_sp_acl_block_lookup(block, mlxsw_sp_port, ingress);
+	if (!binding)
+		return -ENOENT;
+
+	list_del(&binding->list);
+
+	if (mlxsw_sp_acl_ruleset_block_bound(block))
+		mlxsw_sp_acl_ruleset_unbind(mlxsw_sp, block, binding);
+
+	kfree(binding);
+	return 0;
+}
+
 static struct mlxsw_sp_acl_ruleset *
 mlxsw_sp_acl_ruleset_create(struct mlxsw_sp *mlxsw_sp,
+			    struct mlxsw_sp_acl_block *block, u32 chain_index,
 			    const struct mlxsw_sp_acl_profile_ops *ops)
 {
 	struct mlxsw_sp_acl *acl = mlxsw_sp->acl;
@@ -132,6 +324,8 @@ mlxsw_sp_acl_ruleset_create(struct mlxsw_sp *mlxsw_sp,
 	if (!ruleset)
 		return ERR_PTR(-ENOMEM);
 	ruleset->ref_count = 1;
+	ruleset->ht_key.block = block;
+	ruleset->ht_key.chain_index = chain_index;
 	ruleset->ht_key.ops = ops;
 
 	err = rhashtable_init(&ruleset->rule_ht, &mlxsw_sp_acl_rule_ht_params);
@@ -142,68 +336,50 @@ mlxsw_sp_acl_ruleset_create(struct mlxsw_sp *mlxsw_sp,
 	if (err)
 		goto err_ops_ruleset_add;
 
-	return ruleset;
-
-err_ops_ruleset_add:
-	rhashtable_destroy(&ruleset->rule_ht);
-err_rhashtable_init:
-	kfree(ruleset);
-	return ERR_PTR(err);
-}
-
-static void mlxsw_sp_acl_ruleset_destroy(struct mlxsw_sp *mlxsw_sp,
-					 struct mlxsw_sp_acl_ruleset *ruleset)
-{
-	const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops;
-
-	ops->ruleset_del(mlxsw_sp, ruleset->priv);
-	rhashtable_destroy(&ruleset->rule_ht);
-	kfree(ruleset);
-}
-
-static int mlxsw_sp_acl_ruleset_bind(struct mlxsw_sp *mlxsw_sp,
-				     struct mlxsw_sp_acl_ruleset *ruleset,
-				     struct net_device *dev, bool ingress,
-				     u32 chain_index)
-{
-	const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops;
-	struct mlxsw_sp_acl *acl = mlxsw_sp->acl;
-	int err;
-
-	ruleset->ht_key.dev = dev;
-	ruleset->ht_key.ingress = ingress;
-	ruleset->ht_key.chain_index = chain_index;
 	err = rhashtable_insert_fast(&acl->ruleset_ht, &ruleset->ht_node,
 				     mlxsw_sp_acl_ruleset_ht_params);
 	if (err)
-		return err;
-	if (!ruleset->ht_key.chain_index) {
+		goto err_ht_insert;
+
+	if (!chain_index) {
 		/* We only need ruleset with chain index 0, the implicit one,
 		 * to be directly bound to device. The rest of the rulesets
 		 * are bound by "Goto action set".
 		 */
-		err = ops->ruleset_bind(mlxsw_sp, ruleset->priv, dev, ingress);
+		err = mlxsw_sp_acl_ruleset_block_bind(mlxsw_sp, ruleset, block);
 		if (err)
-			goto err_ops_ruleset_bind;
+			goto err_ruleset_bind;
 	}
-	return 0;
 
-err_ops_ruleset_bind:
+	return ruleset;
+
+err_ruleset_bind:
 	rhashtable_remove_fast(&acl->ruleset_ht, &ruleset->ht_node,
 			       mlxsw_sp_acl_ruleset_ht_params);
-	return err;
+err_ht_insert:
+	ops->ruleset_del(mlxsw_sp, ruleset->priv);
+err_ops_ruleset_add:
+	rhashtable_destroy(&ruleset->rule_ht);
+err_rhashtable_init:
+	kfree(ruleset);
+	return ERR_PTR(err);
 }
 
-static void mlxsw_sp_acl_ruleset_unbind(struct mlxsw_sp *mlxsw_sp,
-					struct mlxsw_sp_acl_ruleset *ruleset)
+static void mlxsw_sp_acl_ruleset_destroy(struct mlxsw_sp *mlxsw_sp,
+					 struct mlxsw_sp_acl_ruleset *ruleset)
 {
 	const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops;
+	struct mlxsw_sp_acl_block *block = ruleset->ht_key.block;
+	u32 chain_index = ruleset->ht_key.chain_index;
 	struct mlxsw_sp_acl *acl = mlxsw_sp->acl;
 
-	if (!ruleset->ht_key.chain_index)
-		ops->ruleset_unbind(mlxsw_sp, ruleset->priv);
+	if (!chain_index)
+		mlxsw_sp_acl_ruleset_block_unbind(mlxsw_sp, ruleset, block);
 	rhashtable_remove_fast(&acl->ruleset_ht, &ruleset->ht_node,
 			       mlxsw_sp_acl_ruleset_ht_params);
+	ops->ruleset_del(mlxsw_sp, ruleset->priv);
+	rhashtable_destroy(&ruleset->rule_ht);
+	kfree(ruleset);
 }
 
 static void mlxsw_sp_acl_ruleset_ref_inc(struct mlxsw_sp_acl_ruleset *ruleset)
@@ -216,20 +392,18 @@ static void mlxsw_sp_acl_ruleset_ref_dec(struct mlxsw_sp *mlxsw_sp,
 {
 	if (--ruleset->ref_count)
 		return;
-	mlxsw_sp_acl_ruleset_unbind(mlxsw_sp, ruleset);
 	mlxsw_sp_acl_ruleset_destroy(mlxsw_sp, ruleset);
 }
 
 static struct mlxsw_sp_acl_ruleset *
-__mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp_acl *acl, struct net_device *dev,
-			      bool ingress, u32 chain_index,
+__mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp_acl *acl,
+			      struct mlxsw_sp_acl_block *block, u32 chain_index,
 			      const struct mlxsw_sp_acl_profile_ops *ops)
 {
 	struct mlxsw_sp_acl_ruleset_ht_key ht_key;
 
 	memset(&ht_key, 0, sizeof(ht_key));
-	ht_key.dev = dev;
-	ht_key.ingress = ingress;
+	ht_key.block = block;
 	ht_key.chain_index = chain_index;
 	ht_key.ops = ops;
 	return rhashtable_lookup_fast(&acl->ruleset_ht, &ht_key,
@@ -237,8 +411,8 @@ __mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp_acl *acl, struct net_device *dev,
 }
 
 struct mlxsw_sp_acl_ruleset *
-mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp *mlxsw_sp, struct net_device *dev,
-			    bool ingress, u32 chain_index,
+mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp *mlxsw_sp,
+			    struct mlxsw_sp_acl_block *block, u32 chain_index,
 			    enum mlxsw_sp_acl_profile profile)
 {
 	const struct mlxsw_sp_acl_profile_ops *ops;
@@ -248,45 +422,31 @@ mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp *mlxsw_sp, struct net_device *dev,
 	ops = acl->ops->profile_ops(mlxsw_sp, profile);
 	if (!ops)
 		return ERR_PTR(-EINVAL);
-	ruleset = __mlxsw_sp_acl_ruleset_lookup(acl, dev, ingress,
-						chain_index, ops);
+	ruleset = __mlxsw_sp_acl_ruleset_lookup(acl, block, chain_index, ops);
 	if (!ruleset)
 		return ERR_PTR(-ENOENT);
 	return ruleset;
 }
 
 struct mlxsw_sp_acl_ruleset *
-mlxsw_sp_acl_ruleset_get(struct mlxsw_sp *mlxsw_sp, struct net_device *dev,
-			 bool ingress, u32 chain_index,
+mlxsw_sp_acl_ruleset_get(struct mlxsw_sp *mlxsw_sp,
+			 struct mlxsw_sp_acl_block *block, u32 chain_index,
 			 enum mlxsw_sp_acl_profile profile)
 {
 	const struct mlxsw_sp_acl_profile_ops *ops;
 	struct mlxsw_sp_acl *acl = mlxsw_sp->acl;
 	struct mlxsw_sp_acl_ruleset *ruleset;
-	int err;
 
 	ops = acl->ops->profile_ops(mlxsw_sp, profile);
 	if (!ops)
 		return ERR_PTR(-EINVAL);
 
-	ruleset = __mlxsw_sp_acl_ruleset_lookup(acl, dev, ingress,
-						chain_index, ops);
+	ruleset = __mlxsw_sp_acl_ruleset_lookup(acl, block, chain_index, ops);
 	if (ruleset) {
 		mlxsw_sp_acl_ruleset_ref_inc(ruleset);
 		return ruleset;
 	}
-	ruleset = mlxsw_sp_acl_ruleset_create(mlxsw_sp, ops);
-	if (IS_ERR(ruleset))
-		return ruleset;
-	err = mlxsw_sp_acl_ruleset_bind(mlxsw_sp, ruleset, dev,
-					ingress, chain_index);
-	if (err)
-		goto err_ruleset_bind;
-	return ruleset;
-
-err_ruleset_bind:
-	mlxsw_sp_acl_ruleset_destroy(mlxsw_sp, ruleset);
-	return ERR_PTR(err);
+	return mlxsw_sp_acl_ruleset_create(mlxsw_sp, block, chain_index, ops);
 }
 
 void mlxsw_sp_acl_ruleset_put(struct mlxsw_sp *mlxsw_sp,
@@ -535,6 +695,7 @@ int mlxsw_sp_acl_rule_add(struct mlxsw_sp *mlxsw_sp,
 		goto err_rhashtable_insert;
 
 	list_add_tail(&rule->list, &mlxsw_sp->acl->rules);
+	ruleset->ht_key.block->rule_count++;
 	return 0;
 
 err_rhashtable_insert:
@@ -548,6 +709,7 @@ void mlxsw_sp_acl_rule_del(struct mlxsw_sp *mlxsw_sp,
 	struct mlxsw_sp_acl_ruleset *ruleset = rule->ruleset;
 	const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops;
 
+	ruleset->ht_key.block->rule_count--;
 	list_del(&rule->list);
 	rhashtable_remove_fast(&ruleset->rule_ht, &rule->ht_node,
 			       mlxsw_sp_acl_rule_ht_params);
+19 −25
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
@@ -154,10 +154,6 @@ struct mlxsw_sp_acl_tcam_group {
 	struct list_head region_list;
 	unsigned int region_count;
 	struct rhashtable chunk_ht;
-	struct {
-		u16 local_port;
-		bool ingress;
-	} bound;
 	struct mlxsw_sp_acl_tcam_group_ops *ops;
 	const struct mlxsw_sp_acl_tcam_pattern *patterns;
 	unsigned int patterns_count;
@@ -262,35 +258,29 @@ static void mlxsw_sp_acl_tcam_group_del(struct mlxsw_sp *mlxsw_sp,
 static int
 mlxsw_sp_acl_tcam_group_bind(struct mlxsw_sp *mlxsw_sp,
 			     struct mlxsw_sp_acl_tcam_group *group,
-			     struct net_device *dev, bool ingress)
+			     struct mlxsw_sp_port *mlxsw_sp_port,
+			     bool ingress)
 {
-	struct mlxsw_sp_port *mlxsw_sp_port;
 	char ppbt_pl[MLXSW_REG_PPBT_LEN];
 
-	if (!mlxsw_sp_port_dev_check(dev))
-		return -EINVAL;
-
-	mlxsw_sp_port = netdev_priv(dev);
-	group->bound.local_port = mlxsw_sp_port->local_port;
-	group->bound.ingress = ingress;
-	mlxsw_reg_ppbt_pack(ppbt_pl,
-			    group->bound.ingress ? MLXSW_REG_PXBT_E_IACL :
+	mlxsw_reg_ppbt_pack(ppbt_pl, ingress ? MLXSW_REG_PXBT_E_IACL :
 					       MLXSW_REG_PXBT_E_EACL,
-			    MLXSW_REG_PXBT_OP_BIND, group->bound.local_port,
+			    MLXSW_REG_PXBT_OP_BIND, mlxsw_sp_port->local_port,
 			    group->id);
 	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ppbt), ppbt_pl);
 }
 
 static void
 mlxsw_sp_acl_tcam_group_unbind(struct mlxsw_sp *mlxsw_sp,
-			       struct mlxsw_sp_acl_tcam_group *group)
+			       struct mlxsw_sp_acl_tcam_group *group,
+			       struct mlxsw_sp_port *mlxsw_sp_port,
+			       bool ingress)
 {
 	char ppbt_pl[MLXSW_REG_PPBT_LEN];
 
-	mlxsw_reg_ppbt_pack(ppbt_pl,
-			    group->bound.ingress ? MLXSW_REG_PXBT_E_IACL :
+	mlxsw_reg_ppbt_pack(ppbt_pl, ingress ? MLXSW_REG_PXBT_E_IACL :
 					       MLXSW_REG_PXBT_E_EACL,
-			    MLXSW_REG_PXBT_OP_UNBIND, group->bound.local_port,
+			    MLXSW_REG_PXBT_OP_UNBIND, mlxsw_sp_port->local_port,
 			    group->id);
 	mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ppbt), ppbt_pl);
 }
@@ -1056,21 +1046,25 @@ mlxsw_sp_acl_tcam_flower_ruleset_del(struct mlxsw_sp *mlxsw_sp,
 static int
 mlxsw_sp_acl_tcam_flower_ruleset_bind(struct mlxsw_sp *mlxsw_sp,
 				      void *ruleset_priv,
-				      struct net_device *dev, bool ingress)
+				      struct mlxsw_sp_port *mlxsw_sp_port,
+				      bool ingress)
 {
 	struct mlxsw_sp_acl_tcam_flower_ruleset *ruleset = ruleset_priv;
 
 	return mlxsw_sp_acl_tcam_group_bind(mlxsw_sp, &ruleset->group,
-					    dev, ingress);
+					    mlxsw_sp_port, ingress);
 }
 
 static void
 mlxsw_sp_acl_tcam_flower_ruleset_unbind(struct mlxsw_sp *mlxsw_sp,
-					void *ruleset_priv)
+					void *ruleset_priv,
+					struct mlxsw_sp_port *mlxsw_sp_port,
+					bool ingress)
 {
 	struct mlxsw_sp_acl_tcam_flower_ruleset *ruleset = ruleset_priv;
 
-	mlxsw_sp_acl_tcam_group_unbind(mlxsw_sp, &ruleset->group);
+	mlxsw_sp_acl_tcam_group_unbind(mlxsw_sp, &ruleset->group,
+				       mlxsw_sp_port, ingress);
 }
 
 static u16