
Commit 40e993aa authored by Rafael J. Wysocki

Merge OPP material for v4.11 to satisfy dependencies.

parents b1e9a649 0764c604
+16 −36
@@ -79,22 +79,6 @@ dependent subsystems such as cpufreq are left to the discretion of the SoC
specific framework which uses the OPP library. Similar care needs to be taken
to refresh the cpufreq table in cases of these operations.

WARNING on OPP List locking mechanism:
-------------------------------------------------
OPP library uses RCU for exclusivity. RCU allows the query functions to operate
in multiple contexts, and this synchronization mechanism is optimal for the
read-intensive operations on data structures that the OPP library caters to.

To ensure that the data retrieved are sane, users such as the SoC framework
should ensure that sections of code operating on OPP queries are locked
using RCU read locks. The opp_find_freq_{exact,ceil,floor} and
opp_get_{voltage, freq, opp_count} functions fall into this category.

opp_{add,enable,disable} are updaters which use a mutex and implement their own
RCU locking mechanisms. These functions should *NOT* be called under RCU locks
or in other contexts that prevent blocking functions in RCU or mutex operations
from working.

2. Initial OPP List Registration
================================
The SoC implementation calls dev_pm_opp_add function iteratively to add OPPs per
@@ -137,15 +121,18 @@ functions return the matching pointer representing the opp if a match is
found, else returns error. These errors are expected to be handled by standard
error checks such as IS_ERR() and appropriate actions taken by the caller.

Callers of these functions shall call dev_pm_opp_put() after they have used the
OPP. Otherwise the memory for the OPP will never get freed, resulting in a
memory leak.
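The lifetime rule above can be modelled outside the kernel with a tiny reference-counted object. The following is an illustrative userspace sketch, not the kernel API (all `fake_*` names are invented): a successful find returns the object with an extra reference held, and omitting the matching put means the object can never reach refcount zero and be freed.

```c
#include <assert.h>
#include <stdlib.h>

struct fake_opp {
	unsigned long freq;
	int refcount;		/* stands in for the kernel's struct kref */
	int *freed;		/* lets a caller observe the free */
};

static struct fake_opp *fake_opp_get(struct fake_opp *opp)
{
	opp->refcount++;
	return opp;
}

static void fake_opp_put(struct fake_opp *opp)
{
	if (--opp->refcount == 0) {
		if (opp->freed)
			*opp->freed = 1;
		free(opp);
	}
}

/* A find helper hands back the object with an extra reference taken,
 * mirroring the contract of the dev_pm_opp_find_freq_* functions. */
static struct fake_opp *fake_find_freq(struct fake_opp *table,
				       unsigned long freq)
{
	if (table->freq == freq)
		return fake_opp_get(table);
	return NULL;
}
```

If the caller's `fake_opp_put()` were skipped after a find, the refcount would stay above zero forever, which is exactly the leak the new documentation warns about.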

dev_pm_opp_find_freq_exact - Search for an OPP based on an *exact* frequency and
	availability. This function is especially useful to enable an OPP which
	is not available by default.
	Example: When the SoC framework detects a situation where a
	higher frequency could be made available, it can use this function to
	find the OPP prior to calling dev_pm_opp_enable to actually make it available.
	 rcu_read_lock();
	 opp = dev_pm_opp_find_freq_exact(dev, 1000000000, false);
	 rcu_read_unlock();
	 dev_pm_opp_put(opp);
	 /* dont operate on the pointer.. just do a sanity check.. */
	 if (IS_ERR(opp)) {
		pr_err("frequency not disabled!\n");
@@ -163,9 +150,8 @@ dev_pm_opp_find_freq_floor - Search for an available OPP which is *at most* the
	frequency.
	Example: To find the highest opp for a device:
	 freq = ULONG_MAX;
	 rcu_read_lock();
	 dev_pm_opp_find_freq_floor(dev, &freq);
	 rcu_read_unlock();
	 opp = dev_pm_opp_find_freq_floor(dev, &freq);
	 dev_pm_opp_put(opp);

dev_pm_opp_find_freq_ceil - Search for an available OPP which is *at least* the
	provided frequency. This function is useful while searching for a
@@ -173,17 +159,15 @@ dev_pm_opp_find_freq_ceil - Search for an available OPP which is *at least* the
	frequency.
	Example 1: To find the lowest opp for a device:
	 freq = 0;
	 rcu_read_lock();
	 dev_pm_opp_find_freq_ceil(dev, &freq);
	 rcu_read_unlock();
	 opp = dev_pm_opp_find_freq_ceil(dev, &freq);
	 dev_pm_opp_put(opp);
	Example 2: A simplified implementation of a SoC cpufreq_driver->target:
	 soc_cpufreq_target(..)
	 {
		/* Do stuff like policy checks etc. */
		/* Find the best frequency match for the req */
		rcu_read_lock();
		opp = dev_pm_opp_find_freq_ceil(dev, &freq);
		rcu_read_unlock();
		dev_pm_opp_put(opp);
		if (!IS_ERR(opp))
			soc_switch_to_freq_voltage(freq);
		else
@@ -208,9 +192,8 @@ dev_pm_opp_enable - Make a OPP available for operation.
	implementation might choose to do something as follows:
	 if (cur_temp < temp_low_thresh) {
		/* Enable 1GHz if it was disabled */
		rcu_read_lock();
		opp = dev_pm_opp_find_freq_exact(dev, 1000000000, false);
		rcu_read_unlock();
		dev_pm_opp_put(opp);
		/* just error check */
		if (!IS_ERR(opp))
			ret = dev_pm_opp_enable(dev, 1000000000);
@@ -224,9 +207,8 @@ dev_pm_opp_disable - Make an OPP to be not available for operation
	choose to do something as follows:
	 if (cur_temp > temp_high_thresh) {
		/* Disable 1GHz if it was enabled */
		rcu_read_lock();
		opp = dev_pm_opp_find_freq_exact(dev, 1000000000, true);
		rcu_read_unlock();
		dev_pm_opp_put(opp);
		/* just error check */
		if (!IS_ERR(opp))
			ret = dev_pm_opp_disable(dev, 1000000000);
@@ -249,10 +231,9 @@ dev_pm_opp_get_voltage - Retrieve the voltage represented by the opp pointer.
	 soc_switch_to_freq_voltage(freq)
	 {
		/* do things */
		rcu_read_lock();
		opp = dev_pm_opp_find_freq_ceil(dev, &freq);
		v = dev_pm_opp_get_voltage(opp);
		rcu_read_unlock();
		dev_pm_opp_put(opp);
		if (v)
			regulator_set_voltage(.., v);
		/* do other things */
@@ -266,12 +247,12 @@ dev_pm_opp_get_freq - Retrieve the freq represented by the opp pointer.
	 {
		/* do things.. */
		 max_freq = ULONG_MAX;
		 rcu_read_lock();
		 max_opp = dev_pm_opp_find_freq_floor(dev,&max_freq);
		 requested_opp = dev_pm_opp_find_freq_ceil(dev,&freq);
		 if (!IS_ERR(max_opp) && !IS_ERR(requested_opp))
			r = soc_test_validity(max_opp, requested_opp);
		 rcu_read_unlock();
		 dev_pm_opp_put(max_opp);
		 dev_pm_opp_put(requested_opp);
		/* do other things */
	 }
	 soc_test_validity(..)
@@ -289,7 +270,6 @@ dev_pm_opp_get_opp_count - Retrieve the number of available opps for a device
	 soc_notify_coproc_available_frequencies()
	 {
		/* Do things */
		rcu_read_lock();
		num_available = dev_pm_opp_get_opp_count(dev);
		speeds = kzalloc(sizeof(u32) * num_available, GFP_KERNEL);
		/* populate the table in increasing order */
@@ -298,8 +278,8 @@ dev_pm_opp_get_opp_count - Retrieve the number of available opps for a device
			speeds[i] = freq;
			freq++;
			i++;
			dev_pm_opp_put(opp);
		}
		rcu_read_unlock();

		soc_notify_coproc(AVAILABLE_FREQs, speeds, num_available);
		/* Do other things */
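The coproc example above enumerates every OPP by repeatedly calling the ceil lookup and bumping the frequency past each match, dropping the per-OPP reference each iteration. A userspace sketch of that walk (illustrative names only, with the OPP table reduced to a sorted frequency array):

```c
#include <assert.h>
#include <stddef.h>

/* Sorted table of available frequencies, in Hz. */
static const unsigned long fake_freqs[] = {
	300000000, 600000000, 1000000000
};
#define NR_FREQS (sizeof(fake_freqs) / sizeof(fake_freqs[0]))

/* Ceil lookup: returns the lowest entry >= *freq and writes the match
 * back through the pointer, mirroring dev_pm_opp_find_freq_ceil(). */
static const unsigned long *fake_find_freq_ceil(unsigned long *freq)
{
	size_t i;

	for (i = 0; i < NR_FREQS; i++) {
		if (fake_freqs[i] >= *freq) {
			*freq = fake_freqs[i];
			return &fake_freqs[i];
		}
	}
	return NULL;		/* no OPP at or above *freq */
}

/* Enumerate every OPP the way the coproc example does: start at 0 and
 * step the frequency past each OPP just found. */
static int fake_count_opps(void)
{
	unsigned long freq = 0;
	int n = 0;

	while (fake_find_freq_ceil(&freq)) {
		n++;
		freq++;		/* move past the current match */
	}
	return n;
}
```

In the real API each iteration would also call dev_pm_opp_put() on the returned OPP, which is the change the hunk above makes inside the loop.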
+2 −3
@@ -130,17 +130,16 @@ static int __init omap2_set_init_voltage(char *vdd_name, char *clk_name,
	freq = clk_get_rate(clk);
	clk_put(clk);

	rcu_read_lock();
	opp = dev_pm_opp_find_freq_ceil(dev, &freq);
	if (IS_ERR(opp)) {
		rcu_read_unlock();
		pr_err("%s: unable to find boot up OPP for vdd_%s\n",
			__func__, vdd_name);
		goto exit;
	}

	bootup_volt = dev_pm_opp_get_voltage(opp);
	rcu_read_unlock();
	dev_pm_opp_put(opp);

	if (!bootup_volt) {
		pr_err("%s: unable to find voltage corresponding to the bootup OPP for vdd_%s\n",
		       __func__, vdd_name);
+384 −627

File changed; preview size limit exceeded, changes collapsed.

+15 −51
@@ -42,11 +42,6 @@
 *
 * WARNING: It is  important for the callers to ensure refreshing their copy of
 * the table if any of the mentioned functions have been invoked in the interim.
 *
 * Locking: The internal opp_table and opp structures are RCU protected.
 * Since we just use the regular accessor functions to access the internal data
 * structures, we use RCU read lock inside this function. As a result, users of
 * this function DONOT need to use explicit locks for invoking.
 */
int dev_pm_opp_init_cpufreq_table(struct device *dev,
				  struct cpufreq_frequency_table **table)
@@ -56,19 +51,13 @@ int dev_pm_opp_init_cpufreq_table(struct device *dev,
	int i, max_opps, ret = 0;
	unsigned long rate;

	rcu_read_lock();

	max_opps = dev_pm_opp_get_opp_count(dev);
	if (max_opps <= 0) {
		ret = max_opps ? max_opps : -ENODATA;
		goto out;
	}
	if (max_opps <= 0)
		return max_opps ? max_opps : -ENODATA;

	freq_table = kcalloc((max_opps + 1), sizeof(*freq_table), GFP_ATOMIC);
	if (!freq_table) {
		ret = -ENOMEM;
		goto out;
	}
	if (!freq_table)
		return -ENOMEM;

	for (i = 0, rate = 0; i < max_opps; i++, rate++) {
		/* find next rate */
@@ -83,6 +72,8 @@ int dev_pm_opp_init_cpufreq_table(struct device *dev,
		/* Is Boost/turbo opp ? */
		if (dev_pm_opp_is_turbo(opp))
			freq_table[i].flags = CPUFREQ_BOOST_FREQ;

		dev_pm_opp_put(opp);
	}

	freq_table[i].driver_data = i;
@@ -91,7 +82,6 @@ int dev_pm_opp_init_cpufreq_table(struct device *dev,
	*table = &freq_table[0];

out:
	rcu_read_unlock();
	if (ret)
		kfree(freq_table);

@@ -147,12 +137,6 @@ void _dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask, bool of)
 * This removes the OPP tables for CPUs present in the @cpumask.
 * This should be used to remove all the OPPs entries associated with
 * the cpus in @cpumask.
 *
 * Locking: The internal opp_table and opp structures are RCU protected.
 * Hence this function internally uses RCU updater strategy with mutex locks
 * to keep the integrity of the internal data structures. Callers should ensure
 * that this function is *NOT* called under RCU protection or in contexts where
 * mutex cannot be locked.
 */
void dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask)
{
@@ -169,12 +153,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_cpumask_remove_table);
 * @cpumask.
 *
 * Returns -ENODEV if OPP table isn't already present.
 *
 * Locking: The internal opp_table and opp structures are RCU protected.
 * Hence this function internally uses RCU updater strategy with mutex locks
 * to keep the integrity of the internal data structures. Callers should ensure
 * that this function is *NOT* called under RCU protection or in contexts where
 * mutex cannot be locked.
 */
int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev,
				const struct cpumask *cpumask)
@@ -184,13 +162,9 @@ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev,
	struct device *dev;
	int cpu, ret = 0;

	mutex_lock(&opp_table_lock);

	opp_table = _find_opp_table(cpu_dev);
	if (IS_ERR(opp_table)) {
		ret = PTR_ERR(opp_table);
		goto unlock;
	}
	if (IS_ERR(opp_table))
		return PTR_ERR(opp_table);

	for_each_cpu(cpu, cpumask) {
		if (cpu == cpu_dev->id)
@@ -213,8 +187,8 @@ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev,
		/* Mark opp-table as multiple CPUs are sharing it now */
		opp_table->shared_opp = OPP_TABLE_ACCESS_SHARED;
	}
unlock:
	mutex_unlock(&opp_table_lock);

	dev_pm_opp_put_opp_table(opp_table);

	return ret;
}
@@ -229,12 +203,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_set_sharing_cpus);
 *
 * Returns -ENODEV if OPP table isn't already present and -EINVAL if the OPP
 * table's status is access-unknown.
 *
 * Locking: The internal opp_table and opp structures are RCU protected.
 * Hence this function internally uses RCU updater strategy with mutex locks
 * to keep the integrity of the internal data structures. Callers should ensure
 * that this function is *NOT* called under RCU protection or in contexts where
 * mutex cannot be locked.
 */
int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
{
@@ -242,17 +210,13 @@ int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
	struct opp_table *opp_table;
	int ret = 0;

	mutex_lock(&opp_table_lock);

	opp_table = _find_opp_table(cpu_dev);
	if (IS_ERR(opp_table)) {
		ret = PTR_ERR(opp_table);
		goto unlock;
	}
	if (IS_ERR(opp_table))
		return PTR_ERR(opp_table);

	if (opp_table->shared_opp == OPP_TABLE_ACCESS_UNKNOWN) {
		ret = -EINVAL;
		goto unlock;
		goto put_opp_table;
	}

	cpumask_clear(cpumask);
@@ -264,8 +228,8 @@ int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
		cpumask_set_cpu(cpu_dev->id, cpumask);
	}

unlock:
	mutex_unlock(&opp_table_lock);
put_opp_table:
	dev_pm_opp_put_opp_table(opp_table);

	return ret;
}
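The hunks in this file converge on one pattern: find the table in a short critical section, take a reference while the lock is still held, then drop the lock and do the real work, finishing with a put. A single-threaded userspace sketch of that shape (all names are illustrative stand-ins, and the "lock" is an assertion model of the opp_table mutex, not a real mutex):

```c
#include <assert.h>

static int table_lock;			/* 1 while "held" */

static void fake_lock(void)   { assert(!table_lock); table_lock = 1; }
static void fake_unlock(void) { assert(table_lock);  table_lock = 0; }

struct fake_table {
	int refcount;
	int shared;
};

static struct fake_table global_table = { 1, 0 };

/* Lookup takes a reference under the lock, modelled on the new
 * _find_opp_table() behaviour. */
static struct fake_table *fake_find_table(void)
{
	fake_lock();
	global_table.refcount++;
	fake_unlock();
	return &global_table;
}

static void fake_put_table(struct fake_table *t)
{
	fake_lock();
	t->refcount--;			/* last put would free a dynamic table */
	fake_unlock();
}

/* The caller now operates on the table with the global lock dropped,
 * unlike the old code which held opp_table_lock across the whole body. */
static int fake_set_sharing(void)
{
	struct fake_table *t = fake_find_table();

	assert(!table_lock);		/* lock is NOT held here */
	t->shared = 1;
	fake_put_table(t);
	return 0;
}
```

Because the reference pins the table, the lock only needs to cover the lookup and the refcount updates, which is why the `goto unlock` error paths above collapse into plain returns.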
+60 −94
@@ -24,9 +24,11 @@

static struct opp_table *_managed_opp(const struct device_node *np)
{
	struct opp_table *opp_table;
	struct opp_table *opp_table, *managed_table = NULL;

	mutex_lock(&opp_table_lock);

	list_for_each_entry_rcu(opp_table, &opp_tables, node) {
	list_for_each_entry(opp_table, &opp_tables, node) {
		if (opp_table->np == np) {
			/*
			 * Multiple devices can point to the same OPP table and
@@ -35,14 +37,18 @@ static struct opp_table *_managed_opp(const struct device_node *np)
			 * But the OPPs will be considered as shared only if the
			 * OPP table contains a "opp-shared" property.
			 */
			if (opp_table->shared_opp == OPP_TABLE_ACCESS_SHARED)
				return opp_table;
			if (opp_table->shared_opp == OPP_TABLE_ACCESS_SHARED) {
				_get_opp_table_kref(opp_table);
				managed_table = opp_table;
			}

			return NULL;
			break;
		}
	}

	return NULL;
	mutex_unlock(&opp_table_lock);

	return managed_table;
}

void _of_init_opp_table(struct opp_table *opp_table, struct device *dev)
@@ -229,34 +235,28 @@ static int opp_parse_supplies(struct dev_pm_opp *opp, struct device *dev,
 * @dev:	device pointer used to lookup OPP table.
 *
 * Free OPPs created using static entries present in DT.
 *
 * Locking: The internal opp_table and opp structures are RCU protected.
 * Hence this function indirectly uses RCU updater strategy with mutex locks
 * to keep the integrity of the internal data structures. Callers should ensure
 * that this function is *NOT* called under RCU protection or in contexts where
 * mutex cannot be locked.
 */
void dev_pm_opp_of_remove_table(struct device *dev)
{
	_dev_pm_opp_remove_table(dev, false);
	_dev_pm_opp_find_and_remove_table(dev, false);
}
EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table);

/* Returns opp descriptor node for a device, caller must do of_node_put() */
static struct device_node *_of_get_opp_desc_node(struct device *dev)
struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev)
{
	/*
	 * TODO: Support for multiple OPP tables.
	 *
	 * There should be only ONE phandle present in "operating-points-v2"
	 * property.
	 */

	return of_parse_phandle(dev->of_node, "operating-points-v2", 0);
}
EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_opp_desc_node);

/**
 * _opp_add_static_v2() - Allocate static OPPs (As per 'v2' DT bindings)
 * @opp_table:	OPP table
 * @dev:	device for which we do this operation
 * @np:		device node
 *
@@ -264,12 +264,6 @@ static struct device_node *_of_get_opp_desc_node(struct device *dev)
 * opp can be controlled using dev_pm_opp_enable/disable functions and may be
 * removed by dev_pm_opp_remove.
 *
 * Locking: The internal opp_table and opp structures are RCU protected.
 * Hence this function internally uses RCU updater strategy with mutex locks
 * to keep the integrity of the internal data structures. Callers should ensure
 * that this function is *NOT* called under RCU protection or in contexts where
 * mutex cannot be locked.
 *
 * Return:
 * 0		On success OR
 *		Duplicate OPPs (both freq and volt are same) and opp->available
@@ -278,22 +272,17 @@ static struct device_node *_of_get_opp_desc_node(struct device *dev)
 * -ENOMEM	Memory allocation failure
 * -EINVAL	Failed parsing the OPP node
 */
static int _opp_add_static_v2(struct device *dev, struct device_node *np)
static int _opp_add_static_v2(struct opp_table *opp_table, struct device *dev,
			      struct device_node *np)
{
	struct opp_table *opp_table;
	struct dev_pm_opp *new_opp;
	u64 rate;
	u32 val;
	int ret;

	/* Hold our table modification lock here */
	mutex_lock(&opp_table_lock);

	new_opp = _allocate_opp(dev, &opp_table);
	if (!new_opp) {
		ret = -ENOMEM;
		goto unlock;
	}
	new_opp = _opp_allocate(opp_table);
	if (!new_opp)
		return -ENOMEM;

	ret = of_property_read_u64(np, "opp-hz", &rate);
	if (ret < 0) {
@@ -327,8 +316,12 @@ static int _opp_add_static_v2(struct device *dev, struct device_node *np)
		goto free_opp;

	ret = _opp_add(dev, new_opp, opp_table);
	if (ret)
	if (ret) {
		/* Don't return error for duplicate OPPs */
		if (ret == -EBUSY)
			ret = 0;
		goto free_opp;
	}

	/* OPP to select on device suspend */
	if (of_property_read_bool(np, "opp-suspend")) {
@@ -345,8 +338,6 @@ static int _opp_add_static_v2(struct device *dev, struct device_node *np)
	if (new_opp->clock_latency_ns > opp_table->clock_latency_ns_max)
		opp_table->clock_latency_ns_max = new_opp->clock_latency_ns;

	mutex_unlock(&opp_table_lock);

	pr_debug("%s: turbo:%d rate:%lu uv:%lu uvmin:%lu uvmax:%lu latency:%lu\n",
		 __func__, new_opp->turbo, new_opp->rate,
		 new_opp->supplies[0].u_volt, new_opp->supplies[0].u_volt_min,
@@ -356,13 +347,12 @@ static int _opp_add_static_v2(struct device *dev, struct device_node *np)
	 * Notify the changes in the availability of the operable
	 * frequency/voltage list.
	 */
	srcu_notifier_call_chain(&opp_table->srcu_head, OPP_EVENT_ADD, new_opp);
	blocking_notifier_call_chain(&opp_table->head, OPP_EVENT_ADD, new_opp);
	return 0;

free_opp:
	_opp_remove(opp_table, new_opp, false);
unlock:
	mutex_unlock(&opp_table_lock);
	_opp_free(new_opp);

	return ret;
}
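One behavioural change in the hunk above is the duplicate-OPP policy: when _opp_add() reports that an identical OPP already exists (-EBUSY in the kernel), the static parser now swallows the error instead of failing the whole table. A minimal userspace model of that policy (illustrative names, with the table reduced to an array of frequencies):

```c
#include <assert.h>
#include <errno.h>

#define MAX_OPPS 8

static unsigned long opps[MAX_OPPS];
static int nr_opps;

static int fake_opp_add(unsigned long freq)
{
	int i;

	for (i = 0; i < nr_opps; i++)
		if (opps[i] == freq)
			return -EBUSY;	/* freq already registered */

	if (nr_opps == MAX_OPPS)
		return -ENOMEM;

	opps[nr_opps++] = freq;
	return 0;
}

static int fake_opp_add_static(unsigned long freq)
{
	int ret = fake_opp_add(freq);

	/* Don't return an error for duplicate OPPs */
	if (ret == -EBUSY)
		ret = 0;

	return ret;
}
```

Real errors (such as allocation failure) still propagate; only the "already there" case is treated as success, so re-parsing a shared table does not abort device registration.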

@@ -373,41 +363,35 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
	struct opp_table *opp_table;
	int ret = 0, count = 0;

	mutex_lock(&opp_table_lock);

	opp_table = _managed_opp(opp_np);
	if (opp_table) {
		/* OPPs are already managed */
		if (!_add_opp_dev(dev, opp_table))
			ret = -ENOMEM;
		mutex_unlock(&opp_table_lock);
		return ret;
		goto put_opp_table;
	}
	mutex_unlock(&opp_table_lock);

	opp_table = dev_pm_opp_get_opp_table(dev);
	if (!opp_table)
		return -ENOMEM;

	/* We have opp-table node now, iterate over it and add OPPs */
	for_each_available_child_of_node(opp_np, np) {
		count++;

		ret = _opp_add_static_v2(dev, np);
		ret = _opp_add_static_v2(opp_table, dev, np);
		if (ret) {
			dev_err(dev, "%s: Failed to add OPP, %d\n", __func__,
				ret);
			goto free_table;
			_dev_pm_opp_remove_table(opp_table, dev, false);
			goto put_opp_table;
		}
	}

	/* There should be one or more OPP defined */
	if (WARN_ON(!count))
		return -ENOENT;

	mutex_lock(&opp_table_lock);

	opp_table = _find_opp_table(dev);
	if (WARN_ON(IS_ERR(opp_table))) {
		ret = PTR_ERR(opp_table);
		mutex_unlock(&opp_table_lock);
		goto free_table;
	if (WARN_ON(!count)) {
		ret = -ENOENT;
		goto put_opp_table;
	}

	opp_table->np = opp_np;
@@ -416,12 +400,8 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
	else
		opp_table->shared_opp = OPP_TABLE_ACCESS_EXCLUSIVE;

	mutex_unlock(&opp_table_lock);

	return 0;

free_table:
	dev_pm_opp_of_remove_table(dev);
put_opp_table:
	dev_pm_opp_put_opp_table(opp_table);

	return ret;
}
@@ -429,9 +409,10 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
/* Initializes OPP tables based on old-deprecated bindings */
static int _of_add_opp_table_v1(struct device *dev)
{
	struct opp_table *opp_table;
	const struct property *prop;
	const __be32 *val;
	int nr;
	int nr, ret = 0;

	prop = of_find_property(dev->of_node, "operating-points", NULL);
	if (!prop)
@@ -449,18 +430,27 @@ static int _of_add_opp_table_v1(struct device *dev)
		return -EINVAL;
	}

	opp_table = dev_pm_opp_get_opp_table(dev);
	if (!opp_table)
		return -ENOMEM;

	val = prop->value;
	while (nr) {
		unsigned long freq = be32_to_cpup(val++) * 1000;
		unsigned long volt = be32_to_cpup(val++);

		if (_opp_add_v1(dev, freq, volt, false))
			dev_warn(dev, "%s: Failed to add OPP %ld\n",
				 __func__, freq);
		ret = _opp_add_v1(opp_table, dev, freq, volt, false);
		if (ret) {
			dev_err(dev, "%s: Failed to add OPP %ld (%d)\n",
				__func__, freq, ret);
			_dev_pm_opp_remove_table(opp_table, dev, false);
			break;
		}
		nr -= 2;
	}

	return 0;
	dev_pm_opp_put_opp_table(opp_table);
	return ret;
}
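The v1 "operating-points" binding parsed above is a flat big-endian array of (frequency in kHz, voltage in uV) pairs. A sketch of that parsing loop, with be32_to_cpup modelled as a plain big-endian read; names and the error convention are illustrative, not the kernel helpers:

```c
#include <assert.h>
#include <stdint.h>

static uint32_t be32_read(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Walks nr/2 (kHz, uV) pairs, converting kHz to Hz like the kernel
 * loop does. Returns the number of pairs parsed, or -1 if the cell
 * count is odd (the case the kernel rejects as an invalid table). */
static int parse_v1_opps(const uint8_t *prop, int nr,
			 unsigned long *freqs, unsigned long *volts)
{
	int count = 0;

	if (nr % 2)
		return -1;

	while (nr) {
		freqs[count] = (unsigned long)be32_read(prop) * 1000;
		prop += 4;
		volts[count] = be32_read(prop);
		prop += 4;
		count++;
		nr -= 2;
	}

	return count;
}
```

This is also why the hunk above steps `nr -= 2` per iteration: each OPP consumes two 32-bit cells of the property.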

/**
@@ -469,12 +459,6 @@ static int _of_add_opp_table_v1(struct device *dev)
 *
 * Register the initial OPP table with the OPP library for given device.
 *
 * Locking: The internal opp_table and opp structures are RCU protected.
 * Hence this function indirectly uses RCU updater strategy with mutex locks
 * to keep the integrity of the internal data structures. Callers should ensure
 * that this function is *NOT* called under RCU protection or in contexts where
 * mutex cannot be locked.
 *
 * Return:
 * 0		On success OR
 *		Duplicate OPPs (both freq and volt are same) and opp->available
@@ -495,7 +479,7 @@ int dev_pm_opp_of_add_table(struct device *dev)
	 * OPPs have two version of bindings now. The older one is deprecated,
	 * try for the new binding first.
	 */
	opp_np = _of_get_opp_desc_node(dev);
	opp_np = dev_pm_opp_of_get_opp_desc_node(dev);
	if (!opp_np) {
		/*
		 * Try old-deprecated bindings for backward compatibility with
@@ -519,12 +503,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_add_table);
 *
 * This removes the OPP tables for CPUs present in the @cpumask.
 * This should be used only to remove static entries created from DT.
 *
 * Locking: The internal opp_table and opp structures are RCU protected.
 * Hence this function internally uses RCU updater strategy with mutex locks
 * to keep the integrity of the internal data structures. Callers should ensure
 * that this function is *NOT* called under RCU protection or in contexts where
 * mutex cannot be locked.
 */
void dev_pm_opp_of_cpumask_remove_table(const struct cpumask *cpumask)
{
@@ -537,12 +515,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_remove_table);
 * @cpumask:	cpumask for which OPP table needs to be added.
 *
 * This adds the OPP tables for CPUs present in the @cpumask.
 *
 * Locking: The internal opp_table and opp structures are RCU protected.
 * Hence this function internally uses RCU updater strategy with mutex locks
 * to keep the integrity of the internal data structures. Callers should ensure
 * that this function is *NOT* called under RCU protection or in contexts where
 * mutex cannot be locked.
 */
int dev_pm_opp_of_cpumask_add_table(const struct cpumask *cpumask)
{
@@ -590,12 +562,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_add_table);
 * This updates the @cpumask with CPUs that are sharing OPPs with @cpu_dev.
 *
 * Returns -ENOENT if operating-points-v2 isn't present for @cpu_dev.
 *
 * Locking: The internal opp_table and opp structures are RCU protected.
 * Hence this function internally uses RCU updater strategy with mutex locks
 * to keep the integrity of the internal data structures. Callers should ensure
 * that this function is *NOT* called under RCU protection or in contexts where
 * mutex cannot be locked.
 */
int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
				   struct cpumask *cpumask)
@@ -605,7 +571,7 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
	int cpu, ret = 0;

	/* Get OPP descriptor node */
	np = _of_get_opp_desc_node(cpu_dev);
	np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
	if (!np) {
		dev_dbg(cpu_dev, "%s: Couldn't find opp node.\n", __func__);
		return -ENOENT;
@@ -630,7 +596,7 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
		}

		/* Get OPP descriptor node */
		tmp_np = _of_get_opp_desc_node(tcpu_dev);
		tmp_np = dev_pm_opp_of_get_opp_desc_node(tcpu_dev);
		if (!tmp_np) {
			dev_err(tcpu_dev, "%s: Couldn't find opp node.\n",
				__func__);