
Commit 1aacdcf9 authored by Veerabhadrarao Badiganti, committed by Ram Prakash Gupta

mmc: Clock Scaling changes on 4.19 kernel



Besides the patches listed below, this squash resolves merge conflicts
and updates scaling in the CQE path to support the upstream CQE/MQ
solution.
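
In short, the CQE issue path now brackets each request for the clock
scaler. Condensed from the block.c hunk further below (not additional
code):

	/* Apply any deferred frequency change, mark the host busy for
	 * devfreq, and clear the busy mark again if issuing fails.
	 */
	mmc_deferred_scaling(mq->card->host);
	mmc_cqe_clk_scaling_start_busy(mq, mq->card->host, true);

	err = mmc_blk_cqe_start_req(mq->card->host, &mqrq->brq.mrq);
	if (err)
		mmc_cqe_clk_scaling_stop_busy(mq->card->host, true, false);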

This is a squash of the following mmc clock scaling patches:
-----------------------------------------------------

CMDQ specific changes:
--------------------
9591b99 : fix invalid state handling during cm
339a4e1 : Add clock scaling for CMDQ mode
ToDo: 01deb37 : serialize the requests if we are scaled down~

Generic scaling changes:
-----------------------
From 4.14 kernel:
a95906d   : Remove suspend/resume clk-scaling logic from mmc_rese
d139f5e3  :Update target-frequency while resuming clock scalin

From 4.9 kernel:
f4ac989   : Use new flag for suspending clk scaling
6720af387a: Correct the checks while setting clock scaling frequencies
2f3f4ebb4 : Use PF_MEMALLOC flag for clock scaling context
e2f8af3d91: Handle error case in mmc_suspend
7f5e93a4db: Disable clock scaling during shutdown
450552a6c : Avoid returning error when clok scaling devfreq is removed
b263703672: Use mmc_reset instead of power_restore
4ef47bcd0 : Fix the issue with clock scaling in resume-scaling
48887f382 : initialize the devfreq table with default frequencies
f3218eb38 : fix the pointer cast of freq_table
edac91ad8b: fix issue with clock scaling in HS200 mode
c364cee0b8: modify scaling up/down sequence
a0723f267 : Add NULL check for host->card
d18963a   : fix deadlock between runtime-suspend and
ce8b61e2  : Set max frequency when disabling clock scaling
1a9cf8b   : fix issue with devfreq clock scalin
9379bf9   : resolve deadlock between devfreq update and susp
0fcc35f   : support DDR52 bus-speed during eMMC clock scaling
164db7b21 : mmc: core: fix downdifferential for clock scaling
ee975d3   : check if manual BKOPS is ongoing before scaling
999102c32 : add support for devfreq suspend/resume
8f00420   : clock-scaling: scale only for data re
3904067   : add runtime PM voting to devfreq context
5cd5d9f   : avoid returning error value for clk-scaling
c96b666   : disable clock scaling before sys
87605ad   : change locking from irq_save to bh
df07bec   : Fix clock scaling for HS400 with enhanced stro
54f9de1   : Fix in mmc clk-s
a52f84e   : fix MMC clock scaling to meet upstream HS4
7dc5f79   : devfreq: migrate to devfreq based clock
f125b55   : fix SD card runtime suspend sequence for cloc
6297e40   : fix disable clock scalin
a12e92b   : core: Fix clock frequency transitions during inval
4e3a8b4   : Exit clock scaling prior to removing
3449b4f   : add clock-scaling support to HS400 cards
86787cd   : core: Add support for notifying host driver while scaling
a3f56e1   : Bypass clock scaling while accessing RPMB partition
8ca7092   : run clock scaling only in valid card state
cb18d8585d: Add load based clock scaling support
a0230ca50 : Fix MMC clock scaling in case of tuning failure
7095b8e8c : Allow changing bus frequency for SD/eMMC cards in runtime

Sysfs changes for clock scaling feature (from 4.9):
-------------------------------------------------
8920fc713 : Update the logic of controlling clk scaling through sysfs
64753b808 : claim mmc host while enabling clock scaling from userspace
00d52acb3 : Add sysfs entries for dynamic control of clock scaling
16d0832   : Add a debugfs entry to set max clock rat

mmc: sd: Add support for Ultra High Speed card to get
max frequency

This change adds Ultra High Speed cards to the
mmc_sd_get_max_clock() API. Cards that support Ultra
High Speed can use SDR104 timing, which supports
frequencies up to 208 MHz.
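
For illustration only, the UHS case can be folded into the existing
max-clock lookup roughly as follows; this is a sketch modelled on the
mainline helper, and the exact branch added by this patch may differ:

/* Sketch: pick the frequency ceiling for the card's current timing. */
unsigned int mmc_sd_get_max_clock(struct mmc_card *card)
{
	unsigned int max_dtr = (unsigned int)-1;

	if (mmc_card_hs(card)) {
		if (max_dtr > card->sw_caps.hs_max_dtr)
			max_dtr = card->sw_caps.hs_max_dtr;
	} else if (mmc_card_uhs(card)) {
		/* UHS cards (e.g. SDR104) can run the bus up to 208 MHz. */
		if (max_dtr > card->sw_caps.uhs_max_dtr)
			max_dtr = card->sw_caps.uhs_max_dtr;
	} else if (max_dtr > card->csd.max_dtr) {
		max_dtr = card->csd.max_dtr;
	}

	return max_dtr;
}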

mmc: core: Add active request in mmc driver to be used in scaling

The number of active requests is needed to decide when to stop scaling.
Add an active_reqs variable to the host structure to keep the active
request count and use it in scaling.
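
A minimal sketch of the counting pattern (the increment on the issue
path lives in the collapsed portion of this change; the helper name
below is hypothetical):

/* host->active_reqs is initialised to 0 at allocation and decremented
 * on completion (see the block.c and host.c hunks). The issue path
 * increments it, e.g.:
 *	atomic_inc(&host->active_reqs);
 */
static inline bool mmc_host_may_stop_scaling(struct mmc_host *host)
{
	/* Hypothetical helper: with no requests in flight the scaler
	 * has nothing to measure and can be paused.
	 */
	return atomic_read(&host->active_reqs) == 0;
}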

mmc: core: Use freq table with devfreq

Register the min/max frequencies with devfreq and use
these to determine if we're trying to step up or down.
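
A hedged sketch of wiring a two-entry {min, max} table into devfreq.
devfreq_dev_profile.freq_table/max_state are the mainline devfreq
fields; clk_scaling.freq_table and card->clk_scaling_lowest are assumed
names here (clk_scaling_highest is used elsewhere in this change):

/* Sketch only: restrict devfreq to the two supported steps so a
 * returned target maps cleanly onto scale-up vs. scale-down.
 */
static void mmc_clk_scaling_set_freq_table(struct mmc_host *host)
{
	host->clk_scaling.freq_table[0] = host->card->clk_scaling_lowest;
	host->clk_scaling.freq_table[1] = host->card->clk_scaling_highest;

	host->clk_scaling.devfreq_profile.freq_table =
				host->clk_scaling.freq_table;
	host->clk_scaling.devfreq_profile.max_state = 2;
}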

mmc: sd: Add deferred scaling change bus speed

In deferred scaling there is no need to claim the host, since it is
already claimed via the get-card path.
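
To illustrate the locking difference (a sketch: only the
change_bus_speed/change_bus_speed_deferred bus_ops added by this commit
are real, the SD helpers shown are hypothetical and assume a shared
worker routine):

/* Normal path: claim the host around the frequency change. */
static int mmc_sd_change_bus_speed(struct mmc_host *host,
				   unsigned long *freq)
{
	int err;

	mmc_claim_host(host);
	err = __mmc_sd_change_bus_speed(host, freq);	/* hypothetical */
	mmc_release_host(host);

	return err;
}

/* Deferred path: the caller already claimed the host via the get-card
 * path, so it must not be claimed again here.
 */
static int mmc_sd_change_bus_speed_deferred(struct mmc_host *host,
					    unsigned long *freq)
{
	return __mmc_sd_change_bus_speed(host, freq);
}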

Change-Id: Iaf42fdbb738e0d8a05c28be5a61db1540dde0ca5
Signed-off-by: Sujit Reddy Thumma <sthumma@codeaurora.org>
Signed-off-by: Talel Shenhar <tatias@codeaurora.org>
Signed-off-by: Sahitya Tummala <stummala@codeaurora.org>
Signed-off-by: Asutosh Das <asutoshd@codeaurora.org>
Signed-off-by: Ritesh Harjani <riteshh@codeaurora.org>
Signed-off-by: Veerabhadrarao Badiganti <vbadigan@codeaurora.org>
Signed-off-by: Bao D. Nguyen <nguyenb@codeaurora.org>
Signed-off-by: Ram Prakash Gupta <rampraka@codeaurora.org>
parent 55151ee9
+19 −1
@@ -892,6 +892,7 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
 		}
 
 		card->ext_csd.part_config = part_config;
+		card->part_curr = part_type;
 
 		ret = mmc_blk_part_switch_post(card, main_md->part_curr);
 	}
@@ -1478,6 +1479,7 @@ static void mmc_blk_cqe_complete_rq(struct mmc_queue *mq, struct request *req)
 	unsigned long flags;
 	bool put_card;
 	int err;
+	bool is_dcmd = false;
 
 	mmc_cqe_post_req(host, mrq);
 
@@ -1505,16 +1507,21 @@ static void mmc_blk_cqe_complete_rq(struct mmc_queue *mq, struct request *req)
 	spin_lock_irqsave(q->queue_lock, flags);
 
 	mq->in_flight[mmc_issue_type(mq, req)] -= 1;
+	atomic_dec(&host->active_reqs);
 
 	put_card = (mmc_tot_in_flight(mq) == 0);
 
 	mmc_cqe_check_busy(mq);
 
+	is_dcmd = (mmc_issue_type(mq, req) ==  MMC_ISSUE_DCMD);
+
 	spin_unlock_irqrestore(q->queue_lock, flags);
 
 	if (!mq->cqe_busy)
 		blk_mq_run_hw_queues(q, true);
 
+	mmc_cqe_clk_scaling_stop_busy(host, true, is_dcmd);
+
 	if (put_card)
 		mmc_put_card(mq->card, &mq->ctx);
 }
@@ -1593,11 +1600,20 @@ static int mmc_blk_cqe_issue_flush(struct mmc_queue *mq, struct request *req)
 static int mmc_blk_cqe_issue_rw_rq(struct mmc_queue *mq, struct request *req)
 {
 	struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
+	int err = 0;
 
 	mmc_blk_data_prep(mq, mqrq, 0, NULL, NULL);
 	mqrq->brq.mrq.req = req;
 
-	return mmc_blk_cqe_start_req(mq->card->host, &mqrq->brq.mrq);
+	mmc_deferred_scaling(mq->card->host);
+	mmc_cqe_clk_scaling_start_busy(mq, mq->card->host, true);
+
+	err =  mmc_blk_cqe_start_req(mq->card->host, &mqrq->brq.mrq);
+
+	if (err)
+		mmc_cqe_clk_scaling_stop_busy(mq->card->host, true, false);
+
+	return err;
 }
 
 static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
@@ -2009,12 +2025,14 @@ static void mmc_blk_mq_poll_completion(struct mmc_queue *mq,
 static void mmc_blk_mq_dec_in_flight(struct mmc_queue *mq, struct request *req)
 {
 	struct request_queue *q = req->q;
+	struct mmc_host *host = mq->card->host;
 	unsigned long flags;
 	bool put_card;
 
 	spin_lock_irqsave(q->queue_lock, flags);
 
 	mq->in_flight[mmc_issue_type(mq, req)] -= 1;
+	atomic_dec(&host->active_reqs);
 
 	put_card = (mmc_tot_in_flight(mq) == 0);
 
+875 −0
File changed; preview size limit exceeded, changes collapsed.

+19 −0
@@ -17,6 +17,7 @@
 struct mmc_host;
 struct mmc_card;
 struct mmc_request;
+struct mmc_queue;
 
 #define MMC_CMD_RETRIES        3
 
@@ -32,6 +33,9 @@ struct mmc_bus_ops {
 	int (*shutdown)(struct mmc_host *);
 	int (*hw_reset)(struct mmc_host *);
 	int (*sw_reset)(struct mmc_host *);
+	int (*change_bus_speed)(struct mmc_host *host, unsigned long *freq);
+	int (*change_bus_speed_deferred)(struct mmc_host *host,
+							unsigned long *freq);
 };
 
 void mmc_attach_bus(struct mmc_host *host, const struct mmc_bus_ops *ops);
@@ -59,6 +63,8 @@ void mmc_power_up(struct mmc_host *host, u32 ocr);
 void mmc_power_off(struct mmc_host *host);
 void mmc_power_cycle(struct mmc_host *host, u32 ocr);
 void mmc_set_initial_state(struct mmc_host *host);
+int mmc_clk_update_freq(struct mmc_host *host,
+		unsigned long freq, enum mmc_load state);
 
 static inline void mmc_delay(unsigned int ms)
 {
@@ -89,6 +95,19 @@ void mmc_remove_host_debugfs(struct mmc_host *host);
 void mmc_add_card_debugfs(struct mmc_card *card);
 void mmc_remove_card_debugfs(struct mmc_card *card);
 
+extern bool mmc_can_scale_clk(struct mmc_host *host);
+extern int mmc_init_clk_scaling(struct mmc_host *host);
+extern int mmc_suspend_clk_scaling(struct mmc_host *host);
+extern int mmc_resume_clk_scaling(struct mmc_host *host);
+extern int mmc_exit_clk_scaling(struct mmc_host *host);
+extern void mmc_deferred_scaling(struct mmc_host *host);
+extern void mmc_cqe_clk_scaling_start_busy(struct mmc_queue *mq,
+	struct mmc_host *host, bool lock_needed);
+extern void mmc_cqe_clk_scaling_stop_busy(struct mmc_host *host,
+			bool lock_needed, bool is_cqe_dcmd);
+
+extern unsigned long mmc_get_max_frequency(struct mmc_host *host);
+
 int mmc_execute_tuning(struct mmc_card *card);
 int mmc_hs200_to_hs400(struct mmc_card *card);
 int mmc_hs400_to_hs200(struct mmc_card *card);
+99 −1
@@ -196,7 +196,17 @@ static int mmc_ios_show(struct seq_file *s, void *data)
 
 	return 0;
 }
-DEFINE_SHOW_ATTRIBUTE(mmc_ios);
+static int mmc_ios_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, mmc_ios_show, inode->i_private);
+}
+
+static const struct file_operations mmc_ios_fops = {
+	.open           = mmc_ios_open,
+	.read           = seq_read,
+	.llseek         = seq_lseek,
+	.release        = single_release,
+};
 
 static int mmc_clock_opt_get(void *data, u64 *val)
 {
@@ -225,6 +235,81 @@ static int mmc_clock_opt_set(void *data, u64 val)
 DEFINE_SIMPLE_ATTRIBUTE(mmc_clock_fops, mmc_clock_opt_get, mmc_clock_opt_set,
 	"%llu\n");
 
+static int mmc_scale_get(void *data, u64 *val)
+{
+	struct mmc_host *host = data;
+
+	*val = host->clk_scaling.curr_freq;
+
+	return 0;
+}
+
+static int mmc_scale_set(void *data, u64 val)
+{
+	int err = 0;
+	struct mmc_host *host = data;
+
+	mmc_claim_host(host);
+
+	/* change frequency from sysfs manually */
+	err = mmc_clk_update_freq(host, val, host->clk_scaling.state);
+	if (err == -EAGAIN)
+		err = 0;
+	else if (err)
+		pr_err("%s: clock scale to %llu failed with error %d\n",
+			mmc_hostname(host), val, err);
+	else
+		pr_debug("%s: clock change to %llu finished successfully (%s)\n",
+			mmc_hostname(host), val, current->comm);
+
+	mmc_release_host(host);
+
+	return err;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(mmc_scale_fops, mmc_scale_get, mmc_scale_set,
+	"%llu\n");
+
+static int mmc_max_clock_get(void *data, u64 *val)
+{
+	struct mmc_host *host = data;
+
+	if (!host)
+		return -EINVAL;
+
+	*val = host->f_max;
+
+	return 0;
+}
+
+static int mmc_max_clock_set(void *data, u64 val)
+{
+	struct mmc_host *host = data;
+	int err = -EINVAL;
+	unsigned long freq = val;
+	unsigned int old_freq;
+
+	if (!host || (val < host->f_min))
+		goto out;
+
+	mmc_claim_host(host);
+	if (host->bus_ops && host->bus_ops->change_bus_speed) {
+		old_freq = host->f_max;
+		host->f_max = freq;
+
+		err = host->bus_ops->change_bus_speed(host, &freq);
+
+		if (err)
+			host->f_max = old_freq;
+	}
+	mmc_release_host(host);
+out:
+	return err;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(mmc_max_clock_fops, mmc_max_clock_get,
+		mmc_max_clock_set, "%llu\n");
+
 void mmc_add_host_debugfs(struct mmc_host *host)
 {
 	struct dentry *root;
@@ -253,6 +338,19 @@ void mmc_add_host_debugfs(struct mmc_host *host)
 			&mmc_clock_fops))
 		goto err_node;
 
+	if (!debugfs_create_file("max_clock", 0600, root, host,
+		&mmc_max_clock_fops))
+		goto err_node;
+
+	if (!debugfs_create_file("scale", 0600, root, host,
+		&mmc_scale_fops))
+		goto err_node;
+
+	if (!debugfs_create_bool("skip_clk_scale_freq_update",
+		0600, root,
+		&host->clk_scaling.skip_clk_scale_freq_update))
+		goto err_node;
+
 #ifdef CONFIG_FAIL_MMC_REQUEST
 	if (fail_request)
 		setup_fault_attr(&fail_default_attr, fail_request);
+164 −1
@@ -34,6 +34,10 @@
 
 #define cls_dev_to_mmc_host(d)	container_of(d, struct mmc_host, class_dev)
 
+#define MMC_DEVFRQ_DEFAULT_UP_THRESHOLD 35
+#define MMC_DEVFRQ_DEFAULT_DOWN_THRESHOLD 5
+#define MMC_DEVFRQ_DEFAULT_POLLING_MSEC 100
+
 static DEFINE_IDA(mmc_host_ida);
 
 static void mmc_host_classdev_release(struct device *dev)
@@ -391,6 +395,7 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
 	}
 
 	spin_lock_init(&host->lock);
+	atomic_set(&host->active_reqs, 0);
 	init_waitqueue_head(&host->wq);
 	INIT_DELAYED_WORK(&host->detect, mmc_rescan);
 	INIT_DELAYED_WORK(&host->sdio_irq_work, sdio_irq_work);
@@ -412,9 +417,156 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
 
 	return host;
 }
 
 EXPORT_SYMBOL(mmc_alloc_host);
 
+static ssize_t enable_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct mmc_host *host = cls_dev_to_mmc_host(dev);
+
+	if (!host)
+		return -EINVAL;
+
+	return snprintf(buf, PAGE_SIZE, "%d\n", mmc_can_scale_clk(host));
+}
+
+static ssize_t enable_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t count)
+{
+	struct mmc_host *host = cls_dev_to_mmc_host(dev);
+	unsigned long value;
+
+	if (!host || !host->card || kstrtoul(buf, 0, &value))
+		return -EINVAL;
+
+	mmc_get_card(host->card, NULL);
+
+	if (!value) {
+		/* Suspend the clock scaling and mask host capability */
+		if (host->clk_scaling.enable)
+			mmc_suspend_clk_scaling(host);
+		host->clk_scaling.enable = false;
+		host->caps2 &= ~MMC_CAP2_CLK_SCALE;
+		host->clk_scaling.state = MMC_LOAD_HIGH;
+		/* Set to max. frequency when disabling */
+		mmc_clk_update_freq(host, host->card->clk_scaling_highest,
+					host->clk_scaling.state);
+	} else if (value) {
+		/* Unmask host capability and resume scaling */
+		host->caps2 |= MMC_CAP2_CLK_SCALE;
+		if (!host->clk_scaling.enable) {
+			host->clk_scaling.enable = true;
+			mmc_resume_clk_scaling(host);
+		}
+	}
+
+	mmc_put_card(host->card, NULL);
+
+	return count;
+}
+
+static ssize_t up_threshold_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct mmc_host *host = cls_dev_to_mmc_host(dev);
+
+	if (!host)
+		return -EINVAL;
+
+	return snprintf(buf, PAGE_SIZE, "%d\n", host->clk_scaling.upthreshold);
+}
+
+#define MAX_PERCENTAGE	100
+static ssize_t up_threshold_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t count)
+{
+	struct mmc_host *host = cls_dev_to_mmc_host(dev);
+	unsigned long value;
+
+	if (!host || kstrtoul(buf, 0, &value) || (value > MAX_PERCENTAGE))
+		return -EINVAL;
+
+	host->clk_scaling.upthreshold = value;
+
+	pr_debug("%s: clkscale_up_thresh set to %lu\n",
+			mmc_hostname(host), value);
+	return count;
+}
+
+static ssize_t down_threshold_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct mmc_host *host = cls_dev_to_mmc_host(dev);
+
+	if (!host)
+		return -EINVAL;
+
+	return snprintf(buf, PAGE_SIZE, "%d\n",
+			host->clk_scaling.downthreshold);
+}
+
+static ssize_t down_threshold_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t count)
+{
+	struct mmc_host *host = cls_dev_to_mmc_host(dev);
+	unsigned long value;
+
+	if (!host || kstrtoul(buf, 0, &value) || (value > MAX_PERCENTAGE))
+		return -EINVAL;
+
+	host->clk_scaling.downthreshold = value;
+
+	pr_debug("%s: clkscale_down_thresh set to %lu\n",
+			mmc_hostname(host), value);
+	return count;
+}
+
+static ssize_t polling_interval_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct mmc_host *host = cls_dev_to_mmc_host(dev);
+
+	if (!host)
+		return -EINVAL;
+
+	return snprintf(buf, PAGE_SIZE, "%lu milliseconds\n",
+			host->clk_scaling.polling_delay_ms);
+}
+
+static ssize_t polling_interval_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t count)
+{
+	struct mmc_host *host = cls_dev_to_mmc_host(dev);
+	unsigned long value;
+
+	if (!host || kstrtoul(buf, 0, &value))
+		return -EINVAL;
+
+	host->clk_scaling.polling_delay_ms = value;
+
+	pr_debug("%s: clkscale_polling_delay_ms set to %lu\n",
+			mmc_hostname(host), value);
+	return count;
+}
+
+DEVICE_ATTR_RW(enable);
+DEVICE_ATTR_RW(polling_interval);
+DEVICE_ATTR_RW(up_threshold);
+DEVICE_ATTR_RW(down_threshold);
+
+static struct attribute *clk_scaling_attrs[] = {
+	&dev_attr_enable.attr,
+	&dev_attr_up_threshold.attr,
+	&dev_attr_down_threshold.attr,
+	&dev_attr_polling_interval.attr,
+	NULL,
+};
+
+static struct attribute_group clk_scaling_attr_grp = {
+	.name = "clk_scaling",
+	.attrs = clk_scaling_attrs,
+};
+
 /**
  *	mmc_add_host - initialise host hardware
  *	@host: mmc host
@@ -436,10 +588,20 @@ int mmc_add_host(struct mmc_host *host)
 
 	led_trigger_register_simple(dev_name(&host->class_dev), &host->led);
 
+	host->clk_scaling.upthreshold = MMC_DEVFRQ_DEFAULT_UP_THRESHOLD;
+	host->clk_scaling.downthreshold = MMC_DEVFRQ_DEFAULT_DOWN_THRESHOLD;
+	host->clk_scaling.polling_delay_ms = MMC_DEVFRQ_DEFAULT_POLLING_MSEC;
+	host->clk_scaling.skip_clk_scale_freq_update = false;
+
 #ifdef CONFIG_DEBUG_FS
 	mmc_add_host_debugfs(host);
 #endif
 
+	err = sysfs_create_group(&host->class_dev.kobj, &clk_scaling_attr_grp);
+	if (err)
+		pr_err("%s: failed to create clk scale sysfs group with err %d\n",
+				__func__, err);
+
 	mmc_start_host(host);
 	if (!(host->pm_flags & MMC_PM_IGNORE_PM_NOTIFY))
 		mmc_register_pm_notifier(host);
@@ -466,6 +628,7 @@ void mmc_remove_host(struct mmc_host *host)
 #ifdef CONFIG_DEBUG_FS
 	mmc_remove_host_debugfs(host);
 #endif
+	sysfs_remove_group(&host->class_dev.kobj, &clk_scaling_attr_grp);
 
 	device_del(&host->class_dev);
 