Documentation/mmc/mmc-dev-attrs.txt  (+0 −82)

@@ -8,40 +8,6 @@ The following attributes are read/write.

 	force_ro		Enforce read-only access even if write protect switch is off.

The num_wr_reqs_to_start_packing description is removed (the paragraph appeared twice in the file; both copies are deleted):

-	num_wr_reqs_to_start_packing	This attribute is used to determine
-	the trigger for activating the write packing, in case the write
-	packing control feature is enabled. When the MMC manages to reach
-	a point where num_wr_reqs_to_start_packing write requests could be
-	packed, it enables the write packing feature. This allows us to
-	start the write packing only when it is beneficial and has minimum
-	effect on the read latency.
-	The number of potential packed requests that will trigger the
-	packing can be configured via sysfs by writing the required value
-	to: /sys/block/<block_dev_name>/num_wr_reqs_to_start_packing.
-	The default value of num_wr_reqs_to_start_packing was determined
-	by running parallel lmdd write and lmdd read operations and
-	calculating the max number of packed write requests.

 SD and MMC Device Attributes
 ============================

@@ -109,51 +75,3 @@ Note on raw_rpmb_size_mult:
 	"raw_rpmb_size_mult" is a multiple of 128kB blocks.
 	RPMB size in bytes is calculated with the following equation:
 	RPMB partition size = 128kB x raw_rpmb_size_mult

The SD/MMC/SDIO Clock Gating and Clock Scaling sections are removed:

-SD/MMC/SDIO Clock Gating Attribute
-==================================
-
-Read and write access is provided to the following attribute. This
-attribute appears only if CONFIG_MMC_CLKGATE is enabled.
-
-	clkgate_delay	Tune the clock gating delay with the desired
-	value in milliseconds.
-	echo <desired delay> > /sys/class/mmc_host/mmcX/clkgate_delay
-
-SD/MMC/SDIO Clock Scaling Attributes
-====================================
-
-Read and write accesses are provided to the following attributes.
-
-	polling_interval	Measured in milliseconds, this attribute
-	defines how often we need to check the card usage and make
-	decisions on frequency scaling.
-
-	up_threshold	This attribute defines what the average card
-	usage between polling intervals should be for the mmc core to
-	decide whether to increase the frequency. For example, when it
-	is set to '35', the card needs to be more than 35% in use on
-	average between checking intervals to scale up the frequency.
-	The value should be between 0 and 100 so that it can be
-	compared against the load percentage.
-
-	down_threshold	Similar to up_threshold, but for lowering the
-	frequency. For example, when it is set to '2', the card needs
-	to be less than 2% in use on average between checking intervals
-	to scale the clocks down to the minimum frequency. The value
-	should be between 0 and 100 so that it can be compared against
-	the load percentage.
-
-	enable	Enable clock scaling for hosts (and cards) that support
-	ultrahigh speed modes (SDR104, DDR50, HS200).
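The up_threshold/down_threshold rule described above can be sketched as a small decision function. This is a hypothetical standalone illustration of the policy the documentation describes, not the actual mmc core implementation; the names clk_scaling_decide and scale_action are invented for this sketch.

```c
#include <assert.h>

/*
 * Hypothetical sketch of the clock-scaling policy described above:
 * scale up when the average load (in percent, 0-100) measured over a
 * polling interval exceeds up_threshold, scale down when it falls
 * below down_threshold, otherwise keep the current frequency.
 */
enum scale_action { SCALE_HOLD, SCALE_UP, SCALE_DOWN };

static enum scale_action clk_scaling_decide(int load_pct,
					    int up_threshold,
					    int down_threshold)
{
	if (load_pct > up_threshold)
		return SCALE_UP;	/* card busy enough to justify a faster clock */
	if (load_pct < down_threshold)
		return SCALE_DOWN;	/* nearly idle: drop to minimum frequency */
	return SCALE_HOLD;		/* in between: leave the frequency alone */
}
```

With the documented defaults of '35' and '2', a 50% load scales up, a 1% load scales down, and anything in between holds the current clock.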
-	echo <desired value> > /sys/class/mmc_host/mmcX/clk_scaling/polling_interval
-	echo <desired value> > /sys/class/mmc_host/mmcX/clk_scaling/up_threshold
-	echo <desired value> > /sys/class/mmc_host/mmcX/clk_scaling/down_threshold
-	echo <desired value> > /sys/class/mmc_host/mmcX/clk_scaling/enable


drivers/mmc/Kconfig  (+0 −10)

@@ -12,16 +12,6 @@ menuconfig MMC
 	  If you want MMC/SD/SDIO support, you should say Y here and
 	  also to your specific host controller driver.

-config MMC_PERF_PROFILING
-	bool "MMC performance profiling"
-	depends on MMC != n
-	default n
-	help
-	  This enables support for collecting performance numbers for
-	  the MMC at the Queue and Host layers. If you want to collect
-	  MMC performance numbers, say Y here.
-
 if MMC

 source "drivers/mmc/core/Kconfig"


drivers/mmc/core/Kconfig  (+0 −31)

@@ -61,17 +61,6 @@ config MMC_BLOCK_MINORS
 	  If unsure, say 8 here.

-config MMC_BLOCK_DEFERRED_RESUME
-	bool "Defer MMC layer resume until I/O is requested"
-	depends on MMC_BLOCK
-	default n
-	help
-	  Say Y here to enable deferred MMC resume until I/O is
-	  requested. This will reduce overall resume latency and save
-	  power when there is an SD card inserted but not being used.
-
 config SDIO_UART
 	tristate "SDIO UART/GPS class support"
 	depends on TTY

@@ -91,23 +80,3 @@ config MMC_TEST
 	  This driver is only of interest to those developing or
 	  testing a host driver. Most people should say N here.

-config MMC_RING_BUFFER
-	bool "MMC_RING_BUFFER"
-	depends on MMC
-	default n
-	help
-	  This enables ring buffer tracing of significant events for
-	  the mmc driver, to provide command history for debugging
-	  purposes. If unsure, say N.
-
-config MMC_CLKGATE
-	bool "MMC host clock gating"
-	help
-	  This will attempt to aggressively gate the clock to the MMC
-	  card. This is done to save power due to gating off the logic
-	  and bus noise when the MMC card is not in use. Your host
-	  driver has to support handling this in order for it to be of
-	  any use. If unsure, say N.


drivers/mmc/core/Makefile  (+0 −1)

@@ -14,7 +14,6 @@
 obj-$(CONFIG_PWRSEQ_SIMPLE)	+= pwrseq_simple.o
 obj-$(CONFIG_PWRSEQ_SD8787)	+= pwrseq_sd8787.o
 obj-$(CONFIG_PWRSEQ_EMMC)	+= pwrseq_emmc.o
 mmc_core-$(CONFIG_DEBUG_FS)	+= debugfs.o
-obj-$(CONFIG_MMC_RING_BUFFER)	+= ring_buffer.o
 obj-$(CONFIG_MMC_BLOCK)		+= mmc_block.o
 mmc_block-objs			:= block.o queue.o
 obj-$(CONFIG_MMC_TEST)		+= mmc_test.o


drivers/mmc/core/block.c  (+33 −50)

@@ -31,7 +31,6 @@
 #include <linux/cdev.h>
 #include <linux/mutex.h>
 #include <linux/scatterlist.h>
-#include <linux/bitops.h>
 #include <linux/string_helpers.h>
 #include <linux/delay.h>
 #include <linux/capability.h>

@@ -42,7 +41,6 @@
 #include <linux/mmc/ioctl.h>
 #include <linux/mmc/card.h>
 #include <linux/mmc/core.h>
-#include <linux/mmc/host.h>
 #include <linux/mmc/mmc.h>
 #include <linux/mmc/sd.h>

@@ -71,7 +69,7 @@ MODULE_ALIAS("mmc:block");
  * second software timer to timeout the whole request, so 10 seconds
  * should be ample.
  */
-#define MMC_BLK_TIMEOUT_MS  (30 * 1000)
+#define MMC_BLK_TIMEOUT_MS  (10 * 1000)
 #define MMC_SANITIZE_REQ_TIMEOUT 240000
 #define MMC_EXTRACT_INDEX_FROM_ARG(x) ((x & 0x00FF0000) >> 16)
 #define MMC_EXTRACT_VALUE_FROM_ARG(x) ((x & 0x0000FF00) >> 8)

@@ -112,7 +110,6 @@ struct mmc_blk_data {
 	unsigned int	flags;
 #define MMC_BLK_CMD23	(1 << 0)	/* Can do SET_BLOCK_COUNT for multiblock */
 #define MMC_BLK_REL_WR	(1 << 1)	/* MMC Reliable write support */
-#define MMC_BLK_PACKED_CMD	(1 << 2)	/* MMC packed command support */

 	unsigned int	usage;
 	unsigned int	read_only;

@@ -123,7 +120,7 @@ struct mmc_blk_data {
 #define MMC_BLK_DISCARD		BIT(2)
 #define MMC_BLK_SECDISCARD	BIT(3)
 #define MMC_BLK_CQE_RECOVERY	BIT(4)
-#define MMC_BLK_FLUSH		BIT(5)

 	/*
 	 * Only set in main mmc_blk_data associated
 	 * with mmc_card with dev_set_drvdata, and keeps

@@ -215,11 +212,11 @@ static ssize_t power_ro_lock_show(struct device *dev,
 	struct mmc_blk_data *md = mmc_blk_get(dev_to_disk(dev));
 	struct mmc_card *card;
 	int locked = 0;
+
+	if (!md)
+		return -EINVAL;

 	card = md->queue.card;

 	if (card->ext_csd.boot_ro_lock & EXT_CSD_BOOT_WP_B_PERM_WP_EN)
 		locked = 2;
 	else if (card->ext_csd.boot_ro_lock & EXT_CSD_BOOT_WP_B_PWR_WP_EN)

@@ -284,7 +281,6 @@ static ssize_t force_ro_show(struct device *dev, struct device_attribute *attr,
 {
 	int ret;
 	struct mmc_blk_data *md = mmc_blk_get(dev_to_disk(dev));

 	if (!md)
 		return -EINVAL;

@@ -302,7 +298,6 @@ static ssize_t force_ro_store(struct device *dev, struct device_attribute *attr,
 	char *end;
 	struct mmc_blk_data *md = mmc_blk_get(dev_to_disk(dev));
 	unsigned long set = simple_strtoul(buf, &end, 0);

 	if (!md)
 		return -EINVAL;

@@ -322,6 +317,9 @@ static int mmc_blk_open(struct block_device *bdev, fmode_t mode)
 {
 	struct mmc_blk_data *md = mmc_blk_get(bdev->bd_disk);
 	int ret = -ENXIO;
+
+	if (!md)
+		return -EINVAL;

 	mutex_lock(&block_mutex);
 	if (md) {

@@ -461,8 +459,7 @@ static int ioctl_do_sanitize(struct mmc_card *card)
 {
 	int err;

-	if (!mmc_can_sanitize(card) &&
-	    (card->host->caps2 & MMC_CAP2_SANITIZE)) {
+	if (!mmc_can_sanitize(card)) {
 		pr_warn("%s: %s - SANITIZE is not supported\n",
 			mmc_hostname(card->host), __func__);
 		err = -EOPNOTSUPP;

@@ -656,13 +653,13 @@ static int mmc_blk_ioctl_cmd(struct mmc_blk_data *md,
 	struct request *req;

 	idata = mmc_blk_ioctl_copy_from_user(ic_ptr);
-	if (IS_ERR_OR_NULL(idata))
+	if (IS_ERR(idata))
 		return PTR_ERR(idata);
 	/* This will be NULL on non-RPMB ioctl():s */
 	idata->rpmb = rpmb;

 	card = md->queue.card;
-	if (IS_ERR_OR_NULL(card)) {
+	if (IS_ERR(card)) {
 		err = PTR_ERR(card);
 		goto cmd_done;
 	}

@@ -873,8 +870,7 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
 	int ret = 0;
 	struct mmc_blk_data *main_md = dev_get_drvdata(&card->dev);

-	if ((main_md->part_curr == part_type) &&
-	    (card->part_curr == part_type))
+	if (main_md->part_curr == part_type)
 		return 0;

 	if (mmc_card_mmc(card)) {

@@ -891,9 +887,6 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
 				 EXT_CSD_PART_CONFIG, part_config,
 				 card->ext_csd.part_time);
 		if (ret) {
-			pr_err("%s: %s: switch failure, %d -> %d\n",
-			       mmc_hostname(card->host), __func__,
-			       main_md->part_curr, part_type);
 			mmc_blk_part_switch_post(card, part_type);
 			return ret;
 		}

@@ -1055,15 +1048,8 @@ static int mmc_blk_reset(struct mmc_blk_data *md, struct mmc_host *host,
 	md->reset_done |= type;
 	err = mmc_hw_reset(host);
-	if (err && err != -EOPNOTSUPP) {
-		/* We failed to reset so we need to abort the request */
-		pr_err("%s: %s: failed to reset %d\n",
-		       mmc_hostname(host), __func__, err);
-		return -ENODEV;
-	}
-
 	/* Ensure we switch back to the correct partition */
-	if (host->card) {
+	if (err != -EOPNOTSUPP) {
 		struct mmc_blk_data *main_md =
 			dev_get_drvdata(&host->card->dev);
 		int part_err;

@@ -1270,21 +1256,6 @@ static void mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req)
 	int ret = 0;

 	ret = mmc_flush_cache(card);
-	if (ret == -ENODEV) {
-		pr_err("%s: %s: restart mmc card\n",
-		       req->rq_disk->disk_name, __func__);
-		if (mmc_blk_reset(md, card->host, MMC_BLK_FLUSH))
-			pr_err("%s: %s: fail to restart mmc\n",
-			       req->rq_disk->disk_name, __func__);
-		else
-			mmc_blk_reset_success(md, MMC_BLK_FLUSH);
-	}
-	if (ret) {
-		pr_err("%s: %s: notify flush error to upper layers\n",
-		       req->rq_disk->disk_name, __func__);
-		ret = -EIO;
-	}
 	blk_mq_end_request(req, ret ? BLK_STS_IOERR : BLK_STS_OK);
 }

@@ -1508,6 +1479,7 @@ static void mmc_blk_cqe_complete_rq(struct mmc_queue *mq, struct request *req)
 	unsigned long flags;
 	bool put_card;
 	int err;
+	bool is_dcmd = false;

 	mmc_cqe_post_req(host, mrq);

@@ -1535,16 +1507,21 @@ static void mmc_blk_cqe_complete_rq(struct mmc_queue *mq, struct request *req)
 	spin_lock_irqsave(q->queue_lock, flags);

 	mq->in_flight[mmc_issue_type(mq, req)] -= 1;
+	atomic_dec(&host->active_reqs);

 	put_card = (mmc_tot_in_flight(mq) == 0);

 	mmc_cqe_check_busy(mq);
+	is_dcmd = (mmc_issue_type(mq, req) == MMC_ISSUE_DCMD);

 	spin_unlock_irqrestore(q->queue_lock, flags);

 	if (!mq->cqe_busy)
 		blk_mq_run_hw_queues(q, true);
+	mmc_cqe_clk_scaling_stop_busy(host, true, is_dcmd);

 	if (put_card)
 		mmc_put_card(mq->card, &mq->ctx);
 }

@@ -1623,10 +1600,20 @@ static int mmc_blk_cqe_issue_flush(struct mmc_queue *mq, struct request *req)
 static int mmc_blk_cqe_issue_rw_rq(struct mmc_queue *mq, struct request *req)
 {
 	struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
+	int err = 0;

 	mmc_blk_data_prep(mq, mqrq, 0, NULL, NULL);
+	mqrq->brq.mrq.req = req;
+
+	mmc_deferred_scaling(mq->card->host);
+	mmc_cqe_clk_scaling_start_busy(mq, mq->card->host, true);

-	return mmc_blk_cqe_start_req(mq->card->host, &mqrq->brq.mrq);
+	err = mmc_blk_cqe_start_req(mq->card->host, &mqrq->brq.mrq);
+	if (err)
+		mmc_cqe_clk_scaling_stop_busy(mq->card->host, true, false);
+
+	return err;
 }

 static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,

@@ -2038,12 +2025,14 @@ static void mmc_blk_mq_poll_completion(struct mmc_queue *mq,
 static void mmc_blk_mq_dec_in_flight(struct mmc_queue *mq, struct request *req)
 {
 	struct request_queue *q = req->q;
+	struct mmc_host *host = mq->card->host;
 	unsigned long flags;
 	bool put_card;

 	spin_lock_irqsave(q->queue_lock, flags);

 	mq->in_flight[mmc_issue_type(mq, req)] -= 1;
+	atomic_dec(&host->active_reqs);

 	put_card = (mmc_tot_in_flight(mq) == 0);

@@ -2229,6 +2218,7 @@ static int mmc_blk_mq_issue_rw_rq(struct mmc_queue *mq,
 	mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq);

 	mqrq->brq.mrq.done = mmc_blk_mq_req_done;
+	mqrq->brq.mrq.req = req;

 	mmc_pre_req(host, &mqrq->brq.mrq);

@@ -2994,10 +2984,6 @@ static int mmc_blk_probe(struct mmc_card *card)
 	dev_set_drvdata(&card->dev, md);
-#ifdef CONFIG_MMC_BLOCK_DEFERRED_RESUME
-	mmc_set_bus_resume_policy(card->host, 1);
-#endif

 	if (mmc_add_disk(md))
 		goto out;

@@ -3009,7 +2995,7 @@ static int mmc_blk_probe(struct mmc_card *card)
 	/* Add two debugfs entries */
 	mmc_blk_add_debugfs(card, md);

-	pm_runtime_set_autosuspend_delay(&card->dev, MMC_AUTOSUSPEND_DELAY_MS);
+	pm_runtime_set_autosuspend_delay(&card->dev, 3000);
 	pm_runtime_use_autosuspend(&card->dev);

 	/*

@@ -3046,9 +3032,6 @@ static void mmc_blk_remove(struct mmc_card *card)
 	pm_runtime_put_noidle(&card->dev);
 	mmc_blk_remove_req(md);
 	dev_set_drvdata(&card->dev, NULL);
-#ifdef CONFIG_MMC_BLOCK_DEFERRED_RESUME
-	mmc_set_bus_resume_policy(card->host, 0);
-#endif
 	destroy_workqueue(card->complete_wq);
 }
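The MMC_EXTRACT_INDEX_FROM_ARG / MMC_EXTRACT_VALUE_FROM_ARG macros kept in block.c above decode fields of the MMC ioctl argument word: byte 2 carries the EXT_CSD index and byte 1 the value. A minimal standalone illustration (the macros are copied from the patch; the sample argument value is invented for demonstration):

```c
#include <assert.h>
#include <stdint.h>

/* As in drivers/mmc/core/block.c: extract the EXT_CSD index (byte 2)
 * and the value to write (byte 1) from the 32-bit ioctl argument. */
#define MMC_EXTRACT_INDEX_FROM_ARG(x) ((x & 0x00FF0000) >> 16)
#define MMC_EXTRACT_VALUE_FROM_ARG(x) ((x & 0x0000FF00) >> 8)
```

For a hypothetical argument of 0x00AB1200, the index decodes to 0xAB and the value to 0x12.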