
Commit e27c5b9d authored by Tejun Heo's avatar Tejun Heo Committed by Jens Axboe

writeback: remove broken rbtree_postorder_for_each_entry_safe() usage in cgwb_bdi_destroy()



a20135ff ("writeback: don't drain bdi_writeback_congested on bdi
destruction") added rbtree_postorder_for_each_entry_safe() which is
used to remove all entries; however, according to Cody, the iterator
isn't safe against operations which may rebalance the tree.  Fix it by
switching to repeatedly removing rb_first() until empty.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Cody P Schafer <dev@codyps.com>
Fixes: a20135ff ("writeback: don't drain bdi_writeback_congested on bdi destruction")
Link: http://lkml.kernel.org/g/1443997973-1700-1-git-send-email-dev@codyps.com


Signed-off-by: Jens Axboe <axboe@fb.com>
parent 0dfc70c3
+6 −4
@@ -681,7 +681,7 @@ static int cgwb_bdi_init(struct backing_dev_info *bdi)
 static void cgwb_bdi_destroy(struct backing_dev_info *bdi)
 {
 	struct radix_tree_iter iter;
-	struct bdi_writeback_congested *congested, *congested_n;
+	struct rb_node *rbn;
 	void **slot;
 
 	WARN_ON(test_bit(WB_registered, &bdi->wb.state));
@@ -691,9 +691,11 @@ static void cgwb_bdi_destroy(struct backing_dev_info *bdi)
 	radix_tree_for_each_slot(slot, &bdi->cgwb_tree, &iter, 0)
 		cgwb_kill(*slot);
 
-	rbtree_postorder_for_each_entry_safe(congested, congested_n,
-					&bdi->cgwb_congested_tree, rb_node) {
-		rb_erase(&congested->rb_node, &bdi->cgwb_congested_tree);
+	while ((rbn = rb_first(&bdi->cgwb_congested_tree))) {
+		struct bdi_writeback_congested *congested =
+			rb_entry(rbn, struct bdi_writeback_congested, rb_node);
+
+		rb_erase(rbn, &bdi->cgwb_congested_tree);
 		congested->bdi = NULL;	/* mark @congested unlinked */
 	}