block: Use u64_stats_init() to initialize seqcounts
Now that seqcounts are lockdep-enabled objects, we need to explicitly
initialize runtime-allocated seqcounts so that lockdep can track them.

Without this patch, Fengguang was seeing:
[ 4.127282] INFO: trying to register non-static key.
[ 4.128027] the code is fine but needs lockdep annotation.
[ 4.128027] turning off the locking correctness validator.
[ 4.128027] CPU: 0 PID: 96 Comm: kworker/u4:1 Not tainted 3.12.0-next-20131108-10601-gbad570d #2
[ 4.128027] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ ... ]
[ 4.128027] Call Trace:
[ 4.128027] [<7908e744>] ? console_unlock+0x353/0x380
[ 4.128027] [<79dc7cf2>] dump_stack+0x48/0x60
[ 4.128027] [<7908953e>] __lock_acquire.isra.26+0x7e3/0xceb
[ 4.128027] [<7908a1c5>] lock_acquire+0x71/0x9a
[ 4.128027] [<794079aa>] ? blk_throtl_bio+0x1c3/0x485
[ 4.128027] [<7940658b>] throtl_update_dispatch_stats+0x7c/0x153
[ 4.128027] [<794079aa>] ? blk_throtl_bio+0x1c3/0x485
[ 4.128027] [<794079aa>] blk_throtl_bio+0x1c3/0x485
...

Use u64_stats_init() for all affected data structures, which initializes
the embedded seqcount.
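
For reference, the initializer called by the hunks below reduces to a
u64_stats_init() on the embedded u64_stats_sync. A minimal sketch,
assuming the v3.12-era blkg_rwstat layout from block/blk-cgroup.h (the
struct definition and the bucket count used here are illustrative):

#include <linux/u64_stats_sync.h>

struct blkg_rwstat {
	struct u64_stats_sync	syncp;	/* guards 64-bit reads on 32-bit SMP */
	uint64_t		cnt[4];	/* rd/wr x sync/async buckets */
};

static inline void blkg_rwstat_init(struct blkg_rwstat *rwstat)
{
	/*
	 * On 32-bit SMP configs u64_stats_sync embeds a seqcount and
	 * u64_stats_init() runs seqcount_init() on it, giving lockdep a
	 * key to track; on other configs it compiles away to a no-op.
	 */
	u64_stats_init(&rwstat->syncp);
}
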
Reported-and-Tested-by: Fengguang Wu <fengguang.wu@intel.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
[ Folded in another fix from the mailing list as well as a fix to that fix. Tweaked commit message. ]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1384314134-6895-1-git-send-email-john.stultz@linaro.org
[ So I actually think that the two SOBs from PeterZ are the right depiction of the patch route. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>

diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 8331aba..0653404 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -256,6 +256,12 @@
} \
} while (0)

+static void tg_stats_init(struct tg_stats_cpu *tg_stats)
+{
+ blkg_rwstat_init(&tg_stats->service_bytes);
+ blkg_rwstat_init(&tg_stats->serviced);
+}
+
/*
* Worker for allocating per cpu stat for tgs. This is scheduled on the
* system_wq once there are some groups on the alloc_list waiting for
@@ -269,12 +275,16 @@

alloc_stats:
if (!stats_cpu) {
+ int cpu;
+
stats_cpu = alloc_percpu(struct tg_stats_cpu);
if (!stats_cpu) {
/* allocation failed, try again after some time */
schedule_delayed_work(dwork, msecs_to_jiffies(10));
return;
}
+ for_each_possible_cpu(cpu)
+ tg_stats_init(per_cpu_ptr(stats_cpu, cpu));
}

spin_lock_irq(&tg_stats_alloc_lock);
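
A note on why the explicit init matters here: the splat above fires from
the u64_stats writer path (throtl_update_dispatch_stats() ends up in
u64_stats_update_begin()), where lockdep acquires the seqcount. An
illustrative writer-side sketch (tg_stat_add() is a made-up helper, not
part of this patch):

static void tg_stat_add(struct blkg_rwstat *rwstat, int idx, u64 val)
{
	/*
	 * The write_seqcount_begin() underneath is lockdep-tracked on
	 * 32-bit SMP, so the seqcount must have been through
	 * u64_stats_init() first.
	 */
	u64_stats_update_begin(&rwstat->syncp);
	rwstat->cnt[idx] += val;
	u64_stats_update_end(&rwstat->syncp);
}

The per-cpu loop in the second hunk uses for_each_possible_cpu() to
match alloc_percpu(), which allocates storage for every possible CPU, so
a CPU brought online later still finds its seqcount initialized.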