Tune snapshot-merge performance
Currently, there is one thread per partition for snapshot merge.
When all of these threads run in parallel, they can stress the
system, as the merge threads are both CPU and I/O bound. Allow only
two merge threads to be in-flight at any point in time. This ensures
that snapshot-merge makes forward progress while using only two
cores instead of 5-6 cores.
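
A minimal sketch (not the actual libsnapshot code) of how merge
worker threads could be gated so that at most two run concurrently;
MergeThrottle, MergePartition and kMaxMergeThreads are illustrative
names assumed for this example:

  #include <condition_variable>
  #include <mutex>
  #include <string>

  // Assumed limit introduced by this change.
  static constexpr int kMaxMergeThreads = 2;

  class MergeThrottle {
    public:
      // Blocks until fewer than kMaxMergeThreads merges are in-flight.
      void Acquire() {
          std::unique_lock<std::mutex> lock(mutex_);
          cv_.wait(lock, [this] { return in_flight_ < kMaxMergeThreads; });
          in_flight_++;
      }
      // Frees one slot and wakes a waiting merge thread, if any.
      void Release() {
          {
              std::lock_guard<std::mutex> lock(mutex_);
              in_flight_--;
          }
          cv_.notify_one();
      }

    private:
      std::mutex mutex_;
      std::condition_variable cv_;
      int in_flight_ = 0;
  };

  // Hypothetical per-partition merge entry point.
  void MergePartition(const std::string& partition, MergeThrottle& throttle) {
      throttle.Acquire();
      // ... perform the CPU- and I/O-bound merge for |partition| ...
      throttle.Release();
  }
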
Additionally, the system and product partitions are merged first.
This is primarily because the root filesystem is mounted off the
system partition; the sooner the merge on the system partition
completes, the sooner we can switch the dm tables. There is no
change in the merge phases from the libsnapshot perspective; the
prioritization applies within each merge phase. If the system
partition merge falls in the second phase, it takes priority within
that phase.
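
A minimal sketch (an assumed helper, not the libsnapshot API) of how
the partition list could be reordered so that system and product are
merged ahead of the remaining partitions within a phase:

  #include <algorithm>
  #include <string>
  #include <vector>

  // Returns true for partitions that should be merged first.
  static bool IsPriorityPartition(const std::string& name) {
      return name == "system" || name == "product";
  }

  // Moves system/product to the front while keeping the relative
  // order of the other partitions unchanged.
  void PrioritizeMergeOrder(std::vector<std::string>* partitions) {
      std::stable_partition(partitions->begin(), partitions->end(),
                            IsPriorityPartition);
  }
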
As a side benefit, this should also reduce memory usage while the
merge is in-flight, since the number of threads is now limited.
There is a slight increase in the overall merge time as the merge
is now throttled.
No boot-time regressions were observed.
Full OTA:
Merge time (Without this patch): 42 seconds
Merge time (With this patch): 46 seconds
Incremental OTA:
Merge time (Without this patch): 52 seconds
Merge time (With this patch): 57 seconds
The system partition merge completes within the first ~12-16 seconds.
App-launch (COLD) on Pixel:
Baseline (after snapshot-merge has completed and the daemon is no
longer running), in ms:
==========================
Chrome: 250
YouTube: 631
Camera: 230
==========================
Without this patch when snapshot-merge is in-progress (in ms):
Full - OTA
Chrome: 1729
YouTube: 3126
Camera: 1525
==========================
With this patch when snapshot-merge is in-progress (in ms):
Full - OTA
Chrome: 1061
YouTube: 820
Camera: 1378
Incremental - OTA (350M)
Chrome: 495
YouTube: 1442
Camera: 641
==========================
Bug: 237490659
Ignore-AOSP-First: cherry-pick from aosp
Test: Full and incremental OTA
Signed-off-by: Akilesh Kailash <akailash@google.com>
Change-Id: I887d5073dba88e9a8a85ac10c771e4ccee7c84ff