Add support for (explicitly) parallel sync-groups
This is a best-effort attempt at supporting simultaneous active sync-groups. It is best-effort because WM is not transactionalized, so we can't fully control what goes into which sync. For this initial version, parallel syncs must be explicitly requested; starting a sync-set that is NOT explicitly marked parallel while an existing sync is active will throw.

Currently, a parallel syncset ignores "indirect" members: it only waits on members that are directly added to the syncset rather than on the whole subtree rooted at those members. This logic only applies above the Activity level; Activities still wait on their children regardless, since WMCore generally treats anything inside an activity as "content".

In the future, we will probably have to separate "parallel" from "ignore-indirect". "ignore-indirect" lets syncs run in parallel across levels even when they are part of the same subtree: for example, if an activity is in one syncset and that activity's task is in another syncset, the task doesn't need to wait for the activity to finish, and vice-versa.

To handle uncertainty, though, syncs need to fall back to serializing with each other if they contain non-ignored overlapping subtrees. This is achieved with the addition of a "dependency" list. If SyncA and SyncB are active simultaneously, and we then try to add a container to SyncB which is already in SyncA, a dependency on SyncA is added to SyncB instead of the container. This forces SyncB to wait for SyncA to finish. Cycles are resolved by "moving" conflicting containers from the dependent into the dependency, so that dependencies only ever point in one direction.

When parallel syncs overlap, the readiness order is:
 1. dependencies first
 2. then in order of start-time

Bug: 277838915
Bug: 264536014
Test: atest SyncEngineTests
Test: this change, alone, should be a no-op, so existing tests too.
Change-Id: Iebe293d73e2528c785627abd5e4d9fd2702a3a64
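The dependency mechanism above can be sketched roughly as follows. This is a minimal illustration, not the real WMCore implementation; the class and field names (`SyncSet`, `Container`, `members`, `dependencies`) are hypothetical stand-ins for the actual SyncEngine types.

```java
import java.util.*;

// Illustrative stand-in for a window container (hypothetical, not WMCore).
class Container {}

// Hypothetical sketch of a parallel sync-group with a dependency list.
class SyncSet {
    final int startTime;  // used to order ready syncs that overlap
    final Set<Container> members = new HashSet<>();
    final List<SyncSet> dependencies = new ArrayList<>();

    SyncSet(int startTime) { this.startTime = startTime; }

    // Add a container. If another active sync already owns it, depend on
    // that sync instead of taking the container, so overlapping subtrees
    // serialize with each other.
    void add(Container c, List<SyncSet> activeSyncs) {
        for (SyncSet other : activeSyncs) {
            if (other == this) continue;
            if (other.members.contains(c)) {
                if (other.dependencies.contains(this)) {
                    // Would create a cycle: "move" the conflicting container
                    // from the dependent (other) into the dependency (this)
                    // so edges keep pointing in one direction only.
                    other.members.remove(c);
                    break;  // fall through and take the container ourselves
                }
                if (!dependencies.contains(other)) {
                    dependencies.add(other);  // wait on the other sync instead
                }
                return;
            }
        }
        members.add(c);
    }

    // A sync becomes ready only after all of its dependencies have finished
    // (member readiness itself is elided from this sketch).
    boolean isReady(Set<SyncSet> finished) {
        return finished.containsAll(dependencies);
    }
}
```

Under this sketch, when SyncA and SyncB overlap, SyncB ends up depending on SyncA rather than double-owning the container, and `isReady` enforces the "dependency first, then start-time" ordering when combined with a start-time sort of the ready syncs.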