[res] Optimize idmap format for lookups
The idmap format currently stores sorted mappings for the overlays, whether target id -> overlay id, target id -> frro data, or the reverse overlay id -> target id list. All of these require a binary search to find the right entry, and that search only ever needs the 4-byte key while skipping over all the remaining bytes of the value. This usually doesn't make much of a difference for smaller idmaps, but for larger ones the binary search has to load a whole bunch of RAM into the CPU cache only to throw at least half of it away.

This CL rearranges each mapping into two separate lists: the first contains only the sorted keys, and the second stores the corresponding values in the same order. The search therefore touches the minimum amount of RAM and disk pages, and then jumps directly to the value of the element it found.

We don't have any benchmarks that would _directly_ capture the speedup here, and the Java resources_perf ones are too noisy to make a clear call, but overall they suggest roughly a 3-5% speedup for the overlaid lookups.

Test: atest libandroidfw_tests idmap2_tests libandroidfw_benchmarks
Flag: EXEMPT performance optimization
Change-Id: I450797f233c9371e70738546a89feaa0e683b333
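For illustration, here is a minimal sketch of the key/value split described above. The struct and method names are hypothetical and not taken from the actual idmap code; it only demonstrates the parallel-array layout where the binary search touches keys alone.

```cpp
#include <algorithm>
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical sketch: instead of binary-searching one array of {key, value}
// pairs (which pulls the value bytes into the CPU cache alongside every key
// it probes), keep the sorted keys and their values in two parallel arrays
// so the search only ever reads the 4-byte keys.
struct SortedIdMap {
  std::vector<uint32_t> keys;    // sorted target resource ids
  std::vector<uint32_t> values;  // e.g. overlay ids, same order as `keys`

  std::optional<uint32_t> Lookup(uint32_t target_id) const {
    auto it = std::lower_bound(keys.begin(), keys.end(), target_id);
    if (it == keys.end() || *it != target_id) {
      return std::nullopt;
    }
    // Jump straight to the value at the matching index.
    return values[it - keys.begin()];
  }
};
```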