
Commit b3eb194c authored by Takaya Saeki's avatar Takaya Saeki

ueventd: allow the ueventd main loop to run in parallel

Coldboot currently runs in multiple subprocesses in parallel for faster boot,
while the main loop that polls for further uevents runs in a single process
and is not parallelized. However, the main loop can also be a boot
bottleneck, since services such as vold wait for ueventd to set up device
paths and kernel modules. In addition, serially initializing kernel modules
that take long (e.g. on the order of 100 ms each) adds perceptible total
time.

This change gives ueventd the ability to run the main loop in multiple
threads, as it already can for coldboot. The feature is configurable via a
new `parallel_ueventd_main_loop` option in ueventd.rc, which uses the same
syntax as `parallel_restorecon` and is disabled by default. The number of
worker threads is also configurable via
`parallel_ueventd_main_loop_max_workers`.

Bug: 400592897
Test: DUT boots with and without `parallel_ueventd_main_loop` enabled
Test: Tested `parallel_ueventd_main_loop_max_workers` works
Test: DUT has same SELinux labels for sysfs
Change-Id: I22edd5b84fa502c2a020e4d125af6d95312559c4
parent 503a9d35
+28 −0
@@ -211,3 +211,31 @@ For example
    parallel_restorecon_dir /sys/devices
    parallel_restorecon_dir /sys/devices/platform
    parallel_restorecon_dir /sys/devices/platform/soc

Parallel uevent main loop
--------
After coldboot is complete, ueventd enters its main loop, which handles all uevents that occur after
coldboot. Unlike the coldboot process described above, the main loop is not parallelized by default.
You can optionally parallelize it with:

    parallel_ueventd_main_loop enabled

By default this spawns as many worker threads as there are logical cores. You can optionally cap the
number of workers:

    parallel_ueventd_main_loop_max_workers 2

There are two reasons you might want to parallelize the main loop: boot time and kernel module
initialization time.

The main loop handles events that occur when device state changes (e.g. plugging or unplugging a
device), but it also processes uevents needed for the boot process itself, such as uevents related
to `/data` partitions. These uevents block the boot process, so parallelizing the main loop can help
boot time for the same reason parallel restorecon does during coldboot (e.g. labeling sysfs nodes
that cannot be migrated to genfscon).

Some kernel modules take perceptible time to initialize. If a device needs to initialize several
such modules, doing so in parallel reduces the total initialization time. In addition, these module
initializations can block subsequent events for `/data` block devices until they complete, in which
case parallelizing them also contributes to faster boot.
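
Putting the two options together, a ueventd.rc fragment that enables the parallel main loop and
caps it at two worker threads would look like:

    parallel_ueventd_main_loop enabled
    parallel_ueventd_main_loop_max_workers 2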
+50 −6
@@ -43,6 +43,7 @@
#include "modalias_handler.h"
#include "selabel.h"
#include "selinux.h"
#include "uevent_dependency_graph.h"
#include "uevent_handler.h"
#include "uevent_listener.h"
#include "ueventd_parser.h"
@@ -337,6 +338,42 @@ static UeventdConfiguration GetConfiguration() {
    return ParseConfig(canonical);
}

void main_loop(const UeventListener& uevent_listener,
               const std::vector<std::unique_ptr<UeventHandler>>& uevent_handlers) {
    uevent_listener.Poll([&uevent_handlers](const Uevent& uevent) {
        for (auto& uevent_handler : uevent_handlers) {
            uevent_handler->HandleUevent(uevent);
        }
        return ListenerAction::kContinue;
    });
}

void parallel_main_loop(const UeventListener& uevent_listener,
                        const std::vector<std::unique_ptr<UeventHandler>>& uevent_handlers,
                        size_t num_threads) {
    LOG(INFO) << "parallel main loop is enabled with " << num_threads << " threads";

    std::vector<std::thread> threads;
    UeventDependencyGraph graph;

    for (unsigned int i = 0; i < num_threads; i++) {
        threads.emplace_back([&graph, &uevent_handlers] {
            while (true) {
                auto uevent = graph.WaitDependencyFreeEvent();
                for (auto& uevent_handler : uevent_handlers) {
                    uevent_handler->HandleUevent(uevent);
                }
                graph.MarkEventCompleted(uevent.seqnum);
            }
        });
    }

    uevent_listener.Poll([&graph](const Uevent& uevent) {
        graph.Add(uevent);
        return ListenerAction::kContinue;
    });
}
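
The `UeventDependencyGraph` implementation is not shown in this diff. As a rough illustration of the producer/consumer shape above, here is a minimal sketch with the same interface (`Add` / `WaitDependencyFreeEvent` / `MarkEventCompleted`) that simply hands events out FIFO. `FakeUevent`, `SimpleEventQueue`, and `RunDemo` are invented names for this sketch; the real graph additionally tracks dependencies between uevents so that dependent events wait for their predecessors to complete.

```cpp
#include <atomic>
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Stand-in for the real Uevent struct; only the sequence number matters here.
struct FakeUevent {
    unsigned long seqnum;
};

// Thread-safe FIFO queue shaped like UeventDependencyGraph's interface.
class SimpleEventQueue {
  public:
    // Producer side: called from the Poll callback.
    void Add(FakeUevent ev) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(ev);
        cv_.notify_one();
    }

    // Consumer side: blocks until an event is available, then hands it out.
    FakeUevent WaitDependencyFreeEvent() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        FakeUevent ev = queue_.front();
        queue_.pop();
        return ev;
    }

    void MarkEventCompleted(unsigned long /*seqnum*/) { completed_.fetch_add(1); }

    size_t completed() const { return completed_.load(); }

  private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<FakeUevent> queue_;
    std::atomic<size_t> completed_{0};
};

// Process num_events events across num_threads workers; a seqnum of 0 is used
// as a per-worker stop sentinel so the demo terminates (the real main loop
// runs forever).
size_t RunDemo(size_t num_events, size_t num_threads) {
    SimpleEventQueue queue;
    std::vector<std::thread> workers;
    for (size_t i = 0; i < num_threads; i++) {
        workers.emplace_back([&queue] {
            while (true) {
                FakeUevent ev = queue.WaitDependencyFreeEvent();
                if (ev.seqnum == 0) break;  // sentinel: stop this worker
                queue.MarkEventCompleted(ev.seqnum);
            }
        });
    }
    for (unsigned long s = 1; s <= num_events; s++) queue.Add({s});
    for (size_t i = 0; i < num_threads; i++) queue.Add({0});  // one sentinel per worker
    for (auto& t : workers) t.join();
    return queue.completed();
}
```

The condition variable's predicate form guards against spurious wakeups, and handing out one event per wakeup mirrors how each worker thread above processes one uevent through all handlers before asking for the next.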

int ueventd_main(int argc, char** argv) {
    /*
     * init sets the umask to 077 for forked processes. We need to
@@ -404,14 +441,21 @@ int ueventd_main(int argc, char** argv) {

    // Restore prio before main loop
    setpriority(PRIO_PROCESS, 0, 0);

    if (ueventd_configuration.enable_parallel_ueventd_main_loop) {
        size_t num_threads =
                std::thread::hardware_concurrency() != 0 ? std::thread::hardware_concurrency() : 4;
        if (ueventd_configuration.parallel_main_loop_max_workers.has_value()) {
            num_threads = std::min(num_threads,
                                   ueventd_configuration.parallel_main_loop_max_workers.value());
        }
        parallel_main_loop(uevent_listener, uevent_handlers, num_threads);
    } else {
        main_loop(uevent_listener, uevent_handlers);
    }

    LOG(ERROR) << "main loop exited unexpectedly";
    return EXIT_FAILURE;
}

}  // namespace init
+23 −0
@@ -178,6 +178,23 @@ Result<void> ParseUeventSocketRcvbufSizeLine(std::vector<std::string>&& args,
    return {};
}

Result<void> ParseMainLoopMaxWorkers(std::vector<std::string>&& args,
                                     std::optional<size_t>* max_workers) {
    if (args.size() != 2) {
        return Error() << "parallel_ueventd_main_loop_max_workers lines take exactly one parameter";
    }

    size_t parsed_worker_num;
    if (!ParseByteCount(args[1], &parsed_worker_num)) {
        return Error() << "could not parse size '" << args[1]
                       << "' for parallel_ueventd_main_loop_max_workers";
    }

    *max_workers = parsed_worker_num;

    return {};
}

class SubsystemParser : public SectionParser {
  public:
    SubsystemParser(std::vector<Subsystem>* subsystems) : subsystems_(subsystems) {}
@@ -291,6 +308,12 @@ UeventdConfiguration ParseConfig(const std::vector<std::string>& configs) {
    parser.AddSingleLineParser("parallel_restorecon",
                               std::bind(ParseEnabledDisabledLine, _1,
                                         &ueventd_configuration.enable_parallel_restorecon));
    parser.AddSingleLineParser("parallel_ueventd_main_loop_max_workers",
                               std::bind(ParseMainLoopMaxWorkers, _1,
                                         &ueventd_configuration.parallel_main_loop_max_workers));
    parser.AddSingleLineParser("parallel_ueventd_main_loop",
                               std::bind(ParseEnabledDisabledLine, _1,
                                         &ueventd_configuration.enable_parallel_ueventd_main_loop));

    for (const auto& config : configs) {
        parser.ParseConfig(config);
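
The registration above binds each keyword to a single-line parse callback. As a standalone
illustration of that dispatch pattern, here is a hypothetical mini-parser (not the real init
`Parser` class; `MiniConfig` and `ParseUeventdLine` are invented names) that tokenizes one line
and dispatches on the leading keyword:

```cpp
#include <optional>
#include <sstream>
#include <string>
#include <vector>

// Invented stand-in for the relevant UeventdConfiguration fields.
struct MiniConfig {
    bool enable_parallel_main_loop = false;
    std::optional<size_t> max_workers;
};

// Tokenize one config line and dispatch on the keyword, mirroring how
// AddSingleLineParser binds a keyword to a parse callback. Returns false on
// unknown keywords, wrong arity, or unparseable values.
bool ParseUeventdLine(const std::string& line, MiniConfig* config) {
    std::istringstream iss(line);
    std::vector<std::string> args;
    std::string token;
    while (iss >> token) args.push_back(token);
    if (args.empty()) return false;

    if (args[0] == "parallel_ueventd_main_loop" && args.size() == 2) {
        if (args[1] != "enabled" && args[1] != "disabled") return false;
        config->enable_parallel_main_loop = (args[1] == "enabled");
        return true;
    }
    if (args[0] == "parallel_ueventd_main_loop_max_workers" && args.size() == 2) {
        size_t value = 0;
        try {
            value = std::stoul(args[1]);
        } catch (...) {
            return false;  // not a number, or out of range
        }
        config->max_workers = value;
        return true;
    }
    return false;
}
```

The real parser reports rich `Result<void>` errors instead of a bool, but the keyword-to-callback dispatch is the same idea.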
+2 −0
@@ -36,6 +36,8 @@ struct UeventdConfiguration {
    bool enable_modalias_handling = false;
    size_t uevent_socket_rcvbuf_size = 0;
    bool enable_parallel_restorecon = false;
    bool enable_parallel_ueventd_main_loop = false;
    std::optional<size_t> parallel_main_loop_max_workers;
};

UeventdConfiguration ParseConfig(const std::vector<std::string>& configs);