
Commit 0092fe39 authored by Steven Moreland

libbinder: RPC clear behavior

Previously, when RpcState cleared its state in response to an
error, two issues related to proxy destruction could occur:
- a BpBinder could have lost its last strong ref on another thread
  without the destruction being reflected in RpcSession yet. This
  is the issue causing crashes in the callback test (when the call
  is oneway, the callback is not oneway, and the call is delayed).
- this code could run the BpBinder destructor if mNodeForAddress held
  the last wp<> to the BpBinder (which has object lifetime weak).
  This could hypothetically cause an issue if an attached object
  (via attachObject) made binder calls in its destructor. In order
  to prevent this, the whole node is held onto, instead of only
  'node.sentRef'.

Fixes: 237330627
Test: (running for several minutes)
    m binderRpcTest && adb sync && adb shell 'while /data/nativetest64/binderRpcTest/binderRpcTest --gtest_filter="*Callbacks*"; do logcat -c; done'

Change-Id: I21a702217b0749932d77c3acf11e879ee77dd22b
parent dcc3db7b
+13 −15
@@ -244,33 +244,31 @@ void RpcState::clear() {
                             "New state should be impossible after terminating!");
         return;
     }
+    mTerminated = true;
 
     if (SHOULD_LOG_RPC_DETAIL) {
         ALOGE("RpcState::clear()");
         dumpLocked();
     }
 
-    // if the destructor of a binder object makes another RPC call, then calling
-    // decStrong could deadlock. So, we must hold onto these binders until
-    // mNodeMutex is no longer taken.
-    std::vector<sp<IBinder>> tempHoldBinder;
-
-    mTerminated = true;
+    // invariants
     for (auto& [address, node] : mNodeForAddress) {
-        sp<IBinder> binder = node.binder.promote();
-        LOG_ALWAYS_FATAL_IF(binder == nullptr,
-                            "Binder expected to be owned with address: %" PRIu64 " %s", address,
-                            node.toString().c_str());
-
-        if (node.sentRef != nullptr) {
-            tempHoldBinder.push_back(node.sentRef);
+        bool guaranteedHaveBinder = node.timesSent > 0;
+        if (guaranteedHaveBinder) {
+            LOG_ALWAYS_FATAL_IF(node.sentRef == nullptr,
+                                "Binder expected to be owned with address: %" PRIu64 " %s", address,
+                                node.toString().c_str());
         }
     }
 
-    mNodeForAddress.clear();
+    // if the destructor of a binder object makes another RPC call, then calling
+    // decStrong could deadlock. So, we must hold onto these binders until
+    // mNodeMutex is no longer taken.
+    auto temp = std::move(mNodeForAddress);
+    mNodeForAddress.clear(); // RpcState isn't reusable, but for future/explicit
 
     _l.unlock();
-    tempHoldBinder.clear(); // explicit
+    temp.clear(); // explicit
 }
 
 void RpcState::dumpLocked() {
+0 −7
@@ -1029,13 +1029,6 @@ TEST_P(BinderRpc, Callbacks) {
                 // since this session has an incoming connection w/ a threadpool, we
                 // need to manually shut it down
                 EXPECT_TRUE(proc.proc.sessions.at(0).session->shutdownAndWait(true));
-
-                proc.proc.host.setCustomExitStatusCheck([](int wstatus) {
-                    // Flaky. Sometimes gets SIGABRT.
-                    EXPECT_TRUE((WIFEXITED(wstatus) && WEXITSTATUS(wstatus) == 0) ||
-                                (WIFSIGNALED(wstatus) && WTERMSIG(wstatus) == SIGABRT))
-                            << "server process failed: " << WaitStatusToString(wstatus);
-                });
                 proc.expectAlreadyShutdown = true;
             }
         }