SF: Fix many flakes in IdleTimerTest
There were a few flakes that were possible due to the way the idle
thread could end up being scheduled. In particular, the following tests
have historically been flaky:

  IdleTimerTest.idleTimerIdlesTest
  IdleTimerTest.noCallbacksAfterStopAndResetTest
  IdleTimerTest.noCallbacksAfterStopTest
  IdleTimerTest.resetTest
  IdleTimerTest.startStopTest
  IdleTimerTest.timeoutCallbackExecutionTest

One thing that helps is to boost the priority of the test process as a
whole so that the IdleTimer thread is scheduled when it wants to run.
But even then there is no guarantee that requesting a callback after
"3ms" actually generates that callback that quickly.

The adjustments to the tests include:

1) Removing calls to sleep_for in favor of performing follow-up
   operations immediately (start() then stop(), for example), or
   relying on the ability of the AsyncCallRecorder to wait for
   callbacks (a simplified sketch of this pattern follows below).
2) For startStopTest, using a larger interval, and sanity-checking
   against a clock source that an unexpected event really is
   unexpected.
3) For resetTest, also using a long interval to observe the behavior
   when the reset happens shortly before a callback for the previous
   interval would have been made.

Unfortunately, one change was necessary to the implementation, not
just the tests. It turned out that condition.wait_for could return
after the interval had expired without returning
std::cv_status::timeout, and therefore not trigger a callback even
though it should have. This led to idleTimerIdlesTest being flaky even
with the other changes (see the second sketch below).

A second issue was discovered in Scheduler, which did not shut down
its thread properly in its destructor. This could allow the callback
the Scheduler sets to be invoked while the Scheduler instance is being
destroyed, leading to a lock attempt on a destroyed mutex (see the
third sketch below).

Test: libsurfaceflinger_unittest --gtest_repeat=1000 # ALL pass x1000
Test: atest libsurfaceflinger_unittest # All, not just IdleTimerTest
Bug: 122319747
Change-Id: I716451524c32cc6a299523c47c11cfefd6ab4460
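First sketch, for adjustment (1): a minimal, hypothetical stand-in for
the "wait for callbacks" idea. The class and method names below are
illustrative only, not the real AsyncCallRecorder API.

    #include <chrono>
    #include <condition_variable>
    #include <mutex>

    // Hypothetical call recorder: the timer thread calls recordCall(),
    // and the test thread blocks in waitForCall() instead of sleeping
    // for a fixed duration and hoping the scheduler cooperated.
    class CallRecorderSketch {
    public:
        void recordCall() {
            {
                std::lock_guard<std::mutex> lock(mMutex);
                ++mCalls;
            }
            mCondition.notify_all();
        }

        // Returns true if a callback was observed before the
        // (deliberately generous) timeout expired.
        bool waitForCall(std::chrono::milliseconds timeout = std::chrono::seconds(1)) {
            std::unique_lock<std::mutex> lock(mMutex);
            return mCondition.wait_for(lock, timeout, [this] { return mCalls > 0; });
        }

    private:
        std::mutex mMutex;
        std::condition_variable mCondition;
        int mCalls = 0;
    };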
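Second sketch, for the implementation fix: how a wakeup near the
deadline can defeat a cv_status::timeout check, and one deadline-based
way to avoid it. The member names and the exact fix shown are
assumptions for illustration, not the literal IdleTimer change.

    #include <chrono>
    #include <condition_variable>
    #include <mutex>

    class TimerLoopSketch {
    public:
        void loopOnce() {
            std::unique_lock<std::mutex> lock(mMutex);

            // Buggy pattern: a wakeup that happens just after the interval
            // has elapsed can return cv_status::no_timeout, so the callback
            // is skipped even though the deadline has passed.
            //
            //   if (mCondition.wait_for(lock, mTimeout) == std::cv_status::timeout) {
            //       fireCallback();
            //   }

            // Safer pattern: wait against an explicit steady_clock deadline
            // with a predicate, and decide based on the predicate rather
            // than on the returned cv_status.
            const auto deadline = std::chrono::steady_clock::now() + mTimeout;
            const bool stopped =
                    mCondition.wait_until(lock, deadline, [this] { return mStopRequested; });
            if (!stopped) {
                fireCallback();
            }
        }

    private:
        void fireCallback() { /* invoke the registered timeout callback */ }

        std::mutex mMutex;
        std::condition_variable mCondition;
        std::chrono::milliseconds mTimeout{30};
        bool mStopRequested = false;
    };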
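Third sketch, for the Scheduler issue: a destructor that stops and
joins its worker thread before any members (such as the mutex the
callback locks) are destroyed. Again, the names are illustrative, not
the actual Scheduler code.

    #include <atomic>
    #include <chrono>
    #include <mutex>
    #include <thread>

    class SchedulerSketch {
    public:
        SchedulerSketch() {
            mThread = std::thread([this] {
                while (!mStop.load()) {
                    onTimerExpired();
                    std::this_thread::sleep_for(std::chrono::milliseconds(10));
                }
            });
        }

        // Without an explicit stop-and-join, the worker thread could still
        // be inside onTimerExpired() while the members below (including
        // mMutex) are destroyed, which matches the crash described above.
        ~SchedulerSketch() {
            mStop = true;
            if (mThread.joinable()) {
                mThread.join();
            }
        }

    private:
        void onTimerExpired() {
            std::lock_guard<std::mutex> lock(mMutex);
            // ... touch state that must outlive every callback invocation ...
        }

        std::atomic<bool> mStop{false};
        std::mutex mMutex;
        std::thread mThread;
    };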