current.txt (+2 −1)

@@ -589,7 +589,8 @@ fd65298e1e09e0e3c781ab18305920d757dbe55a3b459ce17814ec5cf6dfee99 android.hardwar
 ce8dbe76eb9ee94b46ef98f725be992e760a5751073d4f4912484026541371f3 android.hardware.health@2.1::IHealth
 26f04510a0b57aba5167c5c0a7c2f077c2acbb98b81902a072517829fd9fd67f android.hardware.health@2.1::IHealthInfoCallback
 db47f4ceceb1f06c656f39caa70c557b0f8471ef59fd58611bea667ffca20101 android.hardware.health@2.1::types
-34515afa2bb792d3c6d8495a5f5d907d179c8507ca5e55c10050d02ae1d516ef android.hardware.neuralnetworks@1.3::IDevice
+9e59fffceed0dd72a9799e04505db5f777bbbea1af0695ba4107ef6d967c6fda android.hardware.neuralnetworks@1.3::IDevice
+fd5a2b723b75acbdd9f31bd07e0f83293c52f99f8d9b87bf58eeb6018f665fde android.hardware.neuralnetworks@1.3::IPreparedModelCallback
 b74fe72cfe438f50e772e6a307657ff449d5bde83c15dd1f140ff2edbe73499c android.hardware.neuralnetworks@1.3::types
 274fb1254a6d1a97824ec5c880eeefc0e410dc6d3a2a4c34052201169d2b7de0 android.hardware.radio@1.5::types
 c8e81d912827a5d49b2ddcdc4eb4556c5d231a899a1dca879309e04210daa4a0 android.hardware.radio@1.5::IRadio

neuralnetworks/1.3/Android.bp (+1 −0)

@@ -9,6 +9,7 @@ hidl_interface {
     srcs: [
         "types.hal",
         "IDevice.hal",
+        "IPreparedModelCallback.hal",
     ],
     interfaces: [
         "android.hardware.neuralnetworks@1.0",

neuralnetworks/1.3/IDevice.hal (+83 −4)

@@ -22,7 +22,7 @@
 import @1.2::Constant;
 import @1.2::DeviceType;
 import @1.2::Extension;
 import @1.2::IDevice;
-import @1.2::IPreparedModelCallback;
+import IPreparedModelCallback;
 /**
  * This interface represents a device driver.

@@ -134,18 +134,18 @@ interface IDevice extends @1.2::IDevice {
      *     not provided, or match the numModelCache returned from
      *     getNumberOfCacheFilesNeeded. The cache handles will be provided in
      *     the same order when retrieving the preparedModel from cache files
-     *     with prepareModelFromCache.
+     *     with prepareModelFromCache_1_3.
      * @param dataCache A vector of handles with each entry holding exactly one
      *     cache file descriptor for the constants' cache. The length of the
      *     vector must either be 0 indicating that caching information is not
      *     provided, or match the numDataCache returned from
      *     getNumberOfCacheFilesNeeded. The cache handles will be provided in
      *     the same order when retrieving the preparedModel from cache files
-     *     with prepareModelFromCache.
+     *     with prepareModelFromCache_1_3.
      * @param token A caching token of length Constant::BYTE_SIZE_OF_CACHE_TOKEN
      *     identifying the prepared model. The same token will be provided when
      *     retrieving the prepared model from the cache files with
-     *     prepareModelFromCache. Tokens should be chosen to have a low rate of
+     *     prepareModelFromCache_1_3. Tokens should be chosen to have a low rate of
      *     collision for a particular application. The driver cannot detect a
      *     collision; a collision will result in a failed execution or in a
      *     successful execution that produces incorrect output values. If both

@@ -168,4 +168,83 @@ interface IDevice extends @1.2::IDevice {
                           uint8_t[Constant:BYTE_SIZE_OF_CACHE_TOKEN] token,
                           IPreparedModelCallback callback)
               generates (ErrorStatus status);
+
+    /**
+     * Creates a prepared model from cache files for execution.
+     *
+     * prepareModelFromCache_1_3 is used to retrieve a prepared model directly from
+     * cache files to avoid slow model compilation time. There are
+     * two types of cache file handles provided to the driver: model cache
+     * and data cache. For more information on the two types of cache handles,
+     * refer to getNumberOfCacheFilesNeeded.
+     *
+     * The file descriptors must be opened with read and write permission. A file may
+     * have any size, and the corresponding file descriptor may have any offset. The
+     * driver must truncate a file to zero size before writing to that file. The file
+     * descriptors may be closed by the client once the asynchronous preparation has
+     * finished. The driver must dup a file descriptor if it wants to get access to
+     * the cache file later.
+     *
+     * The model is prepared asynchronously with respect to the caller. The
+     * prepareModelFromCache_1_3 function must verify the inputs to the
+     * prepareModelFromCache_1_3 function are correct, and that the security-sensitive
+     * cache has not been modified since it was last written by the driver.
+     * If there is an error, or if compilation caching is not supported, or if the
+     * security-sensitive cache has been modified, prepareModelFromCache_1_3 must
+     * immediately invoke the callback with the appropriate ErrorStatus value and
+     * nullptr for the IPreparedModel, then return with the same ErrorStatus. If
+     * the inputs to the prepareModelFromCache_1_3 function are valid, the
+     * security-sensitive cache is not modified, and there is no error,
+     * prepareModelFromCache_1_3 must launch an asynchronous task to prepare the
+     * model in the background, and immediately return from
+     * prepareModelFromCache_1_3 with ErrorStatus::NONE. If the asynchronous task
+     * fails to launch, prepareModelFromCache_1_3 must immediately invoke the callback
+     * with ErrorStatus::GENERAL_FAILURE and nullptr for the IPreparedModel, then
+     * return with ErrorStatus::GENERAL_FAILURE.
+     *
+     * When the asynchronous task has finished preparing the model, it must
+     * immediately invoke the callback function provided as an input to
+     * prepareModelFromCache_1_3. If the model was prepared successfully, the
+     * callback object must be invoked with an error status of ErrorStatus::NONE
+     * and the produced IPreparedModel object. If an error occurred preparing
+     * the model, the callback object must be invoked with the appropriate
+     * ErrorStatus value and nullptr for the IPreparedModel.
+     *
+     * The only information that may be unknown to the model at this stage is
+     * the shape of the tensors, which may only be known at execution time. As
+     * such, some driver services may return partially prepared models, where
+     * the prepared model may only be finished when it is paired with a set of
+     * inputs to the model. Note that the same prepared model object may be
+     * used with different shapes of inputs on different (possibly concurrent)
+     * executions.
+     *
+     * @param modelCache A vector of handles with each entry holding exactly one
+     *     cache file descriptor for the security-sensitive cache. The length of
+     *     the vector must match the numModelCache returned from
+     *     getNumberOfCacheFilesNeeded. The cache handles will be provided in the
+     *     same order as with prepareModel_1_3.
+     * @param dataCache A vector of handles with each entry holding exactly one
+     *     cache file descriptor for the constants' cache. The length of the
+     *     vector must match the numDataCache returned from
+     *     getNumberOfCacheFilesNeeded. The cache handles will be provided in the
+     *     same order as with prepareModel_1_3.
+     * @param token A caching token of length Constant::BYTE_SIZE_OF_CACHE_TOKEN
+     *     identifying the prepared model. It is the same token provided when saving
+     *     the cache files with prepareModel_1_3. Tokens should be chosen
+     *     to have a low rate of collision for a particular application. The driver
+     *     cannot detect a collision; a collision will result in a failed execution
+     *     or in a successful execution that produces incorrect output values.
+     * @param callback A callback object used to return the error status of
+     *     preparing the model for execution and the prepared model if
+     *     successful, nullptr otherwise. The callback object's notify function
+     *     must be called exactly once, even if the model could not be prepared.
+     * @return status Error status of launching a task which prepares the model
+     *     in the background; must be:
+     *     - NONE if preparation task is successfully launched
+     *     - DEVICE_UNAVAILABLE if driver is offline or busy
+     *     - GENERAL_FAILURE if caching is not supported or if there is an
+     *       unspecified error
+     *     - INVALID_ARGUMENT if one of the input arguments is invalid
+     */
+    prepareModelFromCache_1_3(vec<handle> modelCache, vec<handle> dataCache,
+                              uint8_t[Constant:BYTE_SIZE_OF_CACHE_TOKEN] token,
+                              IPreparedModelCallback callback)
+        generates (ErrorStatus status);
 };

neuralnetworks/1.3/IPreparedModelCallback.hal (new file, +57 −0)

/*
 * Copyright (C) 2019 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package android.hardware.neuralnetworks@1.3;

import @1.0::ErrorStatus;
import @1.2::IPreparedModelCallback;
import @1.2::IPreparedModel;

/**
 * IPreparedModelCallback must be used to return a prepared model produced by an
 * asynchronous task launched from IDevice::prepareModel.
 */
interface IPreparedModelCallback extends @1.2::IPreparedModelCallback {

    /**
     * There are three notify methods declared for the IPreparedModelCallback
     * interface: notify_1_3, notify_1_2, and notify. One of the three
     * notify methods must be invoked immediately after the asynchronous
     * task holding this callback has finished preparing the model. If the model was
     * successfully prepared, one of the notify methods must be invoked with
     * ErrorStatus::NONE and the prepared model. If the model was not able to be
     * successfully prepared, one of the notify methods must be invoked with the
     * appropriate ErrorStatus and nullptr as the IPreparedModel. If the asynchronous
     * task holding this callback fails to launch or if the model provided to
     * IDevice::prepareModel is invalid, one of the notify methods must be invoked
     * with the appropriate error as well as nullptr for the IPreparedModel.
     *
     * @param status Error status returned from the asynchronous model
     *     preparation task; must be:
     *     - NONE if the asynchronous task successfully prepared the model
     *     - DEVICE_UNAVAILABLE if driver is offline or busy
     *     - GENERAL_FAILURE if the asynchronous task resulted in an
     *       unspecified error
     *     - INVALID_ARGUMENT if one of the input arguments to prepareModel
     *       is invalid
     * @param preparedModel A model that has been asynchronously prepared for
     *     execution. If the model was unable to be prepared due to an error,
     *     nullptr must be passed in place of the IPreparedModel object.
     */
    oneway notify_1_3(ErrorStatus status, IPreparedModel preparedModel);
};

neuralnetworks/1.3/vts/functional/Android.bp (+19 −0)

@@ -14,6 +14,24 @@
 // limitations under the License.
 //

+cc_library_static {
+    name: "VtsHalNeuralNetworksV1_3Callbacks",
+    defaults: ["VtsHalTargetTestDefaults"],
+    export_include_dirs: ["include"],
+    srcs: [
+        "Callbacks.cpp",
+    ],
+    static_libs: [
+        "android.hardware.neuralnetworks@1.0",
+        "android.hardware.neuralnetworks@1.1",
+        "android.hardware.neuralnetworks@1.2",
+        "android.hardware.neuralnetworks@1.3",
+    ],
+    header_libs: [
+        "libbase_headers",
+    ],
+}
+
 cc_test {
     name: "VtsHalNeuralnetworksV1_3TargetTest",
     defaults: ["VtsHalTargetTestDefaults"],

@@ -44,6 +62,7 @@ cc_test {
         "libneuralnetworks_utils",
         "VtsHalNeuralNetworksV1_0_utils",
         "VtsHalNeuralNetworksV1_2Callbacks",
+        "VtsHalNeuralNetworksV1_3Callbacks",
     ],
     whole_static_libs: [
         "neuralnetworks_generated_V1_0_example",
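The control flow that the prepareModelFromCache_1_3 documentation above mandates (validate inputs synchronously, fail fast through the callback with a matching return status, otherwise launch a background task and return NONE immediately) can be sketched outside of HIDL. This is a minimal model of the contract, not the generated V1_3 API: the ErrorStatus and PreparedModel types, the plain-int file-descriptor placeholders, and the returned future (present only so a caller can join the background task, which the real HIDL method does not expose) are all illustrative.

```cpp
#include <cassert>
#include <functional>
#include <future>
#include <memory>
#include <vector>

// Simplified stand-ins for the generated android::hardware::neuralnetworks::V1_3 types.
enum class ErrorStatus { NONE, DEVICE_UNAVAILABLE, GENERAL_FAILURE, INVALID_ARGUMENT };
struct PreparedModel {};  // stand-in for IPreparedModel

using PreparedModelCallback =
        std::function<void(ErrorStatus, std::shared_ptr<PreparedModel>)>;

struct LaunchResult {
    ErrorStatus status;      // what the HIDL method returns synchronously
    std::future<void> task;  // sketch-only handle so the caller can join the task
};

// Models the prepareModelFromCache_1_3 contract: on invalid input, invoke the
// callback once with the error and return the same status; on valid input,
// launch the preparation asynchronously and return NONE right away.
LaunchResult prepareModelFromCache_1_3(const std::vector<int>& modelCache,
                                       const std::vector<int>& dataCache,
                                       size_t numModelCache, size_t numDataCache,
                                       PreparedModelCallback callback) {
    // Unlike prepareModel_1_3, zero-length vectors are not accepted here: the
    // lengths must match the counts from getNumberOfCacheFilesNeeded exactly.
    if (modelCache.size() != numModelCache || dataCache.size() != numDataCache) {
        callback(ErrorStatus::INVALID_ARGUMENT, nullptr);
        return {ErrorStatus::INVALID_ARGUMENT, {}};
    }
    auto task = std::async(std::launch::async, [cb = std::move(callback)] {
        // A real driver would verify the security-sensitive cache contents and
        // deserialize the prepared model here, then notify exactly once.
        cb(ErrorStatus::NONE, std::make_shared<PreparedModel>());
    });
    return {ErrorStatus::NONE, std::move(task)};
}
```

The fail-fast branch shows why the spec pairs the callback error with an identical return value: a client that only checks the synchronous status and a client that only waits on the callback must reach the same conclusion.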
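The VTS change links a new VtsHalNeuralNetworksV1_3Callbacks library whose source (Callbacks.cpp) is not part of this diff. As a rough sketch of the usual test-side pattern, assuming nothing beyond the notify_1_3 contract quoted above: the callback latches the single notification under a mutex and lets the test thread block until it arrives. All names and members here are illustrative, not the real implementation.

```cpp
#include <cassert>
#include <condition_variable>
#include <memory>
#include <mutex>
#include <thread>

// Simplified stand-ins for the generated HIDL types.
enum class ErrorStatus { NONE, DEVICE_UNAVAILABLE, GENERAL_FAILURE, INVALID_ARGUMENT };
struct PreparedModel {};  // stand-in for IPreparedModel

// Sketch of a VTS-style PreparedModelCallback: records the one and only
// notification and wakes any thread blocked in wait().
class PreparedModelCallback {
  public:
    // In a real callback class, notify, notify_1_2, and notify_1_3 would all
    // funnel into this; the contract allows exactly one invocation total.
    void notify_1_3(ErrorStatus status, std::shared_ptr<PreparedModel> model) {
        {
            std::lock_guard<std::mutex> lock(mMutex);
            assert(!mNotified);  // a second notification violates the contract
            mNotified = true;
            mStatus = status;
            mModel = std::move(model);
        }
        mCondition.notify_all();
    }

    // Blocks until one of the notify methods has been invoked.
    void wait() {
        std::unique_lock<std::mutex> lock(mMutex);
        mCondition.wait(lock, [this] { return mNotified; });
    }

    // Only meaningful after wait() has returned.
    ErrorStatus getStatus() const { return mStatus; }
    std::shared_ptr<PreparedModel> getPreparedModel() const { return mModel; }

  private:
    mutable std::mutex mMutex;
    std::condition_variable mCondition;
    bool mNotified = false;
    ErrorStatus mStatus = ErrorStatus::GENERAL_FAILURE;
    std::shared_ptr<PreparedModel> mModel;
};
```

A test would pass such an object to prepareModel_1_3 or prepareModelFromCache_1_3, call wait(), and then assert on getStatus() and getPreparedModel(); the latch is what turns the HAL's asynchronous notification into the synchronous flow a test body needs.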