soc: qcom: secure_buffer: Process large SG tables in batches
Currently, if processing an SG table requires more memory than
fits in the pre-allocated buffer, calls to hyp_assign_table()
fail as though there were not enough memory available to
process the request.
Instead, for every hyp_assign_table() call, allocate enough
memory to process the maximum batch size, and process large SG
tables in pieces using this memory. This avoids failures due to
large SG tables. Since the memory for handling these requests
is now allocated per call, the pre-allocated buffer is no
longer needed and can be dropped.
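
The batching idea is roughly as sketched below. The constant and
helper names here (BATCH_MAX_ENTRIES, assign_mem_batch(),
hyp_assign_table_batched()) are illustrative assumptions, not the
driver's actual symbols:

    #include <linux/scatterlist.h>
    #include <linux/slab.h>

    /* Hypothetical per-call batch limit; the real driver's value may differ. */
    #define BATCH_MAX_ENTRIES 512

    struct batch_entry {
    	phys_addr_t addr;
    	u64 size;
    };

    /* Placeholder for the actual hyp assign call issued for one batch. */
    static int assign_mem_batch(struct batch_entry *entries, int nr)
    {
    	return 0;
    }

    static int hyp_assign_table_batched(struct sg_table *table)
    {
    	struct batch_entry *batch;
    	struct scatterlist *sg;
    	int i, n = 0, ret = 0;

    	/* Allocate once per call, sized for the largest batch we will send. */
    	batch = kcalloc(BATCH_MAX_ENTRIES, sizeof(*batch), GFP_KERNEL);
    	if (!batch)
    		return -ENOMEM;

    	for_each_sg(table->sgl, sg, table->nents, i) {
    		batch[n].addr = sg_phys(sg);
    		batch[n].size = sg->length;
    		/* Flush the batch when it is full or at the last SG entry. */
    		if (++n == BATCH_MAX_ENTRIES || i == table->nents - 1) {
    			ret = assign_mem_batch(batch, n);
    			if (ret)
    				break;
    			n = 0;
    		}
    	}

    	kfree(batch);
    	return ret;
    }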
Change-Id: Ie9899a5e2c8de6127707609101f5fb557e3f0533
Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>