
Commit ee00479d authored by Ben Hutchings, committed by Shuah Khan

selftests: vm: Try harder to allocate huge pages



If we need to increase the number of huge pages, drop caches first
to reduce fragmentation and then check that we actually allocated
as many as we wanted.  Retry once if that doesn't work.

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
parent 3b4d3819
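
For readers who want to try the approach in isolation, below is a minimal standalone sketch of the drop-caches-and-retry pattern this patch adds to the test script. It is not part of the patch: the target of 64 huge pages is a made-up example, it assumes hugetlbfs support and root privileges, and it reads HugePages_Free with awk instead of the script's while-read loop over /proc/meminfo.

#!/bin/sh
# Sketch only: try to get at least $needpgs free huge pages, retrying once.
needpgs=64
nr_hugepgs=`cat /proc/sys/vm/nr_hugepages`
freepgs=`awk '$1 == "HugePages_Free:" { print $2 }' /proc/meminfo`
tries=2
while [ $tries -gt 0 ] && [ $freepgs -lt $needpgs ]; do
	lackpgs=$(( $needpgs - $freepgs ))
	# Drop the page cache, dentries and inodes to reduce fragmentation
	# before asking for more contiguous huge pages.
	echo 3 > /proc/sys/vm/drop_caches
	echo $(( $lackpgs + $nr_hugepgs )) > /proc/sys/vm/nr_hugepages
	# The kernel may allocate fewer pages than requested, so re-read
	# the free count and go around again if we are still short.
	freepgs=`awk '$1 == "HugePages_Free:" { print $2 }' /proc/meminfo`
	tries=$(( tries - 1 ))
done
if [ $freepgs -lt $needpgs ]; then
	printf "Not enough huge pages available (%d < %d)\n" $freepgs $needpgs
	exit 1
fi

As in the patch itself, the write to /proc/sys/vm/nr_hugepages fails without root privileges, which is what the "Please run this test as root" check below catches.
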
+14 −1
@@ -20,13 +20,26 @@ done < /proc/meminfo
if [ -n "$freepgs" ] && [ -n "$pgsize" ]; then
	nr_hugepgs=`cat /proc/sys/vm/nr_hugepages`
	needpgs=`expr $needmem / $pgsize`
	if [ $freepgs -lt $needpgs ]; then
	tries=2
	while [ $tries -gt 0 ] && [ $freepgs -lt $needpgs ]; do
		lackpgs=$(( $needpgs - $freepgs ))
		echo 3 > /proc/sys/vm/drop_caches
		echo $(( $lackpgs + $nr_hugepgs )) > /proc/sys/vm/nr_hugepages
		if [ $? -ne 0 ]; then
			echo "Please run this test as root"
			exit 1
		fi
		while read name size unit; do
			if [ "$name" = "HugePages_Free:" ]; then
				freepgs=$size
			fi
		done < /proc/meminfo
		tries=$((tries - 1))
	done
	if [ $freepgs -lt $needpgs ]; then
		printf "Not enough huge pages available (%d < %d)\n" \
		       $freepgs $needpgs
		exit 1
	fi
else
	echo "no hugetlbfs support in kernel?"