- Aug 19, 2021
-
-
Gagan Malvi authored
Too much spam in dmesg.

Signed-off-by: Gagan Malvi <malvigagan@gmail.com>
-
Vaisakh Murali authored
* fpsgo is a proprietary kernel driver (yes, these exist in mtk); this line keeps spamming the log, masking what I actually want from the logs.

Signed-off-by: Vaisakh Murali <vaisakhmurali@gmail.com>
Change-Id: I9c2c194fe6e3903b5bca080209339338f9224ccc
-
Gagan Malvi authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
Change-Id: I51ffece35e2897ddbc4873f8fcb8961bd7c1d9dd
-
TheMalachite authored
Change-Id: I85aab06f322373bee8ae0ab53b68e368ac3b48d6
-
TheMalachite authored
Change-Id: I73bbdf9d4f1363bcb31cc3e20d64bfdf3fa2e9f1
-
Gagan Malvi authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
Change-Id: Ia8ff494149c89b606efe9c1721fc6a89e4bae0e6
-
Gagan Malvi authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
Change-Id: I9b54ad032efd8672a7430c88ae09262a1c36a669
-
Gagan Malvi authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
Change-Id: I64fc24ccae15608c107e234911121d6c3abd660d
-
darkhz authored
* This brings performance improvements.
-
SamarV-121 authored
* Fixes "failed to mount /tmp/com.android.adbd.apex to loop device /dev/block/loop16" error spam in recovery.

Signed-off-by: SamarV-121 <samarvispute121@gmail.com>
Signed-off-by: Gagan Malvi <malvi@aospa.co>
-
Sultan Alsawaf authored
For us, it's most helpful to have the round-robin timeslice as low as is allowed by the scheduler to reduce latency. Since it's limited by the scheduler tick rate, just set the default to 1 jiffy, which is the lowest possible value.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Oktapra Amtono <oktapra.amtono@gmail.com>
Signed-off-by: CloudedQuartz <ravenklawasd@gmail.com>
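For reference, a rough sketch of what such a default change could look like; RR_TIMESLICE below matches the mainline definition in include/linux/sched/rt.h, but the surrounding guard is purely illustrative and not necessarily how this tree applies it:

/* Sketch only: the mainline default is ~100 ms worth of jiffies; dropping it
 * to a single jiffy gives the shortest round-robin timeslice the tick allows. */
#ifdef CONFIG_SCHED_RR_MIN_TIMESLICE		/* hypothetical option */
#define RR_TIMESLICE		(1)
#else
#define RR_TIMESLICE		(100 * HZ / 1000)
#endif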
-
Sultan Alsawaf authored
Although SCHED_FIFO is a real-time scheduling policy, it can have bad results on system latency, since each SCHED_FIFO task will run to completion before yielding to another task. This can result in visible micro-stalls when a SCHED_FIFO task hogs the CPU for too long. On a system where latency is favored over throughput, using SCHED_RR is a better choice than SCHED_FIFO.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Oktapra Amtono <oktapra.amtono@gmail.com>
Signed-off-by: CloudedQuartz <ravenklawasd@gmail.com>
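As a hedged illustration of the policy choice (the helper below is not from this tree; sched_setscheduler() and SCHED_RR are the standard kernel interfaces):

#include <linux/sched.h>
#include <uapi/linux/sched/types.h>

/* Give a latency-sensitive kthread a round-robin RT policy so it yields to
 * equal-priority RT tasks at each timeslice instead of running to completion
 * the way SCHED_FIFO does. */
static int make_task_rr(struct task_struct *task, int prio)
{
	struct sched_param param = { .sched_priority = prio };

	return sched_setscheduler(task, SCHED_RR, &param);
}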
-
- Jul 09, 2021
-
-
Fangrui Song authored
arm64 references the start address of .builtin_fw (__start_builtin_fw) with a pair of R_AARCH64_ADR_PREL_PG_HI21/R_AARCH64_LDST64_ABS_LO12_NC relocations. The compiler is allowed to emit the R_AARCH64_LDST64_ABS_LO12_NC relocation because struct builtin_fw in include/linux/firmware.h is 8-byte aligned. The R_AARCH64_LDST64_ABS_LO12_NC relocation requires the address to be a multiple of 8, which may not be the case if .builtin_fw is empty. Unconditionally align .builtin_fw to fix the linker error. 32-bit architectures could use ALIGN(4), but that would add unnecessary complexity, so just use ALIGN(8).

Link: https://lkml.kernel.org/r/20201208054646.2913063-1-maskray@google.com
Link: https://github.com/ClangBuiltLinux/linux/issues/1204
Fixes: 5658c769 ("firmware: allow firmware files to be built into kernel image")
Signed-off-by: Fangrui Song <maskray@google.com>
Reported-by: kernel test robot <lkp@intel.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Douglas Anderson <dianders@chromium.org>
Acked-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Gagan Malvi <malvi@aospa.co>
Change-Id: I1d81ae7a73b346e8139a37f58152f2aa39f32af6
-
Gagan Malvi authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
Change-Id: I6a9b85da4abe27aa19cce3450468742d7dcde59a
-
Ash Blake authored
Strings in C are pointers to their first character, so we can't just compare str to '\0'. If we want to check whether str is empty by testing whether its first character is null, we have to dereference str.

Signed-off-by: Vaisakh Murali <mvaisakh@statixos.com>
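A minimal illustration of the bug class being fixed (function names are hypothetical, not the driver's actual code):

#include <stddef.h>

/* Wrong: 'str == '\0'' compares the pointer itself against NULL, not the
 * first character, so a non-NULL but empty string is never detected. */
static int is_empty_wrong(const char *str)
{
	return str == '\0';
}

/* Right: dereference str and test its first character. */
static int is_empty(const char *str)
{
	return str != NULL && *str == '\0';
}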
-
Miles Chen authored
Currently some struct LCM_UTIL_FUNCS variables are marked const and should not be written to. However, we still cast these variables to non-const pointers and write to them. Fix it by removing the const qualifier from these variables so we can modify them. This issue was found with clang-r370808: "Writes to variables declared const through casts to non-const pointers (explicitly undefined behavior) are now removed. The next release of Clang adds UBSAN support for catching such mistakes."

Variables:
static const struct LCM_UTIL_FUNCS lcm_utils_dsi0;
static const struct LCM_UTIL_FUNCS lcm_utils_dsi1;
static const struct LCM_UTIL_FUNCS lcm_utils_dsidual;

[163:disp_check]pc : (null)
[163:disp_check]lr : lcm_suspend+0xc8/0xf4
[163:disp_check]sp : ffffff800c37bd10
[163:disp_check]x29: ffffff800c37bd10 x28: 000000000000fffc
[163:disp_check]x27: ffffff800a6de000 x26: 000000000000fffd
[163:disp_check]x25: ffffff800a6de000 x24: 000000000000fffb
[163:disp_check]x23: 0000000000000028 x22: ffffff800a6de000
[163:disp_check]x21: ffffff800956c04e x20: ffffff800992bcc5
[163:disp_check]x19: 0000000000000000 x18: ffffff8009876000
[163:disp_check]x17: 0000000000000000 x16: 0000000000000000
[163:disp_check]x15: 00000000000001f6 x14: 00000000000001f6
[163:disp_check]x13: 0000000000000004 x12: 0000000004a3fee0
[163:disp_check]x11: 843a0e2be8e0cd00 x10: 0000000000000001
[163:disp_check]x9 : 843a0e2be8e0cd00 x8 : 0000000000000000
[163:disp_check]x7 : ffffff800814c4ac x6 : 0000000000000000
[163:disp_check]x5 : 0000000000000080 x4 : 0000000000000001
[163:disp_check]x3 : ffffff800992bcc5 x2 : 0000000000000000
[164:disp_check]x1 : 0000000000000028 x0 : 0000000000000000
[163:disp_check]Call trace:
[163:disp_check] (null)
[163:disp_check] disp_lcm_suspend+0x128/0x174
[163:disp_check] primary_display_esd_recovery+0x244/0x67c
[163:disp_check] primary_display_check_recovery_worker_kthread+0x1a4/0x350
[163:disp_check] kthread+0x118/0x128
[163:disp_check] ret_from_fork+0x10/0x18
[163:disp_check]Code: bad PC value
[163:disp_check]---[ end trace ffa1eb51d09561c6 ]---

Change-Id: I38be21d2b244622ce0c836a05994f148c813b8aa
Feature: [Module]Display Driver
Signed-off-by: Miles Chen <miles.chen@mediatek.com>
CR-Id: ALPS04988239
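A condensed sketch of the undefined behavior being removed (the struct layout and member name here are illustrative, not the actual MediaTek definition):

struct LCM_UTIL_FUNCS {
	void (*set_reset_pin)(unsigned int value);	/* illustrative member */
};

static const struct LCM_UTIL_FUNCS lcm_utils_dsi0;

static void hook_utils(void (*fn)(unsigned int))
{
	/* Writing through a pointer whose const was cast away is undefined
	 * behavior; newer clang simply deletes the store, which is how the
	 * (null) PC above comes about. Dropping const from the variable is
	 * the fix this message describes. */
	((struct LCM_UTIL_FUNCS *)&lcm_utils_dsi0)->set_reset_pin = fn;
}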
-
Nick Desaulniers authored
LLVM implemented a recent "libcall optimization" that lowers calls to `sprintf(dest, "%s", str)` where the return value is used to `stpcpy(dest, str) - dest`. This generally avoids the machinery involved in parsing format strings. `stpcpy` is just like `strcpy` except it returns the pointer to the new tail of `dest`. This optimization was introduced into clang-12.

Implement this so that we don't observe linkage failures due to missing symbol definitions for `stpcpy`.

Similar to last year's fire drill with commit 5f074f3e192f ("lib/string.c: implement a basic bcmp"):

The kernel is somewhere between a "freestanding" environment (no full libc) and a "hosted" environment (many symbols from libc exist with the same type, function signature, and semantics). As H. Peter Anvin notes, there's not really a great way to inform the compiler that you're targeting a freestanding environment but would like to opt in to some libcall optimizations (see pr/47280 below), rather than opt out.

Arvind notes that -fno-builtin-* behaves slightly differently between GCC and Clang, and that Clang is missing many __builtin_* definitions, which I consider a bug in Clang and am working on fixing.

Masahiro summarizes the subtle distinction between compilers justly: to prevent transformation from foo() into bar(), there are two ways in Clang to do that, -fno-builtin-foo and -fno-builtin-bar; there is only one in GCC, -fno-builtin-foo. (Any difference in that behavior in Clang is likely a bug from a missing __builtin_* definition.)

Masahiro also notes: we want to disable optimization from foo() to bar(), but we may still benefit from the optimization from foo() into something else. If GCC implements the same transform, we would run into a problem because it is not -fno-builtin-bar but -fno-builtin-foo that disables that optimization. In this regard, -fno-builtin-foo would be more future-proof than -fno-builtin-bar, but -fno-builtin-foo is still potentially overkill. We may want to prevent calls from foo() being optimized into calls to bar(), but we may still want other optimizations on calls to foo().

It seems that compilers today don't quite provide the fine-grained control over which libcall optimizations pseudo-freestanding environments would prefer.

Finally, Kees notes that this interface is unsafe, so we should not encourage its use. As such, I've removed the declaration from any header, but it still needs to be exported to avoid linkage errors in modules.

Cc: stable@vger.kernel.org
Link: https://bugs.llvm.org/show_bug.cgi?id=47162
Link: https://bugs.llvm.org/show_bug.cgi?id=47280
Link: https://github.com/ClangBuiltLinux/linux/issues/1126
Link: https://man7.org/linux/man-pages/man3/stpcpy.3.html
Link: https://pubs.opengroup.org/onlinepubs/9699919799/functions/stpcpy.html
Link: https://reviews.llvm.org/D85963
Suggested-by: Andy Lavr <andy.lavr@gmail.com>
Suggested-by: Arvind Sankar <nivedita@alum.mit.edu>
Suggested-by: Joe Perches <joe@perches.com>
Suggested-by: Masahiro Yamada <masahiroy@kernel.org>
Suggested-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Reported-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: Vaisakh Murali <mvaisakh@statixos.com>
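For reference, a freestanding stpcpy() along the lines described can be sketched as follows (an approximation of the lib/string.c addition, not necessarily a verbatim copy):

#undef stpcpy
char *stpcpy(char *__restrict__ dest, const char *__restrict__ src)
{
	/* Copy src including its terminating NUL, then return a pointer to
	 * the NUL just written into dest (the "new tail"), unlike strcpy(),
	 * which returns dest itself. */
	while ((*dest++ = *src++) != '\0')
		/* nothing */;
	return --dest;
}
EXPORT_SYMBOL(stpcpy);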
-
Ard Biesheuvel authored
readelf complains about the section layout of vmlinux when building with CONFIG_RELOCATABLE=y (for KASLR):

readelf: Warning: [21]: Link field (0) should index a symtab section.
readelf: Warning: [21]: Info field (0) should index a relocatable section.

Also, it seems that our use of '-pie -shared' is contradictory, and thus ambiguous. In general, the way KASLR is wired up at the moment is highly tailored to how ld.bfd happens to implement (and conflate) PIE executables and shared libraries, so given the current effort to support other toolchains, let's fix some of these issues as well.

- Drop the -pie linker argument and just leave -shared. In ld.bfd, the differences between them are unclear (except for the ELF type of the produced image [0]) but lld chokes on seeing both at the same time.
- Rename the .rela output section to .rela.dyn, as is customary for shared libraries and PIE executables, so that it is not misidentified by readelf as a static relocation section (producing the warnings above).
- Pass the -z notext and -z norelro options to explicitly instruct the linker to permit text relocations, and to omit the RELRO program header (which requires a certain section layout that we don't adhere to in the kernel). These are the defaults for current versions of ld.bfd.
- Discard .eh_frame and .gnu.hash sections to keep them from being emitted between .head.text and .text, screwing up the section layout.

These changes only affect the ELF image, and produce the same binary image.

[0] b9dce7f1 ("arm64: kernel: force ET_DYN ELF type for ...")

Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Peter Smith <peter.smith@linaro.org>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: Ide3385cc7a138e6efc7ae22d7139c5feb38e9f72
-
Masahiro Yamada authored
Currently LDFLAGS is not cleared, so the same flags are accumulated in LDFLAGS when the top Makefile is recursively invoked. I found unneeded rebuilds for ARCH=arm64 when CONFIG_TRIM_UNUSED_KSYMS is enabled. If include/generated/autoksyms.h is updated, the top Makefile is recursively invoked, then arch/arm64/Makefile adds one more '-maarch64linux'. Due to the command line change, modules are rebuilt needlessly.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
-
Olof Johansson authored
Not all toolchains have the baremetal elf targets, RedHat/Fedora ones in particular. So, probe for whether it's available and use the previous (linux) targets if it isn't.

Reported-by: Laura Abbott <labbott@redhat.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Paul Kocialkowski <contact@paulk.fr>
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Andrew Pinski authored
The kernel needs to be compiled as a LP64 binary for ARM64, even when using a compiler that defaults to code-generation for the ILP32 ABI. Consequently, we need to explicitly pass '-mabi=lp64' (supported on gcc-4.9 and newer).

Signed-off-by: Andrew Pinski <Andrew.Pinski@caviumnetworks.com>
Signed-off-by: Philipp Tomsich <philipp.tomsich@theobroma-systems.com>
Signed-off-by: Christoph Muellner <christoph.muellner@theobroma-systems.com>
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Reviewed-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Kshitij authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
Change-Id: Ibfea16e61fe54c341b11d908407c47b0c1c1b9d2
-
- May 22, 2021
-
-
Sultan Alsawaf authored
Binder code is very hot, so checking frequently to see if a debug message should be printed is a waste of cycles. We're not debugging binder, so just stub out the debug prints to compile them out entirely.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
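A sketch of the stubbing pattern (binder's actual debug macro and mask names may differ; this only illustrates compiling the prints out):

/* When debugging is off, make binder_debug() an empty statement so the
 * compiler drops the mask check, format strings and arguments entirely. */
#ifdef CONFIG_ANDROID_BINDER_DEBUG		/* hypothetical guard */
#define binder_debug(mask, x...) \
	do { \
		if (binder_debug_mask & (mask)) \
			pr_info_ratelimited(x); \
	} while (0)
#else
#define binder_debug(mask, x...)	do { } while (0)
#endif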
-
- May 16, 2021
-
-
Sultan Alsawaf authored
Most write buffers are rather small and can fit on the stack, eliminating the need to allocate them dynamically. Reserve a 4 KiB stack buffer for this purpose to avoid the overhead of dynamic memory allocation.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
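A hedged sketch of the pattern (the function below is illustrative; the real change applies this inside the affected write path):

#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

/* Keep small writes in a 4 KiB on-stack buffer and only fall back to the
 * heap when the write does not fit. */
static ssize_t example_write(const char __user *ubuf, size_t count)
{
	char stack_buf[SZ_4K];
	char *buf = stack_buf;
	ssize_t ret = count;

	if (count >= sizeof(stack_buf)) {
		buf = kmalloc(count + 1, GFP_KERNEL);
		if (!buf)
			return -ENOMEM;
	}

	if (copy_from_user(buf, ubuf, count)) {
		ret = -EFAULT;
		goto out;
	}
	buf[count] = '\0';
	/* ... real code would parse buf here ... */
out:
	if (buf != stack_buf)
		kfree(buf);
	return ret;
}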
-
Gagan Malvi authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
-
- May 03, 2021
-
-
Gagan Malvi authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
-
Gagan Malvi authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
-
Gagan Malvi authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
-
Vaisakh Murali authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
-
Sultan Alsawaf authored
Allocating pages with __get_free_page is slower than going through the slab allocator to grab free pages out from a pool. These are the results from running the code at the bottom of this message:

[    1.278602] speedtest: __get_free_page: 9 us
[    1.278606] speedtest: kmalloc: 4 us
[    1.278609] speedtest: kmem_cache_alloc: 4 us
[    1.278611] speedtest: vmalloc: 13 us

kmalloc and kmem_cache_alloc (which is what kmalloc uses for common sizes behind the scenes) are the fastest choices. Use kmalloc to speed up sg list allocation.

This is the code used to produce the above measurements:

#include <linux/kthread.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static int speedtest(void *data)
{
	static const struct sched_param sched_max_rt_prio = {
		.sched_priority = MAX_RT_PRIO - 1
	};
	volatile s64 ctotal = 0, gtotal = 0, ktotal = 0, vtotal = 0;
	struct kmem_cache *page_pool;
	int i, j, trials = 1000;
	volatile ktime_t start;
	void *ptr[100];

	sched_setscheduler_nocheck(current, SCHED_FIFO, &sched_max_rt_prio);

	page_pool = kmem_cache_create("pages", PAGE_SIZE, PAGE_SIZE,
				      SLAB_PANIC, NULL);
	for (i = 0; i < trials; i++) {
		start = ktime_get();
		for (j = 0; j < ARRAY_SIZE(ptr); j++)
			while (!(ptr[j] = kmem_cache_alloc(page_pool, GFP_KERNEL)));
		ctotal += ktime_us_delta(ktime_get(), start);
		for (j = 0; j < ARRAY_SIZE(ptr); j++)
			kmem_cache_free(page_pool, ptr[j]);

		start = ktime_get();
		for (j = 0; j < ARRAY_SIZE(ptr); j++)
			while (!(ptr[j] = (void *)__get_free_page(GFP_KERNEL)));
		gtotal += ktime_us_delta(ktime_get(), start);
		for (j = 0; j < ARRAY_SIZE(ptr); j++)
			free_page((unsigned long)ptr[j]);

		start = ktime_get();
		for (j = 0; j < ARRAY_SIZE(ptr); j++)
			while (!(ptr[j] = kmalloc(PAGE_SIZE, GFP_KERNEL)));
		ktotal += ktime_us_delta(ktime_get(), start);
		for (j = 0; j < ARRAY_SIZE(ptr); j++)
			kfree(ptr[j]);

		start = ktime_get();
		*ptr = vmalloc(ARRAY_SIZE(ptr) * PAGE_SIZE);
		vtotal += ktime_us_delta(ktime_get(), start);
		vfree(*ptr);
	}
	kmem_cache_destroy(page_pool);

	printk("%s: __get_free_page: %lld us\n", __func__, gtotal / trials);
	printk("%s: kmalloc: %lld us\n", __func__, ktotal / trials);
	printk("%s: kmem_cache_alloc: %lld us\n", __func__, ctotal / trials);
	printk("%s: vmalloc: %lld us\n", __func__, vtotal / trials);
	complete(data);
	return 0;
}

static int __init start_test(void)
{
	DECLARE_COMPLETION_ONSTACK(done);

	BUG_ON(IS_ERR(kthread_run(speedtest, &done, "malloc_test")));
	wait_for_completion(&done);
	return 0;
}
late_initcall(start_test);

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
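The change itself amounts to something like the following (sketched after lib/scatterlist.c's sg_kmalloc(), with details simplified; sg_kfree() would be adjusted to match):

#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Route the full-page case through the slab allocator as well, instead of
 * taking the slower __get_free_page() path. */
static struct scatterlist *sg_kmalloc(unsigned int nents, gfp_t gfp_mask)
{
	if (nents == SG_MAX_SINGLE_ALLOC)
		return kmalloc(PAGE_SIZE, gfp_mask);	/* was __get_free_page() */
	return kmalloc_array(nents, sizeof(struct scatterlist), gfp_mask);
}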
-
Park Ju Hyung authored
Poorly made kernel trees often use trace_printk() without properly guarding it in an #ifdef macro. Such usage of trace_printk() causes a warning at boot and additional memory allocation. This option serves to disable them all at once with ease.

Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
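One way such an option could be wired up, as a rough sketch (the Kconfig symbol here is hypothetical):

/* When the option is set, turn unguarded trace_printk() calls into no-ops:
 * no_printk() keeps printf-style format checking but generates no code, so
 * no trace buffers are allocated and no boot-time warning is emitted. */
#ifdef CONFIG_DISABLE_TRACE_PRINTK
#undef trace_printk
#define trace_printk(fmt, ...)	no_printk(fmt, ##__VA_ARGS__)
#endif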
-
Vaisakh Murali authored
* This incurs significant memory overhead and CPU time.

Signed-off-by: Vaisakh Murali <vaisakhmurali@gmail.com>
Signed-off-by: Gagan Malvi <malvi@aospa.co>
-
Gagan Malvi authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
-
Gagan Malvi authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
-
Vaisakh Murali authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
-
Gagan Malvi authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
-
Gagan Malvi authored
THOU SHALLN'T SPAM MY LOGS.

Signed-off-by: Gagan Malvi <malvi@aospa.co>
-
Gagan Malvi authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
-
Gagan Malvi authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
-
Gagan Malvi authored
Signed-off-by: Gagan Malvi <malvi@aospa.co>
-