.mailmap  (+2 −1)

@@ -17,7 +17,8 @@
 Aleksey Gorelov <aleksey_gorelov@phoenix.com>
 Al Viro <viro@ftp.linux.org.uk>
 Al Viro <viro@zenIV.linux.org.uk>
 Andreas Herrmann <aherrman@de.ibm.com>
-Andrew Morton <akpm@osdl.org>
+Andrey Ryabinin <ryabinin.a.a@gmail.com> <a.ryabinin@samsung.com>
+Andrew Morton <akpm@linux-foundation.org>
 Andrew Vasquez <andrew.vasquez@qlogic.com>
 Andy Adamson <andros@citi.umich.edu>
 Archit Taneja <archit@ti.com>

Documentation/features/debug/KASAN/arch-support.txt  (new file, +40 −0)

+#
+# Feature name:          KASAN
+#         Kconfig:       HAVE_ARCH_KASAN
+#         description:   arch supports the KASAN runtime memory checker
+#
+    -----------------------
+    |         arch |status|
+    -----------------------
+    |       alpha: | TODO |
+    |         arc: | TODO |
+    |         arm: | TODO |
+    |       arm64: |  ok  |
+    |       avr32: | TODO |
+    |    blackfin: | TODO |
+    |         c6x: | TODO |
+    |        cris: | TODO |
+    |         frv: | TODO |
+    |       h8300: | TODO |
+    |     hexagon: | TODO |
+    |        ia64: | TODO |
+    |        m32r: | TODO |
+    |        m68k: | TODO |
+    |       metag: | TODO |
+    |  microblaze: | TODO |
+    |        mips: | TODO |
+    |     mn10300: | TODO |
+    |       nios2: | TODO |
+    |    openrisc: | TODO |
+    |      parisc: | TODO |
+    |     powerpc: | TODO |
+    |        s390: | TODO |
+    |       score: | TODO |
+    |          sh: | TODO |
+    |       sparc: | TODO |
+    |        tile: | TODO |
+    |          um: | TODO |
+    |   unicore32: | TODO |
+    |         x86: |  ok  |
+    |      xtensa: | TODO |
+    -----------------------

Documentation/kasan.txt  (+23 −23)

-Kernel address sanitizer
-================
+KernelAddressSanitizer (KASAN)
+==============================

 0. Overview
 ===========

-Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+KernelAddressSANitizer (KASAN) is a dynamic memory error detector. It provides
 a fast and comprehensive solution for finding use-after-free and out-of-bounds
 bugs.

-KASan uses compile-time instrumentation for checking every memory access,
-therefore you will need a gcc version of 4.9.2 or later. KASan could detect out
-of bounds accesses to stack or global variables, but only if gcc 5.0 or later
-was used to build the kernel.
+KASAN uses compile-time instrumentation for checking every memory access,
+therefore you will need a GCC version 4.9.2 or later. GCC 5.0 or later is
+required for detection of out-of-bounds accesses to stack or global variables.

-Currently KASan is supported only for x86_64 architecture and requires that the
-kernel be built with the SLUB allocator.
+Currently KASAN is supported only for x86_64 architecture and requires the
+kernel to be built with the SLUB allocator.

 1. Usage
-=========
+========

 To enable KASAN configure kernel with:

 	  CONFIG_KASAN = y

-and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline/inline
-is compiler instrumentation types. The former produces smaller binary the
-latter is 1.1 - 2 times faster. Inline instrumentation requires a GCC
-version 5.0 or later.
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
+inline are compiler instrumentation types. The former produces smaller binary,
+the latter is 1.1 - 2 times faster. Inline instrumentation requires a GCC
+version 5.0 or later.

-Currently KASAN works only with the SLUB memory allocator.
-For better bug detection and nicer report, enable CONFIG_STACKTRACE and put
-at least 'slub_debug=U' in the boot cmdline.
+For better bug detection and nicer reporting, enable CONFIG_STACKTRACE.
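Collected into one place, the options discussed above would form a .config fragment along these lines (illustrative only — CONFIG_SLUB follows from the SLUB-allocator requirement, and the exact option set may differ between kernel versions):

```
CONFIG_KASAN=y
# pick exactly one instrumentation type:
CONFIG_KASAN_OUTLINE=y
# CONFIG_KASAN_INLINE=y requires GCC 5.0 or later
CONFIG_STACKTRACE=y
CONFIG_SLUB=y
```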
 To disable instrumentation for specific files or directories, add a line
 similar to the following to the respective kernel Makefile:

@@ -42,7 +40,7 @@ similar to the following to the respective kernel Makefile:
 	KASAN_SANITIZE := n

 1.1 Error reports
-==========
+=================

 A typical out of bounds access report looks like this:

@@ -119,14 +117,16 @@ Memory state around the buggy address:
  ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ==================================================================

-First sections describe slub object where bad access happened.
-See 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
+The header of the report describes what kind of bug happened and what kind of
+access caused it. It's followed by the description of the accessed slub object
+(see 'SLUB Debug output' section in Documentation/vm/slub.txt for details) and
+the description of the accessed memory page.

 In the last section the report shows memory state around the accessed address.
-Reading this part requires some more understanding of how KASAN works.
+Reading this part requires some understanding of how KASAN works.

-Each 8 bytes of memory are encoded in one shadow byte as accessible,
-partially accessible, freed or they can be part of a redzone.
+The state of each 8 aligned bytes of memory is encoded in one shadow byte.
+Those 8 bytes can be accessible, partially accessible, freed or be a redzone.
 We use the following encoding for each shadow byte: 0 means that all 8 bytes
 of the corresponding memory region are accessible; number N (1 <= N <= 7) means
 that the first N bytes are accessible, and other (8 - N) bytes are not;
@@ -139,7 +139,7 @@ the accessed address is partially accessible.

 2. Implementation details
-========================
+=========================

 From a high level, our approach to memory error detection is similar to that
 of kmemcheck: use shadow memory to record whether each byte of memory is safe

arch/arm64/Kconfig  (+8 −7)

@@ -39,6 +39,7 @@ config ARM64
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_JUMP_LABEL
+	select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_TRACEHOOK

@@ -172,6 +173,13 @@ config KERNEL_MODE_NEON
 config FIX_EARLYCON_MEM
 	def_bool y

+config PGTABLE_LEVELS
+	int
+	default 2 if ARM64_64K_PAGES && ARM64_VA_BITS_42
+	default 3 if ARM64_64K_PAGES && ARM64_VA_BITS_48
+	default 3 if ARM64_4K_PAGES && ARM64_VA_BITS_39
+	default 4 if ARM64_4K_PAGES && ARM64_VA_BITS_48
+
 source "init/Kconfig"
 source "kernel/Kconfig.freezer"

@@ -505,13 +513,6 @@ config ARM64_VA_BITS
 	default 42 if ARM64_VA_BITS_42
 	default 48 if ARM64_VA_BITS_48

-config ARM64_PGTABLE_LEVELS
-	int
-	default 2 if ARM64_64K_PAGES && ARM64_VA_BITS_42
-	default 3 if ARM64_64K_PAGES && ARM64_VA_BITS_48
-	default 3 if ARM64_4K_PAGES && ARM64_VA_BITS_39
-	default 4 if ARM64_4K_PAGES && ARM64_VA_BITS_48
-
 config CPU_BIG_ENDIAN
 	bool "Build big-endian kernel"
 	help

arch/arm64/Makefile  (+7 −0)

@@ -43,6 +43,13 @@ else
 TEXT_OFFSET := 0x00080000
 endif

+# KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - 3)) - (1 << 61)
+# in 32-bit arithmetic
+KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
+	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \
+	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - 3)) \
+	- (1 << (64 - 32 - 3)) )) )
+
 export	TEXT_OFFSET GZFLAGS

 core-y		+= arch/arm64/kernel/ arch/arm64/mm/