This project is mirrored from https://github.com/Exynos7580/android_kernel_samsung_exynos7580-common.git.
Pull mirroring updated.
- Jan 25, 2019
Alexander Alexeev authored
This removes Samsung's improper changes
- Sep 21, 2018
- Sep 18, 2018
Sean Hoyt authored
Disable irled, enable sx9310
Sean Hoyt authored
deadman96385 authored
Combining them seems to be causing issues
Sean Hoyt authored
- Sep 11, 2018
Disable cpusets for j7elte to avoid lag
- Sep 07, 2018
messi2050 authored
- Aug 20, 2018
Sean Hoyt authored
deadman96385 authored
deadman96385 authored
- Aug 05, 2018
Yuri.Sh authored
When io_is_busy is set to 1, the little cluster frequency stays at maximum while I/O reads/writes are in progress, but that is not needed. The frequency will still rise, just not to maximum.
Danny Wood authored
Also applied the following required fixes:
* net: wireless: bcmdhd_1_77: reduce kernel logging
* net: wireless: bcmdhd_xxx: fix buffer overrun in wl_android_set_roampref
* net: wireless: bcmdhd_xxx: fix heap overflow in wl_run_escan
* fix runtime firmware loading issues caused by the new Android O driver vendor path changes (keeps our kernel config and device tree in line with other drivers)
Danny Wood authored
Change-Id: I82e29af9d88e20d6147789a62186bca9a4e994e0
Danny Wood authored
We deal with this properly in the new power HAL.
Change-Id: Ic6bfe80750ac09a0ee66333317cedd75ae492e6c
Kees Cook authored
Moving the x86_64 and arm64 PIE base from 0x555555554000 to 0x000100000000 broke AddressSanitizer. This is a partial revert of:

commit eab09532d400 ("binfmt_elf: use ELF_ET_DYN_BASE only for PIE")
commit 02445990a96e ("arm64: move ELF_ET_DYN_BASE to 4GB / 4MB")

The AddressSanitizer tool has hard-coded expectations about where executable mappings are loaded. The motivation for changing the PIE base in the above commits was to avoid the Stack-Clash CVEs that allowed executable mappings to get too close to heap and stack. This was mainly a problem on 32-bit, but the 64-bit bases were moved too, in an effort to proactively protect those systems (proofs of concept do exist that show 64-bit collisions, but other recent changes to fix stack accounting and setuid behaviors will minimize the impact).

The new 32-bit PIE base is fine for ASan (since it matches the ET_EXEC base), so only the 64-bit PIE base needs to be reverted to let x86 and arm64 ASan binaries run again. Future changes to the 64-bit PIE base on these architectures can be made optional once a more dynamic method for dealing with AddressSanitizer is found (e.g. always loading PIE into the mmap region for marked binaries).

Reported-by: Kostya Serebryany <kcc@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Kees Cook authored
The ELF_ET_DYN_BASE position was originally intended to keep loaders away from ET_EXEC binaries. (For example, running "/lib/ld-linux.so.2 /bin/cat" might cause the subsequent load of /bin/cat into where the loader had been loaded.) With the advent of PIE (ET_DYN binaries with an INTERP Program Header), ELF_ET_DYN_BASE continued to be used since the kernel was only looking at ET_DYN. However, since ELF_ET_DYN_BASE is traditionally set at the top 1/3rd of the TASK_SIZE, a substantial portion of the address space is unused.

For 32-bit tasks when RLIMIT_STACK is set to RLIM_INFINITY, programs are loaded below the mmap region. This means they can be made to collide (CVE-2017-1000370) or nearly collide (CVE-2017-1000371) with pathological stack regions. Lowering ELF_ET_DYN_BASE solves both by moving programs above the mmap region in all cases, and will now additionally avoid programs falling back to the mmap region by enforcing MAP_FIXED for program loads (i.e. if it would have collided with the stack, now it will fail to load instead of falling back to the mmap region).

To allow for a lower ELF_ET_DYN_BASE, loaders (ET_DYN without INTERP) are loaded into the mmap region, leaving space available for either an ET_EXEC binary with a fixed location or PIE being loaded into mmap by the loader. Only PIE programs are loaded offset from ELF_ET_DYN_BASE, which means architectures can now safely lower their values without risk of loaders colliding with their subsequently loaded programs.

For 64-bit, ELF_ET_DYN_BASE is best set to 4GB to allow runtimes to use the entire 32-bit address space for 32-bit pointers. For 32-bit, 4MB is used as the traditional minimum load location, likely to avoid historically requiring a 4MB page table entry when only a portion of the first 4MB would be used (since the NULL address is avoided).

Thanks to PaX Team, Daniel Micay, and Rik van Riel for inspiration and suggestions on how to implement this solution.

Fixes: d1fd836dcf00 ("mm: split ET_DYN ASLR from mmap ASLR")
Cc: stable@vger.kernel.org
Cc: x86@kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Rik van Riel <riel@redhat.com>
Kees Cook authored
Now that explicitly executed loaders are loaded in the mmap region, we have more freedom to decide where we position PIE binaries in the address space to avoid possible collisions with mmap or stack regions.

For 64-bit, align to 4GB to allow runtimes to use the entire 32-bit address space for 32-bit pointers. On 32-bit use 4MB, to match ARM. This could be 0x8000, the standard ET_EXEC load address, but that is needlessly close to the NULL address, and anyone running arm compat PIE will have an MMU, so the tight mapping is not needed.

Cc: stable@vger.kernel.org
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Kees Cook authored
This moves arch_mmap_rnd() into the ELF loader for handling ET_DYN ASLR in a separate region from mmap ASLR, as already done on s390. Removes CONFIG_BINFMT_ELF_RANDOMIZE_PIE.

Reported-by: Hector Marco-Gisbert <hecmargi@upv.es>
Signed-off-by: Kees Cook <keescook@chromium.org>
Kees Cook authored
When an architecture fully supports randomizing the ELF load location, a per-arch mmap_rnd() function is used to find a randomized mmap base. In preparation for randomizing the location of ET_DYN binaries separately from mmap, this renames and exports these functions as arch_mmap_rnd(). Additionally introduces CONFIG_ARCH_HAS_ELF_RANDOMIZE for describing this feature on architectures that support it (which is a superset of ARCH_BINFMT_ELF_RANDOMIZE_PIE, since s390 already supports a separated ET_DYN ASLR from mmap ASLR without the ARCH_BINFMT_ELF_RANDOMIZE_PIE logic).

Signed-off-by: Kees Cook <keescook@chromium.org>
Alexander Alexeev authored
Arun Chandran authored
When the user asks to turn off ASLR by writing "0" to /proc/sys/kernel/randomize_va_space, there should not be any randomization of the mmap base, stack, VDSO, libs, text, or heap. Currently arm64 violates this behavior by randomising text. Fix this by defining a constant ELF_ET_DYN_BASE. The randomisation of mm->mmap_base is done by setup_new_exec -> arch_pick_mmap_layout -> mmap_base -> mmap_rnd.

Signed-off-by: Arun Chandran <achandran@mvista.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Alexander Alexeev authored
This reverts commit 19c07279e18264897aa0788e07e793e4b8928991.
Alexander Alexeev authored
When chacha20_block() outputs the keystream block, it uses 'u32' stores directly. However, the callers (crypto/chacha20_generic.c and drivers/char/random.c) declare the keystream buffer as a 'u8' array, which is not guaranteed to have the needed alignment. Fix it by having both callers declare the keystream as a 'u32' array. For now this is preferable to switching over to the unaligned access macros because chacha20_block() is only being used in cases where we can easily control the alignment (stack buffers).

Based on commit: https://github.com/torvalds/linux/commit/9f480faec58cd6197a007ea1dcac6b7c3daf1139

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Theodore Ts'o authored
The CRNG is faster, and we don't pretend to track entropy usage in the CRNG any more.

This is part of commit https://github.com/torvalds/linux/commit/e192be9d9a30555aae2ca1dc3aad37cba484cd4a
First part: https://github.com/Exynos-7580/android_kernel_samsung_exynos7580-common/commit/1145bed5201e23ad2b414dd2befc5fedd1faa058
Alexander Alexeev authored
Alexander Alexeev authored
Nathan Chancellor authored
Fixes the following warnings:

security/keys/encrypted-keys/encrypted.c: In function 'encrypted_read':
security/keys/encrypted-keys/encrypted.c:922:6: warning: 'master_keylen' may be used uninitialized in this function [-Wmaybe-uninitialized]
  ret = get_derived_key(derived_key, ENC_KEY, master_key, master_keylen);
security/keys/encrypted-keys/encrypted.c:922:6: warning: 'master_key' may be used uninitialized in this function [-Wmaybe-uninitialized]
security/keys/encrypted-keys/encrypted.c: In function 'encrypted_instantiate':
security/keys/encrypted-keys/encrypted.c:688:6: warning: 'master_keylen' may be used uninitialized in this function [-Wmaybe-uninitialized]
  ret = datablob_hmac_verify(epayload, format, master_key, master_keylen);
security/keys/encrypted-keys/encrypted.c:660:9: note: 'master_keylen' was declared here
  size_t master_keylen;
security/keys/encrypted-keys/encrypted.c:688:6: warning: 'master_key' may be used uninitialized in this function [-Wmaybe-uninitialized]
  ret = datablob_hmac_verify(epayload, format, master_key, master_keylen);
security/keys/encrypted-keys/encrypted.c:656:6: note: 'master_key' was declared here
  u8 *master_key;

A null pointer is handled properly by the code in this case; the size_t should be initialized to 0.

Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Joe Richey authored
This commit exposes the necessary constants and structures for a userspace program to pass filesystem encryption keys into the keyring. The fscrypt_key structure was already part of the kernel ABI; this change just makes it so programs no longer have to redeclare these structures (like e4crypt in e2fsprogs currently does).

Note that we do not expose the other FS_*_KEY_SIZE constants as they are not necessary. Only XTS is supported for contents_encryption_mode, so currently FS_MAX_KEY_SIZE bytes of key material must always be passed to the kernel.

This commit also removes __packed from fscrypt_key, as it does not contain any implicit padding and does not refer to an on-disk structure.

Signed-off-by: Joe Richey <joerichey@google.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Behan Webster authored
Add a macro which replaces the use of a Variable Length Array In Struct (VLAIS) with a C99-compliant equivalent. This macro instead allocates the appropriate amount of memory using a char array. The new code can be compiled with both gcc and clang.

struct shash_desc contains a flexible array member ctx declared with CRYPTO_MINALIGN_ATTR, so sizeof(struct shash_desc) aligns the beginning of the array declared after struct shash_desc with long long. No trailing padding is required because it is not a struct type that can be used in an array. The CRYPTO_MINALIGN_ATTR is required so that desc is aligned with long long, as would be the case for a struct containing a member with CRYPTO_MINALIGN_ATTR.

If you want to get to the ctx at the end of the shash_desc as before, you can do so using shash_desc_ctx(shash).

Signed-off-by: Behan Webster <behanw@converseincode.com>
Reviewed-by: Mark Charlebois <charlebm@gmail.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Michał Mirosław <mirqus@gmail.com>
Alexander Alexeev authored
Ard Biesheuvel authored
This is a straight port to arm64/NEON of the x86 SSE3 implementation of the ChaCha20 stream cipher. It uses the new skcipher walksize attribute to process the input in strides of 4x the block size.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Backported to 3.10 the same way the ARM ChaCha20 NEON implementation for 3.18 was done: https://android-review.googlesource.com/c/kernel/common/+/551349

Signed-off-by: Mister Oyster <oysterized@gmail.com>
Martin Willi authored
As architecture-specific drivers need a software fallback, export a ChaCha20 en-/decryption function together with some helpers in a header file.

Signed-off-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry picked from commit 31d7247da57226e847f0f102a10c27c0722c429b, omitted chacha20poly1305.c changes)
Change-Id: I044f18bf5b01f10da47ce17d58c3ecd4da941dba
Signed-off-by: Eric Biggers <ebiggers@google.com>
Martin Willi authored
ChaCha20 is a high-speed 256-bit key size stream cipher algorithm designed by Daniel J. Bernstein. It is further specified in RFC7539 for use in IETF protocols as a building block for the ChaCha20-Poly1305 AEAD. This is a portable C implementation without any architecture-specific optimizations.

It uses a 16-byte IV, which includes the 12-byte ChaCha20 nonce prepended by the initial block counter. Some algorithms require an explicit counter value, for example the mentioned AEAD construction.

Signed-off-by: Martin Willi <martin@strongswan.org>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry picked from commit c08d0e647305c3f8f640010a56c9e4bafb9488d3)
Change-Id: I5892b1451e46f915c0ed8e711bdded9e6f4a4aae
Signed-off-by: Eric Biggers <ebiggers@google.com>
Ard Biesheuvel authored
To reduce the number of copies of boilerplate code throughout the tree, this patch implements generic glue for the SHA-256 algorithm. This allows a specific arch or hardware implementation to only implement the special handling that it needs.

The users need to supply an implementation of

  void (sha256_block_fn)(struct sha256_state *sst, u8 const *src, int blocks)

and pass it to the SHA-256 base functions. For easy casting between the prototype above and existing block functions that take a 'u32 state[]' as their first argument, the 'state' member of struct sha256_state is moved to the base of the struct.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel authored
To reduce the number of copies of boilerplate code throughout the tree, this patch implements generic glue for the SHA-512 algorithm. This allows a specific arch or hardware implementation to only implement the special handling that it needs.

The users need to supply an implementation of

  void (sha512_block_fn)(struct sha512_state *sst, u8 const *src, int blocks)

and pass it to the SHA-512 base functions. For easy casting between the prototype above and existing block functions that take a 'u64 state[]' as their first argument, the 'state' member of struct sha512_state is moved to the base of the struct.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel authored
Add the files that are generated by the recently merged OpenSSL SHA-256/512 implementation to .gitignore so Git disregards them when showing untracked files.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>