This project is mirrored from https://github.com/Exynos7580/android_kernel_samsung_exynos7580-common.git.
  1. Jan 25, 2019
  2. Sep 21, 2018
  3. Sep 18, 2018
  4. Sep 11, 2018
  5. Sep 07, 2018
  6. Aug 20, 2018
  7. Aug 05, 2018
    • CPUFREQ: prevent the ROM from setting io_is_busy and wasting power · d7232ada
      Yuri.Sh authored
      When io_is_busy is set to 1, the little cluster frequency is held at
      maximum while I/O reads/writes are in progress, which is not needed.
      The frequency will still rise, just not to the maximum.
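
      The effect can be illustrated with a simplified model of the governor's load
      calculation (a hedged userland sketch, not the interactive governor's actual
      code; cpu_load() and its numbers are purely illustrative): with io_is_busy
      enabled, iowait counts as busy time, so heavy I/O looks like near-100% load
      and drives the little cluster to its maximum frequency.

        #include <stdio.h>

        /* Simplified model: compute "load" over a sampling window, with and
         * without counting iowait as busy time. */
        static unsigned int cpu_load(unsigned long busy_ms, unsigned long iowait_ms,
                                     unsigned long window_ms, int io_is_busy)
        {
                unsigned long active = busy_ms + (io_is_busy ? iowait_ms : 0);

                return (unsigned int)(100 * active / window_ms);
        }

        int main(void)
        {
                /* 10 ms of real work and 80 ms of iowait in a 100 ms window */
                printf("io_is_busy=0 -> load %u%%\n", cpu_load(10, 80, 100, 0));
                printf("io_is_busy=1 -> load %u%%\n", cpu_load(10, 80, 100, 1));
                return 0;
        }
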
    • net: wireless: Updated bcmdhd_1_77 from A320F Oreo Source (A320FLXXU2CRE3) · ecbd8440
      Danny Wood authored
      Also applied the following required fixes:
      * net: wireless: bcmdhd_1_77 reduce kernel logging
      * net: wireless: bcmdhd_xxx: fix buffer overrun in wl_android_set_roampref
      * net: wireless: bcmdhd_xxx: Heap overflow in wl_run_escan
      * Fix runtime firmware loading issues caused by the new Android O driver vendor
      path changes (keeps our kernel config and device tree in line with other drivers)
    • Disabled verbose debugging in the battery fuel gauge driver · 33c659dd
      Danny Wood authored
      Change-Id: I82e29af9d88e20d6147789a62186bca9a4e994e0
    • cpufreq: interactive: Removed the POWER_SUSPEND code · 267f9728
      Danny Wood authored
      We deal with this properly in the new power HAL
      
      Change-Id: Ic6bfe80750ac09a0ee66333317cedd75ae492e6c
    • mm: Revert x86_64 and arm64 ELF_ET_DYN_BASE base · 27ba99af
      Kees Cook authored
      
      
      Moving the x86_64 and arm64 PIE base from 0x555555554000 to 0x000100000000
      broke AddressSanitizer. This is a partial revert of:
      
        commit eab09532d400 ("binfmt_elf: use ELF_ET_DYN_BASE only for PIE")
        commit 02445990a96e ("arm64: move ELF_ET_DYN_BASE to 4GB / 4MB")
      
      The AddressSanitizer tool has hard-coded expectations about where
      executable mappings are loaded. The motivation for changing the PIE
      base in the above commits was to avoid the Stack-Clash CVEs that
      allowed executable mappings to get too close to heap and stack. This
      was mainly a problem on 32-bit, but the 64-bit bases were moved too,
      in an effort to proactively protect those systems (proofs of concept
      do exist that show 64-bit collisions, but other recent changes to fix
      stack accounting and setuid behaviors will minimize the impact).
      
      The new 32-bit PIE base is fine for ASan (since it matches the ET_EXEC
      base), so only the 64-bit PIE base needs to be reverted to let x86 and
      arm64 ASan binaries run again. Future changes to the 64-bit PIE base on
      these architectures can be made optional once a more dynamic method for
      dealing with AddressSanitizer is found. (e.g. always loading PIE into
      the mmap region for marked binaries.)
      
      Reported-by: Kostya Serebryany <kcc@google.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
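
      A quick way to see where the PIE base puts things is a small userland program
      (an illustrative sketch; build it as a PIE, e.g. with -pie -fPIE, and optionally
      with -fsanitize=address to reproduce the ASan case above):

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
                int stack_var = 0;
                void *heap_ptr = malloc(16);

                /* Print where the PIE text, heap and stack land; the text
                 * address reflects ELF_ET_DYN_BASE (plus the ASLR offset). */
                printf("text  : %p\n", (void *)&main);
                printf("heap  : %p\n", heap_ptr);
                printf("stack : %p\n", (void *)&stack_var);

                free(heap_ptr);
                return 0;
        }
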
    • binfmt_elf: Use ELF_ET_DYN_BASE only for PIE · 9be19eca
      Kees Cook authored
      
      
      The ELF_ET_DYN_BASE position was originally intended to keep loaders
      away from ET_EXEC binaries. (For example, running "/lib/ld-linux.so.2
      /bin/cat" might cause the subsequent load of /bin/cat into where the
      loader had been loaded.) With the advent of PIE (ET_DYN binaries with
      an INTERP Program Header), ELF_ET_DYN_BASE continued to be used since
      the kernel was only looking at ET_DYN. However, since ELF_ET_DYN_BASE
      is traditionally set at the top 1/3rd of the TASK_SIZE, a substantial
      portion of the address space is unused.
      
      For 32-bit tasks when RLIMIT_STACK is set to RLIM_INFINITY, programs
      are loaded below the mmap region. This means they can be made to collide
      (CVE-2017-1000370) or nearly collide (CVE-2017-1000371) with pathological
      stack regions. Lowering ELF_ET_DYN_BASE solves both by moving programs
      above the mmap region in all cases, and will now additionally avoid
      programs falling back to the mmap region by enforcing MAP_FIXED for
      program loads (i.e. if it would have collided with the stack, now it
      will fail to load instead of falling back to the mmap region).
      
      To allow for a lower ELF_ET_DYN_BASE, loaders (ET_DYN without INTERP)
      are loaded into the mmap region, leaving space available for either an
      ET_EXEC binary with a fixed location or PIE being loaded into mmap by the
      loader. Only PIE programs are loaded offset from ELF_ET_DYN_BASE, which
      means architectures can now safely lower their values without risk of
      loaders colliding with their subsequently loaded programs.
      
      For 64-bit, ELF_ET_DYN_BASE is best set to 4GB to allow runtimes to
      use the entire 32-bit address space for 32-bit pointers. For 32-bit,
      4MB is used as the traditional minimum load location, likely to avoid
      historically requiring a 4MB page table entry when only a portion of the
      first 4MB would be used (since the NULL address is avoided).
      
      Thanks to PaX Team, Daniel Micay, and Rik van Riel for inspiration and
      suggestions on how to implement this solution.
      
      Fixes: d1fd836dcf00 ("mm: split ET_DYN ASLR from mmap ASLR")
      Cc: stable@vger.kernel.org
      Cc: x86@kernel.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Acked-by: Rik van Riel <riel@redhat.com>
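
      Whether a binary is ET_EXEC or ET_DYN (and hence whether ELF_ET_DYN_BASE applies
      to it at all after this change) can be checked from userspace; a minimal sketch
      for a 64-bit binary, reading its own ELF header via /proc/self/exe:

        #include <elf.h>
        #include <stdio.h>

        int main(void)
        {
                FILE *f = fopen("/proc/self/exe", "rb");
                Elf64_Ehdr eh;

                if (!f || fread(&eh, sizeof(eh), 1, f) != 1) {
                        perror("read ELF header");
                        return 1;
                }
                fclose(f);

                printf("e_type = %u (%s)\n", eh.e_type,
                       eh.e_type == ET_DYN  ? "ET_DYN: PIE or shared object" :
                       eh.e_type == ET_EXEC ? "ET_EXEC: fixed load address"  :
                                              "other");
                return 0;
        }
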
    • arm64: Move ELF_ET_DYN_BASE to 4GB / 4MB · 54090b6d
      Kees Cook authored
      
      
      Now that explicitly executed loaders are loaded in the mmap region, we
      have more freedom to decide where we position PIE binaries in the address
      space to avoid possible collisions with mmap or stack regions.
      
      For 64-bit, align to 4GB to allow runtimes to use the entire 32-bit
      address space for 32-bit pointers. On 32-bit use 4MB, to match ARM. This
      could be 0x8000, the standard ET_EXEC load address, but that is needlessly
      close to the NULL address, and anyone running arm compat PIE will have an
      MMU, so the tight mapping is not needed.
      
      Cc: stable@vger.kernel.org
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
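
      A hedged sketch of the resulting arm64 definitions (the values follow the commit
      text: 4GB for 64-bit PIE, 4MB for arm compat; the real macros live in
      arch/arm64/include/asm/elf.h and may differ in detail in this tree):

        #define ELF_ET_DYN_BASE        0x100000000UL   /* 4GB for 64-bit PIE */
        #define COMPAT_ELF_ET_DYN_BASE 0x000400000UL   /* 4MB for arm compat */
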
    • mm: split ET_DYN ASLR from mmap ASLR · c93365f3
      Kees Cook authored
      
      
      This moves arch_mmap_rnd() into the ELF loader for handling ET_DYN ASLR
      in a separate region from mmap ASLR, as already done on s390. Removes
      CONFIG_BINFMT_ELF_RANDOMIZE_PIE.
      
      Reported-by: Hector Marco-Gisbert <hecmargi@upv.es>
      Signed-off-by: Kees Cook <keescook@chromium.org>
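
      In outline, the ET_DYN path of the ELF loader now derives its own randomness
      instead of reusing the mmap base (a rough sketch modelled on the upstream
      fs/binfmt_elf.c change; names and the surrounding logic may differ in this tree):

        if (elf_ex->e_type == ET_DYN) {
                /* Base PIE loads at ELF_ET_DYN_BASE and add randomness that is
                 * independent of the mmap region's own randomization. */
                load_bias = ELF_ET_DYN_BASE - vaddr;
                if (current->flags & PF_RANDOMIZE)
                        load_bias += arch_mmap_rnd();
                load_bias = ELF_PAGESTART(load_bias);
        }
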
    • mm: expose arch_mmap_rnd when available · c2d810c0
      Kees Cook authored
      
      
      When an architecture fully supports randomizing the ELF load location,
      a per-arch mmap_rnd() function is used to find a randomized mmap base.
      In preparation for randomizing the location of ET_DYN binaries
      separately from mmap, this renames and exports these functions as
      arch_mmap_rnd(). Additionally introduces CONFIG_ARCH_HAS_ELF_RANDOMIZE
      for describing this feature on architectures that support it
      (which is a superset of ARCH_BINFMT_ELF_RANDOMIZE_PIE, since s390
      already supports a separated ET_DYN ASLR from mmap ASLR without the
      ARCH_BINFMT_ELF_RANDOMIZE_PIE logic).
      
      Signed-off-by: Kees Cook <keescook@chromium.org>
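
      For reference, a per-arch implementation is only a few lines (a hedged sketch
      modelled on the arm64 version of that era; the mask and shift are architecture
      specific and may differ in this tree):

        unsigned long arch_mmap_rnd(void)
        {
                unsigned long rnd;

                rnd = (unsigned long)get_random_int() & STACK_RND_MASK;

                return rnd << PAGE_SHIFT;
        }
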
    • Alexander Alexeev
      78f261b3
    • arm64: ASLR: Don't randomise text when randomise_va_space == 0 · c35d71b3
      Arun Chandran authored
      
      
      When the user asks to turn off ASLR by writing "0" to
      /proc/sys/kernel/randomize_va_space, there should not be any
      randomization of the mmap base, stack, VDSO, libs, text or heap.
      
      Currently arm64 violates this behaviour by randomising text.
      Fix this by defining a constant ELF_ET_DYN_BASE. The randomisation of
      mm->mmap_base is done by setup_new_exec -> arch_pick_mmap_layout ->
      mmap_base -> mmap_rnd.
      
      Signed-off-by: Arun Chandran <achandran@mvista.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
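
      The knob itself is easy to inspect from userspace (an illustrative sketch; the
      mode descriptions follow the kernel's sysctl documentation):

        #include <stdio.h>

        int main(void)
        {
                FILE *f = fopen("/proc/sys/kernel/randomize_va_space", "r");
                int mode;

                if (!f || fscanf(f, "%d", &mode) != 1) {
                        perror("randomize_va_space");
                        return 1;
                }
                fclose(f);

                static const char *desc[] = {
                        "ASLR disabled",
                        "randomize mmap base, stack and VDSO",
                        "also randomize the heap (brk)",
                };
                printf("randomize_va_space = %d (%s)\n", mode,
                       mode >= 0 && mode <= 2 ? desc[mode] : "unknown");
                return 0;
        }
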
    • Revert "arm64: move ET_DYN base lower in the address space" · 46903088
      Alexander Alexeev authored
      This reverts commit 19c07279e18264897aa0788e07e793e4b8928991.
    • BACKPORT: crypto: chacha20 - Fix keystream alignment for chacha20_block() · d863ca1a
      Alexander Alexeev authored
      When chacha20_block() outputs the keystream block, it uses 'u32' stores
      directly.  However, the callers (crypto/chacha20_generic.c and
      drivers/char/random.c) declare the keystream buffer as a 'u8' array,
      which is not guaranteed to have the needed alignment.
      
      Fix it by having both callers declare the keystream as a 'u32' array.
      For now this is preferable to switching over to the unaligned access
      macros because chacha20_block() is only being used in cases where we can
      easily control the alignment (stack buffers).
      
      Based on commit: https://github.com/torvalds/linux/commit/9f480faec58cd6197a007ea1dcac6b7c3daf1139
      
      
      
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
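
      The alignment difference is easy to demonstrate in userland (an illustrative
      sketch; uint8_t/uint32_t stand in for the kernel's u8/u32 types):

        #include <stdalign.h>
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                /* The fix: declare the keystream as 32-bit words, which
                 * guarantees 4-byte alignment for the block function's u32
                 * stores; a byte view is still available via a cast. */
                uint32_t stream[16];
                uint8_t *bytes = (uint8_t *)stream;

                bytes[0] = 0;
                printf("alignof(uint8_t[64])  = %zu\n", alignof(uint8_t[64]));
                printf("alignof(uint32_t[16]) = %zu\n", alignof(uint32_t[16]));
                return 0;
        }
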
    • Theodore Ts'o
    • Alexander Alexeev
      f69f643c
    • security: keys: fix compiling error · 7d877fcd
      Alexander Alexeev authored
    • security: keys: fix maybe-uninitialized warnings · b4375617
      Nathan Chancellor authored
      
      
      security/keys/encrypted-keys/encrypted.c: In function 'encrypted_read':
      security/keys/encrypted-keys/encrypted.c:922:6: warning: 'master_keylen' may be used uninitialized in this function [-Wmaybe-uninitialized]
        ret = get_derived_key(derived_key, ENC_KEY, master_key, master_keylen);
        ~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      security/keys/encrypted-keys/encrypted.c:922:6: warning: 'master_key' may be used uninitialized in this function [-Wmaybe-uninitialized]
      security/keys/encrypted-keys/encrypted.c: In function 'encrypted_instantiate':
      security/keys/encrypted-keys/encrypted.c:688:6: warning: 'master_keylen' may be used uninitialized in this function [-Wmaybe-uninitialized]
        ret = datablob_hmac_verify(epayload, format, master_key, master_keylen);
        ~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      security/keys/encrypted-keys/encrypted.c:660:9: note: 'master_keylen' was declared here
        size_t master_keylen;
               ^~~~~~~~~~~~~
      security/keys/encrypted-keys/encrypted.c:688:6: warning: 'master_key' may be used uninitialized in this function [-Wmaybe-uninitialized]
        ret = datablob_hmac_verify(epayload, format, master_key, master_keylen);
        ~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      security/keys/encrypted-keys/encrypted.c:656:6: note: 'master_key' was declared here
        u8 *master_key;
            ^~~~~~~~~~
      
      A NULL pointer is handled properly by the code in this case; the size_t just needs to be initialized to 0.
      
      Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
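
      The fix follows the usual pattern for this warning: give the pointer and length
      safe initial values so every path through the function is well defined (a
      userland illustration with hypothetical names, not the encrypted-keys code itself):

        #include <stddef.h>
        #include <stdio.h>

        /* Only assigns its outputs on the success path, like the key-lookup
         * helpers the warning points at. */
        static int get_key(int have_key, unsigned char **key, size_t *keylen)
        {
                static unsigned char k[16];

                if (have_key) {
                        *key = k;
                        *keylen = sizeof(k);
                }
                return have_key;
        }

        int main(void)
        {
                unsigned char *master_key = NULL;   /* the fix: explicit init */
                size_t master_keylen = 0;           /* ditto                  */

                get_key(0, &master_key, &master_keylen);
                printf("key=%p len=%zu\n", (void *)master_key, master_keylen);
                return 0;
        }
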
    • fscrypt: Move key structure and constants to uapi · ba609554
      Joe Richey authored
      
      
      This commit exposes the necessary constants and structures for a
      userspace program to pass filesystem encryption keys into the keyring.
      The fscrypt_key structure was already part of the kernel ABI; this
      change just means programs no longer have to redeclare these
      structures (as e4crypt in e2fsprogs currently does).
      
      Note that we do not expose the other FS_*_KEY_SIZE constants as they are
      not necessary. Only XTS is supported for contents_encryption_mode, so
      currently FS_MAX_KEY_SIZE bytes of key material must always be passed to
      the kernel.
      
      This commit also removes __packed from fscrypt_key as it does not
      contain any implicit padding and does not refer to an on-disk structure.
      
      Signed-off-by: Joe Richey <joerichey@google.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
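
      A hedged userland sketch of what this enables, modelled on what e4crypt does
      (the structure is mirrored locally for self-containment, the "fscrypt:" descriptor
      prefix may be "ext4:" on older kernels, and the all-zero key is a placeholder):

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        #define FS_MAX_KEY_SIZE 64
        #define KEY_SPEC_SESSION_KEYRING (-3)       /* from <keyutils.h> */

        /* Mirrors the uapi fscrypt_key layout described above. */
        struct fscrypt_key {
                uint32_t mode;
                uint8_t  raw[FS_MAX_KEY_SIZE];
                uint32_t size;
        };

        int main(void)
        {
                struct fscrypt_key key = { .mode = 0, .size = FS_MAX_KEY_SIZE };

                memset(key.raw, 0, sizeof(key.raw));   /* real key goes here */

                long id = syscall(SYS_add_key, "logon", "fscrypt:0123456789abcdef",
                                  &key, sizeof(key), KEY_SPEC_SESSION_KEYRING);
                if (id < 0) {
                        perror("add_key");
                        return 1;
                }
                printf("added key %ld to the session keyring\n", id);
                return 0;
        }
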
    • crypto: LLVMLinux: Add macro to remove use of VLAIS in crypto code · d954f27e
      Behan Webster authored
      
      
      Add a macro which replaces the use of a Variable Length Array In Struct (VLAIS)
      with a C99 compliant equivalent. This macro instead allocates the appropriate
      amount of memory using a char array.
      
      The new code can be compiled with both gcc and clang.
      
      struct shash_desc contains a flexible array member ctx declared with
      CRYPTO_MINALIGN_ATTR, so sizeof(struct shash_desc) aligns the beginning
      of the array declared after struct shash_desc with long long.
      
      No trailing padding is required because it is not a struct type that can
      be used in an array.
      
      The CRYPTO_MINALIGN_ATTR is required so that desc is aligned with long long
      as would be the case for a struct containing a member with
      CRYPTO_MINALIGN_ATTR.
      
      If you want to get to the ctx at the end of the shash_desc as before, you can
      do so using shash_desc_ctx(shash).
      
      Signed-off-by: Behan Webster <behanw@converseincode.com>
      Reviewed-by: Mark Charlebois <charlebm@gmail.com>
      Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Michał Mirosław <mirqus@gmail.com>
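
      The pattern itself is plain C99 and can be shown outside the kernel (a userland
      illustration with hypothetical names; the real macro operates on struct shash_desc
      and uses CRYPTO_MINALIGN_ATTR for the alignment):

        #include <stdio.h>

        struct desc {
                void     *ops;
                unsigned  flags;
                /* variable-size context follows the struct */
        };

        /* Replace the VLAIS with a char array big enough for the struct plus
         * its runtime-sized context, then refer to it through a typed pointer. */
        #define DESC_ON_STACK(name, ctxsize)                                  \
                char __##name##_buf[sizeof(struct desc) + (ctxsize)]          \
                        __attribute__((aligned(sizeof(long long))));          \
                struct desc *name = (struct desc *)__##name##_buf

        int main(void)
        {
                DESC_ON_STACK(d, 64);           /* room for a 64-byte context */

                d->ops = NULL;
                d->flags = 0;
                printf("desc at %p, ctx at %p\n", (void *)d, (void *)(d + 1));
                return 0;
        }
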
    • Alexander Alexeev
    • backport: crypto: arm64/chacha20 - implement NEON version based on SSE3 code · e57c5e26
      Ard Biesheuvel authored
      
      
      This is a straight port to arm64/NEON of the x86 SSE3 implementation
      of the ChaCha20 stream cipher. It uses the new skcipher walksize
      attribute to process the input in strides of 4x the block size.
      
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      
      Backported to 3.10 the same way the arm chacha20 NEON implementation for 3.18
      was done: https://android-review.googlesource.com/c/kernel/common/+/551349
      
      
      
      Signed-off-by: Mister Oyster <oysterized@gmail.com>
    • BACKPORT: crypto: chacha20 - Export common ChaCha20 helpers · 25a6aa87
      Martin Willi authored
      
      
      As architecture specific drivers need a software fallback, export a
      ChaCha20 en-/decryption function together with some helpers in a header
      file.
      
      Signed-off-by: Martin Willi <martin@strongswan.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      (cherry picked from commit 31d7247da57226e847f0f102a10c27c0722c429b,
       omitted chacha20poly1305.c changes)
      Change-Id: I044f18bf5b01f10da47ce17d58c3ecd4da941dba
      Signed-off-by: Eric Biggers <ebiggers@google.com>
    • UPSTREAM: crypto: chacha20 - Add a generic ChaCha20 stream cipher implementation · 98037412
      Martin Willi authored
      
      
      ChaCha20 is a high speed 256-bit key size stream cipher algorithm designed by
      Daniel J. Bernstein. It is further specified in RFC7539 for use in IETF
      protocols as a building block for the ChaCha20-Poly1305 AEAD.
      
      This is a portable C implementation without any architecture specific
      optimizations. It uses a 16-byte IV, which includes the 12-byte ChaCha20 nonce
      prepended by the initial block counter. Some algorithms require an explicit
      counter value, for example the mentioned AEAD construction.
      
      Signed-off-by: Martin Willi <martin@strongswan.org>
      Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      (cherry picked from commit c08d0e647305c3f8f640010a56c9e4bafb9488d3)
      Change-Id: I5892b1451e46f915c0ed8e711bdded9e6f4a4aae
      Signed-off-by: Eric Biggers <ebiggers@google.com>
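
      An illustrative sketch of the 16-byte IV layout described above, assuming a
      little-endian host (32-bit initial block counter in bytes 0-3, 96-bit nonce in
      bytes 4-15; the nonce bytes are just example values):

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
                uint8_t  iv[16];
                uint32_t counter = 1;                    /* initial block counter */
                uint8_t  nonce[12] = { 0x00, 0x00, 0x00, 0x09,
                                       0x00, 0x00, 0x00, 0x4a,
                                       0x00, 0x00, 0x00, 0x00 };

                memcpy(iv, &counter, sizeof(counter));   /* bytes 0..3  */
                memcpy(iv + 4, nonce, sizeof(nonce));    /* bytes 4..15 */

                for (size_t i = 0; i < sizeof(iv); i++)
                        printf("%02x%c", iv[i], i == 15 ? '\n' : ' ');
                return 0;
        }
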
    • crypto: sha256 - implement base layer for SHA-256 · db7c5d9b
      Ard Biesheuvel authored
      
      
      To reduce the number of copies of boilerplate code throughout
      the tree, this patch implements generic glue for the SHA-256
      algorithm. This allows a specific arch or hardware implementation
      to only implement the special handling that it needs.
      
      The users need to supply an implementation of
      
        void (sha256_block_fn)(struct sha256_state *sst, u8 const *src, int blocks)
      
      and pass it to the SHA-256 base functions. For easy casting between the
      prototype above and existing block functions that take a 'u32 state[]'
      as their first argument, the 'state' member of struct sha256_state is
      moved to the base of the struct.
      
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
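
      A hedged sketch of how an arch driver plugs its block function into the base
      layer (helper names follow the upstream sha256_base.h added by this series;
      the my_sha256_* names are hypothetical and details may differ in this tree):

        /* Arch-provided compression function, e.g. NEON or CE accelerated. */
        static void my_sha256_block(struct sha256_state *sst, u8 const *src,
                                    int blocks);

        static int my_sha256_update(struct shash_desc *desc, const u8 *data,
                                    unsigned int len)
        {
                return sha256_base_do_update(desc, data, len, my_sha256_block);
        }

        static int my_sha256_final(struct shash_desc *desc, u8 *out)
        {
                sha256_base_do_finalize(desc, my_sha256_block);
                return sha256_base_finish(desc, out);
        }
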
    • crypto: sha512 - implement base layer for SHA-512 · fe77f5f7
      Ard Biesheuvel authored
      
      
      To reduce the number of copies of boilerplate code throughout
      the tree, this patch implements generic glue for the SHA-512
      algorithm. This allows a specific arch or hardware implementation
      to only implement the special handling that it needs.
      
      The users need to supply an implementation of
      
        void (sha512_block_fn)(struct sha512_state *sst, u8 const *src, int blocks)
      
      and pass it to the SHA-512 base functions. For easy casting between the
      prototype above and existing block functions that take a 'u64 state[]'
      as their first argument, the 'state' member of struct sha512_state is
      moved to the base of the struct.
      
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: arm64/sha2 - add generated .S files to .gitignore · a8945b22
      Ard Biesheuvel authored
      
      
      Add the files that are generated by the recently merged OpenSSL
      SHA-256/512 implementation to .gitignore so Git disregards them
      when showing untracked files.
      
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>