
Commit 4d333b47 authored by qctecmdr, committed by Gerrit - the friendly Code Review server

Merge "Merge android-4.19.62 (f232ce65) into msm-4.19"

parents b0a6c436 a71cb5b9
+2 −2
@@ -29,7 +29,7 @@ Contact: Bjørn Mork <bjorn@mork.no>
Description:
		Unsigned integer.

-		Write a number ranging from 1 to 127 to add a qmap mux
+		Write a number ranging from 1 to 254 to add a qmap mux
		based network device, supported by recent Qualcomm based
		modems.

@@ -46,5 +46,5 @@ Contact: Bjørn Mork <bjorn@mork.no>
Description:
		Unsigned integer.

-		Write a number ranging from 1 to 127 to delete a previously
+		Write a number ranging from 1 to 254 to delete a previously
		created qmap mux based network device.
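The hunks above widen the accepted mux ID range from 1-127 to 1-254. A minimal sketch of driving this interface from userspace, assuming a qmi_wwan interface named `wwan0` and the `add_mux` sysfs attribute under `/sys/class/net/<iface>/qmi/` (both the interface name and attribute path are illustrative assumptions, not part of this diff):

```shell
#!/bin/sh
# Validate a qmap mux ID against the post-patch 1-254 range before
# writing it to sysfs (hypothetical paths; requires root on a real system).

valid_mux_id() {
    # Reject anything that is not a plain decimal integer.
    case "$1" in
        ''|*[!0-9]*) return 1 ;;
    esac
    # Accept only the 1-254 range introduced by this change.
    [ "$1" -ge 1 ] && [ "$1" -le 254 ]
}

MUX_ID=10
if valid_mux_id "$MUX_ID"; then
    echo "mux id $MUX_ID is valid"
    # On a real system (not executed here):
    # echo "$MUX_ID" > /sys/class/net/wwan0/qmi/add_mux
else
    echo "mux id $MUX_ID is out of range" >&2
fi
```

Deleting a previously created mux device would mirror this with the corresponding delete attribute and the same 1-254 validation.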
+1 −0
@@ -9,5 +9,6 @@ are configurable at compile, boot or run time.
.. toctree::
   :maxdepth: 1

   spectre
   l1tf
+   mds
+697 −0

File added.


+0 −6
@@ -4989,12 +4989,6 @@
			emulate     [default] Vsyscalls turn into traps and are
			            emulated reasonably safely.

-			native      Vsyscalls are native syscall instructions.
-			            This is a little bit faster than trapping
-			            and makes a few dynamic recompilers work
-			            better than they would in emulation mode.
-			            It also makes exploits much easier to write.
-
			none        Vsyscalls don't work at all.  This makes
			            them quite hard to use for exploits but
			            might break your system.
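This hunk drops the `native` option, leaving `emulate` and `none`. These modes are selected with the `vsyscall=` kernel boot parameter. A sketch of pinning the remaining default via GRUB, assuming a Debian-style layout (the file path and existing option contents are illustrative):

```shell
# /etc/default/grub (illustrative existing contents):
GRUB_CMDLINE_LINUX_DEFAULT="quiet vsyscall=emulate"

# Then regenerate the boot configuration, e.g. on Debian/Ubuntu:
#   update-grub
# and verify after reboot:
#   grep -o 'vsyscall=[a-z]*' /proc/cmdline
```

With `native` removed, passing `vsyscall=native` on such a kernel would no longer select native vsyscall instructions.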
+3 −0
@@ -177,6 +177,9 @@ These helper barriers exist because architectures have varying implicit
ordering on their SMP atomic primitives. For example our TSO architectures
provide full ordered atomics and these barriers are no-ops.

+NOTE: when the atomic RmW ops are fully ordered, they should also imply a
+compiler barrier.
+
Thus:

  atomic_fetch_add();