- 07 Oct, 2019 35 commits
-
-
Paul Burton authored
Use smp_mb__before_atomic() rather than smp_mb__before_llsc() in test_and_set_bit(), test_and_clear_bit() & test_and_change_bit(). The _atomic() versions make semantic sense in these cases, and will allow a later patch to omit redundant barriers for Loongson3 systems that already include a barrier within __test_bit_op(). Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
-
Paul Burton authored
Generate the sync instructions required to work around Loongson3 LL/SC errata within inline asm blocks, which feels a little safer than doing it from C, where strictly speaking the compiler would be well within its rights to insert a memory access between the separate asm statements we previously had, containing sync & ll instructions respectively. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
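To illustrate the shape of the change, a rough sketch with simplified names (not the kernel's actual macros): the workaround sync and the ll it guards are emitted by a single asm statement, leaving the compiler no point between them at which it could schedule another access.

	/* Sketch only; constraints & CPU selection simplified. */
	static inline void example_atomic_or(volatile unsigned long *p,
					     unsigned long mask)
	{
		unsigned long temp;

		__asm__ __volatile__(
		"1:	sync			\n"	/* Loongson3 LL/SC erratum workaround */
		"	ll	%0, %1		\n"	/* load-linked */
		"	or	%0, %2		\n"
		"	sc	%0, %1		\n"	/* store-conditional */
		"	beqz	%0, 1b		\n"
		: "=&r"(temp), "+m"(*p)
		: "r"(mask));
	}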
-
Paul Burton authored
Rather than using custom SZLONG_LOG & SZLONG_MASK macros to shift & mask a bit index to form word & bit offsets respectively, make use of the standard BIT_WORD() & BITS_PER_LONG macros for the same purpose. volatile is added to the definition of pointers to the long-sized word we'll operate on, in order to prevent the compiler complaining that we cast away the volatile qualifier of the addr argument. This should have no effect on generated code, which in the LL/SC case is inline asm anyway & in the non-LLSC case access is constrained by compiler barriers provided by raw_local_irq_{save,restore}(). Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
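For illustration, the index-to-word/mask split using the generic helpers looks roughly like this (simplified sketch, not the patch's exact code; BIT_WORD() & BITS_PER_LONG come from <linux/bits.h>):

	#include <linux/bits.h>

	/* Word containing bit 'nr'; volatile is retained so the
	 * compiler doesn't complain about casting the qualifier away. */
	static inline volatile unsigned long *
	example_bit_word(volatile unsigned long *addr, unsigned long nr)
	{
		return addr + BIT_WORD(nr);	/* nr / BITS_PER_LONG */
	}

	/* Mask selecting the bit within that word. */
	static inline unsigned long example_bit_mask(unsigned long nr)
	{
		return 1UL << (nr % BITS_PER_LONG);
	}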
-
Paul Burton authored
Introduce __bit_op() & __test_bit_op() macros which abstract away the implementation of LL/SC loops. This cuts down on a lot of duplicate boilerplate code, and also allows R10000_LLSC_WAR to be handled outside of the individual bitop functions. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
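The general shape of such an abstraction might look like the sketch below; the real __bit_op() differs in its operand constraints, barrier & R10000_LLSC_WAR handling:

	/* Sketch: one macro owns the LL/SC retry loop, callers supply
	 * only the instruction applied to the loaded word. */
	#define __EXAMPLE_BIT_OP(mem, insns, inputs...)			\
	do {								\
		unsigned long __tmp;					\
		__asm__ __volatile__(					\
		"1:	ll	%0, %1		\n"			\
		insns							\
		"	sc	%0, %1		\n"			\
		"	beqz	%0, 1b		\n"			\
		: "=&r"(__tmp), "+m"(mem)				\
		: inputs);						\
	} while (0)

	/* set_bit()'s core then reduces to a single invocation: */
	static inline void example_set_bit(unsigned long nr,
					   volatile unsigned long *addr)
	{
		volatile unsigned long *m = addr + (nr / BITS_PER_LONG);

		__EXAMPLE_BIT_OP(*m, "	or	%0, %2		\n",
				 "r"(1UL << (nr % BITS_PER_LONG)));
	}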
-
Paul Burton authored
The IRQ-disabling non-LLSC fallbacks for bitops on UP systems already return a zero or one, so there's no need to perform another comparison against zero. Move these comparisons into the LLSC paths to avoid the redundant work. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
-
Paul Burton authored
Use the BIT() macro in asm/bitops.h rather than open-coding its equivalent. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
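The substitution in miniature (BIT() is supplied by <linux/bits.h>):

	#include <linux/bits.h>

	unsigned long example_open_coded(unsigned long bit)  { return 1UL << bit; }
	unsigned long example_with_helper(unsigned long bit) { return BIT(bit); }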
-
Paul Burton authored
The logical operations or & xor used in the test_and_set_bit_lock(), test_and_clear_bit() & test_and_change_bit() functions currently force the value 1<<bit to be placed in a register. If the bit is compile-time constant & fits within the immediate field of an or/xor instruction (ie. 16 bits) then we can make use of the ori/xori instruction variants & avoid the use of an extra register. Add the extra "i" constraints in order to allow use of these immediate encodings. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
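In miniature, the constraint change looks like this (illustrative function, not the kernel's code): with "ir" the compiler may substitute a constant operand, which the assembler then encodes using the ori/xori immediate forms.

	static inline unsigned long example_or(unsigned long word,
					       unsigned long mask)
	{
		__asm__ __volatile__(
		"	or	%0, %1		\n"
		: "+r"(word)
		: "ir"(mask));	/* was "r": the mask always burned a register */
		return word;
	}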
-
Paul Burton authored
The only difference between test_and_set_bit() & test_and_set_bit_lock() is memory ordering barrier semantics - the former provides a full barrier whilst the latter only provides acquire semantics. We can therefore implement test_and_set_bit() in terms of test_and_set_bit_lock() with the addition of the extra memory barrier. Do this in order to avoid duplicating logic. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
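A sketch of the layering (simplified, shown with the smp_mb__before_atomic() helper the series standardises on elsewhere):

	/* Full-barrier variant = acquire variant + a leading barrier,
	 * rather than a second copy of the LL/SC loop. */
	static inline int example_test_and_set_bit(unsigned long nr,
						   volatile unsigned long *addr)
	{
		smp_mb__before_atomic();
		return test_and_set_bit_lock(nr, addr);
	}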
-
Paul Burton authored
The start position for an ins instruction is always encoded as an immediate, so allowing registers to be used by the inline asm makes no sense. It should never happen anyway since a bit index should always be small enough to be treated as an immediate, but remove the nonsensical "r" for sanity. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
-
Paul Burton authored
Rather than #ifdef on CONFIG_CPU_* to determine whether the ins instruction is supported we can simply check MIPS_ISA_REV to discover whether we're targeting MIPSr2 or higher. Do so in order to clean up the code. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
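A minimal illustration of the check (function name made up; MIPS_ISA_REV is provided by asm/isa-rev.h):

	#include <linux/types.h>
	#include <asm/isa-rev.h>

	/* MIPS_ISA_REV evaluates to the targeted ISA revision, e.g. 2
	 * for MIPSr2 & 6 for MIPSr6, or 0 for pre-r1 ISAs. */
	static inline bool example_cpu_has_ins(void)
	{
		return MIPS_ISA_REV >= 2;	/* ins arrived with MIPSr2 */
	}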
-
Paul Burton authored
set_bit() can set bits 0-15 using an ori instruction, rather than loading the value -1 into a register & then using an ins instruction. That is, rather than the following:

	li	t0, -1
	ll	t1, 0(t2)
	ins	t1, t0, 4, 1
	sc	t1, 0(t2)

We can have the simpler:

	ll	t1, 0(t2)
	ori	t1, t1, 0x10
	sc	t1, 0(t2)

The or path already allows immediates to be used, so simply restricting the ins path to bits that don't fit in immediates is sufficient to take advantage of this. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
-
Paul Burton authored
Reorder conditions in our various bitops functions that check kernel_uses_llsc such that they handle the !kernel_uses_llsc case first. This allows us to avoid the need to duplicate the kernel_uses_llsc check in all the other cases. For functions that don't involve barriers common to the various implementations, we switch to returning from within each if block making each case easier to read in isolation. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
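The resulting control flow, in outline (hypothetical simplified function):

	static inline void example_clear_bit(unsigned long nr,
					     volatile unsigned long *addr)
	{
		volatile unsigned long *m = addr + (nr / BITS_PER_LONG);
		unsigned long mask = 1UL << (nr % BITS_PER_LONG);

		if (!kernel_uses_llsc) {
			unsigned long flags;

			/* fallback handled first, then return */
			raw_local_irq_save(flags);
			*m &= ~mask;
			raw_local_irq_restore(flags);
			return;
		}

		/* LL/SC implementation continues here at the outer
		 * indentation level, no longer nested in an else. */
	}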
-
Paul Burton authored
Remove the remaining duplication between 32b & 64b in asm/atomic.h by making use of an ATOMIC_OPS() macro to generate: - atomic_read()/atomic64_read() - atomic_set()/atomic64_set() - atomic_cmpxchg()/atomic64_cmpxchg() - atomic_xchg()/atomic64_xchg() This is consistent with the way all other functions in asm/atomic.h are generated, and ensures consistency between the 32b & 64b functions. Of note is that this results in the above now being static inline functions rather than macros. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
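The flavour of the generation scheme, greatly reduced (sketch covering only read & set; the real macro also stamps out cmpxchg & xchg):

	#define EXAMPLE_ATOMIC_OPS(pfx, type)				\
	static inline type pfx##_read(const pfx##_t *v)			\
	{								\
		return READ_ONCE(v->counter);				\
	}								\
									\
	static inline void pfx##_set(pfx##_t *v, type i)		\
	{								\
		WRITE_ONCE(v->counter, i);				\
	}

	EXAMPLE_ATOMIC_OPS(atomic, int)		/* 32b */
	#ifdef CONFIG_64BIT
	EXAMPLE_ATOMIC_OPS(atomic64, s64)	/* 64b */
	#endif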
-
Paul Burton authored
Unify the definitions of atomic_sub_if_positive() & atomic64_sub_if_positive() using a macro like we do for most other atomic functions. This allows us to share the implementation ensuring consistency between the two. Notably this provides the appropriate loongson3_war barriers in the atomic64_sub_if_positive() case which were previously missing. The code is rearranged a little to handle the !kernel_uses_llsc case first in order to de-indent the LL/SC case & allow us not to go over 80 characters per line. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
-
Paul Burton authored
Use smp_mb__before_atomic() & smp_mb__after_atomic() in atomic_sub_if_positive() rather than the equivalent smp_mb__before_llsc() & smp_llsc_mb(). The former are more standard & this preps us for avoiding redundant barriers on Loongson3 in a later patch. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
-
Paul Burton authored
Generate the sync instructions required to work around Loongson3 LL/SC errata within inline asm blocks, which feels a little safer than doing it from C, where strictly speaking the compiler would be well within its rights to insert a memory access between the separate asm statements we previously had, containing sync & ll instructions respectively. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
-
Paul Burton authored
Cut down on duplication by generalizing the ATOMIC_OP(), ATOMIC_OP_RETURN() & ATOMIC_FETCH_OP() macros to work for both 32b & 64b atomics, and removing the ATOMIC64_ variants. This ensures consistency between our atomic_* & atomic64_* functions. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
-
Paul Burton authored
Handle the !kernel_uses_llsc path first in our ATOMIC_OP(), ATOMIC_OP_RETURN() & ATOMIC_FETCH_OP() macros & return from within the block. This allows us to de-indent the kernel_uses_llsc path by one level which will be useful when making further changes. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
-
Paul Burton authored
We define macros in asm/atomic.h which end each line with space characters before a backslash to continue on the next line. Remove the space characters leaving tabs as the whitespace used for conformity with coding convention. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
-
Paul Burton authored
Use the new __SYNC() infrastructure to implement sync_ginv(), for consistency with much of the rest of asm/barrier.h. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
-
Paul Burton authored
Implement __sync() using the new __SYNC() infrastructure, which will take care of not emitting an instruction for old R3k CPUs that don't support it. The only behavioral difference is that __sync() will now provide a compiler barrier on these old CPUs, but that seems like reasonable behavior anyway. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
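The implementation this describes is small enough to sketch in full (approximate; see the __SYNC() infrastructure description later in this section):

	#include <asm/sync.h>

	static inline void __sync(void)
	{
		/* __SYNC() expands to a sync instruction, or to nothing
		 * on CPUs without one - the "memory" clobber acts as a
		 * compiler barrier either way. */
		__asm__ __volatile__(__SYNC(full, always) ::: "memory");
	}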
-
Paul Burton authored
The definition of fast_mb() is the same in both the Octeon & non-Octeon cases, so remove the duplication & define it only once. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
-
Paul Burton authored
We #ifdef on Cavium Octeon CPUs, but emit the same sync instruction in both cases. Remove the #ifdef & simply expand to the __sync() macro. Whilst here indent the strong ordering case definitions to match the indentation of the weak ordering ones, helping readability. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
-
Paul Burton authored
Simplify our definitions of rmb() & wmb() using the new __SYNC() infrastructure. The fast_rmb() & fast_wmb() macros are removed, since they only provided a level of indirection that made the code less readable & weren't directly used anywhere in the kernel tree. The Octeon #ifdef'ery is removed, since the "syncw" instruction previously used is merely an alias for "sync 4" which __SYNC() will emit for the wmb sync type when the kernel is configured for an Octeon CPU. Similarly __SYNC() will emit nothing for the rmb sync type in Octeon configurations. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
-
Paul Burton authored
Introduce an asm/sync.h header which provides infrastructure that can be used to generate sync instructions of various types, and for various reasons. For example if we need a sync instruction that provides a full completion barrier but only on systems which have weak memory ordering, we can generate the appropriate assembly code using:

	__SYNC(full, weak_ordering)

When the kernel is configured to run on systems with weak memory ordering (ie. CONFIG_WEAK_ORDERING is selected) we'll emit a sync instruction. When the kernel is configured to run on systems with strong memory ordering (ie. CONFIG_WEAK_ORDERING is not selected) we'll emit nothing. The caller doesn't need to know which happened - it simply says what it needs & when, with no concern for checking the kernel configuration.

There are some scenarios in which we may want to emit code only when we *didn't* emit a sync instruction. For example, some Loongson3 CPUs suffer from a bug that requires us to emit a sync instruction prior to each ll instruction (enabled by CONFIG_CPU_LOONGSON3_WORKAROUNDS). In cases where this bug workaround is enabled, it's wasteful to then have more generic code emit another sync instruction to provide barriers we need in general. A __SYNC_ELSE() macro allows for this, providing an extra argument that contains code to be assembled only in cases where the sync instruction was not emitted. For example if we have a scenario in which we generally want to emit a release barrier but for affected Loongson3 configurations upgrade that to a full completion barrier, we can do that like so:

	__SYNC_ELSE(full, loongson3_war, __SYNC(rl, always))

The assembly generated by these macros can be used either as inline assembly or in assembly source files. Differing types of sync as provided by MIPSr6 are defined, but currently they all generate a full completion barrier except in kernels configured for Cavium Octeon systems. There the wmb sync-type is used, and rmb syncs are omitted, as has been the case since commit 6b07d38a ("MIPS: Octeon: Use optimized memory barrier primitives."). Using __SYNC() with the wmb or rmb types will abstract away the Octeon specific behavior and allow us to later clean up asm/barrier.h code that currently includes a plethora of #ifdef's. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
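For instance, consuming these macros from C might look like the following (illustrative wrapper, not code from the patch):

	#include <asm/sync.h>

	static inline void example_release_barrier(void)
	{
		/* Full completion barrier where the Loongson3 workaround
		 * is enabled; an ordinary release barrier otherwise. */
		__asm__ __volatile__(
			__SYNC_ELSE(full, loongson3_war, __SYNC(rl, always))
			::: "memory");
	}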
-
Paul Burton authored
When targeting MIPSr6 or higher make use of a compact branch in LL/SC loops, preventing the insertion of a delay slot nop that only serves to waste space. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
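The saving in miniature, with hypothetical register choices. Pre-r6, the branch delay slot must be filled:

	1:	ll	t0, 0(t1)
		or	t0, t0, t2
		sc	t0, 0(t1)
		beqz	t0, 1b
		nop

With a MIPSr6 compact branch there is no delay slot to fill:

	1:	ll	t0, 0(t1)
		or	t0, t0, t2
		sc	t0, 0(t1)
		beqzc	t0, 1b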
-
Paul Burton authored
We currently duplicate the definition of __scbeqz in asm/atomic.h & asm/cmpxchg.h. Move it to asm/llsc.h & rename it to __SC_BEQZ to fit better with the existing __SC macro provided there. We include a tab in the string in order to avoid the need for users to indent code any further to include whitespace of their own after the instruction mnemonic. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
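A plausible shape for the helper (assumed & simplified; the real definition also covers the R10000 workaround's beqzl case):

	/* Branch-on-sc-failure mnemonic for the target ISA; note the
	 * trailing tab described above. */
	#if MIPS_ISA_REV >= 6
	# define __SC_BEQZ	"beqzc\t"	/* compact branch */
	#else
	# define __SC_BEQZ	"beqz\t"	/* classic branch + delay slot */
	#endif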
-
Stefan Roese authored
This patch adds support for the GARDENA smart Gateway, which is based on the MediaTek MT7688 SoC. It is equipped with 128 MiB of DDR memory, 8 MiB of SPI NOR flash, and an additional 128 MiB of SPI NAND storage. Signed-off-by: Stefan Roese <sr@denx.de> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Harvey Hunt <harveyhuntnexus@gmail.com> Cc: John Crispin <john@phrozen.org> Cc: linux-mips@vger.kernel.org
-
Stefan Roese authored
This patch adds the vendor prefix for gardena and a short description including the compatible string for the "GARDENA smart Gateway" based on the MT7688 SoC. Signed-off-by: Stefan Roese <sr@denx.de> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: devicetree@vger.kernel.org Cc: linux-mips@vger.kernel.org
-
Stefan Roese authored
This patch adds the "ralink,mt7688a-soc" compatible to the ralink DT bindings documentation. This compatible is already used by some MIPS boards (e.g. omega2p.dts) but not yet documented. It will also be used by the upcoming "GARDENA smart Gateway" support. Signed-off-by: Stefan Roese <sr@denx.de> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: devicetree@vger.kernel.org Cc: linux-mips@vger.kernel.org
-
Stefan Roese authored
This patch adds the I2C controller description to the MT7628A dtsi file. Signed-off-by: Stefan Roese <sr@denx.de> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Harvey Hunt <harveyhuntnexus@gmail.com> Cc: John Crispin <john@phrozen.org> Cc: linux-mips@vger.kernel.org
-
Paul Burton authored
The r4k-bugs64 code will no longer be built for MIPSr6 kernel configurations, so there's no need to perform checks for MIPSr6 within the code. Drop those redundant checks. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
-
Paul Burton authored
Only build the checks for R4k errata workarounds if we expect that the kernel might actually run on a system with an R4k CPU - ie. CONFIG_SYS_HAS_CPU_R4X00=y & we're targeting a pre-MIPSr1 ISA revision. Rename cpu-bugs64.c to r4k-bugs64.c to indicate the fact that the code is specific to R4k CPUs. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
-
Thomas Bogendoerfer authored
Node ids don't need to be contiguous in Linux, so the concept of using compact node ids to make them contiguous isn't needed at all. This patchset therefore removes it. Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@suse.de> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hogan <jhogan@kernel.org> Cc: linux-mips@vger.kernel.org Cc: linux-kernel@vger.kernel.org
-
Thomas Bogendoerfer authored
Most of the SN/SN0 header files are inherited from IRIX header files, but not all of that stuff is useful for Linux. Remove the unused parts. Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@suse.de> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hogan <jhogan@kernel.org> Cc: linux-mips@vger.kernel.org Cc: linux-kernel@vger.kernel.org
-
- 06 Oct, 2019 4 commits
-
-
Linus Torvalds authored
Linux 5.4-rc2
-
Linus Torvalds authored
In commit 4ed28639 ("fs, elf: drop MAP_FIXED usage from elf_map") we changed elf to use MAP_FIXED_NOREPLACE instead of MAP_FIXED for the executable mappings.

Then, people reported that it broke some binaries that had overlapping segments from the same file, and commit ad55eac7 ("elf: enforce MAP_FIXED on overlaying elf segments") re-instated MAP_FIXED for some overlaying elf segment cases. But only some - despite the summary line of that commit, it only did it when it also does a temporary brk vma for one obvious overlapping case.

Now Russell King reports another overlapping case with old 32-bit x86 binaries, which doesn't trigger that limited case. End result: we had better just drop MAP_FIXED_NOREPLACE entirely, and go back to MAP_FIXED.

Yes, it's a sign of old binaries generated with old tool-chains, but we do pride ourselves on not breaking existing setups. This still leaves MAP_FIXED_NOREPLACE in place for the load_elf_interp() and the old load_elf_library() use-cases, because nobody has reported breakage for those. Yet.

Note that in all the cases seen so far, the overlapping elf sections seem to be just re-mapping of the same executable with different section attributes. We could possibly introduce a new MAP_FIXED_NOFILECHANGE flag or similar, which acts like NOREPLACE, but allows just remapping the same executable file using different protection flags. It's not clear that would make a huge difference to anything, but if people really hate that "elf remaps over previous maps" behavior, maybe at least a more limited form of remapping would alleviate some concerns.

Alternatively, we should take a look at our elf_map() logic to see if we end up not mapping things properly the first time. In the meantime, this is the minimal "don't do that then" patch while people hopefully think about it more.

Reported-by: Russell King <linux@armlinux.org.uk> Fixes: 4ed28639 ("fs, elf: drop MAP_FIXED usage from elf_map") Fixes: ad55eac7 ("elf: enforce MAP_FIXED on overlaying elf segments") Cc: Michal Hocko <mhocko@suse.com> Cc: Kees Cook <keescook@chromium.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
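For readers unfamiliar with the flag, the semantic difference driving the revert can be shown with a small userspace sketch (hypothetical helper; assumes headers exposing MAP_FIXED_NOREPLACE, i.e. Linux 4.17+):

	#define _GNU_SOURCE
	#include <sys/mman.h>
	#include <sys/types.h>
	#include <errno.h>
	#include <stddef.h>

	/* MAP_FIXED silently unmaps anything already in the range;
	 * MAP_FIXED_NOREPLACE fails with EEXIST on overlap, which is
	 * what broke binaries with overlapping ELF segments. */
	void *map_segment(void *want, size_t len, int prot, int fd, off_t off)
	{
		void *p = mmap(want, len, prot,
			       MAP_PRIVATE | MAP_FIXED_NOREPLACE, fd, off);
		if (p == MAP_FAILED && errno == EEXIST)
			p = mmap(want, len, prot,
				 MAP_PRIVATE | MAP_FIXED, fd, off);
		return p;
	}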
-
Linus Torvalds authored
Merge tag 'dma-mapping-5.4-1' of git://git.infradead.org/users/hch/dma-mapping

Pull dma-mapping regression fix from Christoph Hellwig: "Revert an incorrect hunk from a patch that caused problems on various arm boards (Andrey Smirnov)"

* tag 'dma-mapping-5.4-1' of git://git.infradead.org/users/hch/dma-mapping:
  dma-mapping: fix false positive warnings in dma_common_free_remap()
-
Linus Torvalds authored
Merge tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC fixes from Olof Johansson: "A few fixes this time around:

 - Fixup of some clock specifications for DRA7 (device-tree fix)
 - Removal of some dead/legacy CPU OPP/PM code for OMAP that throws warnings at boot
 - A few more minor fixups for OMAPs, most around display
 - Enable STM32 QSPI as =y since their rootfs sometimes comes from there
 - Switch CONFIG_REMOTEPROC to =y since it went from tristate to bool
 - Fix of thermal zone definition for ux500 (5.4 regression)"

* tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc:
  ARM: multi_v7_defconfig: Fix SPI_STM32_QSPI support
  ARM: dts: ux500: Fix up the CPU thermal zone
  arm64/ARM: configs: Change CONFIG_REMOTEPROC from m to y
  ARM: dts: am4372: Set memory bandwidth limit for DISPC
  ARM: OMAP2+: Fix warnings with broken omap2_set_init_voltage()
  ARM: OMAP2+: Add missing LCDC midlemode for am335x
  ARM: OMAP2+: Fix missing reset done flag for am3 and am43
  ARM: dts: Fix gpio0 flags for am335x-icev2
  ARM: omap2plus_defconfig: Enable more droid4 devices as loadable modules
  ARM: omap2plus_defconfig: Enable DRM_TI_TFP410
  DTS: ARM: gta04: introduce legacy spi-cs-high to make display work again
  ARM: dts: Fix wrong clocks for dra7 mcasp
  clk: ti: dra7: Fix mcasp8 clock bits
-
- 05 Oct, 2019 1 commit
-
-
Linus Torvalds authored
Merge tag 'kbuild-fixes-v5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild

Pull Kbuild fixes from Masahiro Yamada:

 - remove unneeded ar-option and KBUILD_ARFLAGS
 - remove long-deprecated SUBDIRS
 - fix modpost to suppress false-positive warnings for UML builds
 - fix namespace.pl to handle relative paths to ${objtree}, ${srctree}
 - make setlocalversion work for /bin/sh
 - make header archive reproducible
 - fix some Makefiles and documents

* tag 'kbuild-fixes-v5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  kheaders: make headers archive reproducible
  kbuild: update compile-test header list for v5.4-rc2
  kbuild: two minor updates for Documentation/kbuild/modules.rst
  scripts/setlocalversion: clear local variable to make it work for sh
  namespace: fix namespace.pl script to support relative paths
  video/logo: do not generate unneeded logo C files
  video/logo: remove unneeded *.o pattern from clean-files
  integrity: remove pointless subdir-$(CONFIG_...)
  integrity: remove unneeded, broken attempt to add -fshort-wchar
  modpost: fix static EXPORT_SYMBOL warnings for UML build
  kbuild: correct formatting of header in kbuild module docs
  kbuild: remove SUBDIRS support
  kbuild: remove ar-option and KBUILD_ARFLAGS
-