- 03 Dec, 2020 40 commits
-
Srikar Dronamraju authored
We want to reuse the is_kvm_guest() name in a subsequent patch but with a new body. Hence rename is_kvm_guest() to check_kvm_guest(). No additional changes. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Acked-by: Waiman Long <longman@redhat.com> Signed-off-by: kernel test robot <lkp@intel.com> # int -> bool fix [mpe: Fold in fix from lkp to use true/false not 0/1] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201202050456.164005-3-srikar@linux.vnet.ibm.com
-
Srikar Dronamraju authored
Only code/declaration movement, in anticipation of doing a KVM-aware vcpu_is_preempted(). No additional changes. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Acked-by: Waiman Long <longman@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201202050456.164005-2-srikar@linux.vnet.ibm.com
-
Nicholas Piggin authored
It's often useful to know the register state for interrupts in the stack frame. In the below example (with this patch applied), the important information is the state of the page fault. A blatant case like this probably rather should have the page fault regs passed down to the warning, but quite often there are less obvious cases where an interrupt shows up that might give some more clues. The downside is longer and more complex bug output.

  Bug: Write fault blocked by AMR!
  WARNING: CPU: 0 PID: 72 at arch/powerpc/include/asm/book3s/64/kup-radix.h:164 __do_page_fault+0x880/0xa90
  Modules linked in:
  CPU: 0 PID: 72 Comm: systemd-gpt-aut Not tainted
  NIP: c00000000006e2f0 LR: c00000000006e2ec CTR: 0000000000000000
  REGS: c00000000a4f3420 TRAP: 0700 MSR: 8000000000021033 <SF,ME,IR,DR,RI,LE> CR: 28002840 XER: 20040000
  CFAR: c000000000128be0 IRQMASK: 3
  GPR00: c00000000006e2ec c00000000a4f36c0 c0000000014f0700 0000000000000020
  GPR04: 0000000000000001 c000000001290f50 0000000000000001 c000000001290f80
  GPR08: c000000001612b08 0000000000000000 0000000000000000 00000000ffffe0f7
  GPR12: 0000000048002840 c0000000016e0000 c00c000000021c80 c000000000fd6f60
  GPR16: 0000000000000000 c00000000a104698 0000000000000003 c0000000087f0000
  GPR20: 0000000000000100 c0000000070330b8 0000000000000000 0000000000000004
  GPR24: 0000000002000000 0000000000000300 0000000002000000 c00000000a5b0c00
  GPR28: 0000000000000000 000000000a000000 00007fffb2a90038 c00000000a4f3820
  NIP [c00000000006e2f0] __do_page_fault+0x880/0xa90
  LR [c00000000006e2ec] __do_page_fault+0x87c/0xa90
  Call Trace:
  [c00000000a4f36c0] [c00000000006e2ec] __do_page_fault+0x87c/0xa90 (unreliable)
  [c00000000a4f3780] [c000000000e1c034] do_page_fault+0x34/0x90
  [c00000000a4f37b0] [c000000000008908] data_access_common_virt+0x158/0x1b0
  --- interrupt: 300 at __copy_tofrom_user_base+0x9c/0x5a4
  NIP: c00000000009b028 LR: c000000000802978 CTR: 0000000000000800
  REGS: c00000000a4f3820 TRAP: 0300 MSR: 800000000280b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 24004840 XER: 00000000
  CFAR: c00000000009aff4 DAR: 00007fffb2a90038 DSISR: 0a000000 IRQMASK: 0
  GPR00: 0000000000000000 c00000000a4f3ac0 c0000000014f0700 00007fffb2a90028
  GPR04: c000000008720010 0000000000010000 0000000000000000 0000000000000000
  GPR08: 0000000000000000 0000000000000000 0000000000000000 0000000000000001
  GPR12: 0000000000004000 c0000000016e0000 c00c000000021c80 c000000000fd6f60
  GPR16: 0000000000000000 c00000000a104698 0000000000000003 c0000000087f0000
  GPR20: 0000000000000100 c0000000070330b8 0000000000000000 0000000000000004
  GPR24: c00000000a4f3c80 c000000008720000 0000000000010000 0000000000000000
  GPR28: 0000000000010000 0000000008720000 0000000000010000 c000000001515b98
  NIP [c00000000009b028] __copy_tofrom_user_base+0x9c/0x5a4
  LR [c000000000802978] copyout+0x68/0xc0
  --- interrupt: 300
  [c00000000a4f3af0] [c0000000008074b8] copy_page_to_iter+0x188/0x540
  [c00000000a4f3b50] [c00000000035c678] generic_file_buffered_read+0x358/0xd80
  [c00000000a4f3c40] [c0000000004c1e90] blkdev_read_iter+0x50/0x80
  [c00000000a4f3c60] [c00000000045733c] new_sync_read+0x12c/0x1c0
  [c00000000a4f3d00] [c00000000045a1f0] vfs_read+0x1d0/0x240
  [c00000000a4f3d50] [c00000000045a7f4] ksys_read+0x84/0x140
  [c00000000a4f3da0] [c000000000033a60] system_call_exception+0x100/0x280
  [c00000000a4f3e10] [c00000000000c508] system_call_common+0xf8/0x2f8
  Instruction dump:
  eae10078 3be0000b 4bfff890 60420000 792917e1 4182ff18 3c82ffab 3884a5e0
  3c62ffab 3863a6e8 480ba891 60000000 <0fe00000> 3be0000b 4bfff860 e93c0938

Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201107023305.2384874-1-npiggin@gmail.com
-
Athira Rajeev authored
The power_pmu_event_init() callback accesses a per-cpu variable (cpu_hw_events) to check for event constraints and Branch Stack (BHRB). The current code disables preemption while accessing the per-cpu variable, but that does not prevent a timer callback from interrupting event_init. Fix this by using local_irq_save/restore to make sure the code path is invoked with interrupts disabled. This change was tested in the mambo simulator to ensure that, if a timer interrupt comes in during the per-cpu access in event_init, it is soft masked and replayed later. For testing purposes, a udelay() was introduced in power_pmu_event_init() to make sure a timer interrupt arrived while in the per-cpu variable access code between local_irq_save/restore. As expected, the timer interrupt was replayed later during the local_irq_restore() called from power_pmu_event_init(); this was confirmed by adding a breakpoint in mambo and checking the backtrace when timer_interrupt was hit. Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1606814880-1720-1-git-send-email-atrajeev@linux.vnet.ibm.com
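A minimal sketch of the pattern the fix describes (surrounding logic elided): the per-cpu access is bracketed by local_irq_save/restore instead of preempt_disable/enable, so a timer interrupt arriving mid-access is held off and replayed at the restore.

  unsigned long flags;
  struct cpu_hw_events *cpuhw;

  local_irq_save(flags);                /* soft-masks interrupts on powerpc */
  cpuhw = this_cpu_ptr(&cpu_hw_events);
  /* ... check event constraints and BHRB availability ... */
  local_irq_restore(flags);             /* a pending timer interrupt is replayed here */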
-
Harish authored
This patch fixes an uninitialized variable warning in the bad_accesses test which causes the selftests build to fail on older distributions:

  bad_accesses.c: In function ‘bad_access’:
  bad_accesses.c:52:9: error: ‘x’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
     printf("Bad - no SEGV! (%c)\n", x);
     ^
  cc1: all warnings being treated as errors

Signed-off-by: Harish <harish@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201201092403.238182-1-harish@linux.ibm.com
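A hedged sketch of the usual fix for this class of warning (the exact initializer in the patch may differ): give the variable a benign initial value so older compilers cannot flag it as maybe-uninitialized.

  char x = 0;   /* previously just "char x;" -- only written by the access that should fault */

  /* ... perform the bad access, expecting a SEGV ... */
  printf("Bad - no SEGV! (%c)\n", x);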
-
Daniel Axtens authored
I did an in-place build of the self-tests and found that it left the tree dirty. Add missed test binaries to .gitignore Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201201144427.1228745-1-dja@axtens.net
-
Daniel Axtens authored
In a bunch of our security flushes, we use a comma rather than a semicolon to 'terminate' an assignment. Nothing breaks, but checkpatch picks it up if you copy it into another flush. Switch to semicolons for ending statements. Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201201144344.1228421-1-dja@axtens.net
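For illustration only (the variable name here is hypothetical, not from the patch), the difference checkpatch flags:

  do_uaccess_flush = enable,   /* comma: legal C, quietly sequences into whatever statement follows */
  do_uaccess_flush = enable;   /* semicolon: the assignment is properly terminated */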
-
Frederic Barrat authored
Update bus speed definition for PCI Gen4 and 5. Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201130152949.26467-1-fbarrat@linux.ibm.com
-
Jordan Niethe authored
This enables GENERIC_BUG_RELATIVE_POINTERS on Power so that 32-bit offsets are stored in the bug entries rather than 64-bit pointers. While this doesn't save space for 32-bit machines, use it anyway so there is only one code path. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201201005203.15210-1-jniethe5@gmail.com
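The effect on struct bug_entry, roughly as in include/asm-generic/bug.h (trimmed sketch; the file/line/flags fields are elided):

  struct bug_entry {
  #ifndef CONFIG_GENERIC_BUG_RELATIVE_POINTERS
          unsigned long   bug_addr;       /* 64-bit absolute pointer */
  #else
          signed int      bug_addr_disp;  /* 32-bit offset relative to the entry itself */
  #endif
          /* ... */
  };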
-
Ravi Bangoria authored
With CONFIG_PPC_8xx and CONFIG_XMON set, kernel build fails with arch/powerpc/xmon/xmon.c:1379:12: error: 'find_free_data_bpt' defined but not used [-Werror=unused-function] Fix it by enclosing find_free_data_bpt() inside #ifndef CONFIG_PPC_8xx. Fixes: 30df74d6 ("powerpc/watchpoint/xmon: Support 2nd DAWR") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201130034406.288047-1-ravi.bangoria@linux.ibm.com
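The shape of the fix, sketched (the real function body is elided; the 8xx has no DAWR, so the helper is unused there):

  #ifndef CONFIG_PPC_8xx
  static int find_free_data_bpt(void)
  {
          /* ... DAWR slot search, unused on 8xx ... */
          return -1;      /* placeholder in this sketch */
  }
  #endif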
-
Youling Tang authored
Use the common STABS_DEBUG, DWARF_DEBUG and ELF_DETAILS macro rules for the linker script. Signed-off-by: Youling Tang <tangyouling@loongson.cn> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1606460857-2723-1-git-send-email-tangyouling@loongson.cn
-
Jordan Niethe authored
Commit 63ce271b ("powerpc/prom: convert PROM_BUG() to standard trap") added an EMIT_BUG_ENTRY for the trap after the branch to start_kernel(). The EMIT_BUG_ENTRY was for the address "0b", however the trap was not labelled with "0". Hence the address used for the bug entry is in relative_toc(), where the previous "0" label is. Label the trap as "0" so the correct address is used. Fixes: 63ce271b ("powerpc/prom: convert PROM_BUG() to standard trap") Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201130004404.30953-1-jniethe5@gmail.com
-
Christophe Leroy authored
Rename the guard define to _ASM_POWERPC_VDSO_H, and remove the useless #ifdef __KERNEL__. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/9902590d410cd1c2afa48b83b277faf0711f07b2.1601197618.git.christophe.leroy@csgroup.eu
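What the cleaned-up header looks like (a sketch of the convention, not the file's contents):

  #ifndef _ASM_POWERPC_VDSO_H
  #define _ASM_POWERPC_VDSO_H

  /* declarations -- no #ifdef __KERNEL__ wrapper needed for a kernel-only header */

  #endif /* _ASM_POWERPC_VDSO_H */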
-
Christophe Leroy authored
VDSO32_LBASE and VDSO64_LBASE are 0. Remove them to simplify code. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/6c4d6570d886bbe1cc471e8ca01602e4b4d9beb5.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
DBG() is not used anymore. Remove it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/e11a9b50e709f197bb3aa2ed1d80d2dee8714afc.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
There is no way to get out of vdso_init() prematurely anymore. Remove vdso_ready as it will always be 1. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/0e1e18c6329b848aa3edeeba76509b4d76182e7d.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
vdso_fixup_features() cannot fail anymore and that's the only function called by vdso_setup(). vdso_setup() has become trivial and can be removed. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/11522eec6140f510a8c89c63cbb739277d097fdc.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
lib32_elfinfo and lib64_elfinfo are not used anymore, remove them. Also remove vdso32_kbase and vdso64_kbase while removing the last use. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/01ac65abf22f0428f8f764525a7d84459c54d806.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
The members related to the symbol section in struct lib32_elfinfo and struct lib64_elfinfo are not used anymore; remove them. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/b779e5b7cc0354e2f87fd407fe5b02f4a8a73825.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
The text member in struct lib32_elfinfo and struct lib64_elfinfo is not used, remove it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/f53dcc9bb1946a7854d15b34d03d3d2e2003848c.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
vdso_patches[] is now empty; remove it and remove all functions that depend on it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/27d75debd6e4ddeaffe1d66ffed1e7526684a004.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Signal trampoline offsets are now generated at buildtime. Runtime generated offsets are not used anymore, remove them. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/7c192d35a437151837cf4c48aeccb42380d6daac.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
__kernel_datapage_offset is not used anymore, remove it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ddb5c746bec4e1a026d7c85243213a1876ef844f.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
vdso32_pages and vdso64_pages are not used anymore. Remove them. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/bce021f616cbaf39dfb5766cf7ef114adcb918d9.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
__kernel_sync_dicache_p5() is an alternative to __kernel_sync_dicache() for when the CPU has CPU_FTR_COHERENT_ICACHE. Remove this alternative function and merge __kernel_sync_dicache_p5() into __kernel_sync_dicache() using a standard CPU feature fixup. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/4c7dcc6544882761b2b0249d7a8ec2c3a8088cb5.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Add builtin symbols to locate the fixup section, and use them instead of locating sections through ELF headers at runtime. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/2954526981859ca1ccfcfc7a7c4263920e9ddfcb.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
This is copied from arm64. Instead of using runtime-generated signal trampoline offsets, get the offsets at build time. If the said trampoline doesn't exist, the build will fail, so there is no need to check whether the trampoline exists or not in the VDSO. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/f8bfd6812c3e3678b1cdb4d55a52f9eb022b40d3.1601197618.git.christophe.leroy@csgroup.eu
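A sketch modeled on the arm64 scheme (the macro and symbol names here are illustrative, not necessarily the powerpc spelling): a header generated at build time carries the trampoline offsets, so signal delivery computes the address arithmetically instead of parsing the VDSO image at runtime.

  /* generated at build time, e.g.: #define vdso64_offset_sigtramp_rt64 0x6c0 */
  #define VDSO64_SYMBOL(base, name) \
          ((unsigned long)(base) + vdso64_offset_##name)

  /* in signal delivery: */
  tramp = VDSO64_SYMBOL(current->mm->context.vdso, sigtramp_rt64);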
-
Christophe Leroy authored
The \tmp param is not used anymore, remove it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/4b13f897dcccce8ae03c031a4598cf26b32e2f1c.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
The VDSO datapage and the text pages are always located immediately next to each other, so the offset can be hardcoded without an indirection through __kernel_datapage_offset. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/b08f5ef99d64cfc38f79b7ad5310d9b4d2479eeb.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Move the vdso datapage in front of the VDSO area, before the VDSO text. This will allow removing the __kernel_datapage_offset symbol and simplifying __get_datapage() in the following patches. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/b68c99b6e8ee0b1d99bfa4c7e34c359fc1bc1000.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
All other architectures but s390 use a void pointer named 'vdso' to reference the VDSO mapping. In a following patch, the VDSO data page will be put in front of the text, so vdso_base will then no longer point to the VDSO text. To avoid confusion between vdso_base and the VDSO text, rename vdso_base to vdso and make it a void __user *. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/8e6cefe474aa4ceba028abb729485cd46c140990.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Provide vdso_remap() through _install_special_mapping() and drop arch_remap(). This adds a size check, returning -EINVAL if the size is not correct. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/373c66f768fa9cc8890f3b55462209a98c522326.1601197618.git.christophe.leroy@csgroup.eu
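A hedged sketch of such a size-checking mremap hook (names illustrative; text_size stands in for the VDSO text size):

  static int vdso_mremap(const struct vm_special_mapping *sm,
                         struct vm_area_struct *new_vma)
  {
          unsigned long new_size = new_vma->vm_end - new_vma->vm_start;

          if (new_size != text_size)
                  return -EINVAL;         /* refuse partial or resized remaps */

          current->mm->context.vdso = (void __user *)new_vma->vm_start;
          return 0;
  }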
-
Christophe Leroy authored
Copied from commit 2fea7f6c ("arm64: vdso: move to _install_special_mapping and remove arch_vma_name"). Use the new _install_special_mapping() API added by commit a62c34bd ("x86, mm: Improve _install_special_mapping and fix x86 vdso naming"), which obsoletes install_special_mapping(), and remove arch_vma_name(), as the name is now handled by the new API. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: kernel test robot <lkp@intel.com> [mpe: Squash fix to use PTR_ERR_OR_ZERO() from lkp] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/e7e5dfe0f93234e31051f2a610b4b07f50b0082f.1601197618.git.christophe.leroy@csgroup.eu
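Roughly (a sketch, not the exact powerpc code), the new-style mapping looks like this; the .name field is what makes arch_vma_name() redundant:

  static struct vm_special_mapping vdso_spec = {
          .name   = "[vdso]",
          .pages  = vdso_pagelist,
          .mremap = vdso_mremap,          /* see the hook sketched above */
  };

  vma = _install_special_mapping(mm, vdso_base, vdso_size,
                                 VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC,
                                 &vdso_spec);
  return PTR_ERR_OR_ZERO(vma);            /* matches the lkp fix squashed in by mpe */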
-
Christophe Leroy authored
To simplify the exit path of arch_setup_additional_pages(), rename it to __arch_setup_additional_pages() and create a caller, arch_setup_additional_pages(), which does the locking. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/603c1d039d3f928ee95e547fcd2219fcf4c3b514.1601197618.git.christophe.leroy@csgroup.eu
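The split, sketched with the mmap locking API of that era:

  int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
  {
          struct mm_struct *mm = current->mm;
          int rc;

          if (mmap_write_lock_killable(mm))
                  return -EINTR;
          rc = __arch_setup_additional_pages(bprm, uses_interp);
          mmap_write_unlock(mm);
          return rc;
  }

Every early exit in __arch_setup_additional_pages() can then simply return, without worrying about dropping the lock.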
-
Christophe Leroy authored
In arch_setup_additional_pages(), instead of using the number of VDSO pages and recalculating the VDSO size, use the VDSO size directly. As vdso_ready is set, vdso_pages can't be 0, so just remove the test. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/4edfa548c3885a430b765335dc720105716e273f.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
There is no need for all those #ifdefs around the pagelist initialisation; use IS_ENABLED() and GCC will discard the unused static variables. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/f9333432e329b1fcbbbf846cb1cd4a1c4127a60b.1601197618.git.christophe.leroy@csgroup.eu
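The idiom, sketched (CONFIG_VDSO32 is the relevant powerpc config symbol; the body is elided):

  if (IS_ENABLED(CONFIG_VDSO32)) {
          /* initialise the 32-bit pagelist; when CONFIG_VDSO32 is off,
           * this branch is constant-folded away and GCC discards the
           * otherwise-unused static variables it references */
  }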
-
Christophe Leroy authored
The setup of VDSO pages is identical for the 32-bit VDSO and the 64-bit VDSO. Refactor that setup, and use &vdsoXX_start, which is a synonym of vdsoXX_kbase. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/269ffb54c37fc1d46128f77d7a39f88ef4a9957d.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
There is no need for a NULL last element in the pagelists; install_special_mapping() knows how long the list is. Remove that element. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/e58d95ab859e3cbc9bae3c9ce2959e17d2864f5d.1601197618.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Partly copied from commit 16fb1a9b ("arm64: vdso: clean up vdso_pagelist initialization"). No need to get_page() the vdso text/data - these are part of the kernel image. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/9d14540bd10832b6c9519d74fb5728fdc4974b36.1601197618.git.christophe.leroy@csgroup.eu
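The resulting initialisation, sketched (variable names illustrative): the VDSO text and data live in the kernel image, so their struct pages can be looked up directly with no reference taken.

  for (i = 0; i < vdso_pages; i++)
          pagelist[i] = virt_to_page(vdso_start + i * PAGE_SIZE);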
-
Christophe Leroy authored
Today the vdso_data structure has:
- syscall_map_32[] and syscall_map_64[] on PPC64
- syscall_map_32[] on PPC32
On PPC32, syscall_map_32[] is populated using sys_call_table[]. On PPC64, syscall_map_64[] is populated using sys_call_table[] and syscall_map_32[] is populated using compat_sys_call_table[]. To simplify vdso_setup_syscall_map():
- on PPC32, rename syscall_map_32[] to syscall_map[],
- on PPC64, rename syscall_map_64[] to syscall_map[],
- on PPC64, rename syscall_map_32[] to compat_syscall_map[].
That way, syscall_map[] gets populated using sys_call_table[] and compat_syscall_map[] gets populated using compat_sys_call_table[]. Also define an empty compat_syscall_map[] on PPC32 to avoid ifdefs. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/472734be0d9991eee320a06824219a5b2663736b.1601197618.git.christophe.leroy@csgroup.eu
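The resulting layout, sketched (field types and array-size macros illustrative, not the exact powerpc definitions):

  struct vdso_data {
          __u32 syscall_map[SYSCALL_MAP_SIZE];        /* filled from sys_call_table[] */
          __u32 compat_syscall_map[COMPAT_MAP_SIZE];  /* from compat_sys_call_table[] on PPC64;
                                                         empty on PPC32, avoiding #ifdefs */
          /* ... */
  };

Generic code can then populate both maps with the same loop on PPC64 and PPC32 alike.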
-