- 04 Dec, 2014 1 commit
-
-
Andy Lutomirski authored
It appears that some SCHEDULE_USER (asm for schedule_user) callers in arch/x86/kernel/entry_64.S are called from RCU kernel context, and schedule_user will return in RCU user context. This causes RCU warnings and possible failures. This is intended to be a minimal fix suitable for 3.18. Reported-and-tested-by: Dave Jones <davej@redhat.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Frédéric Weisbecker <fweisbec@gmail.com> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
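A rough illustration of the approach described above (not the verbatim patch): the fix amounts to making schedule_user() record whatever context-tracking state it was entered in and restore it afterwards, instead of assuming user context. A minimal sketch:

    /* kernel/sched/core.c (sketch): tolerate callers that are still in
     * RCU kernel context by saving the previous context-tracking state
     * and restoring it after the reschedule. */
    asmlinkage __visible void __sched schedule_user(void)
    {
            /* exception_enter() returns the context we came from and
             * switches to kernel context; exception_exit() restores it. */
            enum ctx_state prev_state = exception_enter();

            schedule();

            exception_exit(prev_state);
    }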
-
- 03 Dec, 2014 21 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux
Linus Torvalds authored
Pull i2c bugfixes from Wolfram Sang: "A few driver bugfixes for 3.18" * 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux: i2c: omap: fix i207 errata handling i2c: designware: prevent early stop on TX FIFO empty i2c: omap: fix NACK and Arbitration Lost irq handling
-
git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Linus Torvalds authored
Pull PCI fix from Bjorn Helgaas: "This fixes a Tegra20 regression that we introduced during the v3.18 merge window" * tag 'pci-v3.18-fixes-4' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: PCI: tegra: Use physical range for I/O mapping
-
git://git.kernel.org/pub/scm/linux/kernel/git/glikely/linux
Linus Torvalds authored
Pull devicetree bugfix from Grant Likely: "One more bug fix for v3.18. I debated whether or not to send you this merge request because we're at such a late rc. The bug isn't critical in that there is only one system known to be affected and the patch is easy to backport. The codepath is used by pretty much every DT based system, so there is a risk of regression (it /should/ be safe, but I've been bitten by stuff that should be safe before). I've had it in linux-next for a week and haven't received any complaints. I think it probably should just be merged right away rather than waiting for the merge window and backporting. It does fix a real bug and the code is theoretically safer after the change. I can't think of any situation where it would be dangerous to reserve the DT memory an extra time. Summary from tag: Single bugfix for boot failure seen in the wild. The memory reserve code tries to be clever about reserving the FDT, but it should just go ahead and reserve it unconditionally to avoid the problem of partial overlap described in the patch" * tag 'devicetree-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/glikely/linux: of/fdt: memblock_reserve /memreserve/ regions in the case of partial overlap
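The "reserve it unconditionally" idea boils down to dropping the overlap check in the FDT reservation path. A hedged sketch (the helper name is illustrative; the real code lives in drivers/of/fdt.c):

    /* Sketch: reserve a /memreserve/ region without first testing for
     * overlap.  memblock_reserve() merges overlapping ranges, so calling
     * it on a partially reserved region is harmless, whereas skipping
     * the call on partial overlap leaves part of the region unprotected. */
    static void __init sketch_reserve_memreserve(phys_addr_t base, phys_addr_t size)
    {
            /* old, problematic pattern:
             *   if (!memblock_is_region_reserved(base, size))
             *           memblock_reserve(base, size);
             */
            memblock_reserve(base, size);
    }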
-
git://git.kernel.dk/linux-block
Linus Torvalds authored
Pull block core regression fix from Jens Axboe: "Single fix for a regression introduced in this development cycle, where dm on top of dif/dix is broken. From Darrick Wong" * 'for-linus' of git://git.kernel.dk/linux-block: block: fix regression where bio_integrity_process uses wrong bio_vec iterator
-
git://people.freedesktop.org/~airlied/linux
Linus Torvalds authored
Pull drm fixes from Dave Airlie: "Radeon and Nouveau fixes: nouveau had a few regressions introduced; Ben and Maarten finally tracked down the one that was causing problems on my MacBookPro. Nvidia also gave some info on an engine we were using incorrectly, so disable our use of it, and there is one regression fix for pci hotplug affecting optimus users. Radeon has an oops fix, a sync fix, and one workaround to avoid broken functionality on 32-bit x86; this needs better root causing and a better fix, but the bandaid is a lot safer at this point" * 'drm-fixes' of git://people.freedesktop.org/~airlied/linux: drm/radeon: kernel panic in drm_calc_vbltimestamp_from_scanoutpos with 3.18.0-rc6 drm/radeon: Ignore RADEON_GEM_GTT_WC on 32-bit x86 drm/radeon: sync all BOs involved in a CS v2 nouveau: move the hotplug ignore to correct place. drm/nouveau/gf116: remove copy1 engine drm/nouveau: prevent stale fence->channel pointers, and protect with rcu drm/nouveau/fifo/g84-: ack non-stall interrupt before handling it
-
git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Linus Torvalds authored
Pull networking fixes from David Miller: 1) Fill in ethtool link parameters for all link types in cxgb4, from Hariprasad Shenai. 2) Fix probe regressions in stmmac driver, from Huacai Chen. 3) Network namespace leaks on errors in rtnetlink, from Nicolas Dichtel. 4) Remove erroneous BUG check which can actually trigger legitimately, in xen-netfront. From Seth Forshee. 5) Validate length of IFLA_BOND_ARP_IP_TARGET netlink attributes, from Thomas Graf. * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: cxgb4: Fill in supported link mode for SFP modules xen-netfront: Remove BUGs on paged skb data which crosses a page boundary sh_eth: Fix sleeping function called from invalid context stmmac: platform: Move plat_dat checking earlier sh_eth: Fix skb alloc size and alignment adjust rule. rtnetlink: release net refcnt on error in do_setlink() bond: Check length of IFLA_BOND_ARP_IP_TARGET attributes
-
git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security
Linus Torvalds authored
Pull keyring/nfs fixes from James Morris: "From David Howells: The first one fixes the handling of the maximum buffer size for key descriptions, fixing the size at 4095 + NUL char rather than whatever PAGE_SIZE happens to be, and permitting you to read back the full description without it getting clipped because some extra information got prepended. The second and third fix a bug in NFS idmapper handling whereby a key representing a mapping between an id and a name expires, causing EKEYEXPIRED to be seen internally in NFS (which prevents the mapping from happening) rather than the mapping being re-looked up" * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: KEYS: request_key() should reget expired keys rather than give EKEYEXPIRED KEYS: Simplify KEYRING_SEARCH_{NO,DO}_STATE_CHECK flags KEYS: Fix the size of the key description passed to/from userspace
-
Linus Torvalds authored
Merge misc fixes from Andrew Morton: "10 fixes" * emailed patches from Andrew Morton <akpm@linux-foundation.org>: slab: fix nodeid bounds check for non-contiguous node IDs lib/genalloc.c: export devm_gen_pool_create() for modules mm: fix anon_vma_clone() error treatment mm: fix swapoff hang after page migration and fork fat: fix oops on corrupted vfat fs ipc/sem.c: fully initialize sem_array before making it visible drivers/input/evdev.c: don't kfree() a vmalloc address mm/vmpressure.c: fix race in vmpressure_work_fn() mm: frontswap: invalidate expired data on a dup-store failure mm: do not overwrite reserved pages counter at show_mem()
-
Paul Mackerras authored
The bounds check for nodeid in ____cache_alloc_node gives false positives on machines where the node IDs are not contiguous, leading to a panic at boot time. For example, on a POWER8 machine the node IDs are typically 0, 1, 16 and 17. This means that num_online_nodes() returns 4, so when ____cache_alloc_node is called with nodeid = 16 the VM_BUG_ON triggers, like this: kernel BUG at /home/paulus/kernel/kvm/mm/slab.c:3079! Call Trace: .____cache_alloc_node+0x5c/0x270 (unreliable) .kmem_cache_alloc_node_trace+0xdc/0x360 .init_list+0x3c/0x128 .kmem_cache_init+0x1dc/0x258 .start_kernel+0x2a0/0x568 start_here_common+0x20/0xa8 To fix this, we instead compare the nodeid with MAX_NUMNODES, and additionally make sure it isn't negative (since nodeid is an int). The check is there mainly to protect the array dereference in the get_node() call in the next line, and the array being dereferenced is of size MAX_NUMNODES. If the nodeid is in range but invalid (for example if the node is off-line), the BUG_ON in the next line will catch that. Fixes: 14e50c6a ("mm: slab: Verify the nodeid passed to ____cache_alloc_node") Signed-off-by: Paul Mackerras <paulus@samba.org> Reviewed-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Reviewed-by: Pekka Enberg <penberg@kernel.org> Acked-by: David Rientjes <rientjes@google.com> Cc: Christoph Lameter <cl@linux.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
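A sketch of the before/after bounds check described in the message (the exact wording of the old assertion may differ slightly from this):

    /* mm/slab.c, ____cache_alloc_node() -- sketch */

    /* Old check: num_online_nodes() counts online nodes, it does not
     * bound their IDs, so sparse IDs such as 0, 1, 16, 17 trip it. */
    VM_BUG_ON(nodeid > num_online_nodes());

    /* New check: bound against the size of the array about to be
     * indexed (MAX_NUMNODES entries) and reject negative IDs, since
     * nodeid is a plain int. */
    VM_BUG_ON(nodeid < 0 || nodeid >= MAX_NUMNODES);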
-
Michal Simek authored
Modules can use this function for creating a pool. Signed-off-by: Michal Simek <michal.simek@xilinx.com> Acked-by: Lad, Prabhakar <prabhakar.csengg@gmail.com> Cc: Laura Abbott <lauraa@codeaurora.org> Cc: Olof Johansson <olof@lixom.net> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Vladimir Zapolskiy <vladimir_zapolskiy@mentor.com> Cc: Philipp Zabel <p.zabel@pengutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
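The change implied here is essentially a one-line export in lib/genalloc.c; a sketch (whether the plain or the _GPL export macro was used is an assumption):

    /* lib/genalloc.c (sketch): allow modular drivers to link against the
     * managed pool constructor. */
    EXPORT_SYMBOL(devm_gen_pool_create);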
-
Daniel Forrest authored
Andrew Morton noticed that the error return from anon_vma_clone() was being dropped and replaced with -ENOMEM (which is not itself a bug because the only error return value from anon_vma_clone() is -ENOMEM). I did an audit of callers of anon_vma_clone() and discovered an actual bug where the error return was being lost. In __split_vma(), between Linux 3.11 and 3.12 the code was changed so the err variable is used before the call to anon_vma_clone() and the default initial value of -ENOMEM is overwritten. So a failure of anon_vma_clone() will return success since err at this point is now zero. Below is a patch which fixes this bug and also propagates the error return value from anon_vma_clone() in all cases. Fixes: ef0855d3 ("mm: mempolicy: turn vma_set_policy() into vma_dup_policy()") Signed-off-by: Daniel Forrest <dan.forrest@ssec.wisc.edu> Reviewed-by: Michal Hocko <mhocko@suse.cz> Cc: Konstantin Khlebnikov <koct9i@gmail.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Rik van Riel <riel@redhat.com> Cc: Tim Hartrick <tim@edgecast.com> Cc: Hugh Dickins <hughd@google.com> Cc: Michel Lespinasse <walken@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: <stable@vger.kernel.org> [3.12+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
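A minimal sketch of the error-propagation pattern the patch restores in __split_vma() (the cleanup label name is illustrative, not necessarily the one in mm/mmap.c):

    /* Before: err had already been reused and reset to 0 earlier in
     * __split_vma(), so a failing anon_vma_clone() fell through and the
     * split was reported as successful.  After: capture and propagate
     * the return value explicitly. */
    err = anon_vma_clone(new, vma);
    if (err)
            goto out_free_mpol;     /* propagate -ENOMEM instead of losing it */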
-
Hugh Dickins authored
I've been seeing swapoff hangs in recent testing: it's cycling around trying unsuccessfully to find an mm for some remaining pages of swap. I have been exercising swap and page migration more heavily recently, and now notice a long-standing error in copy_one_pte(): it's trying to add dst_mm to swapoff's mmlist when it finds a swap entry, but is doing so even when it's a migration entry or an hwpoison entry. Which wouldn't matter much, except it adds dst_mm next to src_mm, assuming src_mm is already on the mmlist: which may not be so. Then if pages are later swapped out from dst_mm, swapoff won't be able to find where to replace them. There's already a !non_swap_entry() test for stats: move that up before the swap_duplicate() and the addition to mmlist. Signed-off-by: Hugh Dickins <hughd@google.com> Cc: Kelley Nielsen <kelleynnn@gmail.com> Cc: <stable@vger.kernel.org> [2.6.18+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
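A simplified sketch of the reordering in copy_one_pte() (mm/memory.c): the !non_swap_entry() test happens before swap_duplicate() and before the mmlist update, so migration and hwpoison entries never pull dst_mm onto swapoff's mmlist:

    swp_entry_t entry = pte_to_swp_entry(pte);

    if (likely(!non_swap_entry(entry))) {
            if (swap_duplicate(entry) < 0)
                    return entry.val;

            /* make sure dst_mm is on the mmlist for swapoff */
            if (unlikely(list_empty(&dst_mm->mmlist))) {
                    spin_lock(&mmlist_lock);
                    if (list_empty(&dst_mm->mmlist))
                            list_add(&dst_mm->mmlist, &src_mm->mmlist);
                    spin_unlock(&mmlist_lock);
            }
            rss[MM_SWAPENTS]++;
    }
    /* migration and hwpoison entries fall through without calling
     * swap_duplicate() or touching the mmlist */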
-
Al Viro authored
a) don't bother with ->d_time for positives - we only check it for negatives anyway. b) make sure to set it at unlink and rmdir time - at *that* point soon-to-be negative dentry matches then-current directory contents c) don't go into renaming of old alias in vfat_lookup() unless it has the same parent (which it will, unless we are seeing corrupted image) [hirofumi@mail.parknet.co.jp: make change minimum, don't call d_move() for dir] Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp> Cc: <stable@vger.kernel.org> [3.17.x] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Manfred Spraul authored
ipc_addid() makes a new ipc identifier visible to everyone. New objects start as locked, so that the caller can complete the initialization after the call. Within struct sem_array, at least sma->sem_base and sma->sem_nsems are accessed without any locks, therefore this approach doesn't work. Thus: Move the ipc_addid() to the end of the initialization. Signed-off-by: Manfred Spraul <manfred@colorfullife.com> Reported-by: Rik van Riel <riel@redhat.com> Acked-by: Rik van Riel <riel@redhat.com> Acked-by: Davidlohr Bueso <dave@stgolabs.net> Acked-by: Rafael Aquini <aquini@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
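A simplified sketch of the ordering the patch enforces in ipc/sem.c's newary() (field initialization abbreviated):

    /* Fill in everything a lockless reader may look at ... */
    sma->sem_base  = (struct sem *)&sma[1];
    sma->sem_nsems = nsems;
    sma->sem_ctime = get_seconds();
    /* ... per-semaphore state, pending queues, etc. ... */

    /* ... and only then publish the object. */
    id = ipc_addid(&sem_ids(ns), &sma->sem_perm, ns->sc_semmni);
    if (id < 0)
            return id;      /* the real code also drops its reference here */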
-
Andrew Morton authored
If kzalloc() failed and then evdev_open_device() fails, evdev_open() will pass a vmalloc'ed pointer to kfree. This might fix https://bugzilla.kernel.org/show_bug.cgi?id=88401, where there was a crash in kfree(). Reported-by: Christian Casteyde <casteyde.christian@free.fr> Belatedly-Acked-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Cc: Henrik Rydberg <rydberg@euromail.se> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
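The underlying pattern: evdev's client buffer is allocated with kzalloc() for small sizes and vzalloc() as a fallback, so the error path must not assume kfree() is the right destructor. A hedged sketch (helper names are illustrative):

    static void *alloc_client(size_t size)
    {
            void *client = kzalloc(size, GFP_KERNEL | __GFP_NOWARN);

            if (!client)
                    client = vzalloc(size);
            return client;
    }

    static void free_client(void *client)
    {
            kvfree(client); /* handles both kmalloc'ed and vmalloc'ed memory */
    }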
-
Hariprasad Shenai authored
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Seth Forshee authored
These BUGs can be erroneously triggered by frags which refer to tail pages within a compound page. The data in these pages may overrun the hardware page while still being contained within the compound page, but since compound_order() evaluates to 0 for tail pages the assertion fails. The code already iterates through subsequent pages correctly in this scenario, so the BUGs are unnecessary and can be removed. Fixes: f36c3747 ("xen/netfront: handle compound page fragments on transmit") Cc: <stable@vger.kernel.org> # 3.7+ Signed-off-by: Seth Forshee <seth.forshee@canonical.com> Reviewed-by: David Vrabel <david.vrabel@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Morton authored
On some Android devices a "divide by zero" exception can occur: vmpr->scanned could be zero before spin_lock(&vmpr->sr_lock). Addresses https://bugzilla.kernel.org/show_bug.cgi?id=88051 [akpm@linux-foundation.org: neaten] Reported-by: ji_ang <ji_ang@163.com> Cc: Anton Vorontsov <anton.vorontsov@linaro.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
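A simplified sketch of the fixed work function: the counters are sampled and re-checked under sr_lock, so a concurrent reset cannot leave the function dividing by a zero scanned count:

    static void vmpressure_work_fn(struct work_struct *work)
    {
            struct vmpressure *vmpr = work_to_vmpressure(work);
            unsigned long scanned;
            unsigned long reclaimed;

            spin_lock(&vmpr->sr_lock);
            scanned = vmpr->scanned;
            if (!scanned) {                 /* nothing accumulated: bail out */
                    spin_unlock(&vmpr->sr_lock);
                    return;
            }
            reclaimed = vmpr->reclaimed;
            vmpr->scanned = 0;
            vmpr->reclaimed = 0;
            spin_unlock(&vmpr->sr_lock);

            /* ... walk the memcg hierarchy and signal events (omitted) ... */
    }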
-
Weijie Yang authored
If a frontswap dup-store fails, it should invalidate the expired page in the backend, or it can trigger data corruption. For example: 1. use zswap as the frontswap backend with the writeback feature 2. store a swap page (version_1) to entry A, success 3. dup-store a newer page (version_2) to the same entry A, fail 4. use __swap_writepage() to write the version_2 page to the swapfile, success 5. zswap shrinks and writes the version_1 page back to the swapfile 6. the version_2 page is overwritten by version_1, corrupting the data. This patch fixes the issue by invalidating the expired data immediately on a dup-store failure. Signed-off-by: Weijie Yang <weijie.yang@samsung.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Seth Jennings <sjennings@variantweb.net> Cc: Dan Streetman <ddstreet@ieee.org> Cc: Minchan Kim <minchan@kernel.org> Cc: Bob Liu <bob.liu@oracle.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
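A simplified sketch of the failure path in __frontswap_store() after the change (helper and stat names follow mm/frontswap.c of that era and should be treated as approximate):

    ret = frontswap_ops->store(type, offset, page);
    if (ret == 0) {
            __frontswap_set(sis, offset);
            inc_frontswap_succ_stores();
    } else {
            inc_frontswap_failed_stores();
            if (dup) {
                    /* The old (version_1) copy is now stale relative to the
                     * page that will reach the swapfile; drop it so the
                     * backend can never write it back later. */
                    __frontswap_clear(sis, offset);
                    frontswap_ops->invalidate_page(type, offset);
            }
    }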
-
Rafael Aquini authored
Minor fixlet to perform the reserved pages counter aggregation for each node, at show_mem() Signed-off-by: Rafael Aquini <aquini@redhat.com> Acked-by: Rik van Riel <riel@redhat.com> Acked-by: Johannes Weiner <jweiner@redhat.com> Acked-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
git://people.freedesktop.org/~agd5f/linux
Dave Airlie authored
A few more small fixes for 3.18. * 'drm-fixes-3.18' of git://people.freedesktop.org/~agd5f/linux: drm/radeon: kernel panic in drm_calc_vbltimestamp_from_scanoutpos with 3.18.0-rc6 drm/radeon: Ignore RADEON_GEM_GTT_WC on 32-bit x86 drm/radeon: sync all BOs involved in a CS v2
-
- 02 Dec, 2014 12 commits
-
-
Petr Mladek authored
I was unable too boot 3.18.0-rc6 because of the following kernel panic in drm_calc_vbltimestamp_from_scanoutpos(): [drm] Initialized drm 1.1.0 20060810 [drm] radeon kernel modesetting enabled. [drm] initializing kernel modesetting (RV100 0x1002:0x515E 0x15D9:0x8080). [drm] register mmio base: 0xC8400000 [drm] register mmio size: 65536 radeon 0000:0b:01.0: VRAM: 128M 0x00000000D0000000 - 0x00000000D7FFFFFF (16M used) radeon 0000:0b:01.0: GTT: 512M 0x00000000B0000000 - 0x00000000CFFFFFFF [drm] Detected VRAM RAM=128M, BAR=128M [drm] RAM width 16bits DDR [TTM] Zone kernel: Available graphics memory: 3829346 kiB [TTM] Zone dma32: Available graphics memory: 2097152 kiB [TTM] Initializing pool allocator [TTM] Initializing DMA pool allocator [drm] radeon: 16M of VRAM memory ready [drm] radeon: 512M of GTT memory ready. [drm] GART: num cpu pages 131072, num gpu pages 131072 [drm] PCI GART of 512M enabled (table at 0x0000000037880000). radeon 0000:0b:01.0: WB disabled radeon 0000:0b:01.0: fence driver on ring 0 use gpu addr 0x00000000b0000000 and cpu addr 0xffff8800bbbfa000 [drm] Supports vblank timestamp caching Rev 2 (21.10.2013). [drm] Driver supports precise vblank timestamp query. [drm] radeon: irq initialized. [drm] Loading R100 Microcode radeon 0000:0b:01.0: Direct firmware load for radeon/R100_cp.bin failed with error -2 radeon_cp: Failed to load firmware "radeon/R100_cp.bin" [drm:r100_cp_init] *ERROR* Failed to load firmware! radeon 0000:0b:01.0: failed initializing CP (-2). radeon 0000:0b:01.0: Disabling GPU acceleration [drm] radeon: cp finalized BUG: unable to handle kernel NULL pointer dereference at 000000000000025c IP: [<ffffffff8150423b>] drm_calc_vbltimestamp_from_scanoutpos+0x4b/0x320 PGD 0 Oops: 0000 [#1] SMP Modules linked in: CPU: 1 PID: 1 Comm: swapper/0 Not tainted 3.18.0-rc6-4-default #2649 Hardware name: Supermicro X7DB8/X7DB8, BIOS 6.00 07/26/2006 task: ffff880234da2010 ti: ffff880234da4000 task.ti: ffff880234da4000 RIP: 0010:[<ffffffff8150423b>] [<ffffffff8150423b>] drm_calc_vbltimestamp_from_scanoutpos+0x4b/0x320 RSP: 0000:ffff880234da7918 EFLAGS: 00010086 RAX: ffffffff81557890 RBX: 0000000000000000 RCX: ffff880234da7a48 RDX: ffff880234da79f4 RSI: 0000000000000000 RDI: ffff880232e15000 RBP: ffff880234da79b8 R08: 0000000000000000 R09: 0000000000000000 R10: 000000000000000a R11: 0000000000000001 R12: ffff880232dda1c0 R13: ffff880232e1518c R14: 0000000000000292 R15: ffff880232e15000 FS: 0000000000000000(0000) GS:ffff88023fc40000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b CR2: 000000000000025c CR3: 0000000002014000 CR4: 00000000000007e0 Stack: ffff880234da79d8 0000000000000286 ffff880232dcbc00 0000000000002480 ffff880234da7958 0000000000000296 ffff880234da7998 ffffffff8151b51d ffff880234da7a48 0000000032dcbeb0 ffff880232dcbc00 ffff880232dcbc58 Call Trace: [<ffffffff8151b51d>] ? drm_vma_offset_remove+0x1d/0x110 [<ffffffff8152dc98>] radeon_get_vblank_timestamp_kms+0x38/0x60 [<ffffffff8152076a>] ? ttm_bo_release_list+0xba/0x180 [<ffffffff81503751>] drm_get_last_vbltimestamp+0x41/0x70 [<ffffffff81503933>] vblank_disable_and_save+0x73/0x1d0 [<ffffffff81106b2f>] ? 
try_to_del_timer_sync+0x4f/0x70 [<ffffffff81505245>] drm_vblank_cleanup+0x65/0xa0 [<ffffffff815604fa>] radeon_irq_kms_fini+0x1a/0x70 [<ffffffff8156c07e>] r100_init+0x26e/0x410 [<ffffffff8152ae3e>] radeon_device_init+0x7ae/0xb50 [<ffffffff8152d57f>] radeon_driver_load_kms+0x8f/0x210 [<ffffffff81506965>] drm_dev_register+0xb5/0x110 [<ffffffff8150998f>] drm_get_pci_dev+0x8f/0x200 [<ffffffff815291cd>] radeon_pci_probe+0xad/0xe0 [<ffffffff8141a365>] local_pci_probe+0x45/0xa0 [<ffffffff8141b741>] pci_device_probe+0xd1/0x130 [<ffffffff81633dad>] driver_probe_device+0x12d/0x3e0 [<ffffffff8163413b>] __driver_attach+0x9b/0xa0 [<ffffffff816340a0>] ? __device_attach+0x40/0x40 [<ffffffff81631cd3>] bus_for_each_dev+0x63/0xa0 [<ffffffff8163378e>] driver_attach+0x1e/0x20 [<ffffffff81633390>] bus_add_driver+0x180/0x240 [<ffffffff81634914>] driver_register+0x64/0xf0 [<ffffffff81419cac>] __pci_register_driver+0x4c/0x50 [<ffffffff81509bf5>] drm_pci_init+0xf5/0x120 [<ffffffff821dc871>] ? ttm_init+0x6a/0x6a [<ffffffff821dc908>] radeon_init+0x97/0xb5 [<ffffffff810002fc>] do_one_initcall+0xbc/0x1f0 [<ffffffff810e3278>] ? __wake_up+0x48/0x60 [<ffffffff8218e256>] kernel_init_freeable+0x18a/0x215 [<ffffffff8218d983>] ? initcall_blacklist+0xc0/0xc0 [<ffffffff818a78f0>] ? rest_init+0x80/0x80 [<ffffffff818a78fe>] kernel_init+0xe/0xf0 [<ffffffff818c0c3c>] ret_from_fork+0x7c/0xb0 [<ffffffff818a78f0>] ? rest_init+0x80/0x80 Code: 45 ac 0f 88 a8 01 00 00 3b b7 d0 01 00 00 49 89 ff 0f 83 99 01 00 00 48 8b 47 20 48 8b 80 88 00 00 00 48 85 c0 0f 84 cd 01 00 00 <41> 8b b1 5c 02 00 00 41 8b 89 58 02 00 00 89 75 98 41 8b b1 60 RIP [<ffffffff8150423b>] drm_calc_vbltimestamp_from_scanoutpos+0x4b/0x320 RSP <ffff880234da7918> CR2: 000000000000025c ---[ end trace ad2c0aadf48e2032 ]--- Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000009 It has helped me to add a NULL pointer check that was suggested at http://lists.freedesktop.org/archives/dri-devel/2014-October/070663.html I am not familiar with the code. But the change looks sane and we need something fast at this stage of 3.18 development. Suggested-by: Helge Deller <deller@gmx.de> Signed-off-by: Petr Mladek <pmladek@suse.cz> Tested-by: Petr Mladek <pmladek@suse.cz> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
-
Michel Dänzer authored
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=84627 Signed-off-by: Michel Dänzer <michel.daenzer@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
-
Christian König authored
Not just the userspace relocs; otherwise we won't wait for swapped-out page tables to be swapped in again. v2: rebased on Alex's current drm-fixes-3.18 Signed-off-by: Christian König <christian.koenig@amd.com> Cc: stable@vger.kernel.org Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Darrick J. Wong authored
bio integrity handling is broken on a system with LVM layered atop a DIF/DIX SCSI drive because device mapper clones the bio, modifies the clone, and sends the clone to the lower layers for processing. However, the clone bio has bi_vcnt == 0, which means that when the sd driver calls bio_integrity_process to attach DIX data, the for_each_segment_all() call (which uses bi_vcnt) returns immediately and random garbage is sent to the disk on a disk write. The disk of course returns an error. Therefore, teach bio_integrity_process() to use bio_for_each_segment() to iterate the bio_vecs, since the per-bio iterator tracks which bio_vecs are associated with that particular bio. The integrity handling code is effectively part of the "driver" (it's not the bio owner), so it must use the correct iterator function. v2: Fix a compiler warning about abandoned local variables. This patch supersedes "block: bio_integrity_process uses wrong bio_vec iterator". Patch applies against 3.18-rc6. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Acked-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Jens Axboe <axboe@fb.com>
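A sketch of the iterator change in bio_integrity_process() (simplified): walking via the bio's own iterator visits only the segments that belong to this bio, which is what a cloned bio with bi_vcnt == 0 needs:

    struct bio_vec bv;
    struct bvec_iter iter;

    /* bio_for_each_segment() iterates from bio->bi_iter using a private
     * struct bvec_iter and so works for clones; the old
     * bio_for_each_segment_all() walked the full bi_io_vec table using
     * bi_vcnt, which is 0 for a clone. */
    bio_for_each_segment(bv, bio, iter) {
            void *kaddr = kmap_atomic(bv.bv_page);

            /* ... generate or verify protection information for this
             * segment (omitted) ... */

            kunmap_atomic(kaddr);
    }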
-
James Morris authored
Merge tag 'keys-fixes-20141201' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs into for-linus
-
Dave Airlie authored
Introduced in b440bde7; however, it was added to the wrong function in nouveau. https://bugzilla.kernel.org/show_bug.cgi?id=86011 Cc: Bjorn Helgaas <bhelgaas@google.com> CC: stable@vger.kernel.org # v3.15+ Signed-off-by: Dave Airlie <airlied@redhat.com>
-
git://anongit.freedesktop.org/git/nouveau/linux-2.6
Dave Airlie authored
Just a couple of fixes for the fallout from the fence rework. * 'linux-3.18' of git://anongit.freedesktop.org/git/nouveau/linux-2.6: drm/nouveau/gf116: remove copy1 engine drm/nouveau: prevent stale fence->channel pointers, and protect with rcu drm/nouveau/fifo/g84-: ack non-stall interrupt before handling it
-
Ilia Mirkin authored
Indications are that no GF116's actually have a copy engine there, but actually have the decompression engine. This engine can be made to do copies, but that should be done separately. Unclear why this didn't turn up on all GF116's, but perhaps the non-mobile ones came with enough VRAM to not trigger ttm migrations in test scenarios. Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=85465 Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=59168 Cc: stable@vger.kernel.org Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Maarten Lankhorst authored
Tested-by: Alexandre Courbot <acourbot@nvidia.com> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Closes a very unlikely race that can occur if another NonStallInterrupt method passes between checking fences and acking the previous interrupt. With this change, the interrupt will re-fire under such conditions. Tested-by: Tobias Klausmann <tobias.johannes.klausmann@mni.thm.de> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4
Linus Torvalds authored
Pull ext4 bugfix from Ted Ts'o: "Fix an ext4 metadata checksum regression introduced in v3.18-rc3" * tag 'ext4_for_linus_urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4: jbd2: fix regression where we fail to initialize checksum seed when loading
-
Darrick J. Wong authored
When we're enabling journal features, we cannot use the predicate jbd2_journal_has_csum_v2or3() because we haven't yet set the sb feature flag fields! Moreover, we just finished loading the shash driver, so the test is unnecessary; calculate the seed always. Without this patch, we fail to initialize the checksum seed the first time we turn on journal_checksum, which means that all journal blocks written during that first mount are corrupt. Transactions written after the second mount will be fine, since the feature flag will be set in the journal superblock. xfstests generic/{034,321,322} are the regression tests. (This is important for 3.18.) Signed-off-by: Darrick J. Wong <darrick.wong@oracle.coM> Reported-by: Eric Whitney <enwlinux@gmail.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
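Per the message, the shape of the fix is to compute the seed unconditionally when the journal superblock is read instead of gating it on feature bits; a one-line sketch (fs/jbd2/journal.c, simplified):

    /* Always derive the checksum seed from the journal UUID at load
     * time, so the first journal_checksum mount does not write blocks
     * with an uninitialized seed. */
    journal->j_csum_seed = jbd2_chksum(journal, ~0, sb->s_uuid,
                                       sizeof(sb->s_uuid));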
-
- 01 Dec, 2014 6 commits
-
-
Thierry Reding authored
Commit 0b0b0893 ("of/pci: Fix the conversion of IO ranges into IO resources") changed how I/O resources are parsed from DT. Rather than containing the physical address of the I/O region, the addresses will now be in I/O address space. On Tegra the union of all ranges is used to expose a top-level memory-mapped resource for the PCI host bridge. This helps to make /proc/iomem more readable. Combining both of the above, the union would now include the I/O space region. This causes a regression on Tegra20, where the physical base address of the PCIe controller (and therefore of the union) is located at physical address 0x80000000. Since I/O space starts at 0, the union will now include all of system RAM which starts at 0x00000000. This commit fixes this by keeping two copies of the I/O range: one that represents the range in the CPU's physical address space, the other for the range in the I/O address space. This allows the translation setup within the driver to reuse the physical addresses. The code registering the I/O region with the PCI core uses both ranges to establish the mapping. Fixes: 0b0b0893 ("of/pci: Fix the conversion of IO ranges into IO resources") Reported-by: Marc Zyngier <marc.zyngier@arm.com> Tested-by: Marc Zyngier <marc.zyngier@arm.com> Suggested-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Arnd Bergmann <arnd@arndb.de>
-
David Howells authored
Since the keyring facility can be viewed as a cache (at least in some applications), the local expiration time on the key should probably be viewed as a 'needs updating after this time' property rather than an absolute 'anyone now wanting to use this object is out of luck' property. Since request_key() is the main interface for the usage of keys, this should update or replace an expired key rather than issuing EKEYEXPIRED if the local expiration has been reached (ie. it should refresh the cache). For absolute conditions where refreshing the cache probably doesn't help, the key can be negatively instantiated using KEYCTL_REJECT_KEY with EKEYEXPIRED given as the error to issue. This will still cause request_key() to return EKEYEXPIRED as that was explicitly set. In the future, if the key type has an update op available, we might want to upcall with the expired key and allow the upcall to update it. We would pass a different operation name (the first column in /etc/request-key.conf) to the request-key program. request_key() returning EKEYEXPIRED is causing an NFS problem which Chuck Lever describes thusly: After about 10 minutes, my NFSv4 functional tests fail because the ownership of the test files goes to "-2". Looking at /proc/keys shows that the id_resolv keys that map to my test user ID have expired. The ownership problem persists until the expired keys are purged from the keyring, and fresh keys are obtained. I bisected the problem to 3.13 commit b2a4df20 ("KEYS: Expand the capacity of a keyring"). This commit inadvertently changes the API contract of the internal function keyring_search_aux(). The root cause appears to be that b2a4df20 made "no state check" the default behavior. "No state check" means the keyring search iterator function skips checking the key's expiry timeout, and returns expired keys. request_key_and_link() depends on getting an -EAGAIN result code to know when to perform an upcall to refresh an expired key. This patch can be tested directly by: keyctl request2 user debug:fred a @s keyctl timeout %user:debug:fred 3 sleep 4 keyctl request2 user debug:fred a @s Without the patch, the last command gives error EKEYEXPIRED, but with the patch it gives a new key. Reported-by: Carl Hetherington <cth@carlh.net> Reported-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: David Howells <dhowells@redhat.com> Tested-by: Chuck Lever <chuck.lever@oracle.com>
-
David Howells authored
Simplify KEYRING_SEARCH_{NO,DO}_STATE_CHECK flags to be two variations of the same flag. They are effectively mutually exclusive and one or the other should be provided, but not both. Keyring cycle detection and key possession determination are the only things that set NO_STATE_CHECK, except that neither flag really does anything there because neither purpose makes use of the keyring_search_iterator() function, but rather provides their own. For cycle detection we definitely want to check inside of expired keyrings, just so that we don't create a cycle we can't get rid of. Revoked keyrings are cleared at revocation time and can't then be reused, so shouldn't be a problem either way. For possession determination, we *might* want to validate each keyring before searching it: do you possess a key that's hidden behind an expired or just plain inaccessible keyring? Currently, the answer is yes. Note that you cannot, however, possess a key behind a revoked keyring because they are cleared on revocation. keyring_search() sets DO_STATE_CHECK, which is correct. request_key_and_link() currently doesn't specify whether to check the key state or not - but it should set DO_STATE_CHECK. key_get_instantiation_authkey() also currently doesn't specify whether to check the key state or not - but it probably should also set DO_STATE_CHECK. Signed-off-by: David Howells <dhowells@redhat.com> Tested-by: Chuck Lever <chuck.lever@oracle.com>
-
David Howells authored
When a key description argument is imported into the kernel from userspace, as happens in add_key(), request_key(), KEYCTL_JOIN_SESSION_KEYRING, KEYCTL_SEARCH, the description is copied into a buffer up to PAGE_SIZE in size. PAGE_SIZE, however, is a variable quantity, depending on the arch. Fix this at 4096 instead (ie. 4095 plus a NUL termination) and define a constant (KEY_MAX_DESC_SIZE) to this end. When reading the description back with KEYCTL_DESCRIBE, a PAGE_SIZE internal buffer is allocated into which the information and description will be rendered. This means that the description will get truncated if an extremely long description has to be crammed into the buffer with the stringified information. There is no particular need to copy the description into the buffer, so just copy it directly to userspace in a separate operation. Reported-by: Christian Kastner <debian@kvr.at> Signed-off-by: David Howells <dhowells@redhat.com> Tested-by: Christian Kastner <debian@kvr.at>
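A sketch of the fixed-size import described above (the constant name is taken from the commit text; the wrapper function is illustrative):

    #define KEY_MAX_DESC_SIZE       4096    /* 4095 chars + NUL, independent of PAGE_SIZE */

    static char *sketch_get_key_description(const char __user *_description)
    {
            /* strndup_user() returns an ERR_PTR for descriptions longer
             * than 4095 characters (4096 bytes including the NUL). */
            return strndup_user(_description, KEY_MAX_DESC_SIZE);
    }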
-
Sebastian Ott authored
Commit eb7e7d76 "s390: Replace __get_cpu_var uses" broke machine check handling. We copy machine check information from per-cpu to a stack variable for local processing. Next we should zap the per-cpu variable, not the stack variable. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Acked-by: Christoph Lameter <cl@linux.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
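The shape of the fix (arch/s390/kernel/nmi.c, simplified sketch): copy the per-cpu machine-check info to the stack, then clear the per-cpu source rather than the local copy that is still needed:

    static DEFINE_PER_CPU(struct mcck_struct, cpu_mcck);

    /* inside the machine-check handler: */
    struct mcck_struct mcck;

    mcck = *this_cpu_ptr(&cpu_mcck);
    /* was: memset(&mcck, 0, sizeof(mcck)); -- which wiped the copy we
     * are about to act on and left the per-cpu data pending */
    memset(this_cpu_ptr(&cpu_mcck), 0, sizeof(mcck));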
-
Linus Torvalds authored
-