1. 27 Nov, 2015 18 commits
• Bluetooth: ath3k: Add support of AR3012 0cf3:817b device · 2b42ac30
      Dmitry Tunin authored
      commit 18e0afab upstream.
      
      T: Bus=04 Lev=02 Prnt=02 Port=04 Cnt=01 Dev#= 3 Spd=12 MxCh= 0
      D: Ver= 1.10 Cls=e0(wlcon) Sub=01 Prot=01 MxPS=64 #Cfgs= 1
      P: Vendor=0cf3 ProdID=817b Rev=00.02
      C: #Ifs= 2 Cfg#= 1 Atr=e0 MxPwr=100mA
      I: If#= 0 Alt= 0 #EPs= 3 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
      I: If#= 1 Alt= 0 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
      
BugLink: https://bugs.launchpad.net/bugs/1506615
Signed-off-by: Dmitry Tunin <hanipouspilot@gmail.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      2b42ac30
• Bluetooth: ath3k: Add new AR3012 0930:021c id · 467b9ca3
      Dmitry Tunin authored
      commit cd355ff0 upstream.
      
      This adapter works with the existing linux-firmware.
      
      T:  Bus=01 Lev=01 Prnt=01 Port=03 Cnt=02 Dev#=  3 Spd=12  MxCh= 0
      D:  Ver= 1.10 Cls=e0(wlcon) Sub=01 Prot=01 MxPS=64 #Cfgs=  1
      P:  Vendor=0930 ProdID=021c Rev=00.01
      C:  #Ifs= 2 Cfg#= 1 Atr=e0 MxPwr=100mA
      I:  If#= 0 Alt= 0 #EPs= 3 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
      I:  If#= 1 Alt= 0 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
      
BugLink: https://bugs.launchpad.net/bugs/1502781
Signed-off-by: Dmitry Tunin <hanipouspilot@gmail.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      467b9ca3
• ext4, jbd2: ensure entering into panic after recording an error in superblock · 08c89a61
      Daeho Jeong authored
      commit 4327ba52 upstream.
      
If an EXT4 filesystem utilizes JBD2 journaling and an error occurs, the
journaling is aborted first, the error number is recorded in the JBD2
superblock and, finally, the system enters the panic state when the
"errors=panic" option is used.  But, in rare cases, this sequence gets
twisted as in the figure below, and the system enters the panic state
(which means a system reset in a mobile environment) before the error
has been recorded in the journal superblock.  In that case, e2fsck
cannot recognize that a filesystem failure occurred in the previous run,
and the corruption is never fixed.
      
      Task A                        Task B
      ext4_handle_error()
      -> jbd2_journal_abort()
        -> __journal_abort_soft()
          -> __jbd2_journal_abort_hard()
          | -> journal->j_flags |= JBD2_ABORT;
          |
          |                         __ext4_abort()
          |                         -> jbd2_journal_abort()
          |                         | -> __journal_abort_soft()
          |                         |   -> if (journal->j_flags & JBD2_ABORT)
          |                         |           return;
          |                         -> panic()
          |
          -> jbd2_journal_update_sb_errno()
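
The fix closes this window by making the errors=panic path wait until the
error really is in the journal superblock. A minimal sketch of that guard,
assuming a journal flag that is set once jbd2_journal_update_sb_errno()
has completed (the flag name follows the upstream patch and is an
assumption here):

  /* Sketch only: panic only after some task has finished recording the
   * error number in the JBD2 superblock. */
  if (test_opt(sb, ERRORS_PANIC)) {
          if (EXT4_SB(sb)->s_journal &&
              !(EXT4_SB(sb)->s_journal->j_flags & JBD2_REC_ERR))
                  return; /* not on disk yet; the recording task will panic */
          panic("EXT4-fs (device %s): panic forced after error\n", sb->s_id);
  }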
Tested-by: Hobin Woo <hobin.woo@samsung.com>
Signed-off-by: Daeho Jeong <daeho.jeong@samsung.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      08c89a61
• Btrfs: fix truncation of compressed and inlined extents · 2a97932f
      Filipe Manana authored
      commit 0305cd5f upstream.
      
When truncating a file, consisting of an inline extent that is compressed,
to a smaller size, we did not discard (or make unusable) the data between
the new file size and the old file size, wasting metadata space and
allowing the truncated data to be leaked, plus the data corruption/loss
mentioned below.
We were also not correctly decrementing the number of bytes used by the
inode; we were setting it to zero, giving a wrong report for callers of
the stat(2) syscall. The fsck tool also reported an error about a mismatch
between the nbytes of the file and the real space used by the file.
      
      Now because we weren't discarding the truncated region of the file, it
      was possible for a caller of the clone ioctl to actually read the data
      that was truncated, allowing for a security breach without requiring root
      access to the system, using only standard filesystem operations. The
      scenario is the following:
      
         1) User A creates a file which consists of an inline and compressed
            extent with a size of 2000 bytes - the file is not accessible to
            any other users (no read, write or execution permission for anyone
            else);
      
         2) The user truncates the file to a size of 1000 bytes;
      
         3) User A makes the file world readable;
      
         4) User B creates a file consisting of an inline extent of 2000 bytes;
      
         5) User B issues a clone operation from user A's file into its own
            file (using a length argument of 0, clone the whole range);
      
   6) User B now gets to see the 1000 bytes that user A truncated from
      its file before it made the file world readable. User B also lost
      the bytes in the range [1000, 2000[ from its own file, but that
      might be ok if his/her intention was reading stale data from
      user A that was never supposed to be public.
      
      Note that this contrasts with the case where we truncate a file from 2000
      bytes to 1000 bytes and then truncate it back from 1000 to 2000 bytes. In
      this case reading any byte from the range [1000, 2000[ will return a value
      of 0x00, instead of the original data.
      
      This problem exists since the clone ioctl was added and happens both with
      and without my recent data loss and file corruption fixes for the clone
      ioctl (patch "Btrfs: fix file corruption and data loss after cloning
      inline extents").
      
So fix this by truncating the compressed inline extents as we do for the
non-compressed case, which involves decompressing (if the data isn't already
in the page cache), compressing the truncated version of the extent, writing
the compressed content into the inline extent and then truncating it.
      
      The following test case for fstests reproduces the problem. In order for
      the test to pass both this fix and my previous fix for the clone ioctl
      that forbids cloning a smaller inline extent into a larger one,
      which is titled "Btrfs: fix file corruption and data loss after cloning
      inline extents", are needed. Without that other fix the test fails in a
      different way that does not leak the truncated data, instead part of
      destination file gets replaced with zeroes (because the destination file
      has a larger inline extent than the source).
      
        seq=`basename $0`
        seqres=$RESULT_DIR/$seq
        echo "QA output created by $seq"
        tmp=/tmp/$$
        status=1	# failure is the default!
        trap "_cleanup; exit \$status" 0 1 2 3 15
      
        _cleanup()
        {
            rm -f $tmp.*
        }
      
        # get standard environment, filters and checks
        . ./common/rc
        . ./common/filter
      
        # real QA test starts here
        _need_to_be_root
        _supported_fs btrfs
        _supported_os Linux
        _require_scratch
        _require_cloner
      
        rm -f $seqres.full
      
        _scratch_mkfs >>$seqres.full 2>&1
        _scratch_mount "-o compress"
      
        # Create our test files. File foo is going to be the source of a clone operation
        # and consists of a single inline extent with an uncompressed size of 512 bytes,
        # while file bar consists of a single inline extent with an uncompressed size of
        # 256 bytes. For our test's purpose, it's important that file bar has an inline
        # extent with a size smaller than foo's inline extent.
        $XFS_IO_PROG -f -c "pwrite -S 0xa1 0 128"   \
                -c "pwrite -S 0x2a 128 384" \
                $SCRATCH_MNT/foo | _filter_xfs_io
        $XFS_IO_PROG -f -c "pwrite -S 0xbb 0 256" $SCRATCH_MNT/bar | _filter_xfs_io
      
        # Now durably persist all metadata and data. We do this to make sure that we get
        # on disk an inline extent with a size of 512 bytes for file foo.
        sync
      
        # Now truncate our file foo to a smaller size. Because it consists of a
        # compressed and inline extent, btrfs did not shrink the inline extent to the
        # new size (if the extent was not compressed, btrfs would shrink it to 128
        # bytes), it only updates the inode's i_size to 128 bytes.
        $XFS_IO_PROG -c "truncate 128" $SCRATCH_MNT/foo
      
        # Now clone foo's inline extent into bar.
        # This clone operation should fail with errno EOPNOTSUPP because the source
        # file consists only of an inline extent and the file's size is smaller than
        # the inline extent of the destination (128 bytes < 256 bytes). However the
        # clone ioctl was not prepared to deal with a file that has a size smaller
        # than the size of its inline extent (something that happens only for compressed
        # inline extents), resulting in copying the full inline extent from the source
        # file into the destination file.
        #
# Note that btrfs' clone operation for inline extents consists of removing the
# inline extent from the destination inode and copying the inline extent from
# the source inode into the destination inode, meaning that if the destination
        # inode's inline extent is larger (N bytes) than the source inode's inline
        # extent (M bytes), some bytes (N - M bytes) will be lost from the destination
        # file. Btrfs could copy the source inline extent's data into the destination's
        # inline extent so that we would not lose any data, but that's currently not
        # done due to the complexity that would be needed to deal with such cases
        # (specially when one or both extents are compressed), returning EOPNOTSUPP, as
        # it's normally not a very common case to clone very small files (only case
        # where we get inline extents) and copying inline extents does not save any
        # space (unlike for normal, non-inlined extents).
        $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/foo $SCRATCH_MNT/bar
      
        # Now because the above clone operation used to succeed, and due to foo's inline
# extent not being shrunk by the truncate operation, our file bar got the whole
        # inline extent copied from foo, making us lose the last 128 bytes from bar
        # which got replaced by the bytes in range [128, 256[ from foo before foo was
        # truncated - in other words, data loss from bar and being able to read old and
        # stale data from foo that should not be possible to read anymore through normal
        # filesystem operations. Contrast with the case where we truncate a file from a
        # size N to a smaller size M, truncate it back to size N and then read the range
        # [M, N[, we should always get the value 0x00 for all the bytes in that range.
      
        # We expected the clone operation to fail with errno EOPNOTSUPP and therefore
# not modify our file bar's data/metadata. So its content should be 256 bytes
        # long with all bytes having the value 0xbb.
        #
        # Without the btrfs bug fix, the clone operation succeeded and resulted in
        # leaking truncated data from foo, the bytes that belonged to its range
        # [128, 256[, and losing data from bar in that same range. So reading the
        # file gave us the following content:
        #
        # 0000000 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1
        # *
        # 0000200 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a
        # *
        # 0000400
        echo "File bar's content after the clone operation:"
        od -t x1 $SCRATCH_MNT/bar
      
        # Also because the foo's inline extent was not shrunk by the truncate
# operation, btrfs' fsck, which is run by the fstests framework every time a
        # test completes, failed reporting the following error:
        #
        #  root 5 inode 257 errors 400, nbytes wrong
      
        status=0
        exit
Signed-off-by: Filipe Manana <fdmanana@suse.com>
      [bwh: Backported to 3.2:
       - Adjust parameters to btrfs_truncate_page() and btrfs_truncate_item()
       - Pass transaction pointer into truncate_inline_extent()
       - Add prototype of btrfs_truncate_page()
       - s/test_bit(BTRFS_ROOT_REF_COWS, &root->state)/root->ref_cows/
       - Keep using BUG_ON() for other error cases, as there is no
         btrfs_abort_transaction()
       - Adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      2a97932f
• Btrfs: don't use ram_bytes for uncompressed inline items · 85c36cd4
      Chris Mason authored
      commit 514ac8ad upstream.
      
      If we truncate an uncompressed inline item, ram_bytes isn't updated to reflect
the new size.  The fix uses the size directly from the item header when
      reading uncompressed inlines, and also fixes truncate to update the
      size as it goes.
Reported-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
      [bwh: Backported to 3.2:
       - Don't use btrfs_map_token API
       - There are fewer callers of btrfs_file_extent_inline_len() to change
       - Adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      85c36cd4
• ARM: pxa: remove incorrect __init annotation on pxa27x_set_pwrmode · df2ef64a
      Arnd Bergmann authored
      commit 54c09889 upstream.
      
      The z2 machine calls pxa27x_set_pwrmode() in order to power off
      the machine, but this function gets discarded early at boot because
      it is marked __init, as pointed out by kbuild:
      
      WARNING: vmlinux.o(.text+0x145c4): Section mismatch in reference from the function z2_power_off() to the function .init.text:pxa27x_set_pwrmode()
      The function z2_power_off() references
      the function __init pxa27x_set_pwrmode().
      This is often because z2_power_off lacks a __init
      annotation or the annotation of pxa27x_set_pwrmode is wrong.
      
      This removes the __init section modifier to fix rebooting and the
      build error.
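
In code terms the change is just dropping the section annotation so the
symbol survives past boot; a sketch, with the signature assumed from the
3.2 sources:

  /* Before: placed in .init.text and freed after boot, so the run-time
   * call from z2_power_off() pointed at released memory. */
  int __init pxa27x_set_pwrmode(unsigned int mode);

  /* After: resident for the whole lifetime of the kernel. */
  int pxa27x_set_pwrmode(unsigned int mode);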
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: ba4a90a6 ("ARM: pxa/z2: fix building error of pxa27x_cpu_suspend() no longer available")
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      df2ef64a
• iommu/vt-d: Fix ATSR handling for Root-Complex integrated endpoints · a9f83a62
      David Woodhouse authored
      commit d14053b3 upstream.
      
      The VT-d specification says that "Software must enable ATS on endpoint
      devices behind a Root Port only if the Root Port is reported as
      supporting ATS transactions."
      
      We walk up the tree to find a Root Port, but for integrated devices we
      don't find one — we get to the host bridge. In that case we *should*
      allow ATS. Currently we don't, which means that we are incorrectly
      failing to use ATS for the integrated graphics. Fix that.
      
      We should never break out of this loop "naturally" with bus==NULL,
      since we'll always find bridge==NULL in that case (and now return 1).
      
      So remove the check for (!bridge) after the loop, since it can never
      happen. If it did, it would be worthy of a BUG_ON(!bridge). But since
      it'll oops anyway in that case, that'll do just as well.
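
A paraphrased sketch of the walk described above (helper names follow
mainline; the 3.2 backport's context differs, per the note below):

  for (bus = dev->bus; bus; bus = bus->parent) {
          bridge = bus->self;
          /* Reached the host bridge without meeting a Root Port: this is
           * a Root-Complex integrated endpoint, so ATS is allowed. */
          if (!bridge)
                  return 1;
          /* Behind a non-PCIe or PCIe-to-PCI bridge: no ATS. */
          if (!pci_is_pcie(bridge) ||
              pci_pcie_type(bridge) == PCI_EXP_TYPE_PCI_BRIDGE)
                  return 0;
          /* Found the Root Port; match it against the ATSR units below. */
          if (pci_pcie_type(bridge) == PCI_EXP_TYPE_ROOT_PORT)
                  break;
  }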
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
[bwh: Backported to 3.2:
 - Adjust context
 - There's no (!bridge) check to remove]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      a9f83a62
• Btrfs: fix file corruption and data loss after cloning inline extents · 16f61ceb
      Filipe Manana authored
      commit 8039d87d upstream.
      
Currently the clone ioctl allows cloning an inline extent from one file
to another that already has other (non-inlined) extents. This is a problem
because btrfs is not designed to deal with files having both inline and
regular extents: if a file has an inline extent then it must be the only
extent in the file and must start at file offset 0. Having a file with an inline
      extent followed by regular extents results in EIO errors when doing reads
      or writes against the first 4K of the file.
      
      Also, the clone ioctl allows one to lose data if the source file consists
      of a single inline extent, with a size of N bytes, and the destination
      file consists of a single inline extent with a size of M bytes, where we
      have M > N. In this case the clone operation removes the inline extent
      from the destination file and then copies the inline extent from the
      source file into the destination file - we lose the M - N bytes from the
      destination file, a read operation will get the value 0x00 for any bytes
in the range [N, M] (the destination inode's i_size remained as M,
      that's why we can read past N bytes).
      
      So fix this by not allowing such destructive operations to happen and
      return errno EOPNOTSUPP to user space.
      
      Currently the fstest btrfs/035 tests the data loss case but it totally
      ignores this - i.e. expects the operation to succeed and does not check
that we got data loss.
      
      The following test case for fstests exercises all these cases that result
      in file corruption and data loss:
      
        seq=`basename $0`
        seqres=$RESULT_DIR/$seq
        echo "QA output created by $seq"
        tmp=/tmp/$$
        status=1	# failure is the default!
        trap "_cleanup; exit \$status" 0 1 2 3 15
      
        _cleanup()
        {
            rm -f $tmp.*
        }
      
        # get standard environment, filters and checks
        . ./common/rc
        . ./common/filter
      
        # real QA test starts here
        _need_to_be_root
        _supported_fs btrfs
        _supported_os Linux
        _require_scratch
        _require_cloner
        _require_btrfs_fs_feature "no_holes"
        _require_btrfs_mkfs_feature "no-holes"
      
        rm -f $seqres.full
      
        test_cloning_inline_extents()
        {
            local mkfs_opts=$1
            local mount_opts=$2
      
            _scratch_mkfs $mkfs_opts >>$seqres.full 2>&1
            _scratch_mount $mount_opts
      
            # File bar, the source for all the following clone operations, consists
            # of a single inline extent (50 bytes).
            $XFS_IO_PROG -f -c "pwrite -S 0xbb 0 50" $SCRATCH_MNT/bar \
                | _filter_xfs_io
      
            # Test cloning into a file with an extent (non-inlined) where the
            # destination offset overlaps that extent. It should not be possible to
            # clone the inline extent from file bar into this file.
            $XFS_IO_PROG -f -c "pwrite -S 0xaa 0K 16K" $SCRATCH_MNT/foo \
                | _filter_xfs_io
            $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo
      
            # Doing IO against any range in the first 4K of the file should work.
            # Due to a past clone ioctl bug which allowed cloning the inline extent,
            # these operations resulted in EIO errors.
            echo "File foo data after clone operation:"
            # All bytes should have the value 0xaa (clone operation failed and did
            # not modify our file).
            od -t x1 $SCRATCH_MNT/foo
            $XFS_IO_PROG -c "pwrite -S 0xcc 0 100" $SCRATCH_MNT/foo | _filter_xfs_io
      
            # Test cloning the inline extent against a file which has a hole in its
            # first 4K followed by a non-inlined extent. It should not be possible
            # as well to clone the inline extent from file bar into this file.
            $XFS_IO_PROG -f -c "pwrite -S 0xdd 4K 12K" $SCRATCH_MNT/foo2 \
                | _filter_xfs_io
            $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo2
      
            # Doing IO against any range in the first 4K of the file should work.
            # Due to a past clone ioctl bug which allowed cloning the inline extent,
            # these operations resulted in EIO errors.
            echo "File foo2 data after clone operation:"
            # All bytes should have the value 0x00 (clone operation failed and did
            # not modify our file).
            od -t x1 $SCRATCH_MNT/foo2
            $XFS_IO_PROG -c "pwrite -S 0xee 0 90" $SCRATCH_MNT/foo2 | _filter_xfs_io
      
            # Test cloning the inline extent against a file which has a size of zero
            # but has a prealloc extent. It should not be possible as well to clone
            # the inline extent from file bar into this file.
            $XFS_IO_PROG -f -c "falloc -k 0 1M" $SCRATCH_MNT/foo3 | _filter_xfs_io
            $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo3
      
            # Doing IO against any range in the first 4K of the file should work.
            # Due to a past clone ioctl bug which allowed cloning the inline extent,
            # these operations resulted in EIO errors.
            echo "First 50 bytes of foo3 after clone operation:"
            # Should not be able to read any bytes, file has 0 bytes i_size (the
            # clone operation failed and did not modify our file).
            od -t x1 $SCRATCH_MNT/foo3
            $XFS_IO_PROG -c "pwrite -S 0xff 0 90" $SCRATCH_MNT/foo3 | _filter_xfs_io
      
            # Test cloning the inline extent against a file which consists of a
            # single inline extent that has a size not greater than the size of
            # bar's inline extent (40 < 50).
            # It should be possible to do the extent cloning from bar to this file.
            $XFS_IO_PROG -f -c "pwrite -S 0x01 0 40" $SCRATCH_MNT/foo4 \
                | _filter_xfs_io
            $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo4
      
            # Doing IO against any range in the first 4K of the file should work.
            echo "File foo4 data after clone operation:"
            # Must match file bar's content.
            od -t x1 $SCRATCH_MNT/foo4
            $XFS_IO_PROG -c "pwrite -S 0x02 0 90" $SCRATCH_MNT/foo4 | _filter_xfs_io
      
            # Test cloning the inline extent against a file which consists of a
            # single inline extent that has a size greater than the size of bar's
            # inline extent (60 > 50).
            # It should not be possible to clone the inline extent from file bar
            # into this file.
            $XFS_IO_PROG -f -c "pwrite -S 0x03 0 60" $SCRATCH_MNT/foo5 \
                | _filter_xfs_io
            $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo5
      
            # Reading the file should not fail.
            echo "File foo5 data after clone operation:"
            # Must have a size of 60 bytes, with all bytes having a value of 0x03
            # (the clone operation failed and did not modify our file).
            od -t x1 $SCRATCH_MNT/foo5
      
            # Test cloning the inline extent against a file which has no extents but
            # has a size greater than bar's inline extent (16K > 50).
            # It should not be possible to clone the inline extent from file bar
            # into this file.
            $XFS_IO_PROG -f -c "truncate 16K" $SCRATCH_MNT/foo6 | _filter_xfs_io
            $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo6
      
            # Reading the file should not fail.
            echo "File foo6 data after clone operation:"
            # Must have a size of 16K, with all bytes having a value of 0x00 (the
            # clone operation failed and did not modify our file).
            od -t x1 $SCRATCH_MNT/foo6
      
            # Test cloning the inline extent against a file which has no extents but
            # has a size not greater than bar's inline extent (30 < 50).
            # It should be possible to clone the inline extent from file bar into
            # this file.
            $XFS_IO_PROG -f -c "truncate 30" $SCRATCH_MNT/foo7 | _filter_xfs_io
            $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo7
      
            # Reading the file should not fail.
            echo "File foo7 data after clone operation:"
            # Must have a size of 50 bytes, with all bytes having a value of 0xbb.
            od -t x1 $SCRATCH_MNT/foo7
      
            # Test cloning the inline extent against a file which has a size not
            # greater than the size of bar's inline extent (20 < 50) but has
            # a prealloc extent that goes beyond the file's size. It should not be
            # possible to clone the inline extent from bar into this file.
            $XFS_IO_PROG -f -c "falloc -k 0 1M" \
                            -c "pwrite -S 0x88 0 20" \
                            $SCRATCH_MNT/foo8 | _filter_xfs_io
            $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo8
      
            echo "File foo8 data after clone operation:"
            # Must have a size of 20 bytes, with all bytes having a value of 0x88
            # (the clone operation did not modify our file).
            od -t x1 $SCRATCH_MNT/foo8
      
            _scratch_unmount
        }
      
        echo -e "\nTesting without compression and without the no-holes feature...\n"
        test_cloning_inline_extents
      
        echo -e "\nTesting with compression and without the no-holes feature...\n"
        test_cloning_inline_extents "" "-o compress"
      
        echo -e "\nTesting without compression and with the no-holes feature...\n"
        test_cloning_inline_extents "-O no-holes" ""
      
        echo -e "\nTesting with compression and with the no-holes feature...\n"
        test_cloning_inline_extents "-O no-holes" "-o compress"
      
        status=0
        exit
Signed-off-by: Filipe Manana <fdmanana@suse.com>
      [bwh: Backported to 3.2:
       - Adjust parameters to btrfs_drop_extents()
       - Drop use of ASSERT()
       - Keep using BUG_ON() for other error cases, as there is no
         btrfs_abort_transaction()
       - Adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      16f61ceb
• Btrfs: added helper btrfs_next_item() · e83c48cb
      Jan Schmidt authored
      commit c7d22a3c upstream.
      
      btrfs_next_item() makes the btrfs path point to the next item, crossing leaf
      boundaries if needed.
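
The helper's shape, sketched from that description (matching the upstream
definition as best I can tell; it sits alongside btrfs_next_leaf()):

  /* Advance the path to the next item, moving to the next leaf when the
   * current one is exhausted. */
  static inline int btrfs_next_item(struct btrfs_root *root, struct btrfs_path *p)
  {
          ++p->slots[0];
          if (p->slots[0] >= btrfs_header_nritems(p->nodes[0]))
                  return btrfs_next_leaf(root, p);

          return 0;
  }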
Signed-off-by: Arne Jansen <sensille@gmx.net>
Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
[bwh: Dependency of the following fix]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      e83c48cb
• packet: fix match_fanout_group() · 9914546b
      Eric Dumazet authored
      commit 161642e2 upstream.
      
Recent TCP listener patches exposed a prior af_packet bug:
match_fanout_group() blindly assumes it is always safe
to cast sk to a packet socket in order to compare its fanout with
af_packet_priv.

But SYNACK packets can be sent while attached to a request_sock, which
is smaller than a "struct sock".

We can read non-existent memory and crash.
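
A hedged sketch of the kind of guard this implies: refuse to treat sk as a
packet socket unless it really is one (the fanout comparison mirrors the
pre-existing code and is an assumption here):

  static bool match_fanout_group(struct packet_type *ptype, struct sock *sk)
  {
          /* SYNACKs may come from a request_sock, which is smaller than
           * struct sock; never dereference it as a packet socket. */
          if (sk->sk_family != PF_PACKET)
                  return false;

          return ptype->af_packet_priv == pkt_sk(sk)->fanout;
  }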
      
      Fixes: c0de08d0 ("af_packet: don't emit packet on orig fanout group")
      Fixes: ca6fb065 ("tcp: attach SYNACK messages to request sockets instead of listener")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: Eric Leblond <eric@regit.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      9914546b
• devres: fix a for loop bounds check · e7102453
      Dan Carpenter authored
      commit 1f35d04a upstream.
      
      The iomap[] array has PCIM_IOMAP_MAX (6) elements and not
      DEVICE_COUNT_RESOURCE (16).  This bug was found using a static checker.
      It may be that the "if (!(mask & (1 << i)))" check means we never
      actually go past the end of the array in real life.
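
A sketch of the corrected bound (loop body elided; the mask test is quoted
from the description above):

  /* iomap[] has only PCIM_IOMAP_MAX entries, so that is the loop bound,
   * not DEVICE_COUNT_RESOURCE. */
  for (i = 0; i < PCIM_IOMAP_MAX; i++) {
          if (!(mask & (1 << i)))
                  continue;
          /* ... iounmap iomap[i] and release the corresponding BAR ... */
  }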
      
      Fixes: ec04b075 ('iomap: implement pcim_iounmap_regions()')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      e7102453
• mtd: mtdpart: fix add_mtd_partitions error path · f9ac3882
      Boris BREZILLON authored
      commit e5bae867 upstream.
      
      If we fail to allocate a partition structure in the middle of the partition
      creation process, the already allocated partitions are never removed, which
      means they are still present in the partition list and their resources are
      never freed.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      f9ac3882
• mwifiex: fix mwifiex_rdeeprom_read() · a6a8977e
      Dan Carpenter authored
      commit 1f9c6e1b upstream.
      
      There were several bugs here.
      
      1)  The done label was in the wrong place so we didn't copy any
          information out when there was no command given.
      
      2)  We were using PAGE_SIZE as the size of the buffer instead of
          "PAGE_SIZE - pos".
      
      3)  snprintf() returns the number of characters that would have been
          printed if there were enough space.  If there was not enough space
          (and we had fixed the memory corruption bug #2) then it would result
          in an information leak when we do simple_read_from_buffer().  I've
          changed it to use scnprintf() instead.
      
      I also removed the initialization at the start of the function, because
      I thought it made the code a little more clear.
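
A sketch of the pattern points 2 and 3 call for: bound each print by the
space actually left, and let scnprintf() report what was really written
(the format string and field names here are illustrative):

  /* scnprintf() never reports more than the space it was given, so pos
   * can never run past the end of the page-sized buffer. */
  pos += scnprintf(buf + pos, PAGE_SIZE - pos,
                   "offset=%d  bytes=%d\n", offset, bytes);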
      
      Fixes: 5e6e3a92 ('wireless: mwifiex: initial commit for Marvell mwifiex driver')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Amitkumar Karwar <akarwar@marvell.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      a6a8977e
• wm831x_power: Use IRQF_ONESHOT to request threaded IRQs · 7293a7e1
      Valentin Rothberg authored
      commit 90adf98d upstream.
      
      Since commit 1c6c6952 ("genirq: Reject bogus threaded irq requests")
      threaded IRQs without a primary handler need to be requested with
      IRQF_ONESHOT, otherwise the request will fail.
      
      scripts/coccinelle/misc/irqf_oneshot.cocci detected this issue.
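
The shape of the fix is a one-flag change at each request site; a sketch,
with the handler and IRQ name purely illustrative:

  /* No primary handler (NULL), so genirq requires IRQF_ONESHOT to keep
   * the line masked until the threaded handler has run. */
  ret = request_threaded_irq(irq, NULL, wm831x_power_irq,
                             IRQF_TRIGGER_RISING | IRQF_ONESHOT,
                             "wm831x power", power);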
      
      Fixes: b5874f33 ("wm831x_power: Use genirq")
Signed-off-by: Valentin Rothberg <valentinrothberg@gmail.com>
Signed-off-by: Sebastian Reichel <sre@kernel.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      7293a7e1
• HID: core: Avoid uninitialized buffer access · 604bfd00
      Richard Purdie authored
      commit 79b568b9 upstream.
      
      hid_connect adds various strings to the buffer but they're all
      conditional. You can find circumstances where nothing would be written
      to it but the kernel will still print the supposedly empty buffer with
      printk. This leads to corruption on the console/in the logs.
      
      Ensure buf is initialized to an empty string.
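
A minimal sketch of that initialization (the buffer size is illustrative):

  char buf[64] = "";  /* valid empty string even if no strlcat() below fires */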
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
      [dvhart: Initialize string to "" rather than assign buf[0] = NULL;]
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: linux-input@vger.kernel.org
Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      604bfd00
• mac80211: fix driver RSSI event calculations · adc82592
      Johannes Berg authored
      commit 8ec6d978 upstream.
      
The ifmgd->ave_beacon_signal value cannot be taken as is for
comparisons; it must first be scaled back down, since it is stored
with extra precision for better accuracy of the EWMA calculations.
Not doing so led to invalid driver RSSI events. Fix the used value.
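
A hedged sketch of the corrected comparison; the fixed-point divisor and
the threshold field are assumptions based on mainline, not taken from this
changelog:

  /* ave_beacon_signal is kept in fixed point for EWMA accuracy; scale it
   * back to a plain signal level before comparing with the driver's
   * configured threshold. */
  int sig = ifmgd->ave_beacon_signal / 16;
  bool below = sig < ifmgd->rssi_min_thold;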
      
      Fixes: 615f7b9b ("mac80211: add driver RSSI threshold events")
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      adc82592
• PCI: Use function 0 VPD for identical functions, regular VPD for others · b2af40d1
      Alex Williamson authored
      commit da2d03ea upstream.
      
      932c435c ("PCI: Add dev_flags bit to access VPD through function 0")
      added PCI_DEV_FLAGS_VPD_REF_F0.  Previously, we set the flag on every
      non-zero function of quirked devices.  If a function turned out to be
      different from function 0, i.e., it had a different class, vendor ID, or
      device ID, the flag remained set but we didn't make VPD accessible at all.
      
      Flip this around so we only set PCI_DEV_FLAGS_VPD_REF_F0 for functions that
      are identical to function 0, and allow regular VPD access for any other
      functions.
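
A sketch of the flipped condition, assuming f0 is the already-located
function 0 of the same slot (the surrounding quirk plumbing is omitted):

  /* Only route VPD through function 0 when this function really is an
   * identical sibling of function 0; otherwise leave normal VPD access. */
  if (f0->class == dev->class &&
      f0->vendor == dev->vendor && f0->device == dev->device)
          dev->dev_flags |= PCI_DEV_FLAGS_VPD_REF_F0;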
      
      [bhelgaas: changelog, stable tag]
      Fixes: 932c435c ("PCI: Add dev_flags bit to access VPD through function 0")
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Bjorn Helgaas <helgaas@kernel.org>
Acked-by: Myron Stowe <myron.stowe@redhat.com>
Acked-by: Mark Rustad <mark.d.rustad@intel.com>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      b2af40d1
• PCI: Fix devfn for VPD access through function 0 · 991e923f
      Alex Williamson authored
      commit 9d924075 upstream.
      
      Commit 932c435c ("PCI: Add dev_flags bit to access VPD through function
      0") passes PCI_SLOT(devfn) for the devfn parameter of pci_get_slot().
      Generally this works because we're fairly well guaranteed that a PCIe
      device is at slot address 0, but for the general case, including
      conventional PCI, it's incorrect.  We need to get the slot and then convert
      it back into a devfn.
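
A sketch of the corrected lookup: rebuild a devfn for function 0 of the
same slot instead of handing pci_get_slot() a bare slot number:

  /* function 0 of the same slot: keep the slot, zero the function number */
  struct pci_dev *f0 = pci_get_slot(dev->bus,
                                    PCI_DEVFN(PCI_SLOT(dev->devfn), 0));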
      
      Fixes: 932c435c ("PCI: Add dev_flags bit to access VPD through function 0")
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Bjorn Helgaas <helgaas@kernel.org>
Acked-by: Myron Stowe <myron.stowe@redhat.com>
Acked-by: Mark Rustad <mark.d.rustad@intel.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      991e923f
  2. 17 Nov, 2015 22 commits
• Linux 3.2.73 · ef0d3d06
      Ben Hutchings authored
      ef0d3d06
• KEYS: Fix crash when attempt to garbage collect an uninstantiated keyring · a6826ecb
      David Howells authored
      commit f05819df upstream.
      
      The following sequence of commands:
      
          i=`keyctl add user a a @s`
          keyctl request2 keyring foo bar @t
          keyctl unlink $i @s
      
      tries to invoke an upcall to instantiate a keyring if one doesn't already
      exist by that name within the user's keyring set.  However, if the upcall
      fails, the code sets keyring->type_data.reject_error to -ENOKEY or some
      other error code.  When the key is garbage collected, the key destroy
      function is called unconditionally and keyring_destroy() uses list_empty()
      on keyring->type_data.link - which is in a union with reject_error.
      Subsequently, the kernel tries to unlink the keyring from the keyring names
      list - which oopses like this:
      
      	BUG: unable to handle kernel paging request at 00000000ffffff8a
      	IP: [<ffffffff8126e051>] keyring_destroy+0x3d/0x88
      	...
      	Workqueue: events key_garbage_collector
      	...
      	RIP: 0010:[<ffffffff8126e051>] keyring_destroy+0x3d/0x88
      	RSP: 0018:ffff88003e2f3d30  EFLAGS: 00010203
      	RAX: 00000000ffffff82 RBX: ffff88003bf1a900 RCX: 0000000000000000
      	RDX: 0000000000000000 RSI: 000000003bfc6901 RDI: ffffffff81a73a40
      	RBP: ffff88003e2f3d38 R08: 0000000000000152 R09: 0000000000000000
      	R10: ffff88003e2f3c18 R11: 000000000000865b R12: ffff88003bf1a900
      	R13: 0000000000000000 R14: ffff88003bf1a908 R15: ffff88003e2f4000
      	...
      	CR2: 00000000ffffff8a CR3: 000000003e3ec000 CR4: 00000000000006f0
      	...
      	Call Trace:
      	 [<ffffffff8126c756>] key_gc_unused_keys.constprop.1+0x5d/0x10f
      	 [<ffffffff8126ca71>] key_garbage_collector+0x1fa/0x351
      	 [<ffffffff8105ec9b>] process_one_work+0x28e/0x547
      	 [<ffffffff8105fd17>] worker_thread+0x26e/0x361
      	 [<ffffffff8105faa9>] ? rescuer_thread+0x2a8/0x2a8
      	 [<ffffffff810648ad>] kthread+0xf3/0xfb
      	 [<ffffffff810647ba>] ? kthread_create_on_node+0x1c2/0x1c2
      	 [<ffffffff815f2ccf>] ret_from_fork+0x3f/0x70
      	 [<ffffffff810647ba>] ? kthread_create_on_node+0x1c2/0x1c2
      
      Note the value in RAX.  This is a 32-bit representation of -ENOKEY.
      
      The solution is to only call ->destroy() if the key was successfully
      instantiated.
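
A hedged sketch of that rule as it would look in the garbage collector
(flag names follow mainline and are assumptions for 3.2):

  /* Only instantiated (and non-negative) keys have type payload to
   * destroy; an uninstantiated keyring's union member holds
   * reject_error instead of the name-list link. */
  if (test_bit(KEY_FLAG_INSTANTIATED, &key->flags) &&
      !test_bit(KEY_FLAG_NEGATIVE, &key->flags) &&
      key->type->destroy)
          key->type->destroy(key);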
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Dmitry Vyukov <dvyukov@google.com>
[carnil: Backported for 3.2: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      a6826ecb
• KEYS: Fix race between key destruction and finding a keyring by name · 650f6aa8
      David Howells authored
      commit 94c4554b upstream.
      
      There appears to be a race between:
      
       (1) key_gc_unused_keys() which frees key->security and then calls
           keyring_destroy() to unlink the name from the name list
      
       (2) find_keyring_by_name() which calls key_permission(), thus accessing
           key->security, on a key before checking to see whether the key usage is 0
           (ie. the key is dead and might be cleaned up).
      
      Fix this by calling ->destroy() before cleaning up the core key data -
      including key->security.
Reported-by: Petr Matousek <pmatouse@redhat.com>
Signed-off-by: David Howells <dhowells@redhat.com>
[carnil: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      650f6aa8
• KVM: x86: work around infinite loop in microcode when #AC is delivered · 3553e5d3
      Eric Northup authored
      commit 54a20552 upstream.
      
      It was found that a guest can DoS a host by triggering an infinite
      stream of "alignment check" (#AC) exceptions.  This causes the
      microcode to enter an infinite loop where the core never receives
      another interrupt.  The host kernel panics pretty quickly due to the
      effects (CVE-2015-5307).
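
A hedged sketch of the workaround's two halves: intercept #AC via the
exception bitmap and reflect it straight back into the guest. AC_VECTOR
comes from the backport note below; the wrapper functions shown are
illustrative, not the actual KVM code:

  /* 1) Always intercept alignment-check exceptions. */
  static u32 update_exception_bitmap_sketch(u32 eb)
  {
          return eb | (1u << AC_VECTOR);
  }

  /* 2) On exit, queue #AC back into the guest instead of letting the
   *    vCPU spin in the microcode loop. */
  static int handle_ac_sketch(struct kvm_vcpu *vcpu, u32 error_code)
  {
          kvm_queue_exception_e(vcpu, AC_VECTOR, error_code);
          return 1;
  }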
Signed-off-by: Eric Northup <digitaleric@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[bwh: Backported to 3.2:
 - Add definition of AC_VECTOR
 - Adjust filename, context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      3553e5d3
• Failing to send a CLOSE if file is opened WRONLY and server reboots on a 4.x mount · e94e60d8
      Olga Kornievskaia authored
      commit a41cbe86 upstream.
      
      A test case is as the description says:
      open(foobar, O_WRONLY);
      sleep()  --> reboot the server
      close(foobar)
      
The bug is that in nfs4state.c, in nfs4_reclaim_open_state(), a few
lines before going to restart, there is
clear_bit(NFS4CLNT_RECLAIM_NOGRACE, &state->flags).

NFS4CLNT_RECLAIM_NOGRACE is a flag for the client state, not the open
owner states. The value of NFS4CLNT_RECLAIM_NOGRACE is 4, which is the
value of NFS_O_WRONLY_STATE in nfs4_state->flags. So clearing it wipes
out that state, and when we go to close the file, "call_close" doesn't
get set because the state flag is not set, so the CLOSE never goes on
the wire.
Signed-off-by: Olga Kornievskaia <aglo@umich.edu>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      e94e60d8
• asix: Do full reset during ax88772_bind · 11eea7a9
      Charles Keepax authored
      [ Upstream commit 436c2a50 ]
      
      commit 3cc81d85 ("asix: Don't reset PHY on if_up for ASIX 88772")
      causes the ethernet on Arndale to no longer function. This appears to
      be because the Arndale ethernet requires a full reset before it will
      function correctly, however simply reverting the above patch causes
      problems with ethtool settings getting reset.
      
      It seems the problem is that the ethernet is not properly reset during
      bind, and indeed the code in ax88772_bind that resets the device is a
      very small subset of the actual ax88772_reset function. This patch uses
      ax88772_reset in place of the existing reset code in ax88772_bind which
      removes some code duplication and fixes the ethernet on Arndale.
      
      It is still possible that the original patch causes some issues with
      suspend and resume but that seems like a separate issue and I haven't
      had a chance to test that yet.
Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
Tested-by: Riku Voipio <riku.voipio@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
[bwh: Backported to 3.2: adjust filename]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      11eea7a9
• asix: Don't reset PHY on if_up for ASIX 88772 · 41700e5b
      Michel Stam authored
      [ Upstream commit 3cc81d85 ]
      
I've noticed that every time the interface is set to 'up', the kernel
reports that the link speed is set to 100 Mbps/Full Duplex, even
      when ethtool is used to set autonegotiation to 'off', half
      duplex, 10 Mbps.
      It can be tested by:
       ifconfig eth0 down
       ethtool -s eth0 autoneg off speed 10 duplex half
       ifconfig eth0 up
      
      Then checking 'dmesg' for the link speed.
Signed-off-by: Michel Stam <m.stam@fugro.nl>
Signed-off-by: David S. Miller <davem@davemloft.net>
[bwh: Backported to 3.2: adjust filename]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      41700e5b
• ethtool: Use kcalloc instead of kmalloc for ethtool_get_strings · 68c3e59a
      Joe Perches authored
      [ Upstream commit 077cb37f ]
      
It seems that kernel memory can leak into userspace via a
kmalloc, ethtool_get_strings, then copy_to_user sequence.

Avoid this by using kcalloc to zero-fill the copied buffer.
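
A sketch of the allocation change (the GFP flag is an assumption):

  /* Zero-filled, so any slots the driver does not write cannot leak old
   * kernel memory to the later copy_to_user(). */
  data = kcalloc(gstrings.len, ETH_GSTRING_LEN, GFP_USER);
  if (!data)
          return -ENOMEM;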
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      68c3e59a
• skbuff: Fix skb checksum partial check. · a5e14d9f
      Pravin B Shelar authored
      [ Upstream commit 31b33dfb ]
      
An earlier patch, 6ae459bd, tried to detect a void checksum-partial
skb by comparing the pull length to the checksum offset. But that does
not work for all cases, since the checksum offset depends on
updates to skb->data.

The following patch fixes it by validating the checksum start offset
after the skb->data pointer is updated. A negative checksum
start offset means there is no need to checksum.
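
A sketch of that check, assuming the standard skb_checksum_start_offset()
helper:

  /* After skb->data has been advanced past the outer header, a negative
   * start offset means the partial checksum no longer covers this skb. */
  if (skb->ip_summed == CHECKSUM_PARTIAL &&
      skb_checksum_start_offset(skb) < 0)
          skb->ip_summed = CHECKSUM_NONE;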
      
      Fixes: 6ae459bd ("skbuff: Fix skb checksum flag on skb pull")
Reported-by: Andrew Vagin <avagin@odin.com>
Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      a5e14d9f
• skbuff: Fix skb checksum flag on skb pull · c3321c4e
      Pravin B Shelar authored
      [ Upstream commit 6ae459bd ]
      
A VXLAN device can receive an skb with checksum partial. But the checksum
offset could be in the outer header, which is pulled on receive. This results
in a negative checksum offset for the skb. Such an skb can cause the assert
failure in skb_checksum_help(). The following patch fixes the bug by setting
checksum-none while pulling the outer header.
      
      Following is the kernel panic msg from old kernel hitting the bug.
      
      ------------[ cut here ]------------
      kernel BUG at net/core/dev.c:1906!
      RIP: 0010:[<ffffffff81518034>] skb_checksum_help+0x144/0x150
      Call Trace:
      <IRQ>
      [<ffffffffa0164c28>] queue_userspace_packet+0x408/0x470 [openvswitch]
      [<ffffffffa016614d>] ovs_dp_upcall+0x5d/0x60 [openvswitch]
      [<ffffffffa0166236>] ovs_dp_process_packet_with_key+0xe6/0x100 [openvswitch]
      [<ffffffffa016629b>] ovs_dp_process_received_packet+0x4b/0x80 [openvswitch]
      [<ffffffffa016c51a>] ovs_vport_receive+0x2a/0x30 [openvswitch]
      [<ffffffffa0171383>] vxlan_rcv+0x53/0x60 [openvswitch]
      [<ffffffffa01734cb>] vxlan_udp_encap_recv+0x8b/0xf0 [openvswitch]
      [<ffffffff8157addc>] udp_queue_rcv_skb+0x2dc/0x3b0
      [<ffffffff8157b56f>] __udp4_lib_rcv+0x1cf/0x6c0
      [<ffffffff8157ba7a>] udp_rcv+0x1a/0x20
      [<ffffffff8154fdbd>] ip_local_deliver_finish+0xdd/0x280
      [<ffffffff81550128>] ip_local_deliver+0x88/0x90
      [<ffffffff8154fa7d>] ip_rcv_finish+0x10d/0x370
      [<ffffffff81550365>] ip_rcv+0x235/0x300
      [<ffffffff8151ba1d>] __netif_receive_skb+0x55d/0x620
      [<ffffffff8151c360>] netif_receive_skb+0x80/0x90
      [<ffffffff81459935>] virtnet_poll+0x555/0x6f0
      [<ffffffff8151cd04>] net_rx_action+0x134/0x290
      [<ffffffff810683d8>] __do_softirq+0xa8/0x210
      [<ffffffff8162fe6c>] call_softirq+0x1c/0x30
      [<ffffffff810161a5>] do_softirq+0x65/0xa0
      [<ffffffff810687be>] irq_exit+0x8e/0xb0
      [<ffffffff81630733>] do_IRQ+0x63/0xe0
      [<ffffffff81625f2e>] common_interrupt+0x6e/0x6e
Reported-by: Anupam Chanda <achanda@vmware.com>
Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Acked-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      c3321c4e
• net: add length argument to skb_copy_and_csum_datagram_iovec · 127500d7
      Sabrina Dubroca authored
      Without this length argument, we can read past the end of the iovec in
      memcpy_toiovec because we have no way of knowing the total length of the
      iovec's buffers.
      
This is needed for stable kernels where 89c22d8c ("net: Fix skb
csum races when peeking") has been backported but that don't have the
iov_iter conversion, which is almost all the stable trees <= 3.18.

This also fixes a kernel crash for NFS servers when the client uses
-o nfsvers=3,proto=udp to mount the export.
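
The resulting prototype, sketched from the description (the pre-existing
parameters are as in 3.2 to the best of my knowledge):

  /* 'len' bounds how much may be copied into the iovec, so the copy can
   * no longer run past the end of the caller's buffers. */
  int skb_copy_and_csum_datagram_iovec(struct sk_buff *skb, int hlen,
                                       struct iovec *iov, int len);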
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Reviewed-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
[bwh: Backported to 3.2: adjust context in include/linux/skbuff.h]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      127500d7
• sched: declare pid_alive as inline · 44211964
      Richard Guy Briggs authored
      commit 80e0b6e8 upstream.
      
      We accidentally declared pid_alive without any extern/inline connotation.
      Some platforms were fine with this, some like ia64 and mips were very angry.
      If the function is inline, the prototype should be inline!
      
      on ia64:
      include/linux/sched.h:1718: warning: 'pid_alive' declared inline after
      being called
Signed-off-by: Richard Guy Briggs <rgb@redhat.com>
Signed-off-by: Eric Paris <eparis@redhat.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      Cc: Neal Gompa <ngompa13@gmail.com>
      44211964
• mvsas: Fix NULL pointer dereference in mvs_slot_task_free · cc1875ec
      Dāvis Mosāns authored
      commit 22805217 upstream.
      
When pci_pool_alloc fails in mvs_task_prep, task->lldd_task stays
NULL, but it is later used in mvs_abort_task as the slot that is passed
to mvs_slot_task_free, causing a NULL pointer dereference.

Just return from mvs_slot_task_free when it is passed a NULL slot.
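
A sketch of that early return (the parameter list is assumed from the 3.2
driver, not taken from this changelog):

  static void mvs_slot_task_free(struct mvs_info *mvi, struct sas_task *task,
                                 struct mvs_slot_info *slot, u32 slot_idx)
  {
          if (!slot)      /* mvs_task_prep() failed; there is nothing to free */
                  return;
          /* ... existing teardown ... */
  }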
      
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=101891
Signed-off-by: Dāvis Mosāns <davispuh@gmail.com>
Reviewed-by: Tomas Henzl <thenzl@redhat.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: James Bottomley <JBottomley@Odin.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      cc1875ec
• md/raid10: don't clear bitmap bit when bad-block-list write fails. · 965d8d1d
      NeilBrown authored
      commit c340702c upstream.
      
      When a write fails and a bad-block-list is present, we can
      update the bad-block-list instead of writing the data.  If
this succeeds then it is OK to clear the relevant bitmap-bit as
      no further 'sync' of the block is needed.
      
      However if writing the bad-block-list fails then we need to
      treat the write as failed and particularly must not clear
      the bitmap bit.  Otherwise the device can be re-added (after
      any hardware connection issues are resolved) and because the
      relevant bit in the bitmap is clear, that block will not be
      resynced.  This leads to data corruption.
      
      We already delay the final bio_endio() on the write until
      the bad-block-list is written so that when the write
      returns: either that data is safe, the bad-block record is
      safe, or the fact that the device is faulty is safe.
      However we *don't* delay the clearing of the bitmap, so the
      bitmap bit can be recorded as cleared before we know if the
      bad-block-list was written safely.
      
      So: delay that until the write really is safe.
      i.e. move the call to close_write() until just before
      calling bio_endio(), and recheck the 'is array degraded'
      status before making that call.
      
      This bug goes back to v3.1 when bad-block-lists were
      introduced, though it only affects arrays created with
      mdadm-3.3 or later as only those have bad-block lists.
      
      Backports will require at least
      Commit: 95af587e ("md/raid10: ensure device failure recorded before write request returns.")
      as well.  I'll send that to 'stable' separately.
      
      Note that of the two tests of R10BIO_WriteError that this
      patch adds, the first is certain to fail and the second is
      certain to succeed.  However doing it this way makes the
      patch more obviously correct.  I will tidy the code up in a
      future merge window.
Reported-by: Nate Dailey <nate.dailey@stratus.com>
Fixes: bd870a16 ("md/raid10:  Handle write errors by updating badblock log.")
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      965d8d1d
• md/raid10: ensure device failure recorded before write request returns. · c1fba1c8
      NeilBrown authored
      commit 95af587e upstream.
      
      When a write to one of the legs of a RAID10 fails, the failure is
      recorded in the metadata of the other legs so that after a restart
the data on the failed drive won't be trusted even if that drive seems
      to be working again (maybe a cable was unplugged).
      
      Currently there is no interlock between the write request completing
      and the metadata update.  So it is possible that the write will
      complete, the app will confirm success in some way, and then the
      machine will crash before the metadata update completes.
      
This is an extremely small hole for a race to fit in, but it is
      theoretically possible and so should be closed.
      
      So:
       - set MD_CHANGE_PENDING when requesting a metadata update for a
         failed device, so we can know with certainty when it completes
       - queue requests that experienced an error on a new queue which
         is only processed after the metadata update completes
       - call raid_end_bio_io() on bios in that queue when the time comes.
Signed-off-by: NeilBrown <neilb@suse.com>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      c1fba1c8
• md/raid1: don't clear bitmap bit when bad-block-list write fails. · 1f6c748a
      NeilBrown authored
      commit bd8688a1 upstream.
      
      When a write fails and a bad-block-list is present, we can
      update the bad-block-list instead of writing the data.  If
this succeeds then it is OK to clear the relevant bitmap-bit as
      no further 'sync' of the block is needed.
      
      However if writing the bad-block-list fails then we need to
      treat the write as failed and particularly must not clear
      the bitmap bit.  Otherwise the device can be re-added (after
      any hardware connection issues are resolved) and because the
      relevant bit in the bitmap is clear, that block will not be
      resynced.  This leads to data corruption.
      
      We already delay the final bio_endio() on the write until
      the bad-block-list is written so that when the write
      returns: either that data is safe, the bad-block record is
      safe, or the fact that the device is faulty is safe.
      However we *don't* delay the clearing of the bitmap, so the
      bitmap bit can be recorded as cleared before we know if the
      bad-block-list was written safely.
      
      So: delay that until the write really is safe.
      i.e. move the call to close_write() until just before
      calling bio_endio(), and recheck the 'is array degraded'
      status before making that call.
      
      This bug goes back to v3.1 when bad-block-lists were
      introduced, though it only affects arrays created with
      mdadm-3.3 or later as only those have bad-block lists.
      
      Backports will require at least
      Commit: 55ce74d4 ("md/raid1: ensure device failure recorded before write request returns.")
      as well.  I'll send that to 'stable' separately.
      
      Note that of the two tests of R1BIO_WriteError that this
      patch adds, the first is certain to fail and the second is
      certain to succeed.  However doing it this way makes the
      patch more obviously correct.  I will tidy the code up in a
      future merge window.
      Reported-and-tested-by: Nate Dailey <nate.dailey@stratus.com>
      Cc: Jes Sorensen <Jes.Sorensen@redhat.com>
      Fixes: cd5ff9a1 ("md/raid1:  Handle write errors by updating badblock log.")
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      1f6c748a
    • md/raid1: ensure device failure recorded before write request returns. · 6a1281c3
      NeilBrown authored
      commit 55ce74d4 upstream.
      
      When a write to one of the legs of a RAID1 fails, the failure is
      recorded in the metadata of the other leg(s) so that after a restart
      the data on the failed drive won't be trusted even if that drive seems
      to be working again (maybe a cable was unplugged).
      
      Similarly when we record a bad-block in response to a write failure,
      we must not let the write complete until the bad-block update is safe.
      
      Currently there is no interlock between the write request completing
      and the metadata update.  So it is possible that the write will
      complete, the app will confirm success in some way, and then the
      machine will crash before the metadata update completes.
      
      This is an extremely small hole for a race to fit in, but it is
      theoretically possible and so should be closed.
      
      So:
       - set MD_CHANGE_PENDING when requesting a metadata update for a
         failed device, so we can know with certainty when it completes
       - queue requests that experienced an error on a new queue which
         is only processed after the metadata update completes
       - call raid_end_bio_io() on bios in that queue when the time comes (see the sketch below).
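      
      Complementing the raid10 sketch above, a minimal model of the waiting
      side: parked writes are only completed once the pending flag has been
      cleared by the superblock write-out.  Names and flow are illustrative,
      not the raid1.c code:
      
        #include <stdbool.h>
        #include <stdio.h>
        
        static bool change_pending = true;      /* models MD_CHANGE_PENDING */
        static int deferred_writes = 1;         /* writes parked after a failure */
        
        static void handle_deferred_writes(void)
        {
                if (change_pending) {
                        printf("metadata update still pending, keep writes queued\n");
                        return;
                }
                while (deferred_writes > 0) {
                        printf("metadata safe, completing deferred write\n");
                        deferred_writes--;
                }
        }
        
        int main(void)
        {
                handle_deferred_writes();       /* too early: nothing completes */
                change_pending = false;         /* superblock write-out confirmed */
                handle_deferred_writes();       /* failure durably recorded, finish */
                return 0;
        }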
      Signed-off-by: NeilBrown <neilb@suse.com>
      [bwh: Backported to 3.2: adjust context]
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      6a1281c3
    • dm btree: fix leak of bufio-backed block in btree_split_beneath error path · 12d1c67b
      Mike Snitzer authored
      commit 4dcb8b57 upstream.
      
      btree_split_beneath()'s error path had an outstanding FIXME that speaks
      directly to the potential for _not_ cleaning up a previously allocated
      bufio-backed block.
      
      Fix this by releasing the previously allocated bufio block using
      unlock_block().
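      
      The shape of the fix is the usual unwind-on-error pattern.  In the
      self-contained sketch below, new_block() and unlock_block() are
      simplified stand-ins for the dm-bufio-backed helpers, not the actual
      dm-btree code:
      
        #include <stdlib.h>
        
        struct block { int data; };
        
        /* Stand-in for acquiring a bufio-backed block; NULL on failure. */
        static struct block *new_block(void)
        {
                return calloc(1, sizeof(struct block));
        }
        
        /* Stand-in for unlock_block(): release a previously acquired block. */
        static void unlock_block(struct block *b)
        {
                free(b);
        }
        
        static int split_beneath(void)
        {
                struct block *left, *right;
        
                left = new_block();
                if (!left)
                        return -1;
        
                right = new_block();
                if (!right) {
                        unlock_block(left);     /* the fix: don't leak 'left' */
                        return -1;
                }
        
                /* ... move entries into the two new children ... */
        
                unlock_block(right);
                unlock_block(left);
                return 0;
        }
        
        int main(void)
        {
                return split_beneath() ? 1 : 0;
        }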
      Reported-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Acked-by: Joe Thornber <thornber@redhat.com>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      12d1c67b
    • dm btree remove: fix a bug when rebalancing nodes after removal · 11020754
      Joe Thornber authored
      commit 2871c69e upstream.
      
      Commit 4c7e3093 ("dm btree remove: fix bug in redistribute3") wasn't
      a complete fix for redistribute3().
      
      The redistribute3 function takes 3 btree nodes and shares out the entries
      evenly between them.  If the three nodes in total contained
      (MAX_ENTRIES * 3) - 1 entries between them then this was erroneously getting
      rebalanced as (MAX_ENTRIES - 1) on the left and right, and (MAX_ENTRIES + 1) in
      the center.
      
      Fix this issue by being more careful about calculating the target number
      of entries for the left and right nodes.
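      
      A worked example of the corrected arithmetic (MAX_ENTRIES and the
      variable names below are illustrative, not dm-btree's exact code): with
      (MAX_ENTRIES * 3) - 1 entries, every target stays within the per-node
      capacity.
      
        #include <stdio.h>
        
        #define MAX_ENTRIES 126         /* per-node capacity, value illustrative */
        
        int main(void)
        {
                unsigned int total = MAX_ENTRIES * 3 - 1;
        
                /* Share the entries as evenly as possible; spill the remainder
                 * one entry at a time onto the left and then the centre node,
                 * so no node ever exceeds MAX_ENTRIES. */
                unsigned int target_left   = total / 3 + (total % 3 > 0);
                unsigned int target_center = total / 3 + (total % 3 > 1);
                unsigned int target_right  = total - target_left - target_center;
        
                printf("left=%u center=%u right=%u (capacity=%u)\n",
                       target_left, target_center, target_right, MAX_ENTRIES);
                return 0;
        }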
      
      Unit tested in userspace using this program:
      https://github.com/jthornber/redistribute3-test/blob/master/redistribute3_t.c
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      11020754
    • ppp: fix pppoe_dev deletion condition in pppoe_release() · e3e62cc7
      Guillaume Nault authored
      commit 1acea4f6 upstream.
      
      We can't rely on PPPOX_ZOMBIE to decide whether to clear po->pppoe_dev.
      PPPOX_ZOMBIE can be set by pppoe_disc_rcv() even when po->pppoe_dev is
      NULL. So we have no guarantee that (sk->sk_state & PPPOX_ZOMBIE) implies
      (po->pppoe_dev != NULL).
      Since we're releasing a PPPoE socket, we want to release the pppoe_dev
      if it exists and reset sk_state to PPPOX_DEAD, no matter the previous
      value of sk_state. So we can just check for po->pppoe_dev and avoid any
      assumption on sk->sk_state.
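      
      A simplified standalone model of the changed test; the structure, field
      names and state values below are stand-ins for the pppoe code, not
      copies of it:
      
        #include <stdio.h>
        
        #define PPPOX_ZOMBIE 0x02       /* illustrative values, not the kernel's */
        #define PPPOX_DEAD   0x10
        
        struct pppoe_sock {
                int sk_state;
                const char *pppoe_dev;  /* stands in for the bound net_device */
        };
        
        static void pppoe_release_dev(struct pppoe_sock *po)
        {
                /* The old condition keyed off sk_state (PPPOX_ZOMBIE) and could
                 * be true while pppoe_dev was already NULL.  The fix keys off
                 * the pointer itself and always marks the socket dead. */
                if (po->pppoe_dev) {
                        printf("dropping reference on %s\n", po->pppoe_dev);
                        po->pppoe_dev = NULL;
                }
                po->sk_state = PPPOX_DEAD;
        }
        
        int main(void)
        {
                struct pppoe_sock po = { .sk_state = PPPOX_ZOMBIE,
                                         .pppoe_dev = NULL };
        
                pppoe_release_dev(&po);         /* no device bound, still ends DEAD */
                return po.sk_state == PPPOX_DEAD ? 0 : 1;
        }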
      
      Fixes: 2b018d57 ("pppoe: drop PPPOX_ZOMBIEs in pppoe_release")
      Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      e3e62cc7
    • mm: make sendfile(2) killable · 279fc860
      Jan Kara authored
      commit 296291cd upstream.
      
      Currently a simple program below issues a sendfile(2) system call which
      takes about 62 days to complete in my test KVM instance.
      
              #include <fcntl.h>
              #include <sys/sendfile.h>
              #include <unistd.h>
      
              int main(void)
              {
                      int fd;
                      off_t off = 0;
      
                      fd = open("file", O_RDWR | O_TRUNC | O_SYNC | O_CREAT, 0644);
                      ftruncate(fd, 2);
                      lseek(fd, 0, SEEK_END);
                      sendfile(fd, fd, &off, 0xfffffff);
              }
      
      Now, you should not ask the kernel to do stupid stuff like copying 256MB
      in 2-byte chunks and calling fsync(2) after each chunk, but if you do,
      the sysadmin should have a way to stop you.
      
      We actually do have a check for fatal_signal_pending() in
      generic_perform_write() which triggers in this path. However, because we
      always succeed in writing something before the check is done, we return
      a value > 0 from generic_perform_write() and thus the information about
      the signal gets lost.
      
      Fix the problem by doing the signal check before writing anything.  That
      way generic_perform_write() returns -EINTR, the error gets propagated up
      and the sendfile loop terminates early.
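      
      A standalone model of the reordered check; fatal_signal_pending() is a
      stub here and perform_write() is not the real generic_perform_write().
      The signal is tested before the next chunk is copied, so a killed
      caller sees -EINTR instead of a small positive byte count:
      
        #include <errno.h>
        #include <stdbool.h>
        #include <stdio.h>
        
        /* Pretend a fatal signal (e.g. SIGKILL) arrives immediately. */
        static bool fatal_signal_pending(void)
        {
                return true;
        }
        
        static long perform_write(long total, long chunk)
        {
                long written = 0;
        
                while (written < total) {
                        /* Test *before* copying the next chunk, so the signal
                         * is not masked by a positive "bytes written" count. */
                        if (fatal_signal_pending())
                                return written ? written : -EINTR;
        
                        /* ... copy 'chunk' bytes and sync them ... */
                        written += chunk;
                }
                return written;
        }
        
        int main(void)
        {
                long ret = perform_write(0xfffffff, 2);
        
                printf("write loop returned %ld\n", ret);
                return ret == -EINTR ? 0 : 1;
        }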
      Signed-off-by: Jan Kara <jack@suse.com>
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      279fc860
    • powerpc/rtas: Validate rtas.entry before calling enter_rtas() · 08fd1afd
      Vasant Hegde authored
      commit 8832317f upstream.
      
      Currently we do not validate rtas.entry before calling enter_rtas(). This
      leads to a kernel oops when user space calls the rtas system call on a
      powernv platform (see below). This patch adds code to validate rtas.entry
      before making the enter_rtas() call.
      
        Oops: Exception in kernel mode, sig: 4 [#1]
        SMP NR_CPUS=1024 NUMA PowerNV
        task: c000000004294b80 ti: c0000007e1a78000 task.ti: c0000007e1a78000
        NIP: 0000000000000000 LR: 0000000000009c14 CTR: c000000000423140
        REGS: c0000007e1a7b920 TRAP: 0e40   Not tainted  (3.18.17-340.el7_1.pkvm3_1_0.2400.1.ppc64le)
        MSR: 1000000000081000 <HV,ME>  CR: 00000000  XER: 00000000
        CFAR: c000000000009c0c SOFTE: 0
        NIP [0000000000000000]           (null)
        LR [0000000000009c14] 0x9c14
        Call Trace:
        [c0000007e1a7bba0] [c00000000041a7f4] avc_has_perm_noaudit+0x54/0x110 (unreliable)
        [c0000007e1a7bd80] [c00000000002ddc0] ppc_rtas+0x150/0x2d0
        [c0000007e1a7be30] [c000000000009358] syscall_exit+0x0/0x98
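      
      The guard can be illustrated with a small standalone model; sys_rtas(),
      rtas_args and the -EINVAL return below are simplified assumptions, not
      a copy of the arch/powerpc code:
      
        #include <errno.h>
        #include <stdio.h>
        
        struct rtas_args { unsigned long token; };
        
        struct rtas_t {
                unsigned long entry;    /* stays 0 when the platform has no RTAS */
        };
        
        static struct rtas_t rtas;      /* never filled in on powernv */
        
        static long sys_rtas(struct rtas_args *args)
        {
                (void)args;
        
                /* Validate the entry point instead of branching to address 0. */
                if (!rtas.entry)
                        return -EINVAL;
        
                /* enter_rtas(...); */
                return 0;
        }
        
        int main(void)
        {
                struct rtas_args args = { .token = 1 };
        
                printf("sys_rtas() = %ld\n", sys_rtas(&args));
                return 0;
        }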
      
      Fixes: 55190f88 ("powerpc: Add skeleton PowerNV platform")
      Reported-by: NAGESWARA R. SASTRY <nasastry@in.ibm.com>
      Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
      [mpe: Reword change log, trim oops, and add stable + fixes]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      08fd1afd