1. 06 Sep, 2012 4 commits
    • ARM: tegra20: Separate out clk ops and clk data · 86edb87a
      Prashant Gaikwad authored
      Move clock initialization data to separate file. This is
      required for migrating to the generic clock framework if static
      initialization is used.
      Signed-off-by: Prashant Gaikwad <pgaikwad@nvidia.com>
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
    • ARM: tegra30: Separate out clk ops and clk data · 88e790a4
      Prashant Gaikwad authored
      Move clock initialization data to separate file. This is
      required for migrating to the generic clock framework if static
      initialization is used.
      Signed-off-by: Prashant Gaikwad <pgaikwad@nvidia.com>
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
    • ARM: tegra: fix U16 divider range check · eb70e1bd
      Stephen Warren authored
      A U16 divider can divide a clock by 1..64K. However, the range-check
      in clk_div16_get_divider() limited the range to 1..256. Fix this. NVIDIA's
      downstream kernels already have the fixed range-check.
      
      In practice this is a problem on Whistler's I2C bus, which uses a bus
      clock rate of 100KHz (rather than the more common 400KHz on Tegra boards),
      which requires a HW module clock of 8*100KHz. The parent clock is 216MHz,
      leading to a desired divider of 270. Prior to conversion to the common
      clock framework, this range error was somehow ignored/irrelevant and
      caused no problems. However, the common clock framework evidently has
      more rigorous error-checking, so this failure causes the I2C bus to fail
      to operate correctly.
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
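      A minimal sketch of the corrected check (function and variable names are illustrative, not the kernel's actual implementation):

```c
/* Hypothetical sketch of the fixed range check: a U16 divider field can
 * divide by 1..65536 (64K), not 1..256 as the old check allowed.
 * Returns the divider, or -1 if it is out of range. */
static long div16_get_divider(unsigned long parent_rate, unsigned long rate)
{
    unsigned long divider = (parent_rate + rate - 1) / rate; /* round up */

    if (divider < 1 || divider > 65536) /* old code wrongly capped at 256 */
        return -1;
    return (long)divider;
}
```

      With the Whistler numbers from the message (216 MHz parent, 8 * 100 kHz = 800 kHz module clock), this yields a divider of 270, which the old 1..256 check rejected.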
    • ARM: tegra: turn on UART A clock at boot · 37c241ed
      Stephen Warren authored
      Some boards use UART D for the main serial console, and some use UART A.
      UART D's clock is listed in board-dt-tegra20.c's clock table, whereas
      UART A's clock is not. This causes the clock code to think UART A's
      clock is unused. The common clock framework turns off unused clocks at
      boot time. This makes the kernel appear to hang. Add UART A's clock into
      the clock table to prevent this. Eventually, this requirement should be
      handled by the UART driver, and/or properties in a board-specific device
      tree file.
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
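      The mechanism can be sketched as follows (the entry layout and lookup helper are illustrative assumptions, not the real Tegra table structure): clocks listed in the board's init table are enabled at boot, so the framework's unused-clock gating leaves them alone.

```c
#include <string.h>

/* Illustrative board clock-init table: listing a clock here keeps it
 * enabled at boot, so the common clock framework does not gate it as
 * "unused". Field names and rates are a sketch only. */
struct clk_init_entry {
    const char *name;
    unsigned long rate;
    int enabled;
};

static const struct clk_init_entry board_clk_init_table[] = {
    { "uartd", 216000000, 1 },
    { "uarta", 216000000, 1 }, /* added: keeps the console clock on */
};

static int clk_is_kept_on(const char *name)
{
    unsigned int i;
    unsigned int n = sizeof(board_clk_init_table) /
                     sizeof(board_clk_init_table[0]);

    for (i = 0; i < n; i++)
        if (strcmp(board_clk_init_table[i].name, name) == 0)
            return board_clk_init_table[i].enabled;
    return 0; /* not listed: the framework may disable it at boot */
}
```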
  2. 01 Sep, 2012 5 commits
  3. 30 Aug, 2012 5 commits
  4. 29 Aug, 2012 18 commits
  5. 28 Aug, 2012 8 commits
    • Btrfs: fix that repair code is spuriously executed for transid failures · 256dd1bb
      Stefan Behrens authored
      If verify_parent_transid() fails for all mirrors, the current code
      calls repair_io_failure() anyway which means:
      - that the disk block is rewritten without repairing anything and
      - that a kernel log message is printed which misleadingly claims
        that a read error was corrected.
      
      This is an example:
      parent transid verify failed on 615015833600 wanted 110423 found 110424
      parent transid verify failed on 615015833600 wanted 110423 found 110424
      btrfs read error corrected: ino 1 off 615015833600 (dev /dev/...)
      
      It is wrong to ignore the results from verify_parent_transid() and to
      call repair_eb_io_failure() when the verification of the transids failed.
      This commit fixes the issue.
      Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
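      The control-flow fix can be sketched as follows (a deliberately simplified model; the names and the single return code stand in for the real btrfs read path):

```c
/* Sketch of the fixed flow: verify_ret is 0 when the transid verified
 * OK on some mirror, nonzero when verification failed everywhere.
 * The old code ran the repair path even on verification failure,
 * rewriting the block and logging a misleading
 * "read error corrected" message. */
static int read_block_sketch(int verify_ret, int *repaired)
{
    *repaired = 0;
    if (verify_ret != 0)
        return verify_ret; /* propagate the transid failure */
    *repaired = 1;         /* repair/logging is legitimate only here */
    return 0;
}
```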
    • Btrfs: fix ordered extent leak when failing to start a transaction · d280e5be
      Liu Bo authored
      We cannot just return an error before freeing the ordered extent and
      releasing the reserved space when we fail to start a transaction.
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
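      The error-path rule the fix enforces can be sketched like this (the counters stand in for the ordered extent and the reserved space; all names are illustrative):

```c
/* Illustrative error path: when starting the "transaction" fails, the
 * already-taken resources must be released before returning, or they
 * leak. The globals are stand-ins for the ordered extent and the
 * reserved space. */
static int ordered_extents;
static long reserved_bytes;

static int finish_ordered_io_sketch(int trans_start_ret)
{
    ordered_extents++;      /* ordered extent taken */
    reserved_bytes += 4096; /* space reserved */

    if (trans_start_ret != 0) {
        /* the fix: clean up instead of returning early */
        ordered_extents--;
        reserved_bytes -= 4096;
        return trans_start_ret;
    }
    /* ... do the real work, which consumes the resources ... */
    ordered_extents--;
    reserved_bytes -= 4096;
    return 0;
}
```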
    • Btrfs: fix a dio write regression · 24c03fa5
      Liu Bo authored
      This bug is introduced by commit 3b8bde746f6f9bd36a9f05f5f3b6e334318176a9
      (Btrfs: lock extents as we map them in DIO).
      
      In a dio write, we should unlock the section we did not do IO on if we
      fall back to buffered write.  But we need to not only unlock the
      section but also clean up the reserved space for it.
      
      This bug was found while running xfstests 133; with this patch, test 133 no longer complains.
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: fix deadlock with freeze and sync V2 · bd7de2c9
      Josef Bacik authored
      We can deadlock with freeze right now because we unconditionally start a
      transaction in our ->sync_fs() call.  To fix this just check and see if we
      have a running transaction to commit.  This saves us from the deadlock
      because at this point we'll have the umount sem for the sb so we're safe
      from freezes coming in after we've done our check.  With this patch the
      freeze xfstests no longer deadlocks.  Thanks,
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
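      A sketch of the check (structure names are illustrative, not the real btrfs types): ->sync_fs() commits only when a transaction is already running, rather than unconditionally starting one that could deadlock against a concurrent freeze.

```c
#include <stddef.h>

struct transaction { int refs; };
struct fs_info { struct transaction *running_transaction; };

/* Sketch: returns 1 if a commit was attempted, 0 if there was nothing
 * to commit. Not starting a fresh transaction here is what avoids the
 * deadlock with freeze. */
static int sync_fs_sketch(struct fs_info *fs)
{
    if (fs->running_transaction == NULL)
        return 0;
    /* ... attach to and commit the running transaction ... */
    return 1;
}
```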
    • Btrfs: revert checksum error statistic which can cause a BUG() · 5ee0844d
      Stefan Behrens authored
      Commit 442a4f63 added btrfs device
      statistic counters for detected IO and checksum errors to Linux 3.5.
      The statistic part that counts checksum errors in
      end_bio_extent_readpage() can cause a BUG() in a subfunction:
      "kernel BUG at fs/btrfs/volumes.c:3762!"
      That part is reverted with the current patch.
      However, the counting of checksum errors in the scrub context remains
      active, and the counting of detected IO errors (read, write or flush
      errors) in all contexts remains active.
      
      Cc: stable <stable@vger.kernel.org> # 3.5
      Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: remove superblock writing after fatal error · 68ce9682
      Stefan Behrens authored
      With commit acce952b, btrfs was changed to flag the filesystem with
      BTRFS_SUPER_FLAG_ERROR and switch to read-only mode after a fatal
      error has happened, such as write I/O errors on all mirrors.
      In such situations, on unmount, the superblock is written in
      btrfs_error_commit_super(). This is done with the intention to be able
      to evaluate the error flag on the next mount. A warning is printed
      in this case during the next mount and the log tree is ignored.
      
      The issue is that it is possible that the superblock points to a root
      that was not written (due to write I/O errors).
      The result is that the filesystem cannot be mounted. btrfsck also does
      not start and all the other btrfs-progs tools fail to start as well.
      However, mount -o recovery is working well and does the right things
      to recover the filesystem (i.e., don't use the log root, clear the
      free space cache and use the next mountable root that is stored in the
      root backup array).
      
      This patch removes the writing of the superblock when
      BTRFS_SUPER_FLAG_ERROR is set, and removes the handling of the error
      flag in the mount function.
      
      These lines can be used to reproduce the issue (using /dev/sdm):
      SCRATCH_DEV=/dev/sdm
      SCRATCH_MNT=/mnt
      echo 0 25165824 linear $SCRATCH_DEV 0 | dmsetup create foo
      ls -alLF /dev/mapper/foo
      mkfs.btrfs /dev/mapper/foo
      mount /dev/mapper/foo $SCRATCH_MNT
      echo bar > $SCRATCH_MNT/foo
      sync
      echo 0 25165824 error | dmsetup reload foo
      dmsetup resume foo
      ls -alF $SCRATCH_MNT
      touch $SCRATCH_MNT/1
      ls -alF $SCRATCH_MNT
      sleep 35
      echo 0 25165824 linear $SCRATCH_DEV 0 | dmsetup reload foo
      dmsetup resume foo
      sleep 1
      umount $SCRATCH_MNT
      btrfsck /dev/mapper/foo
      dmsetup remove foo
      Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de>
      Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
    • Btrfs: allow delayed refs to be merged · ae1e206b
      Josef Bacik authored
      Daniel Blueman reported a bug with fio+balance on a ramdisk setup.
      Basically what happens is the balance relocates a tree block which will drop
      the implicit refs for all of its children and adds a full backref.  Once the
      block is relocated we have to add the implicit refs back, so when we cow the
      block again we add the implicit refs for its children back.  The problem
      comes when the original drop ref doesn't get run before we add the implicit
      refs back.  The delayed ref stuff will specifically prefer ADD operations
      over DROP to keep us from freeing up an extent that will have references to
      it, so we try to add the implicit ref before it is actually removed and we
      panic.  This worked fine before because the add would have just canceled the
      drop out and we would have been fine.  But the backref walking work needs to
      be able to freeze the delayed ref stuff in time so we have this ever
      increasing sequence number that gets attached to all new delayed ref updates
      which makes us not merge refs and we run into this issue.
      
      So to fix this we need to merge delayed refs.  So every time we run a
      clustered ref we need to try and merge all of its delayed refs.  The backref
      walking stuff locks the delayed ref head before processing, so if we have it
      locked we are safe to merge any refs inside of the sequence number.  If
      there is no sequence number we can merge all refs.  Doing this not only
      fixes our bug but also keeps the delayed ref code from adding and
      removing useless refs, and batches multiple refs into one search
      instead of one search per delayed ref, which will really help our
      commit times.  I ran this with Daniel's test and 276 and I haven't
      seen any problems.  Thanks,
      Reported-by: Daniel J Blueman <daniel@quora.org>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
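      The cancellation the patch relies on can be sketched as follows (a deliberately simplified model; the real delayed-ref structures carry far more state, and merging is also keyed on root and parent):

```c
enum ref_action { REF_DROP = -1, REF_ADD = 1 };

/* Simplified delayed ref: using a signed total makes merging a matter
 * of summing. A net count of zero means an ADD and a DROP on the same
 * extent cancelled out entirely. */
struct dref {
    unsigned long long bytenr; /* extent the ref applies to */
    int action;                /* REF_ADD or REF_DROP */
    int count;
};

/* Merge b into a if they reference the same extent; returns 1 on merge. */
static int merge_delayed_refs(struct dref *a, const struct dref *b)
{
    int total;

    if (a->bytenr != b->bytenr)
        return 0;
    total = a->action * a->count + b->action * b->count;
    a->action = (total < 0) ? REF_DROP : REF_ADD;
    a->count = (total < 0) ? -total : total;
    return 1;
}
```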
    • Btrfs: fix enospc problems when deleting a subvol · 5a24e84c
      Josef Bacik authored
      Subvol delete is a special kind of awful where we use the global reserve to
      cover the ENOSPC requirements.  The problem is once we're done removing
      everything we do a btrfs_update_inode(), which by default will try to do the
      delayed update stuff, which will use its own reserve.  There will be no
      space in this reserve and we'll return ENOSPC.  So instead use
      btrfs_update_inode_fallback() which will just fallback to updating the inode
      item in the case of enospc.  This is fine because the global reserve covers
      the space requirements for this.  With this patch I can now delete a subvol
      on a problem image Dave Sterba sent me.  Thanks,
      Reported-by: David Sterba <dave@jikos.cz>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
      Signed-off-by: Chris Mason <chris.mason@fusionio.com>
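      The fallback logic can be sketched like this (function names echo those in the message, but the bodies and the ENOSPC stand-in are illustrative):

```c
#define ENOSPC_ERR (-28) /* stand-in for -ENOSPC */

/* Direct update of the inode item; during subvolume deletion its space
 * cost is covered by the global reserve, so it cannot hit ENOSPC the
 * way the delayed-update path can. */
static int update_inode_item(void)
{
    return 0;
}

/* Sketch: try the delayed update first; on ENOSPC, fall back to the
 * direct inode-item update instead of failing the subvolume delete. */
static int update_inode_fallback_sketch(int delayed_ret)
{
    if (delayed_ret == ENOSPC_ERR)
        return update_inode_item();
    return delayed_ret;
}
```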