1. 29 Aug, 2017 8 commits
    • staging: typec: tcpm: add cc change handling in src states · f3b73364
      Badhri Jagan Sridharan authored
       If the lower layer driver reports a cc change directly from a SINK
       state to a SOURCE state, TCPM does not handle the change in the
       SRC_SEND_CAPABILITIES and SRC_READY states. In the SRC_ATTACHED
       state the change is not handled either, because the port is still
       considered connected.
      
      [49606.131672] state change DRP_TOGGLING -> SRC_ATTACH_WAIT
      [49606.131701] pending state change SRC_ATTACH_WAIT -> SRC_ATTACHED @
      200 ms
      [49606.329952] state change SRC_ATTACH_WAIT -> SRC_ATTACHED [delayed 200
      ms]
      [49606.329978] polarity 0
      [49606.329989] Requesting mux mode 1, config 0, polarity 0
      [49606.349416] vbus:=1 charge=0
      [49606.372274] pending state change SRC_ATTACHED -> SRC_UNATTACHED @ 480
      ms
      [49606.372431] VBUS on
      [49606.372488] state change SRC_ATTACHED -> SRC_STARTUP
      ...
      (the lower layer driver reports a direct change from source to sink)
      [49606.536927] pending state change SRC_SEND_CAPABILITIES ->
      SRC_SEND_CAPABILITIES @ 150 ms
      [49606.547244] CC1: 2 -> 5, CC2: 0 -> 0 [state SRC_SEND_CAPABILITIES,
      polarity 0, connected]
      
      This can happen when the lower layer driver and/or the hardware
      handles a portion of the Type-C state machine work, and quietly goes
      through the unattached state.
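
       A minimal sketch of the idea in plain C, using simplified stand-in
       types (struct port, set_state(), the state/cc enums and the timer
       values below are illustrative only, not the real
       drivers/staging/typec/tcpm code; the later tcpm sketches in this log
       reuse these definitions):

         #include <stdbool.h>

         enum cc_status  { CC_OPEN, CC_RA, CC_RD, CC_RP };
         enum port_state { SRC_UNATTACHED, SRC_ATTACHED, SRC_SEND_CAPABILITIES,
                           SRC_READY, SRC_TRYWAIT, SNK_UNATTACHED, SNK_ATTACHED,
                           SNK_TRYWAIT };
         enum port_type  { PORT_TYPE_SRC, PORT_TYPE_SNK, PORT_TYPE_DRP };

         /* Timer values in ms; spec ranges noted, exact numbers are placeholders. */
         #define T_CC_DEBOUNCE_MS  200U   /* tCCDebounce: 100..200 ms */
         #define T_PD_DEBOUNCE_MS   20U   /* tPDDebounce: 10..20 ms   */
         #define T_DRP_TRY_MS      100U   /* tDRPTry:     75..150 ms  */

         struct port {                    /* trimmed-down stand-in, not the tcpm struct */
                 enum port_state state;
                 enum port_type  port_type;
                 enum cc_status  cc1, cc2;
                 bool            vbus_present;
                 unsigned long   trywait_start_ms;  /* used by a later SRC_TRYWAIT sketch */
         };

         static void set_state(struct port *port, enum port_state next)
         {
                 port->state = next;      /* the real driver also logs and runs enter actions */
         }

         static void set_state_after(struct port *port, enum port_state next,
                                     unsigned int delay_ms)
         {
                 (void)delay_ms;          /* stand-in for a delayed (debounced) transition */
                 set_state(port, next);
         }

         /* Source states: if the partner no longer presents Rd on either CC pin,
          * treat the cc change as a disconnect instead of ignoring it. */
         static void src_cc_change(struct port *port)
         {
                 bool partner_gone = port->cc1 != CC_RD && port->cc2 != CC_RD;

                 switch (port->state) {
                 case SRC_SEND_CAPABILITIES:
                 case SRC_READY:
                         if (partner_gone)
                                 set_state(port, SRC_UNATTACHED);
                         break;
                 default:
                         break;
                 }
         }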
      
      Originally-from: Yueyao Zhu <yueyao@google.com>
       Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
       Reviewed-by: Guenter Roeck <linux@roeck-us.net>
       Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • staging: typec: tcpm: Consider port_type while determining unattached_state · 13cb492c
      Badhri Jagan Sridharan authored
       While performing PORT_RESET, upon receiving the cc disconnect
       signal from the underlying tcpc device, TCPM transitions into an
       unattached state. Consider the current type of the port while
       determining the unattached state.
      
      In the below logs, although the port_type was set to sink, TCPM
      transitioned into SRC_UNATTACHED.
      
      [  762.290654] state change SRC_READY -> PORT_RESET
      [  762.324531] Setting voltage/current limit 0 mV 0 mA
      [  762.327912] polarity 0
      [  762.334864] cc:=0
      [  762.347193] pending state change PORT_RESET -> PORT_RESET_WAIT_OFF @ 100 ms
      [  762.347200] VBUS off
      [  762.347203] CC1: 2 -> 0, CC2: 0 -> 0 [state PORT_RESET, polarity 0, disconnected]
      [  762.347206] state change PORT_RESET -> SRC_UNATTACHED
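
       Reusing the stand-in types from the first sketch above, the idea is
       roughly (illustrative only, not the literal patch):

         /* Pick the unattached state from the configured port type rather than
          * from the role the port happened to be in when the reset started. */
         static enum port_state unattached_state(const struct port *port)
         {
                 switch (port->port_type) {
                 case PORT_TYPE_SRC:
                         return SRC_UNATTACHED;
                 case PORT_TYPE_SNK:
                         return SNK_UNATTACHED;
                 case PORT_TYPE_DRP:
                 default:
                         /* the real driver's DRP choice is more involved;
                          * default to sink here purely for illustration */
                         return SNK_UNATTACHED;
                 }
         }

       The PORT_RESET disconnect path would then use
       set_state(port, unattached_state(port)) instead of hard-coding
       SRC_UNATTACHED.
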
       Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
       Reviewed-by: Guenter Roeck <linux@roeck-us.net>
       Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • staging: typec: tcpm: Comply with TryWait.SNK State · af450ebb
      Badhri Jagan Sridharan authored
      According to the spec:
      "4.5.2.2.10.2 Exiting from TryWait.SNK State
      The port shall transition to Attached.SNK after tCCDebounce if or when VBUS
      is detected. Note the Source may initiate USB PD communications which will
      cause brief periods of the SNK.Open state on both the CC1 and CC2 pins,
      but this event will not exceed tPDDebounce. The port shall transition to
      Unattached.SNK when the state of both of the CC1 and CC2 pins is SNK.Open
      for at least tPDDebounce."
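
       A sketch of how the quoted exit rules translate into debounced
       transitions, using the stand-in helpers from the first sketch (a real
       implementation would cancel the pending transition if the condition
       stops holding within the debounce time):

         static void snk_trywait_event(struct port *port)
         {
                 if (port->vbus_present)
                         /* VBUS seen: Attached.SNK after tCCDebounce */
                         set_state_after(port, SNK_ATTACHED, T_CC_DEBOUNCE_MS);
                 else if (port->cc1 == CC_OPEN && port->cc2 == CC_OPEN)
                         /* both pins SNK.Open for tPDDebounce: Unattached.SNK;
                          * shorter blips caused by PD traffic are ignored */
                         set_state_after(port, SNK_UNATTACHED, T_PD_DEBOUNCE_MS);
         }
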
       Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
       Reviewed-by: Guenter Roeck <linux@roeck-us.net>
       Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • staging: typec: tcpm: Follow Try.SRC exit requirements · 131c7d12
      Badhri Jagan Sridharan authored
       According to the spec:
      " 4.5.2.2.9.2 Exiting from Try.SRC State:
      The port shall transition to Attached.SRC when the SRC.Rd
      state is detected on exactly one of the CC1 or CC2 pins for
      at least tPDDebounce. The port shall transition to
      TryWait.SNK after tDRPTry and the SRC.Rd state has not been
      detected."
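
       In the same stand-in style as the earlier sketches, the quoted rule
       amounts roughly to:

         static void src_try_cc_change(struct port *port)
         {
                 /* SRC.Rd on exactly one CC pin */
                 bool rd_on_one_pin = (port->cc1 == CC_RD) ^ (port->cc2 == CC_RD);

                 if (rd_on_one_pin)
                         set_state_after(port, SRC_ATTACHED, T_PD_DEBOUNCE_MS);
                 else
                         set_state_after(port, SNK_TRYWAIT, T_DRP_TRY_MS);
         }
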
       Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
       Reviewed-by: Guenter Roeck <linux@roeck-us.net>
       Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • staging: typec: tcpm: Check for Rp for tPDDebounce · a0a3e04e
      Badhri Jagan Sridharan authored
       According to the spec, the following are the conditions for exiting
       the Try.SNK state:
      "The port shall wait for tDRPTry and only then begin monitoring the CC1 and
      CC2 pins for the SNK.Rp state. The port shall then transition to
      Attached.SNK when the SNK.Rp state is detected on exactly one of the CC1
       or CC2 pins for at least tPDDebounce and VBUS is detected. Alternatively,
      the port shall transition to TryWait.SRC if SNK.Rp state is not detected
      for tPDDebounce."
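
       As a stand-in sketch (the tdrptry_elapsed flag models the initial
       tDRPTry wait; the helpers come from the first sketch in this log):

         static void snk_try_cc_change(struct port *port, bool tdrptry_elapsed)
         {
                 /* SNK.Rp on exactly one CC pin */
                 bool rp_on_one_pin = (port->cc1 == CC_RP) ^ (port->cc2 == CC_RP);

                 if (!tdrptry_elapsed)
                         return;  /* only start monitoring CC1/CC2 after tDRPTry */

                 if (rp_on_one_pin && port->vbus_present)
                         set_state_after(port, SNK_ATTACHED, T_PD_DEBOUNCE_MS);
                 else if (!rp_on_one_pin)
                         set_state_after(port, SRC_TRYWAIT, T_PD_DEBOUNCE_MS);
         }
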
       Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
       Reviewed-by: Guenter Roeck <linux@roeck-us.net>
       Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • staging: typec: tcpm: Prevent TCPM from looping in SRC_TRYWAIT · 02d5be46
      Badhri Jagan Sridharan authored
      According to the spec the following is the condition
      for exiting TryWait.SRC:
      
      "The port shall transition to Attached.SRC when V BUS is at vSafe0V
      and the SRC.Rd state is detected on exactly one of the CC pins for at
      least tCCDebounce. The port shall transition to Unattached.SNK after
      tDRPTry if neither of the CC1 or CC2 pins are in the SRC.Rd state"
      
       TCPM at present keeps re-entering SRC_TRYWAIT and restarting tDRPTry
       if the CC presents Rp and disconnects within tCCDebounce.
      
      For example:
      [  447.164308] pending state change SRC_TRYWAIT -> SRC_ATTACHED @ 200 ms
      [  447.164386] CC1: 2 -> 0, CC2: 0 -> 0 [state SRC_TRYWAIT, polarity 0, disconnected]
      [  447.164406] state change SRC_TRYWAIT -> SRC_TRYWAIT
      [  447.164573] cc:=3
      [  447.191408] pending state change SRC_TRYWAIT -> SRC_TRYWAIT_UNATTACHED @ 100 ms
      [  447.191478] CC1: 0 -> 0, CC2: 0 -> 0 [state SRC_TRYWAIT, polarity 0, disconnected]
      [  447.207261] CC1: 0 -> 2, CC2: 0 -> 0 [state SRC_TRYWAIT, polarity 0, connected]
      [  447.207306] state change SRC_TRYWAIT -> SRC_TRYWAIT
      [  447.207485] cc:=3
      [  447.237283] pending state change SRC_TRYWAIT -> SRC_ATTACHED @ 200 ms
      [  447.237357] CC1: 2 -> 0, CC2: 0 -> 0 [state SRC_TRYWAIT, polarity 0, disconnected]
      [  447.237379] state change SRC_TRYWAIT -> SRC_TRYWAIT
      [  447.237532] cc:=3
      [  447.263219] pending state change SRC_TRYWAIT -> SRC_TRYWAIT_UNATTACHED @ 100 ms
      [  447.263289] CC1: 0 -> 0, CC2: 0 -> 0 [state SRC_TRYWAIT, polarity 0, disconnected]
      [  447.280926] CC1: 0 -> 2, CC2: 0 -> 0 [state SRC_TRYWAIT, polarity 0, connected]
      [  447.280970] state change SRC_TRYWAIT -> SRC_TRYWAIT
      [  447.281158] cc:=3
      [  447.307767] pending state change SRC_TRYWAIT -> SRC_ATTACHED @ 200 ms
      [  447.307838] CC1: 2 -> 0, CC2: 0 -> 0 [state SRC_TRYWAIT, polarity 0, disconnected]
      [  447.307858] state change SRC_TRYWAIT -> SRC_TRYWAIT
      
       In TCPM, tDRPTry is set to 100 ms (min 75 ms, max 150 ms) and
       tCCDebounce is set to 200 ms (min 100 ms, max 200 ms).
       To overcome the issue, record the time at which the port
       enters TryWait.SRC (SRC_TRYWAIT) and re-enter SRC_TRYWAIT
       only when the CC keeps debouncing within tDRPTry.
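
       A sketch of that approach with the stand-in types from the first
       sketch (now_ms would come from jiffies in the real driver;
       trywait_start_ms is the illustrative field added to struct port above):

         static void enter_src_trywait(struct port *port, unsigned long now_ms)
         {
                 if (port->state != SRC_TRYWAIT)
                         port->trywait_start_ms = now_ms;  /* first entry: start the clock */
                 set_state(port, SRC_TRYWAIT);
         }

         static void src_trywait_cc_change(struct port *port, unsigned long now_ms)
         {
                 bool within_tdrptry = now_ms - port->trywait_start_ms < T_DRP_TRY_MS;

                 if (port->cc1 == CC_RD || port->cc2 == CC_RD)
                         set_state_after(port, SRC_ATTACHED, T_CC_DEBOUNCE_MS);
                 else if (within_tdrptry)
                         enter_src_trywait(port, now_ms);  /* keep debouncing, clock untouched */
                 else
                         set_state_after(port, SNK_UNATTACHED, 0);  /* tDRPTry expired */
         }
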
       Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
       Reviewed-by: Guenter Roeck <linux@roeck-us.net>
       Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • staging: typec: tcpm: Check for port type for Try.SRC/Try.SNK · ff6c8cb1
      Badhri Jagan Sridharan authored
       Enable Try.SRC or Try.SNK only when the port_type is DRP. The Try.SRC
       and Try.SNK state machines are not valid for SRC-only or SNK-only
       ports.
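
       With the stand-in types from the first sketch, the guard itself is
       tiny:

         /* Try.SRC / Try.SNK only make sense for dual-role ports. */
         static bool try_src_snk_allowed(const struct port *port)
         {
                 return port->port_type == PORT_TYPE_DRP;
         }
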
       Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
       Reviewed-by: Guenter Roeck <linux@roeck-us.net>
       Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • staging: typec: tcpm: set port type callback · 9b0ae699
      Badhri Jagan Sridharan authored
       The port type callback enquires the tcpc_dev whether the requested
       port type is supported. If it is, the tcpm internal port_type
       variable is set and a tcpm reset is performed if required.
       
       Check against the tcpm port_type instead of checking against
       caps.type, as port_type reflects the current configuration.
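
       A sketch of that flow (the function name and the supported_by_tcpc
       parameter are illustrative; the real callback queries the tcpc_dev
       through its ops):

         #include <errno.h>   /* EOPNOTSUPP, for this illustrative sketch */

         static int set_port_type(struct port *port, enum port_type type,
                                  bool supported_by_tcpc)
         {
                 if (!supported_by_tcpc)       /* answer from the low-level tcpc driver */
                         return -EOPNOTSUPP;

                 if (port->port_type == type)  /* compare against port_type, not caps.type */
                         return 0;

                 port->port_type = type;
                 /* a full implementation resets the port here if the current
                  * connection no longer matches the newly requested type */
                 return 0;
         }
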
       Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
       Reviewed-by: Guenter Roeck <linux@roeck-us.net>
       Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 28 Aug, 2017 26 commits
  3. 27 Aug, 2017 6 commits
    • Avoid page waitqueue race leaving possible page locker waiting · a8b169af
      Linus Torvalds authored
      The "lock_page_killable()" function waits for exclusive access to the
      page lock bit using the WQ_FLAG_EXCLUSIVE bit in the waitqueue entry
      set.
      
      That means that if it gets woken up, other waiters may have been
      skipped.
      
      That, in turn, means that if it sees the page being unlocked, it *must*
      take that lock and return success, even if a lethal signal is also
      pending.
      
      So instead of checking for lethal signals first, we need to check for
      them after we've checked the actual bit that we were waiting for.  Even
      if that might then delay the killing of the process.
      
      This matches the order of the old "wait_on_bit_lock()" infrastructure
      that the page locking used to use (and is still used in a few other
      areas).
      
      Note that if we still return an error after having unsuccessfully tried
      to acquire the page lock, that is ok: that means that some other thread
      was able to get ahead of us and lock the page, and when that other
      thread then unlocks the page, the wakeup event will be repeated.  So any
      other pending waiters will now get properly woken up.
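
       A toy model of that ordering argument (plain C, not the mm/filemap.c
       code; struct page_model and the helpers are made up for illustration):

         #include <errno.h>
         #include <stdbool.h>

         struct page_model {
                 bool locked;                /* the page-lock bit */
                 bool fatal_signal_pending;  /* a lethal signal arrived while we slept */
         };

         /* Try to take the lock bit; returns true if we now own it. */
         static bool try_lock(struct page_model *p)
         {
                 if (p->locked)
                         return false;
                 p->locked = true;
                 return true;
         }

         /* One wakeup of an exclusive waiter.  The bit must be tested BEFORE the
          * signal: we may have been the only waiter woken, so if the page is free
          * we must take it and return success even with a signal pending. */
         static int handle_wakeup(struct page_model *p)
         {
                 if (try_lock(p))
                         return 0;       /* got the lock, consume the wakeup */
                 if (p->fatal_signal_pending)
                         return -EINTR;  /* safe: whoever holds the lock will repeat the wakeup */
                 return 1;               /* keep waiting */
         }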
      
      Fixes: 62906027 ("mm: add PageWaiters indicating tasks are waiting for a page bit")
      Cc: Nick Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Andi Kleen <ak@linux.intel.com>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Minor page waitqueue cleanups · 3510ca20
      Linus Torvalds authored
      Tim Chen and Kan Liang have been battling a customer load that shows
      extremely long page wakeup lists.  The cause seems to be constant NUMA
      migration of a hot page that is shared across a lot of threads, but the
      actual root cause for the exact behavior has not been found.
      
      Tim has a patch that batches the wait list traversal at wakeup time, so
      that we at least don't get long uninterruptible cases where we traverse
      and wake up thousands of processes and get nasty latency spikes.  That
      is likely 4.14 material, but we're still discussing the page waitqueue
      specific parts of it.
      
      In the meantime, I've tried to look at making the page wait queues less
      expensive, and failing miserably.  If you have thousands of threads
      waiting for the same page, it will be painful.  We'll need to try to
      figure out the NUMA balancing issue some day, in addition to avoiding
      the excessive spinlock hold times.
      
      That said, having tried to rewrite the page wait queues, I can at least
      fix up some of the braindamage in the current situation. In particular:
      
       (a) we don't want to continue walking the page wait list if the bit
           we're waiting for already got set again (which seems to be one of
           the patterns of the bad load).  That makes no progress and just
           causes pointless cache pollution chasing the pointers.
      
       (b) we don't want to put the non-locking waiters always on the front of
           the queue, and the locking waiters always on the back.  Not only is
           that unfair, it means that we wake up thousands of reading threads
           that will just end up being blocked by the writer later anyway.
      
      Also add a comment about the layout of 'struct wait_page_key' - there is
      an external user of it in the cachefiles code that means that it has to
       match the layout of 'struct wait_bit_key' in the first two members.  It
      so happens to match, because 'struct page *' and 'unsigned long *' end
      up having the same values simply because the page flags are the first
      member in struct page.
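
       The constraint reads roughly like this (the *_like names are
       illustrative; the field names are recalled from the kernel sources and
       may differ, only the first two members matter):

         struct page_like;                    /* stand-in for struct page */

         struct wait_bit_key_like {           /* what the generic wait-bit code expects  */
                 void          *flags;        /* word containing the bit being waited on */
                 int            bit_nr;       /* which bit in that word                  */
                 unsigned long  timeout;
         };

         struct wait_page_key_like {          /* what the page waitqueue code passes     */
                 struct page_like *page;      /* lines up with ->flags because the flags */
                 int               bit_nr;    /* word is the first member of struct page */
                 int               page_match;
         };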
      
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Clarify (and fix) MAX_LFS_FILESIZE macros · 0cc3b0ec
      Linus Torvalds authored
      We have a MAX_LFS_FILESIZE macro that is meant to be filled in by
      filesystems (and other IO targets) that know they are 64-bit clean and
      don't have any 32-bit limits in their IO path.
      
      It turns out that our 32-bit value for that limit was bogus.  On 32-bit,
      the VM layer is limited by the page cache to only 32-bit index values,
      but our logic for that was confusing and actually wrong.  We used to
      define that value to
      
      	(((loff_t)PAGE_SIZE << (BITS_PER_LONG-1))-1)
      
      which is actually odd in several ways: it limits the index to 31 bits,
      and then it limits files so that they can't have data in that last byte
      of a page that has the highest 31-bit index (ie page index 0x7fffffff).
      
       Neither of those limitations makes sense.  The index is actually the full
      32 bit unsigned value, and we can use that whole full page.  So the
      maximum size of the file would logically be "PAGE_SIZE << BITS_PER_LONG".
      
       However, we do want to avoid the maximum index, because we have code
      that iterates over the page indexes, and we don't want that code to
      overflow.  So the maximum size of a file on a 32-bit host should
      actually be one page less than the full 32-bit index.
      
      So the actual limit is ULONG_MAX << PAGE_SHIFT.  That means that we will
      not actually be using the page of that last index (ULONG_MAX), but we
      can grow a file up to that limit.
      
      The wrong value of MAX_LFS_FILESIZE actually caused problems for Doug
      Nazar, who was still using a 32-bit host, but with a 9.7TB 2 x RAID5
      volume.  It turns out that our old MAX_LFS_FILESIZE was 8TiB (well, one
      byte less), but the actual true VM limit is one page less than 16TiB.
      
      This was invisible until commit c2a9737f ("vfs,mm: fix a dead loop
      in truncate_inode_pages_range()"), which started applying that
      MAX_LFS_FILESIZE limit to block devices too.
      
      NOTE! On 64-bit, the page index isn't a limiter at all, and the limit is
      actually just the offset type itself (loff_t), which is signed.  But for
      clarity, on 64-bit, just use the maximum signed value, and don't make
      people have to count the number of 'f' characters in the hex constant.
      
      So just use LLONG_MAX for the 64-bit case.  That was what the value had
      been before too, just written out as a hex constant.
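
       A sketch of the definitions this reasoning leads to (see
       include/linux/fs.h for the authoritative version):

         #if BITS_PER_LONG == 32
         /* full 32-bit page index, minus one page so index arithmetic cannot overflow */
         #define MAX_LFS_FILESIZE	((loff_t)ULONG_MAX << PAGE_SHIFT)
         #elif BITS_PER_LONG == 64
         /* the limiter is the signed offset type itself */
         #define MAX_LFS_FILESIZE	((loff_t)LLONG_MAX)
         #endif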
      
      Fixes: c2a9737f ("vfs,mm: fix a dead loop in truncate_inode_pages_range()")
       Reported-and-tested-by: Doug Nazar <nazard@nazar.ca>
      Cc: Andreas Dilger <adilger@dilger.ca>
      Cc: Mark Fasheh <mfasheh@versity.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Dave Kleikamp <shaggy@kernel.org>
      Cc: stable@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • staging: rtl8723bs: remove null check before kfree · 4d506758
      Himanshu Jha authored
       kfree() on a NULL pointer is a no-op, therefore the NULL check is
       redundant.
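
       As a sketch, the pattern being removed (release_buf() and ptr are
       illustrative names):

         #include <linux/slab.h>      /* kfree() */

         static void release_buf(void *ptr)
         {
                 if (ptr)             /* redundant: kfree(NULL) is already a no-op */
                         kfree(ptr);

                 /* after the cleanup the body is simply: kfree(ptr); */
         }
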
       Signed-off-by: Himanshu Jha <himanshujha199640@gmail.com>
       Acked-by: Larry Finger <Larry.Finger@lwfinger.net>
       Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • staging: r8822be: remove unnecessary call to memset · 3687994a
      Himanshu Jha authored
       Calling memset() to zero the memory immediately after allocating it
       with kzalloc() is unnecessary, as kzalloc() already returns zeroed
       memory.
       
       Built and tested.
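
       As a sketch, the redundant pattern (alloc_table(), buf and size are
       illustrative names):

         #include <linux/slab.h>      /* kzalloc() */
         #include <linux/string.h>    /* memset() */

         static void *alloc_table(size_t size)
         {
                 void *buf = kzalloc(size, GFP_KERNEL);

                 if (!buf)
                         return NULL;
                 memset(buf, 0, size);   /* unnecessary: kzalloc() already zeroed it */
                 return buf;
         }

       The patch simply drops the memset() call.
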
       Signed-off-by: Himanshu Jha <himanshujha199640@gmail.com>
       Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • staging: most: hdm_usb: Driver registration with module_driver macro · b9d7adc4
      Alex Briskin authored
       Register the driver with the module_driver() macro instead of
       open-coded module_init()/module_exit() functions.
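
       Roughly, for a USB driver (hdm_usb_driver is an illustrative struct
       usb_driver name, not necessarily the one used in the patch):

         #include <linux/module.h>
         #include <linux/usb.h>

         static struct usb_driver hdm_usb_driver = {
                 .name = "hdm_usb",
                 /* .probe / .disconnect / .id_table omitted in this sketch */
         };

         /* before: hand-written init/exit whose only job is (de)registration */
         #if 0
         static int __init hdm_usb_init(void)
         {
                 return usb_register(&hdm_usb_driver);
         }
         module_init(hdm_usb_init);

         static void __exit hdm_usb_exit(void)
         {
                 usb_deregister(&hdm_usb_driver);
         }
         module_exit(hdm_usb_exit);
         #endif

         /* after: one line does the same (this is what module_usb_driver() expands to) */
         module_driver(hdm_usb_driver, usb_register, usb_deregister);
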
       Signed-off-by: Alex Briskin <br.shurik@gmail.com>
       Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>