1. 20 Oct, 2020 9 commits
    • xen/pciback: use lateeoi irq binding · c2711441
      Juergen Gross authored
      In order to reduce the chance of the system becoming unresponsive due
      to event storms triggered by a misbehaving pcifront, use the lateeoi
      irq binding for pciback and unmask the event channel only just before
      leaving the event handling function.
      
      Restructure the handling to support that scheme. Basically an event can
      come in for two reasons: either a normal request for a pciback action,
      which is handled in a worker, or a notification that the guest has
      finished an AER request which was issued by pciback.
      
      When an AER request is issued to the guest while a normal pciback action
      is currently active, issue an EOI early in order to be able to receive
      another event when the AER request has been finished by the guest.
      
      Let the worker processing the normal requests run until no further
      request is pending, instead of starting a new worker in that case.
      Issue the EOI only just before leaving the worker.
      
      This scheme allows dropping the call to the generic function
      xen_pcibk_test_and_schedule_op() after processing any request, as
      the handling of both request types is now separated more cleanly.
      
      This is part of XSA-332.
      
      Cc: stable@vger.kernel.org
      Reported-by: Julien Grall <julien@xen.org>
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Wei Liu <wl@xen.org>
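      For illustration, a minimal sketch of the worker scheme described above,
      with hypothetical names (demo_pcibk_dev, demo_handle_one_request) rather
      than the real pciback structures, assuming the xen_irq_lateeoi()
      interface added by the framework patch in this series:

        #include <linux/kernel.h>
        #include <linux/workqueue.h>
        #include <linux/bitops.h>
        #include <xen/events.h>

        /* Hypothetical per-device state; not the real xen_pcibk_device layout. */
        struct demo_pcibk_dev {
                struct work_struct op_work;
                unsigned long flags;            /* _DEMO_OP_PENDING: request waiting */
                int evtchn_irq;                 /* irq bound with the lateeoi binding */
        };
        #define _DEMO_OP_PENDING 0

        static void demo_handle_one_request(struct demo_pcibk_dev *pdev) { }

        static void demo_op_worker(struct work_struct *work)
        {
                struct demo_pcibk_dev *pdev =
                        container_of(work, struct demo_pcibk_dev, op_work);

                /* Run until no further request is pending instead of scheduling
                 * a new worker per event; an AER request issued to the guest
                 * while a normal action is active gets an early EOI (not shown). */
                while (test_and_clear_bit(_DEMO_OP_PENDING, &pdev->flags))
                        demo_handle_one_request(pdev);

                /* Issue the EOI only just before leaving the worker. */
                xen_irq_lateeoi(pdev->evtchn_irq, 0);
        }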
    • xen/pvcallsback: use lateeoi irq binding · c8d647a3
      Juergen Gross authored
      In order to reduce the chance of the system becoming unresponsive due
      to event storms triggered by a misbehaving pvcallsfront, use the lateeoi
      irq binding for pvcallsback and unmask the event channel only after
      handling all write requests, which are the ones coming in via an irq.
      
      This requires modifying the logic a little bit: instead of requiring an
      event for each write request, keep the ioworker running until no further
      data to be processed is found on the ring page.
      
      This is part of XSA-332.
      
      Cc: stable@vger.kernel.org
      Reported-by: Julien Grall <julien@xen.org>
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
      Reviewed-by: Wei Liu <wl@xen.org>
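      For illustration, a sketch of an ioworker of this shape (hypothetical
      names, not the real pvcalls code; demo_ring_has_data() stands in for the
      driver's check for unconsumed data on the ring page):

        #include <linux/kernel.h>
        #include <linux/workqueue.h>
        #include <xen/events.h>

        /* Hypothetical connection state; not the real sock_mapping layout. */
        struct demo_sock_mapping {
                struct work_struct ioworker;
                int irq;                        /* bound with the lateeoi binding */
        };

        /* Hypothetical helpers standing in for the real ring accessors. */
        static bool demo_ring_has_data(struct demo_sock_mapping *map) { return false; }
        static void demo_handle_write(struct demo_sock_mapping *map) { }

        static void demo_ioworker(struct work_struct *work)
        {
                struct demo_sock_mapping *map =
                        container_of(work, struct demo_sock_mapping, ioworker);

                /* No event per write request: keep running until no further
                 * data is found on the ring page. */
                while (demo_ring_has_data(map))
                        demo_handle_write(map);

                /* Unmask only after all write requests have been handled. */
                xen_irq_lateeoi(map->irq, 0);
        }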
    • xen/scsiback: use lateeoi irq binding · 86991b6e
      Juergen Gross authored
      In order to reduce the chance of the system becoming unresponsive due
      to event storms triggered by a misbehaving scsifront, use the lateeoi
      irq binding for scsiback and unmask the event channel only just before
      leaving the event handling function.
      
      In case of a ring protocol error, don't issue an EOI in order to avoid
      the possibility of using that for producing an event storm. This at the
      same time results in no further calls of scsiback_irq_fn(), so the
      ring_error struct member can be dropped and scsiback_do_cmd_fn() can
      signal the protocol error via a negative return value.
      
      This is part of XSA-332.
      
      Cc: stable@vger.kernel.org
      Reported-by: Julien Grall <julien@xen.org>
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Wei Liu <wl@xen.org>
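      For illustration, a sketch of the error handling described above
      (hypothetical names; demo_do_cmd_fn() stands in for scsiback_do_cmd_fn()
      and its negative return on a protocol error):

        #include <linux/interrupt.h>
        #include <xen/events.h>

        struct demo_vscsi_info {
                int irq;
        };

        /* Hypothetical: returns the number of requests processed, or a
         * negative value on a ring protocol error. */
        static int demo_do_cmd_fn(struct demo_vscsi_info *info) { return 0; }

        static irqreturn_t demo_scsiback_irq_fn(int irq, void *dev_id)
        {
                struct demo_vscsi_info *info = dev_id;
                int rc = demo_do_cmd_fn(info);

                if (rc >= 0)
                        /* Unmask; flag the EOI as spurious if nothing was done. */
                        xen_irq_lateeoi(irq, rc ? 0 : XEN_EOI_FLAG_SPURIOUS);
                /* On a protocol error no EOI is issued, so the still masked
                 * event channel cannot be used to produce an event storm. */

                return IRQ_HANDLED;
        }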
    • xen/netback: use lateeoi irq binding · 23025393
      Juergen Gross authored
      In order to reduce the chance of the system becoming unresponsive due
      to event storms triggered by a misbehaving netfront, use the lateeoi
      irq binding for netback and unmask the event channel only just before
      going to sleep waiting for new events.
      
      Make sure not to issue an EOI when none is pending by adding an
      eoi_pending element to struct xenvif_queue.
      
      When no request has been consumed, set the spurious flag when sending
      the EOI for an interrupt.
      
      This is part of XSA-332.
      
      Cc: stable@vger.kernel.org
      Reported-by: Julien Grall <julien@xen.org>
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Wei Liu <wl@xen.org>
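      For illustration, a sketch of the eoi_pending idea (hypothetical
      demo_queue instead of the real struct xenvif_queue; the real driver
      defers the work to its NAPI poll and kthreads and tracks the pending EOI
      per interrupt source):

        #include <linux/interrupt.h>
        #include <linux/atomic.h>
        #include <xen/events.h>

        /* Hypothetical queue state; not the real struct xenvif_queue. */
        struct demo_queue {
                atomic_t eoi_pending;           /* nonzero while an EOI is owed */
                int tx_irq;
        };

        /* Interrupt handler: only record that an EOI is now owed and kick the
         * deferred processing (NAPI poll or kthread in the real driver). */
        static irqreturn_t demo_tx_interrupt(int irq, void *dev_id)
        {
                struct demo_queue *queue = dev_id;

                atomic_set(&queue->eoi_pending, 1);
                /* ...schedule the deferred tx processing here... */
                return IRQ_HANDLED;
        }

        /* Called by the deferred processing once it runs out of work. */
        static void demo_tx_done(struct demo_queue *queue, unsigned int consumed)
        {
                /* Never issue an EOI when none is pending; flag it as
                 * spurious when no request has been consumed. */
                if (atomic_xchg(&queue->eoi_pending, 0))
                        xen_irq_lateeoi(queue->tx_irq,
                                        consumed ? 0 : XEN_EOI_FLAG_SPURIOUS);
        }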
    • xen/blkback: use lateeoi irq binding · 01263a1f
      Juergen Gross authored
      In order to reduce the chance of the system becoming unresponsive due
      to event storms triggered by a misbehaving blkfront, use the lateeoi
      irq binding for blkback and unmask the event channel only after
      processing all pending requests.
      
      As the thread processing requests is also used to do purging work at
      regular intervals, an EOI may be sent only after having received an
      event. If there was no pending I/O request, flag the EOI as spurious.
      
      This is part of XSA-332.
      
      Cc: stable@vger.kernel.org
      Reported-by: Julien Grall <julien@xen.org>
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Wei Liu <wl@xen.org>
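      For illustration, a sketch of a request thread of this shape
      (hypothetical names, not the real xen_blkif_ring code;
      demo_do_block_io() stands in for the request processing and returns the
      number of requests handled):

        #include <linux/kernel.h>
        #include <linux/kthread.h>
        #include <linux/wait.h>
        #include <linux/jiffies.h>
        #include <linux/atomic.h>
        #include <xen/events.h>

        /* Hypothetical per-ring state. */
        struct demo_ring {
                wait_queue_head_t wq;
                atomic_t event_received;        /* set by the irq handler */
                int irq;
        };

        static unsigned int demo_do_block_io(struct demo_ring *ring) { return 0; }

        static int demo_blkback_thread(void *arg)
        {
                struct demo_ring *ring = arg;

                while (!kthread_should_stop()) {
                        /* Wake up on an event or periodically for purge work. */
                        wait_event_interruptible_timeout(ring->wq,
                                        atomic_read(&ring->event_received) ||
                                        kthread_should_stop(), HZ);

                        /* ...do the periodic purging work here... */

                        /* Send an EOI only after actually having received an
                         * event; mark it spurious if no request was pending. */
                        if (atomic_xchg(&ring->event_received, 0))
                                xen_irq_lateeoi(ring->irq,
                                                demo_do_block_io(ring) ?
                                                0 : XEN_EOI_FLAG_SPURIOUS);
                }
                return 0;
        }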
    • xen/events: add a new "late EOI" evtchn framework · 54c9de89
      Juergen Gross authored
      In order to avoid tight event channel related IRQ loops, add a new
      framework for "late EOI" handling: the IRQ the event channel is bound
      to will be masked until the event has been handled and the related
      driver is capable of handling another event. The driver is responsible
      for unmasking the event channel via the new function xen_irq_lateeoi().
      
      This is similar to binding an event channel to a threaded IRQ, but
      without having to structure the driver accordingly.
      
      In order to support a future special handling in case a rogue guest
      is sending lots of unsolicited events, add a flag to xen_irq_lateeoi()
      which can be set by the caller to indicate the event was a spurious
      one.
      
      This is part of XSA-332.
      
      Cc: stable@vger.kernel.org
      Reported-by: Julien Grall <julien@xen.org>
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
      Reviewed-by: Wei Liu <wl@xen.org>
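      For illustration, a sketch of how a backend driver uses the new
      framework (the handler and demo_* names are hypothetical; the
      bind_evtchn_to_irqhandler_lateeoi() signature shown is the one added by
      this series and may differ slightly between kernel versions):

        #include <linux/interrupt.h>
        #include <xen/events.h>

        /* The event channel stays masked after delivery until the driver
         * calls xen_irq_lateeoi(), so a misbehaving frontend cannot force a
         * tight IRQ loop. */
        static irqreturn_t demo_backend_interrupt(int irq, void *dev_id)
        {
                unsigned int work = 0;

                /* ...process everything currently on the ring, counting it... */

                /* Unmask again; report a spurious event when it carried no
                 * work, so the core can react to rogue, event-storming guests. */
                xen_irq_lateeoi(irq, work ? 0 : XEN_EOI_FLAG_SPURIOUS);
                return IRQ_HANDLED;
        }

        static int demo_bind(evtchn_port_t evtchn, void *dev_id)
        {
                /* lateeoi variant of the usual binding helper. */
                return bind_evtchn_to_irqhandler_lateeoi(evtchn,
                                demo_backend_interrupt, 0, "demo-backend", dev_id);
        }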
    • xen/events: fix race in evtchn_fifo_unmask() · f0133719
      Juergen Gross authored
      Unmasking a fifo event channel can result in unmasking it twice, once
      directly in the kernel and once via a hypercall in case the event was
      pending.
      
      Fix that by doing the local unmask only if the event is not pending.
      
      This is part of XSA-332.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
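      For illustration, a conceptual sketch of the fixed unmask logic (not the
      actual events_fifo.c code; the demo_* word layout only mirrors the
      PENDING/MASKED bits of the FIFO ABI):

        #include <linux/types.h>
        #include <linux/compiler.h>
        #include <linux/atomic.h>
        #include <asm/xen/hypercall.h>
        #include <xen/interface/event_channel.h>

        typedef u32 demo_event_word_t;
        #define DEMO_FIFO_PENDING       (1U << 31)
        #define DEMO_FIFO_MASKED        (1U << 30)

        static void demo_fifo_unmask(demo_event_word_t *word, evtchn_port_t port)
        {
                demo_event_word_t new, old, w = READ_ONCE(*word);

                /* Clear MASKED locally only as long as PENDING is not set. */
                do {
                        if (w & DEMO_FIFO_PENDING)
                                break;
                        old = w;
                        new = w & ~DEMO_FIFO_MASKED;
                } while ((w = cmpxchg(word, old, new)) != old);

                if (w & DEMO_FIFO_PENDING) {
                        /* A pending event is unmasked by Xen instead, so it is
                         * not unmasked twice (locally and via the hypercall). */
                        struct evtchn_unmask unmask = { .port = port };

                        HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask);
                }
        }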
    • xen/events: add a proper barrier to 2-level uevent unmasking · 4d3fe31b
      Juergen Gross authored
      A follow-up patch will require a certain write to happen before an
      event channel is unmasked.
      
      While the memory barrier is not strictly necessary for all the callers,
      the main one will need it. In order to avoid an extra memory barrier
      when using fifo event channels, mandate evtchn_unmask() to provide
      write ordering.
      
      The 2-level event handling unmask operation is missing an appropriate
      barrier, so add it. Fifo event channels are fine in this regard due to
      using sync_cmpxchg().
      
      This is part of XSA-332.
      
      Cc: stable@vger.kernel.org
      Suggested-by: Julien Grall <julien@xen.org>
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Wei Liu <wl@xen.org>
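      For illustration, a conceptual sketch of the ordering requirement (the
      demo_* names are hypothetical and stand in for the real 2-level mask
      array and the caller's lateeoi bookkeeping):

        #include <linux/kernel.h>
        #include <linux/bitops.h>
        #include <asm/barrier.h>

        static unsigned long demo_evtchn_mask[1];  /* stand-in for the shared mask */

        /* evtchn_unmask() now has to order all of the caller's earlier writes
         * before the unmask becomes visible; the 2-level ABI needs an explicit
         * barrier for that, while FIFO already gets it from sync_cmpxchg(). */
        static void demo_2l_unmask(unsigned int port)
        {
                smp_mb();       /* order prior writes before unmasking */
                clear_bit(port, demo_evtchn_mask);
                /* ...re-check for a pending event and kick the upcall if set... */
        }

        /* Caller side: this write must be visible to other CPUs before the
         * event channel can fire again. */
        static void demo_lateeoi(unsigned int port, int *eoi_state)
        {
                *eoi_state = 0;
                demo_2l_unmask(port);
        }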
    • xen/events: avoid removing an event channel while handling it · 073d0552
      Juergen Gross authored
      Today it can happen that an event channel is being removed from the
      system while the event handling loop is active. This can lead to a
      race resulting in crashes or WARN() splats when trying to access the
      irq_info structure related to the event channel.
      
      Fix this problem by using a rwlock taken as reader in the event
      handling loop and as writer when deallocating the irq_info structure.
      
      As the observed problem was a NULL dereference in evtchn_from_irq(),
      make this function more robust against races by testing that the
      irq_info pointer is not NULL before dereferencing it.
      
      And finally make all accesses to evtchn_to_irq[row][col] atomic in
      order to avoid seeing partial updates of an array element in irq
      handling. Note that irq handling can be entered only for event channels
      which have been valid before, so any unpopulated row isn't a problem
      in this regard, as rows are only ever added and never removed.
      
      This is XSA-331.
      
      Cc: stable@vger.kernel.org
      Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
      Reported-by: Jinoh Kang <luke1337@theori.io>
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
      Reviewed-by: Wei Liu <wl@xen.org>
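      For illustration, a conceptual sketch of the locking scheme and the
      atomic array accesses (demo_* names are hypothetical; the real code
      protects the irq_info teardown and uses READ_ONCE()/WRITE_ONCE() on
      evtchn_to_irq[row][col]):

        #include <linux/spinlock.h>
        #include <linux/compiler.h>

        static DEFINE_RWLOCK(demo_evtchn_rwlock);
        static int demo_evtchn_to_irq_row[64];   /* one row of the mapping */

        /* Event handling loop (runs in the upcall): taken as reader, so a
         * concurrent removal has to wait until the loop is done. */
        static void demo_handle_events(void)
        {
                read_lock(&demo_evtchn_rwlock);
                /* ...scan pending event channels; READ_ONCE() on the mapping
                 * entries avoids seeing a partially updated element... */
                read_unlock(&demo_evtchn_rwlock);
        }

        /* Removal path: taken as writer with interrupts off, because the
         * reader above runs in interrupt context. */
        static void demo_remove_evtchn(unsigned int col)
        {
                unsigned long flags;

                write_lock_irqsave(&demo_evtchn_rwlock, flags);
                WRITE_ONCE(demo_evtchn_to_irq_row[col], -1);
                /* ...free the irq_info of this event channel... */
                write_unlock_irqrestore(&demo_evtchn_rwlock, flags);
        }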
  2. 04 Oct, 2020 7 commits
  3. 03 Oct, 2020 10 commits
  4. 02 Oct, 2020 14 commits