Commit e4b67292 authored by Roger Pau Monné, committed by Kamal Mostafa

xen-blkback: read from indirect descriptors only once

commit 18779149 upstream.

Since indirect descriptors are in memory shared with the frontend, the
frontend could alter the first_sect and last_sect values after they have
been validated but before they are recorded in the request.  This may
result in I/O requests that overflow the foreign page, possibly
overwriting local pages when the I/O request is executed.

When parsing indirect descriptors, only read first_sect and last_sect
once.

This is part of XSA155.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
[ luis: backported to 3.16:
  - Use ACCESS_ONCE instead of READ_ONCE
  - Use PAGE_SIZE instead of XEN_PAGE_SIZE ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
parent 123ed94d
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -853,6 +853,8 @@ static int xen_blkbk_parse_indirect(struct blkif_request *req,
 		goto unmap;
 
 	for (n = 0, i = 0; n < nseg; n++) {
+		uint8_t first_sect, last_sect;
+
 		if ((n % SEGS_PER_INDIRECT_FRAME) == 0) {
 			/* Map indirect segments */
 			if (segments)
@@ -860,15 +862,18 @@ static int xen_blkbk_parse_indirect(struct blkif_request *req,
 			segments = kmap_atomic(pages[n/SEGS_PER_INDIRECT_FRAME]->page);
 		}
 		i = n % SEGS_PER_INDIRECT_FRAME;
 		pending_req->segments[n]->gref = segments[i].gref;
-		seg[n].nsec = segments[i].last_sect -
-			segments[i].first_sect + 1;
-		seg[n].offset = (segments[i].first_sect << 9);
-		if ((segments[i].last_sect >= (PAGE_SIZE >> 9)) ||
-		    (segments[i].last_sect < segments[i].first_sect)) {
+
+		first_sect = ACCESS_ONCE(segments[i].first_sect);
+		last_sect = ACCESS_ONCE(segments[i].last_sect);
+		if (last_sect >= (PAGE_SIZE >> 9) || last_sect < first_sect) {
 			rc = -EINVAL;
 			goto unmap;
 		}
+
+		seg[n].nsec = last_sect - first_sect + 1;
+		seg[n].offset = first_sect << 9;
+
 		preq->nr_sects += seg[n].nsec;
 	}