Commit 237045fc authored by Linus Torvalds

Merge branch 'for-4.6/drivers' of git://git.kernel.dk/linux-block

Pull block driver updates from Jens Axboe:
 "This is the block driver pull request for this merge window.  It sits
  on top of for-4.6/core, which was just sent out.

  This contains:

   - A set of fixes for lightnvm.  One from Alan, fixing an overflow,
     and the rest from the usual suspects, Javier and Matias.

   - A set of fixes for nbd from Markus and Dan, and a fixup from Arnd
     for correct usage of the signed 64-bit divider.

   - A set of bug fixes for the Micron mtip32xx, from Asai.

   - A fix for the brd discard handling from Bart.

   - Update the maintainers entry for cciss, since that hardware has
     transferred ownership.

   - Three bug fixes for bcache from Eric Wheeler.

   - Set of fixes for xen-blk{back,front} from Jan and Konrad.

   - Removal of the cpqarray driver.  It has been disabled in Kconfig
     since 2013, and we were initially scheduled to remove it in 3.15.

   - Various updates and fixes for NVMe, with the most important being:

        - Removal of the per-device NVMe thread, replacing that with a
          watchdog timer instead. From Christoph.

        - Exposing the namespace WWID through sysfs, from Keith.

        - Set of cleanups from Ming Lin.

        - Logging the controller device name instead of the underlying
          PCI device name, from Sagi.

        - And a bunch of fixes and optimizations from the usual suspects
          in this area"

* 'for-4.6/drivers' of git://git.kernel.dk/linux-block: (49 commits)
  NVMe: Expose ns wwid through single sysfs entry
  drivers:block: cpqarray clean up
  brd: Fix discard request processing
  cpqarray: remove it from the kernel
  cciss: update MAINTAINERS
  NVMe: Remove unused sq_head read in completion path
  bcache: fix cache_set_flush() NULL pointer dereference on OOM
  bcache: cleaned up error handling around register_cache()
  bcache: fix race of writeback thread starting before complete initialization
  NVMe: Create discard zero quirk white list
  nbd: use correct div_s64 helper
  mtip32xx: remove unneeded variable in mtip_cmd_timeout()
  lightnvm: generalize rrpc ppa calculations
  lightnvm: remove struct nvm_dev->total_blocks
  lightnvm: rename ->nr_pages to ->nr_sects
  lightnvm: update closed list outside of intr context
  xen/blback: Fit the important information of the thread in 17 characters
  lightnvm: fold get bb tbl when using dual/quad plane mode
  lightnvm: fix up nonsensical configure overrun checking
  xen-blkback: advertise indirect segment support earlier
  ...
parents 35d88d97 118472ab

Documentation/blockdev/cpqarray.txt (deleted):
This driver is for Compaq's SMART2 Intelligent Disk Array Controllers.
Supported Cards:
----------------
This driver is known to work with the following cards:
* SMART (EISA)
* SMART-2/E (EISA)
* SMART-2/P
* SMART-2DH
* SMART-2SL
* SMART-221
* SMART-3100ES
* SMART-3200
* Integrated Smart Array Controller
* SA 4200
* SA 4250ES
* SA 431
* RAID LC2 Controller
It should also work with some really old Disk array adapters, but I am
unable to test against these cards:
* IDA
* IDA-2
* IAES
EISA Controllers:
-----------------
If you want to use an EISA controller you'll have to supply some
modprobe/lilo parameters. If the driver is compiled into the kernel, you
must give it the controller's IO port address at boot time (it is not
necessary to specify the IRQ). For example, if you had two SMART-2/E
controllers, in EISA slots 1 and 2, you'd give it a boot argument like
this:
smart2=0x1000,0x2000
If you were loading the driver as a module, you'd load it like this:
modprobe cpqarray eisa=0x1000,0x2000
You can use EISA and PCI adapters at the same time.
Device Naming:
--------------
You need some entries in /dev for the ida device. MAKEDEV in the /dev
directory can make device nodes for you automatically. The device setup is
as follows:
Major numbers:
72 ida0
73 ida1
74 ida2
75 ida3
76 ida4
77 ida5
78 ida6
79 ida7
Minor numbers:
  b7 b6 b5 b4 b3 b2 b1 b0
  |----+----| |----+----|
       |          |
       |          +--------- Partition ID (0=wholedev, 1-15 partition)
       |
       +-------------------- Logical Volume number
The device naming scheme is:
/dev/ida/c0d0 Controller 0, disk 0, whole device
/dev/ida/c0d0p1 Controller 0, disk 0, partition 1
/dev/ida/c0d0p2 Controller 0, disk 0, partition 2
/dev/ida/c0d0p3 Controller 0, disk 0, partition 3
/dev/ida/c1d1 Controller 1, disk 1, whole device
/dev/ida/c1d1p1 Controller 1, disk 1, partition 1
/dev/ida/c1d1p2 Controller 1, disk 1, partition 2
/dev/ida/c1d1p3 Controller 1, disk 1, partition 3
Changelog:
==========
10-28-2004 : General cleanup, syntax fixes for in-kernel driver version.
James Nelson <james4765@gmail.com>
1999 : Original Document
MAINTAINERS:
@@ -5016,12 +5016,6 @@ T: git git://linuxtv.org/anttip/media_tree.git
 S:	Maintained
 F:	drivers/media/dvb-frontends/hd29l2*

-HEWLETT-PACKARD SMART2 RAID DRIVER
-L:	iss_storagedev@hp.com
-S:	Orphan
-F:	Documentation/blockdev/cpqarray.txt
-F:	drivers/block/cpqarray.*
-
 HEWLETT-PACKARD SMART ARRAY RAID DRIVER (hpsa)
 M:	Don Brace <don.brace@microsemi.com>
 L:	iss_storagedev@hp.com
@@ -5034,9 +5028,9 @@ F: include/linux/cciss*.h
 F:	include/uapi/linux/cciss*.h

 HEWLETT-PACKARD SMART CISS RAID DRIVER (cciss)
-M:	Don Brace <don.brace@pmcs.com>
+M:	Don Brace <don.brace@microsemi.com>
 L:	iss_storagedev@hp.com
-L:	storagedev@pmcs.com
+L:	esc.storagedev@microsemi.com
 L:	linux-scsi@vger.kernel.org
 S:	Supported
 F:	Documentation/blockdev/cciss.txt
...
drivers/block/Kconfig:
@@ -110,16 +110,6 @@ source "drivers/block/mtip32xx/Kconfig"
 source "drivers/block/zram/Kconfig"

-config BLK_CPQ_DA
-	tristate "Compaq SMART2 support"
-	depends on PCI && VIRT_TO_BUS && 0
-	help
-	  This is the driver for Compaq Smart Array controllers.  Everyone
-	  using these boards should say Y here.  See the file
-	  <file:Documentation/blockdev/cpqarray.txt> for the current list of
-	  boards supported by this driver, and for further information on the
-	  use of this driver.
-
 config BLK_CPQ_CISS_DA
 	tristate "Compaq Smart Array 5xxx support"
 	depends on PCI
...
drivers/block/Makefile:
@@ -15,7 +15,6 @@ obj-$(CONFIG_ATARI_FLOPPY)	+= ataflop.o
 obj-$(CONFIG_AMIGA_Z2RAM)	+= z2ram.o
 obj-$(CONFIG_BLK_DEV_RAM)	+= brd.o
 obj-$(CONFIG_BLK_DEV_LOOP)	+= loop.o
-obj-$(CONFIG_BLK_CPQ_DA)	+= cpqarray.o
 obj-$(CONFIG_BLK_CPQ_CISS_DA)	+= cciss.o
 obj-$(CONFIG_BLK_DEV_DAC960)	+= DAC960.o
 obj-$(CONFIG_XILINX_SYSACE)	+= xsysace.o
...
drivers/block/brd.c:
@@ -341,7 +341,7 @@ static blk_qc_t brd_make_request(struct request_queue *q, struct bio *bio)
 	if (unlikely(bio->bi_rw & REQ_DISCARD)) {
 		if (sector & ((PAGE_SIZE >> SECTOR_SHIFT) - 1) ||
-		    bio->bi_iter.bi_size & PAGE_MASK)
+		    bio->bi_iter.bi_size & ~PAGE_MASK)
 			goto io_error;
 		discard_from_brd(brd, sector, bio->bi_iter.bi_size);
 		goto out;
...
drivers/block/cpqarray.h (deleted):
/*
* Disk Array driver for Compaq SMART2 Controllers
* Copyright 1998 Compaq Computer Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*
* Questions/Comments/Bugfixes to iss_storagedev@hp.com
*
* If you want to make changes, improve or add functionality to this
* driver, you'll probably need the Compaq Array Controller Interface
 * Specification (Document number ECG086/1198)
*/
#ifndef CPQARRAY_H
#define CPQARRAY_H
#ifdef __KERNEL__
#include <linux/blkdev.h>
#include <linux/slab.h>
#include <linux/proc_fs.h>
#include <linux/timer.h>
#endif
#include "ida_cmd.h"
#define IO_OK 0
#define IO_ERROR 1
#define NWD 16
#define NWD_SHIFT 4
#define IDA_TIMER (5*HZ)
#define IDA_TIMEOUT (10*HZ)
#define MISC_NONFATAL_WARN 0x01
typedef struct {
unsigned blk_size;
unsigned nr_blks;
unsigned cylinders;
unsigned heads;
unsigned sectors;
int usage_count;
} drv_info_t;
#ifdef __KERNEL__
struct ctlr_info;
typedef struct ctlr_info ctlr_info_t;
struct access_method {
void (*submit_command)(ctlr_info_t *h, cmdlist_t *c);
void (*set_intr_mask)(ctlr_info_t *h, unsigned long val);
unsigned long (*fifo_full)(ctlr_info_t *h);
unsigned long (*intr_pending)(ctlr_info_t *h);
unsigned long (*command_completed)(ctlr_info_t *h);
};
struct board_type {
__u32 board_id;
char *product_name;
struct access_method *access;
};
struct ctlr_info {
int ctlr;
char devname[8];
__u32 log_drv_map;
__u32 drv_assign_map;
__u32 drv_spare_map;
__u32 mp_failed_drv_map;
char firm_rev[4];
int ctlr_sig;
int log_drives;
int phys_drives;
struct pci_dev *pci_dev; /* NULL if EISA */
__u32 board_id;
char *product_name;
void __iomem *vaddr;
unsigned long paddr;
unsigned long io_mem_addr;
unsigned long io_mem_length;
int intr;
int usage_count;
drv_info_t drv[NWD];
struct proc_dir_entry *proc;
struct access_method access;
cmdlist_t *reqQ;
cmdlist_t *cmpQ;
cmdlist_t *cmd_pool;
dma_addr_t cmd_pool_dhandle;
unsigned long *cmd_pool_bits;
struct request_queue *queue;
spinlock_t lock;
unsigned int Qdepth;
unsigned int maxQsinceinit;
unsigned int nr_requests;
unsigned int nr_allocs;
unsigned int nr_frees;
struct timer_list timer;
unsigned int misc_tflags;
};
#define IDA_LOCK(i) (&hba[i]->lock)
#endif
#endif /* CPQARRAY_H */

drivers/block/ida_cmd.h (deleted):
/*
* Disk Array driver for Compaq SMART2 Controllers
* Copyright 1998 Compaq Computer Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*
* Questions/Comments/Bugfixes to iss_storagedev@hp.com
*
*/
#ifndef ARRAYCMD_H
#define ARRAYCMD_H
#include <asm/types.h>
#if 0
#include <linux/blkdev.h>
#endif
/* for the Smart Array 42XX cards */
#define S42XX_REQUEST_PORT_OFFSET 0x40
#define S42XX_REPLY_INTR_MASK_OFFSET 0x34
#define S42XX_REPLY_PORT_OFFSET 0x44
#define S42XX_INTR_STATUS 0x30
#define S42XX_INTR_OFF 0x08
#define S42XX_INTR_PENDING 0x08
#define COMMAND_FIFO 0x04
#define COMMAND_COMPLETE_FIFO 0x08
#define INTR_MASK 0x0C
#define INTR_STATUS 0x10
#define INTR_PENDING 0x14
#define FIFO_NOT_EMPTY 0x01
#define FIFO_NOT_FULL 0x02
#define BIG_PROBLEM 0x40
#define LOG_NOT_CONF 2
#pragma pack(1)
typedef struct {
__u32 size;
__u32 addr;
} sg_t;
#define RCODE_NONFATAL 0x02
#define RCODE_FATAL 0x04
#define RCODE_INVREQ 0x10
typedef struct {
__u16 next;
__u8 cmd;
__u8 rcode;
__u32 blk;
__u16 blk_cnt;
__u8 sg_cnt;
__u8 reserved;
} rhdr_t;
#define SG_MAX 32
typedef struct {
rhdr_t hdr;
sg_t sg[SG_MAX];
__u32 bp;
} rblk_t;
typedef struct {
__u8 unit;
__u8 prio;
__u16 size;
} chdr_t;
#define CMD_RWREQ 0x00
#define CMD_IOCTL_PEND 0x01
#define CMD_IOCTL_DONE 0x02
typedef struct cmdlist {
chdr_t hdr;
rblk_t req;
__u32 size;
int retry_cnt;
__u32 busaddr;
int ctlr;
struct cmdlist *prev;
struct cmdlist *next;
struct request *rq;
int type;
} cmdlist_t;
#define ID_CTLR 0x11
typedef struct {
__u8 nr_drvs;
__u32 cfg_sig;
__u8 firm_rev[4];
__u8 rom_rev[4];
__u8 hw_rev;
__u32 bb_rev;
__u32 drv_present_map;
__u32 ext_drv_map;
__u32 board_id;
__u8 cfg_error;
__u32 non_disk_bits;
__u8 bad_ram_addr;
__u8 cpu_rev;
__u8 pdpi_rev;
__u8 epic_rev;
__u8 wcxc_rev;
__u8 marketing_rev;
__u8 ctlr_flags;
__u8 host_flags;
__u8 expand_dis;
__u8 scsi_chips;
__u32 max_req_blocks;
__u32 ctlr_clock;
__u8 drvs_per_bus;
__u16 big_drv_present_map[8];
__u16 big_ext_drv_map[8];
__u16 big_non_disk_map[8];
__u16 task_flags;
__u8 icl_bus;
__u8 red_modes;
__u8 cur_red_mode;
__u8 red_ctlr_stat;
__u8 red_fail_reason;
__u8 reserved[403];
} id_ctlr_t;
typedef struct {
__u16 cyl;
__u8 heads;
__u8 xsig;
__u8 psectors;
__u16 wpre;
__u8 maxecc;
__u8 drv_ctrl;
__u16 pcyls;
__u8 pheads;
__u16 landz;
__u8 sect_per_track;
__u8 cksum;
} drv_param_t;
#define ID_LOG_DRV 0x10
typedef struct {
__u16 blk_size;
__u32 nr_blks;
drv_param_t drv;
__u8 fault_tol;
__u8 reserved;
__u8 bios_disable;
} id_log_drv_t;
#define ID_LOG_DRV_EXT 0x18
typedef struct {
__u32 log_drv_id;
__u8 log_drv_label[64];
__u8 reserved[418];
} id_log_drv_ext_t;
#define SENSE_LOG_DRV_STAT 0x12
typedef struct {
__u8 status;
__u32 fail_map;
__u16 read_err[32];
__u16 write_err[32];
__u8 drv_err_data[256];
__u8 drq_timeout[32];
__u32 blks_to_recover;
__u8 drv_recovering;
__u16 remap_cnt[32];
__u32 replace_drv_map;
__u32 act_spare_map;
__u8 spare_stat;
__u8 spare_repl_map[32];
__u32 repl_ok_map;
__u8 media_exch;
__u8 cache_fail;
__u8 expn_fail;
__u8 unit_flags;
__u16 big_fail_map[8];
__u16 big_remap_map[128];
__u16 big_repl_map[8];
__u16 big_act_spare_map[8];
__u8 big_spar_repl_map[128];
__u16 big_repl_ok_map[8];
__u8 big_drv_rebuild;
__u8 reserved[36];
} sense_log_drv_stat_t;
#define START_RECOVER 0x13
#define ID_PHYS_DRV 0x15
typedef struct {
__u8 scsi_bus;
__u8 scsi_id;
__u16 blk_size;
__u32 nr_blks;
__u32 rsvd_blks;
__u8 drv_model[40];
__u8 drv_sn[40];
__u8 drv_fw[8];
__u8 scsi_iq_bits;
__u8 compaq_drv_stmp;
__u8 last_fail;
__u8 phys_drv_flags;
__u8 phys_drv_flags1;
__u8 scsi_lun;
__u8 phys_drv_flags2;
__u8 reserved;
__u32 spi_speed_rules;
__u8 phys_connector[2];
__u8 phys_box_on_bus;
__u8 phys_bay_in_box;
} id_phys_drv_t;
#define BLINK_DRV_LEDS 0x16
typedef struct {
__u32 blink_duration;
__u32 reserved;
__u8 blink[256];
__u8 reserved1[248];
} blink_drv_leds_t;
#define SENSE_BLINK_LEDS 0x17
typedef struct {
__u32 blink_duration;
__u32 btime_elap;
__u8 blink[256];
__u8 reserved1[248];
} sense_blink_leds_t;
#define IDA_READ 0x20
#define IDA_WRITE 0x30
#define IDA_WRITE_MEDIA 0x31
#define RESET_TO_DIAG 0x40
#define DIAG_PASS_THRU 0x41
#define SENSE_CONFIG 0x50
#define SET_CONFIG 0x51
typedef struct {
__u32 cfg_sig;
__u16 compat_port;
__u8 data_dist_mode;
__u8 surf_an_ctrl;
__u16 ctlr_phys_drv;
__u16 log_unit_phys_drv;
__u16 fault_tol_mode;
__u8 phys_drv_param[16];
drv_param_t drv;
__u32 drv_asgn_map;
__u16 dist_factor;
__u32 spare_asgn_map;
__u8 reserved[6];
__u16 os;
__u8 ctlr_order;
__u8 extra_info;
__u32 data_offs;
__u8 parity_backedout_write_drvs;
__u8 parity_dist_mode;
__u8 parity_shift_fact;
__u8 bios_disable_flag;
__u32 blks_on_vol;
__u32 blks_per_drv;
__u8 scratch[16];
__u16 big_drv_map[8];
__u16 big_spare_map[8];
__u8 ss_source_vol;
__u8 mix_drv_cap_range;
struct {
__u16 big_drv_map[8];
__u32 blks_per_drv;
__u16 fault_tol_mode;
__u16 dist_factor;
} MDC_range[4];
__u8 reserved1[248];
} config_t;
#define BYPASS_VOL_STATE 0x52
#define SS_CREATE_VOL 0x53
#define CHANGE_CONFIG 0x54
#define SENSE_ORIG_CONF 0x55
#define REORDER_LOG_DRV 0x56
typedef struct {
__u8 old_units[32];
} reorder_log_drv_t;
#define LABEL_LOG_DRV 0x57
typedef struct {
__u8 log_drv_label[64];
} label_log_drv_t;
#define SS_TO_VOL 0x58
#define SET_SURF_DELAY 0x60
typedef struct {
__u16 delay;
__u8 reserved[510];
} surf_delay_t;
#define SET_OVERHEAT_DELAY 0x61
typedef struct {
__u16 delay;
} overhead_delay_t;
#define SET_MP_DELAY
typedef struct {
__u16 delay;
__u8 reserved[510];
} mp_delay_t;
#define PASSTHRU_A 0x91
typedef struct {
__u8 target;
__u8 bus;
__u8 lun;
__u32 timeout;
__u32 flags;
__u8 status;
__u8 error;
__u8 cdb_len;
__u8 sense_error;
__u8 sense_key;
__u32 sense_info;
__u8 sense_code;
__u8 sense_qual;
__u32 residual;
__u8 reserved[4];
__u8 cdb[12];
} scsi_param_t;
#define RESUME_BACKGROUND_ACTIVITY 0x99
#define SENSE_CONTROLLER_PERFORMANCE 0xa8
#define FLUSH_CACHE 0xc2
#define COLLECT_BUFFER 0xd2
#define READ_FLASH_ROM 0xf6
#define WRITE_FLASH_ROM 0xf7
#pragma pack()
#endif /* ARRAYCMD_H */

drivers/block/ida_ioctl.h (deleted):
/*
* Disk Array driver for Compaq SMART2 Controllers
* Copyright 1998 Compaq Computer Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*
* Questions/Comments/Bugfixes to iss_storagedev@hp.com
*
*/
#ifndef IDA_IOCTL_H
#define IDA_IOCTL_H
#include "ida_cmd.h"
#include "cpqarray.h"
#define IDAGETDRVINFO 0x27272828
#define IDAPASSTHRU 0x28282929
#define IDAGETCTLRSIG 0x29293030
#define IDAREVALIDATEVOLS 0x30303131
#define IDADRIVERVERSION 0x31313232
#define IDAGETPCIINFO 0x32323333
typedef struct _ida_pci_info_struct
{
unsigned char bus;
unsigned char dev_fn;
__u32 board_id;
} ida_pci_info_struct;
/*
* Normally, the ioctl determines the logical unit for this command by
 * the major,minor number of the fd passed to ioctl. If you need to send
 * a command to a different/nonexistent unit (such as during config), you
 * can override the normal behavior by setting the unit valid bit. (Normally,
 * it should be zero.) The controller the command is sent to is still
 * determined by the major number of the open device.
*/
#define UNITVALID 0x80
typedef struct {
__u8 cmd;
__u8 rcode;
__u8 unit;
__u32 blk;
__u16 blk_cnt;
/* currently, sg_cnt is assumed to be 1: only the 0th element of sg is used */
struct {
void __user *addr;
size_t size;
} sg[SG_MAX];
int sg_cnt;
union ctlr_cmds {
drv_info_t drv;
unsigned char buf[1024];
id_ctlr_t id_ctlr;
drv_param_t drv_param;
id_log_drv_t id_log_drv;
id_log_drv_ext_t id_log_drv_ext;
sense_log_drv_stat_t sense_log_drv_stat;
id_phys_drv_t id_phys_drv;
blink_drv_leds_t blink_drv_leds;
sense_blink_leds_t sense_blink_leds;
config_t config;
reorder_log_drv_t reorder_log_drv;
label_log_drv_t label_log_drv;
surf_delay_t surf_delay;
overhead_delay_t overhead_delay;
mp_delay_t mp_delay;
scsi_param_t scsi_param;
} c;
} ida_ioctl_t;
#endif /* IDA_IOCTL_H */
drivers/block/mtip32xx/mtip32xx.h:
@@ -134,16 +134,24 @@ enum {
 	MTIP_PF_EH_ACTIVE_BIT       = 1, /* error handling */
 	MTIP_PF_SE_ACTIVE_BIT       = 2, /* secure erase */
 	MTIP_PF_DM_ACTIVE_BIT       = 3, /* download microcde */
+	MTIP_PF_TO_ACTIVE_BIT       = 9, /* timeout handling */
 	MTIP_PF_PAUSE_IO	= ((1 << MTIP_PF_IC_ACTIVE_BIT) |
 				(1 << MTIP_PF_EH_ACTIVE_BIT) |
 				(1 << MTIP_PF_SE_ACTIVE_BIT) |
-				(1 << MTIP_PF_DM_ACTIVE_BIT)),
+				(1 << MTIP_PF_DM_ACTIVE_BIT) |
+				(1 << MTIP_PF_TO_ACTIVE_BIT)),

 	MTIP_PF_SVC_THD_ACTIVE_BIT  = 4,
 	MTIP_PF_ISSUE_CMDS_BIT      = 5,
 	MTIP_PF_REBUILD_BIT         = 6,
 	MTIP_PF_SVC_THD_STOP_BIT    = 8,

+	MTIP_PF_SVC_THD_WORK	= ((1 << MTIP_PF_EH_ACTIVE_BIT) |
+				  (1 << MTIP_PF_ISSUE_CMDS_BIT) |
+				  (1 << MTIP_PF_REBUILD_BIT) |
+				  (1 << MTIP_PF_SVC_THD_STOP_BIT) |
+				  (1 << MTIP_PF_TO_ACTIVE_BIT)),
+
 	/* below are bit numbers in 'dd_flag' defined in driver_data */
 	MTIP_DDF_SEC_LOCK_BIT	    = 0,
 	MTIP_DDF_REMOVE_PENDING_BIT = 1,
@@ -153,6 +161,7 @@ enum {
 	MTIP_DDF_RESUME_BIT         = 6,
 	MTIP_DDF_INIT_DONE_BIT      = 7,
 	MTIP_DDF_REBUILD_FAILED_BIT = 8,
+	MTIP_DDF_REMOVAL_BIT	    = 9,

 	MTIP_DDF_STOP_IO	= ((1 << MTIP_DDF_REMOVE_PENDING_BIT) |
 				(1 << MTIP_DDF_SEC_LOCK_BIT) |
...
drivers/block/xen-blkback/xenbus.c:
@@ -23,8 +23,7 @@
 #include <xen/grant_table.h>
 #include "common.h"

-/* Enlarge the array size in order to fully show blkback name. */
-#define BLKBACK_NAME_LEN (20)
+/* On the XenBus the max length of 'ring-ref%u'. */
 #define RINGREF_NAME_LEN (20)

 struct backend_info {
@@ -76,7 +75,7 @@ static int blkback_name(struct xen_blkif *blkif, char *buf)
 	else
 		devname  = devpath;

-	snprintf(buf, BLKBACK_NAME_LEN, "blkback.%d.%s", blkif->domid, devname);
+	snprintf(buf, TASK_COMM_LEN, "%d.%s", blkif->domid, devname);
 	kfree(devpath);

 	return 0;
@@ -85,7 +84,7 @@ static int blkback_name(struct xen_blkif *blkif, char *buf)
 static void xen_update_blkif_status(struct xen_blkif *blkif)
 {
 	int err;
-	char name[BLKBACK_NAME_LEN];
+	char name[TASK_COMM_LEN];
 	struct xen_blkif_ring *ring;
 	int i;
@@ -618,6 +617,14 @@ static int xen_blkbk_probe(struct xenbus_device *dev,
 		goto fail;
 	}

+	err = xenbus_printf(XBT_NIL, dev->nodename,
+			    "feature-max-indirect-segments", "%u",
+			    MAX_INDIRECT_SEGMENTS);
+	if (err)
+		dev_warn(&dev->dev,
+			 "writing %s/feature-max-indirect-segments (%d)",
+			 dev->nodename, err);
+
 	/* Multi-queue: advertise how many queues are supported by us.*/
 	err = xenbus_printf(XBT_NIL, dev->nodename,
 			    "multi-queue-max-queues", "%u", xenblk_max_queues);
@@ -849,11 +856,6 @@ static void connect(struct backend_info *be)
 			dev->nodename);
 		goto abort;
 	}
-	err = xenbus_printf(xbt, dev->nodename, "feature-max-indirect-segments", "%u",
-			    MAX_INDIRECT_SEGMENTS);
-	if (err)
-		dev_warn(&dev->dev, "writing %s/feature-max-indirect-segments (%d)",
-			 dev->nodename, err);

 	err = xenbus_printf(xbt, dev->nodename, "sectors", "%llu",
 			   (unsigned long long)vbd_sz(&be->blkif->vbd));
...
drivers/block/xen-blkfront.c:
@@ -125,8 +125,10 @@ static const struct block_device_operations xlvbd_block_fops;
  */
 static unsigned int xen_blkif_max_segments = 32;
-module_param_named(max, xen_blkif_max_segments, int, S_IRUGO);
-MODULE_PARM_DESC(max, "Maximum amount of segments in indirect requests (default is 32)");
+module_param_named(max_indirect_segments, xen_blkif_max_segments, uint,
+		   S_IRUGO);
+MODULE_PARM_DESC(max_indirect_segments,
+		 "Maximum amount of segments in indirect requests (default is 32)");

 static unsigned int xen_blkif_max_queues = 4;
 module_param_named(max_queues, xen_blkif_max_queues, uint, S_IRUGO);
...
drivers/lightnvm/core.c:
@@ -250,7 +250,7 @@ int nvm_set_rqd_ppalist(struct nvm_dev *dev, struct nvm_rq *rqd,
 		return 0;
 	}

-	plane_cnt = (1 << dev->plane_mode);
+	plane_cnt = dev->plane_mode;
 	rqd->nr_pages = plane_cnt * nr_ppas;

 	if (dev->ops->max_phys_sect < rqd->nr_pages)
@@ -463,11 +463,7 @@ static int nvm_core_init(struct nvm_dev *dev)
 	dev->sec_per_lun = dev->sec_per_blk * dev->blks_per_lun;
 	dev->nr_luns = dev->luns_per_chnl * dev->nr_chnls;

-	dev->total_blocks = dev->nr_planes *
-				dev->blks_per_lun *
-				dev->luns_per_chnl *
-				dev->nr_chnls;
-	dev->total_pages = dev->total_blocks * dev->pgs_per_blk;
+	dev->total_secs = dev->nr_luns * dev->sec_per_lun;

 	INIT_LIST_HEAD(&dev->online_targets);
 	mutex_init(&dev->mlock);
@@ -872,20 +868,19 @@ static int nvm_configure_by_str_event(const char *val,

 static int nvm_configure_get(char *buf, const struct kernel_param *kp)
 {
-	int sz = 0;
-	char *buf_start = buf;
+	int sz;
 	struct nvm_dev *dev;

-	buf += sprintf(buf, "available devices:\n");
+	sz = sprintf(buf, "available devices:\n");
 	down_write(&nvm_lock);
 	list_for_each_entry(dev, &nvm_devices, devices) {
-		if (sz > 4095 - DISK_NAME_LEN)
+		if (sz > 4095 - DISK_NAME_LEN - 2)
 			break;
-		buf += sprintf(buf, " %32s\n", dev->name);
+		sz += sprintf(buf + sz, " %32s\n", dev->name);
 	}
 	up_write(&nvm_lock);

-	return buf - buf_start - 1;
+	return sz;
 }

 static const struct kernel_param_ops nvm_configure_by_str_event_param_ops = {
...
drivers/lightnvm/gennvm.c:
@@ -100,14 +100,13 @@ static int gennvm_block_map(u64 slba, u32 nlb, __le64 *entries, void *private)
 {
 	struct nvm_dev *dev = private;
 	struct gen_nvm *gn = dev->mp;
-	sector_t max_pages = dev->total_pages * (dev->sec_size >> 9);
 	u64 elba = slba + nlb;
 	struct gen_lun *lun;
 	struct nvm_block *blk;
 	u64 i;
 	int lun_id;

-	if (unlikely(elba > dev->total_pages)) {
+	if (unlikely(elba > dev->total_secs)) {
 		pr_err("gennvm: L2P data from device is out of bounds!\n");
 		return -EINVAL;
 	}
@@ -115,7 +114,7 @@ static int gennvm_block_map(u64 slba, u32 nlb, __le64 *entries, void *private)
 	for (i = 0; i < nlb; i++) {
 		u64 pba = le64_to_cpu(entries[i]);

-		if (unlikely(pba >= max_pages && pba != U64_MAX)) {
+		if (unlikely(pba >= dev->total_secs && pba != U64_MAX)) {
 			pr_err("gennvm: L2P data entry is out of bounds!\n");
 			return -EINVAL;
 		}
@@ -197,7 +196,7 @@ static int gennvm_blocks_init(struct nvm_dev *dev, struct gen_nvm *gn)
 	}

 	if (dev->ops->get_l2p_tbl) {
-		ret = dev->ops->get_l2p_tbl(dev, 0, dev->total_pages,
+		ret = dev->ops->get_l2p_tbl(dev, 0, dev->total_secs,
 					gennvm_block_map, dev);
 		if (ret) {
 			pr_err("gennvm: could not read L2P table.\n");
...
drivers/lightnvm/rrpc.c:
@@ -38,7 +38,7 @@ static void rrpc_page_invalidate(struct rrpc *rrpc, struct rrpc_addr *a)

 	spin_lock(&rblk->lock);

-	div_u64_rem(a->addr, rrpc->dev->pgs_per_blk, &pg_offset);
+	div_u64_rem(a->addr, rrpc->dev->sec_per_blk, &pg_offset);
 	WARN_ON(test_and_set_bit(pg_offset, rblk->invalid_pages));
 	rblk->nr_invalid_pages++;
@@ -113,14 +113,24 @@ static void rrpc_discard(struct rrpc *rrpc, struct bio *bio)

 static int block_is_full(struct rrpc *rrpc, struct rrpc_block *rblk)
 {
-	return (rblk->next_page == rrpc->dev->pgs_per_blk);
+	return (rblk->next_page == rrpc->dev->sec_per_blk);
 }

+/* Calculate relative addr for the given block, considering instantiated LUNs */
+static u64 block_to_rel_addr(struct rrpc *rrpc, struct rrpc_block *rblk)
+{
+	struct nvm_block *blk = rblk->parent;
+	int lun_blk = blk->id % (rrpc->dev->blks_per_lun * rrpc->nr_luns);
+
+	return lun_blk * rrpc->dev->sec_per_blk;
+}
+
+/* Calculate global addr for the given block */
 static u64 block_to_addr(struct rrpc *rrpc, struct rrpc_block *rblk)
 {
 	struct nvm_block *blk = rblk->parent;

-	return blk->id * rrpc->dev->pgs_per_blk;
+	return blk->id * rrpc->dev->sec_per_blk;
 }

 static struct ppa_addr linear_to_generic_addr(struct nvm_dev *dev,
@@ -136,7 +146,7 @@ static struct ppa_addr linear_to_generic_addr(struct nvm_dev *dev,
 	l.g.sec = secs;

 	sector_div(ppa, dev->sec_per_pg);
-	div_u64_rem(ppa, dev->sec_per_blk, &pgs);
+	div_u64_rem(ppa, dev->pgs_per_blk, &pgs);
 	l.g.pg = pgs;

 	sector_div(ppa, dev->pgs_per_blk);
@@ -191,12 +201,12 @@ static struct rrpc_block *rrpc_get_blk(struct rrpc *rrpc, struct rrpc_lun *rlun,
 		return NULL;
 	}

-	rblk = &rlun->blocks[blk->id];
+	rblk = rrpc_get_rblk(rlun, blk->id);
 	list_add_tail(&rblk->list, &rlun->open_list);
 	spin_unlock(&lun->lock);

 	blk->priv = rblk;
-	bitmap_zero(rblk->invalid_pages, rrpc->dev->pgs_per_blk);
+	bitmap_zero(rblk->invalid_pages, rrpc->dev->sec_per_blk);
 	rblk->next_page = 0;
 	rblk->nr_invalid_pages = 0;
 	atomic_set(&rblk->data_cmnt_size, 0);
@@ -286,11 +296,11 @@ static int rrpc_move_valid_pages(struct rrpc *rrpc, struct rrpc_block *rblk)
 	struct bio *bio;
 	struct page *page;
 	int slot;
-	int nr_pgs_per_blk = rrpc->dev->pgs_per_blk;
+	int nr_sec_per_blk = rrpc->dev->sec_per_blk;
 	u64 phys_addr;
 	DECLARE_COMPLETION_ONSTACK(wait);

-	if (bitmap_full(rblk->invalid_pages, nr_pgs_per_blk))
+	if (bitmap_full(rblk->invalid_pages, nr_sec_per_blk))
 		return 0;

 	bio = bio_alloc(GFP_NOIO, 1);
@@ -306,10 +316,10 @@ static int rrpc_move_valid_pages(struct rrpc *rrpc, struct rrpc_block *rblk)
 	}

 	while ((slot = find_first_zero_bit(rblk->invalid_pages,
-					   nr_pgs_per_blk)) < nr_pgs_per_blk) {
+					   nr_sec_per_blk)) < nr_sec_per_blk) {
 		/* Lock laddr */
-		phys_addr = (rblk->parent->id * nr_pgs_per_blk) + slot;
+		phys_addr = rblk->parent->id * nr_sec_per_blk + slot;
 try:
 		spin_lock(&rrpc->rev_lock);
@@ -381,7 +391,7 @@ static int rrpc_move_valid_pages(struct rrpc *rrpc, struct rrpc_block *rblk)
 	mempool_free(page, rrpc->page_pool);
 	bio_put(bio);

-	if (!bitmap_full(rblk->invalid_pages, nr_pgs_per_blk)) {
+	if (!bitmap_full(rblk->invalid_pages, nr_sec_per_blk)) {
 		pr_err("nvm: failed to garbage collect block\n");
return -EIO; return -EIO;
} }
...@@ -499,12 +509,21 @@ static void rrpc_gc_queue(struct work_struct *work) ...@@ -499,12 +509,21 @@ static void rrpc_gc_queue(struct work_struct *work)
struct rrpc *rrpc = gcb->rrpc; struct rrpc *rrpc = gcb->rrpc;
struct rrpc_block *rblk = gcb->rblk; struct rrpc_block *rblk = gcb->rblk;
struct nvm_lun *lun = rblk->parent->lun; struct nvm_lun *lun = rblk->parent->lun;
struct nvm_block *blk = rblk->parent;
struct rrpc_lun *rlun = &rrpc->luns[lun->id - rrpc->lun_offset]; struct rrpc_lun *rlun = &rrpc->luns[lun->id - rrpc->lun_offset];
spin_lock(&rlun->lock); spin_lock(&rlun->lock);
list_add_tail(&rblk->prio, &rlun->prio_list); list_add_tail(&rblk->prio, &rlun->prio_list);
spin_unlock(&rlun->lock); spin_unlock(&rlun->lock);
spin_lock(&lun->lock);
lun->nr_open_blocks--;
lun->nr_closed_blocks++;
blk->state &= ~NVM_BLK_ST_OPEN;
blk->state |= NVM_BLK_ST_CLOSED;
list_move_tail(&rblk->list, &rlun->closed_list);
spin_unlock(&lun->lock);
mempool_free(gcb, rrpc->gcb_pool); mempool_free(gcb, rrpc->gcb_pool);
pr_debug("nvm: block '%lu' is full, allow GC (sched)\n", pr_debug("nvm: block '%lu' is full, allow GC (sched)\n",
rblk->parent->id); rblk->parent->id);
...@@ -545,7 +564,7 @@ static struct rrpc_addr *rrpc_update_map(struct rrpc *rrpc, sector_t laddr, ...@@ -545,7 +564,7 @@ static struct rrpc_addr *rrpc_update_map(struct rrpc *rrpc, sector_t laddr,
struct rrpc_addr *gp; struct rrpc_addr *gp;
struct rrpc_rev_addr *rev; struct rrpc_rev_addr *rev;
BUG_ON(laddr >= rrpc->nr_pages); BUG_ON(laddr >= rrpc->nr_sects);
gp = &rrpc->trans_map[laddr]; gp = &rrpc->trans_map[laddr];
spin_lock(&rrpc->rev_lock); spin_lock(&rrpc->rev_lock);
...@@ -668,20 +687,8 @@ static void rrpc_end_io_write(struct rrpc *rrpc, struct rrpc_rq *rrqd, ...@@ -668,20 +687,8 @@ static void rrpc_end_io_write(struct rrpc *rrpc, struct rrpc_rq *rrqd,
lun = rblk->parent->lun; lun = rblk->parent->lun;
cmnt_size = atomic_inc_return(&rblk->data_cmnt_size); cmnt_size = atomic_inc_return(&rblk->data_cmnt_size);
if (unlikely(cmnt_size == rrpc->dev->pgs_per_blk)) { if (unlikely(cmnt_size == rrpc->dev->sec_per_blk))
struct nvm_block *blk = rblk->parent;
struct rrpc_lun *rlun = rblk->rlun;
spin_lock(&lun->lock);
lun->nr_open_blocks--;
lun->nr_closed_blocks++;
blk->state &= ~NVM_BLK_ST_OPEN;
blk->state |= NVM_BLK_ST_CLOSED;
list_move_tail(&rblk->list, &rlun->closed_list);
spin_unlock(&lun->lock);
rrpc_run_gc(rrpc, rblk); rrpc_run_gc(rrpc, rblk);
}
} }
} }
...@@ -726,7 +733,7 @@ static int rrpc_read_ppalist_rq(struct rrpc *rrpc, struct bio *bio, ...@@ -726,7 +733,7 @@ static int rrpc_read_ppalist_rq(struct rrpc *rrpc, struct bio *bio,
for (i = 0; i < npages; i++) { for (i = 0; i < npages; i++) {
/* We assume that mapping occurs at 4KB granularity */ /* We assume that mapping occurs at 4KB granularity */
BUG_ON(!(laddr + i >= 0 && laddr + i < rrpc->nr_pages)); BUG_ON(!(laddr + i >= 0 && laddr + i < rrpc->nr_sects));
gp = &rrpc->trans_map[laddr + i]; gp = &rrpc->trans_map[laddr + i];
if (gp->rblk) { if (gp->rblk) {
...@@ -757,7 +764,7 @@ static int rrpc_read_rq(struct rrpc *rrpc, struct bio *bio, struct nvm_rq *rqd, ...@@ -757,7 +764,7 @@ static int rrpc_read_rq(struct rrpc *rrpc, struct bio *bio, struct nvm_rq *rqd,
if (!is_gc && rrpc_lock_rq(rrpc, bio, rqd)) if (!is_gc && rrpc_lock_rq(rrpc, bio, rqd))
return NVM_IO_REQUEUE; return NVM_IO_REQUEUE;
BUG_ON(!(laddr >= 0 && laddr < rrpc->nr_pages)); BUG_ON(!(laddr >= 0 && laddr < rrpc->nr_sects));
gp = &rrpc->trans_map[laddr]; gp = &rrpc->trans_map[laddr];
if (gp->rblk) { if (gp->rblk) {
...@@ -1007,21 +1014,21 @@ static int rrpc_l2p_update(u64 slba, u32 nlb, __le64 *entries, void *private) ...@@ -1007,21 +1014,21 @@ static int rrpc_l2p_update(u64 slba, u32 nlb, __le64 *entries, void *private)
struct nvm_dev *dev = rrpc->dev; struct nvm_dev *dev = rrpc->dev;
struct rrpc_addr *addr = rrpc->trans_map + slba; struct rrpc_addr *addr = rrpc->trans_map + slba;
struct rrpc_rev_addr *raddr = rrpc->rev_trans_map; struct rrpc_rev_addr *raddr = rrpc->rev_trans_map;
sector_t max_pages = dev->total_pages * (dev->sec_size >> 9);
u64 elba = slba + nlb; u64 elba = slba + nlb;
u64 i; u64 i;
if (unlikely(elba > dev->total_pages)) { if (unlikely(elba > dev->total_secs)) {
pr_err("nvm: L2P data from device is out of bounds!\n"); pr_err("nvm: L2P data from device is out of bounds!\n");
return -EINVAL; return -EINVAL;
} }
for (i = 0; i < nlb; i++) { for (i = 0; i < nlb; i++) {
u64 pba = le64_to_cpu(entries[i]); u64 pba = le64_to_cpu(entries[i]);
unsigned int mod;
/* LNVM treats address-spaces as silos, LBA and PBA are /* LNVM treats address-spaces as silos, LBA and PBA are
* equally large and zero-indexed. * equally large and zero-indexed.
*/ */
if (unlikely(pba >= max_pages && pba != U64_MAX)) { if (unlikely(pba >= dev->total_secs && pba != U64_MAX)) {
pr_err("nvm: L2P data entry is out of bounds!\n"); pr_err("nvm: L2P data entry is out of bounds!\n");
return -EINVAL; return -EINVAL;
} }
...@@ -1033,8 +1040,10 @@ static int rrpc_l2p_update(u64 slba, u32 nlb, __le64 *entries, void *private) ...@@ -1033,8 +1040,10 @@ static int rrpc_l2p_update(u64 slba, u32 nlb, __le64 *entries, void *private)
if (!pba) if (!pba)
continue; continue;
div_u64_rem(pba, rrpc->nr_sects, &mod);
addr[i].addr = pba; addr[i].addr = pba;
raddr[pba].addr = slba + i; raddr[mod].addr = slba + i;
} }
return 0; return 0;
...@@ -1046,16 +1055,16 @@ static int rrpc_map_init(struct rrpc *rrpc) ...@@ -1046,16 +1055,16 @@ static int rrpc_map_init(struct rrpc *rrpc)
sector_t i; sector_t i;
int ret; int ret;
rrpc->trans_map = vzalloc(sizeof(struct rrpc_addr) * rrpc->nr_pages); rrpc->trans_map = vzalloc(sizeof(struct rrpc_addr) * rrpc->nr_sects);
if (!rrpc->trans_map) if (!rrpc->trans_map)
return -ENOMEM; return -ENOMEM;
rrpc->rev_trans_map = vmalloc(sizeof(struct rrpc_rev_addr) rrpc->rev_trans_map = vmalloc(sizeof(struct rrpc_rev_addr)
* rrpc->nr_pages); * rrpc->nr_sects);
if (!rrpc->rev_trans_map) if (!rrpc->rev_trans_map)
return -ENOMEM; return -ENOMEM;
for (i = 0; i < rrpc->nr_pages; i++) { for (i = 0; i < rrpc->nr_sects; i++) {
struct rrpc_addr *p = &rrpc->trans_map[i]; struct rrpc_addr *p = &rrpc->trans_map[i];
struct rrpc_rev_addr *r = &rrpc->rev_trans_map[i]; struct rrpc_rev_addr *r = &rrpc->rev_trans_map[i];
...@@ -1067,8 +1076,8 @@ static int rrpc_map_init(struct rrpc *rrpc) ...@@ -1067,8 +1076,8 @@ static int rrpc_map_init(struct rrpc *rrpc)
return 0; return 0;
/* Bring up the mapping table from device */ /* Bring up the mapping table from device */
ret = dev->ops->get_l2p_tbl(dev, 0, dev->total_pages, ret = dev->ops->get_l2p_tbl(dev, 0, dev->total_secs, rrpc_l2p_update,
rrpc_l2p_update, rrpc); rrpc);
if (ret) { if (ret) {
pr_err("nvm: rrpc: could not read L2P table.\n"); pr_err("nvm: rrpc: could not read L2P table.\n");
return -EINVAL; return -EINVAL;
...@@ -1141,7 +1150,7 @@ static int rrpc_luns_init(struct rrpc *rrpc, int lun_begin, int lun_end) ...@@ -1141,7 +1150,7 @@ static int rrpc_luns_init(struct rrpc *rrpc, int lun_begin, int lun_end)
struct rrpc_lun *rlun; struct rrpc_lun *rlun;
int i, j; int i, j;
if (dev->pgs_per_blk > MAX_INVALID_PAGES_STORAGE * BITS_PER_LONG) { if (dev->sec_per_blk > MAX_INVALID_PAGES_STORAGE * BITS_PER_LONG) {
pr_err("rrpc: number of pages per block too high."); pr_err("rrpc: number of pages per block too high.");
return -EINVAL; return -EINVAL;
} }
...@@ -1168,7 +1177,7 @@ static int rrpc_luns_init(struct rrpc *rrpc, int lun_begin, int lun_end) ...@@ -1168,7 +1177,7 @@ static int rrpc_luns_init(struct rrpc *rrpc, int lun_begin, int lun_end)
spin_lock_init(&rlun->lock); spin_lock_init(&rlun->lock);
rrpc->total_blocks += dev->blks_per_lun; rrpc->total_blocks += dev->blks_per_lun;
rrpc->nr_pages += dev->sec_per_lun; rrpc->nr_sects += dev->sec_per_lun;
rlun->blocks = vzalloc(sizeof(struct rrpc_block) * rlun->blocks = vzalloc(sizeof(struct rrpc_block) *
rrpc->dev->blks_per_lun); rrpc->dev->blks_per_lun);
...@@ -1221,9 +1230,9 @@ static sector_t rrpc_capacity(void *private) ...@@ -1221,9 +1230,9 @@ static sector_t rrpc_capacity(void *private)
/* cur, gc, and two emergency blocks for each lun */ /* cur, gc, and two emergency blocks for each lun */
reserved = rrpc->nr_luns * dev->max_pages_per_blk * 4; reserved = rrpc->nr_luns * dev->max_pages_per_blk * 4;
provisioned = rrpc->nr_pages - reserved; provisioned = rrpc->nr_sects - reserved;
if (reserved > rrpc->nr_pages) { if (reserved > rrpc->nr_sects) {
pr_err("rrpc: not enough space available to expose storage.\n"); pr_err("rrpc: not enough space available to expose storage.\n");
return 0; return 0;
} }
...@@ -1242,10 +1251,11 @@ static void rrpc_block_map_update(struct rrpc *rrpc, struct rrpc_block *rblk) ...@@ -1242,10 +1251,11 @@ static void rrpc_block_map_update(struct rrpc *rrpc, struct rrpc_block *rblk)
struct nvm_dev *dev = rrpc->dev; struct nvm_dev *dev = rrpc->dev;
int offset; int offset;
struct rrpc_addr *laddr; struct rrpc_addr *laddr;
u64 paddr, pladdr; u64 bpaddr, paddr, pladdr;
for (offset = 0; offset < dev->pgs_per_blk; offset++) { bpaddr = block_to_rel_addr(rrpc, rblk);
paddr = block_to_addr(rrpc, rblk) + offset; for (offset = 0; offset < dev->sec_per_blk; offset++) {
paddr = bpaddr + offset;
pladdr = rrpc->rev_trans_map[paddr].addr; pladdr = rrpc->rev_trans_map[paddr].addr;
if (pladdr == ADDR_EMPTY) if (pladdr == ADDR_EMPTY)
...@@ -1386,7 +1396,7 @@ static void *rrpc_init(struct nvm_dev *dev, struct gendisk *tdisk, ...@@ -1386,7 +1396,7 @@ static void *rrpc_init(struct nvm_dev *dev, struct gendisk *tdisk,
blk_queue_max_hw_sectors(tqueue, queue_max_hw_sectors(bqueue)); blk_queue_max_hw_sectors(tqueue, queue_max_hw_sectors(bqueue));
pr_info("nvm: rrpc initialized with %u luns and %llu pages.\n", pr_info("nvm: rrpc initialized with %u luns and %llu pages.\n",
rrpc->nr_luns, (unsigned long long)rrpc->nr_pages); rrpc->nr_luns, (unsigned long long)rrpc->nr_sects);
mod_timer(&rrpc->gc_timer, jiffies + msecs_to_jiffies(10)); mod_timer(&rrpc->gc_timer, jiffies + msecs_to_jiffies(10));
......
--- a/drivers/lightnvm/rrpc.h
+++ b/drivers/lightnvm/rrpc.h
@@ -104,7 +104,7 @@ struct rrpc {
 	struct rrpc_lun *luns;
 
 	/* calculated values */
-	unsigned long long nr_pages;
+	unsigned long long nr_sects;
 	unsigned long total_blocks;
 
 	/* Write strategy variables. Move these into each for structure for each
@@ -156,6 +156,15 @@ struct rrpc_rev_addr {
 	u64 addr;
 };
 
+static inline struct rrpc_block *rrpc_get_rblk(struct rrpc_lun *rlun,
+								int blk_id)
+{
+	struct rrpc *rrpc = rlun->rrpc;
+	int lun_blk = blk_id % rrpc->dev->blks_per_lun;
+
+	return &rlun->blocks[lun_blk];
+}
+
 static inline sector_t rrpc_get_laddr(struct bio *bio)
 {
 	return bio->bi_iter.bi_sector / NR_PHY_IN_LOG;
@@ -206,7 +215,7 @@ static inline int rrpc_lock_laddr(struct rrpc *rrpc, sector_t laddr,
 					unsigned pages,
 					struct rrpc_inflight_rq *r)
 {
-	BUG_ON((laddr + pages) > rrpc->nr_pages);
+	BUG_ON((laddr + pages) > rrpc->nr_sects);
 
 	return __rrpc_lock_laddr(rrpc, laddr, pages, r);
 }
@@ -243,7 +252,7 @@ static inline void rrpc_unlock_rq(struct rrpc *rrpc, struct nvm_rq *rqd)
 	struct rrpc_inflight_rq *r = rrpc_get_inflight_rq(rqd);
 	uint8_t pages = rqd->nr_pages;
 
-	BUG_ON((r->l_start + pages) > rrpc->nr_pages);
+	BUG_ON((r->l_start + pages) > rrpc->nr_sects);
 
 	rrpc_unlock_laddr(rrpc, r);
 }
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1015,8 +1015,12 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c)
 	 */
 	atomic_set(&dc->count, 1);
 
-	if (bch_cached_dev_writeback_start(dc))
+	/* Block writeback thread, but spawn it */
+	down_write(&dc->writeback_lock);
+	if (bch_cached_dev_writeback_start(dc)) {
+		up_write(&dc->writeback_lock);
 		return -ENOMEM;
+	}
 
 	if (BDEV_STATE(&dc->sb) == BDEV_STATE_DIRTY) {
 		bch_sectors_dirty_init(dc);
@@ -1028,6 +1032,9 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c)
 	bch_cached_dev_run(dc);
 	bcache_device_link(&dc->disk, c, "bdev");
 
+	/* Allow the writeback thread to proceed */
+	up_write(&dc->writeback_lock);
+
 	pr_info("Caching %s as %s on set %pU",
 		bdevname(dc->bdev, buf), dc->disk.disk->disk_name,
 		dc->disk.c->sb.set_uuid);
@@ -1366,6 +1373,9 @@ static void cache_set_flush(struct closure *cl)
 	struct btree *b;
 	unsigned i;
 
+	if (!c)
+		closure_return(cl);
+
 	bch_cache_accounting_destroy(&c->accounting);
 
 	kobject_put(&c->internal);
@@ -1828,11 +1838,12 @@ static int cache_alloc(struct cache_sb *sb, struct cache *ca)
 	return 0;
 }
 
-static void register_cache(struct cache_sb *sb, struct page *sb_page,
+static int register_cache(struct cache_sb *sb, struct page *sb_page,
 				struct block_device *bdev, struct cache *ca)
 {
 	char name[BDEVNAME_SIZE];
-	const char *err = "cannot allocate memory";
+	const char *err = NULL;
+	int ret = 0;
 
 	memcpy(&ca->sb, sb, sizeof(struct cache_sb));
 	ca->bdev = bdev;
@@ -1847,27 +1858,35 @@ static void register_cache(struct cache_sb *sb, struct page *sb_page,
 	if (blk_queue_discard(bdev_get_queue(ca->bdev)))
 		ca->discard = CACHE_DISCARD(&ca->sb);
 
-	if (cache_alloc(sb, ca) != 0)
+	ret = cache_alloc(sb, ca);
+	if (ret != 0)
 		goto err;
 
-	err = "error creating kobject";
-	if (kobject_add(&ca->kobj, &part_to_dev(bdev->bd_part)->kobj, "bcache"))
-		goto err;
+	if (kobject_add(&ca->kobj, &part_to_dev(bdev->bd_part)->kobj, "bcache")) {
+		err = "error calling kobject_add";
+		ret = -ENOMEM;
+		goto out;
+	}
 
 	mutex_lock(&bch_register_lock);
 	err = register_cache_set(ca);
 	mutex_unlock(&bch_register_lock);
 
-	if (err)
-		goto err;
+	if (err) {
+		ret = -ENODEV;
+		goto out;
+	}
 
 	pr_info("registered cache device %s", bdevname(bdev, name));
+
 out:
 	kobject_put(&ca->kobj);
-	return;
+
 err:
-	pr_notice("error opening %s: %s", bdevname(bdev, name), err);
-	goto out;
+	if (err)
+		pr_notice("error opening %s: %s", bdevname(bdev, name), err);
+
+	return ret;
 }
 
 /* Global interfaces/init */
@@ -1965,7 +1984,8 @@ static ssize_t register_bcache(struct kobject *k, struct kobj_attribute *attr,
 		if (!ca)
 			goto err_close;
 
-		register_cache(sb, sb_page, bdev, ca);
+		if (register_cache(sb, sb_page, bdev, ca) != 0)
+			goto err_close;
 	}
 out:
 	if (sb_page)
--- a/drivers/nvme/host/Kconfig
+++ b/drivers/nvme/host/Kconfig
+config NVME_CORE
+	tristate
+
 config BLK_DEV_NVME
 	tristate "NVM Express block device"
 	depends on PCI && BLOCK
+	select NVME_CORE
 	---help---
 	  The NVM Express driver is for solid state drives directly
 	  connected to the PCI or PCI Express bus.  If you know you
@@ -11,7 +15,7 @@ config BLK_DEV_NVME
 
 config BLK_DEV_NVME_SCSI
 	bool "SCSI emulation for NVMe device nodes"
-	depends on BLK_DEV_NVME
+	depends on NVME_CORE
 	---help---
 	  This adds support for the SG_IO ioctl on the NVMe character
 	  and block devices nodes, as well a a translation for a small
--- a/drivers/nvme/host/Makefile
+++ b/drivers/nvme/host/Makefile
+obj-$(CONFIG_NVME_CORE)			+= nvme-core.o
 obj-$(CONFIG_BLK_DEV_NVME)		+= nvme.o
 
-lightnvm-$(CONFIG_NVM)			:= lightnvm.o
-nvme-y					+= core.o pci.o $(lightnvm-y)
-nvme-$(CONFIG_BLK_DEV_NVME_SCSI)	+= scsi.o
+nvme-core-y				:= core.o
+nvme-core-$(CONFIG_BLK_DEV_NVME_SCSI)	+= scsi.o
+nvme-core-$(CONFIG_NVM)			+= lightnvm.o
+
+nvme-y					+= pci.o
[diff collapsed; contents not shown]
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -379,8 +379,31 @@ static int nvme_nvm_get_l2p_tbl(struct nvm_dev *nvmdev, u64 slba, u32 nlb,
 	return ret;
 }
 
+static void nvme_nvm_bb_tbl_fold(struct nvm_dev *nvmdev,
+				int nr_dst_blks, u8 *dst_blks,
+				int nr_src_blks, u8 *src_blks)
+{
+	int blk, offset, pl, blktype;
+
+	for (blk = 0; blk < nr_dst_blks; blk++) {
+		offset = blk * nvmdev->plane_mode;
+		blktype = src_blks[offset];
+
+		/* Bad blocks on any planes take precedence over other types */
+		for (pl = 0; pl < nvmdev->plane_mode; pl++) {
+			if (src_blks[offset + pl] &
+					(NVM_BLK_T_BAD|NVM_BLK_T_GRWN_BAD)) {
+				blktype = src_blks[offset + pl];
+				break;
+			}
+		}
+
+		dst_blks[blk] = blktype;
+	}
+}
+
 static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa,
-				int nr_blocks, nvm_bb_update_fn *update_bbtbl,
+				int nr_dst_blks, nvm_bb_update_fn *update_bbtbl,
 				void *priv)
 {
 	struct request_queue *q = nvmdev->q;
@@ -388,7 +411,9 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa,
 	struct nvme_ctrl *ctrl = ns->ctrl;
 	struct nvme_nvm_command c = {};
 	struct nvme_nvm_bb_tbl *bb_tbl;
-	int tblsz = sizeof(struct nvme_nvm_bb_tbl) + nr_blocks;
+	u8 *dst_blks = NULL;
+	int nr_src_blks = nr_dst_blks * nvmdev->plane_mode;
+	int tblsz = sizeof(struct nvme_nvm_bb_tbl) + nr_src_blks;
 	int ret = 0;
 
 	c.get_bb.opcode = nvme_nvm_admin_get_bb_tbl;
@@ -399,6 +424,12 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa,
 	if (!bb_tbl)
 		return -ENOMEM;
 
+	dst_blks = kzalloc(nr_dst_blks, GFP_KERNEL);
+	if (!dst_blks) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
 	ret = nvme_submit_sync_cmd(ctrl->admin_q, (struct nvme_command *)&c,
 						bb_tbl, tblsz);
 	if (ret) {
@@ -420,16 +451,21 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa,
 		goto out;
 	}
 
-	if (le32_to_cpu(bb_tbl->tblks) != nr_blocks) {
+	if (le32_to_cpu(bb_tbl->tblks) != nr_src_blks) {
 		ret = -EINVAL;
 		dev_err(ctrl->dev, "bbt unsuspected blocks returned (%u!=%u)",
-					le32_to_cpu(bb_tbl->tblks), nr_blocks);
+				le32_to_cpu(bb_tbl->tblks), nr_src_blks);
 		goto out;
 	}
 
+	nvme_nvm_bb_tbl_fold(nvmdev, nr_dst_blks, dst_blks,
+						nr_src_blks, bb_tbl->blk);
+
 	ppa = dev_to_generic_addr(nvmdev, ppa);
-	ret = update_bbtbl(ppa, nr_blocks, bb_tbl->blk, priv);
+	ret = update_bbtbl(ppa, nr_dst_blks, dst_blks, priv);
+
 out:
+	kfree(dst_blks);
 	kfree(bb_tbl);
 	return ret;
 }
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -59,6 +59,12 @@ enum nvme_quirks {
 	 * correctly.
 	 */
 	NVME_QUIRK_IDENTIFY_CNS		= (1 << 1),
+
+	/*
+	 * The controller deterministically returns O's on reads to discarded
+	 * logical blocks.
+	 */
+	NVME_QUIRK_DISCARD_ZEROES	= (1 << 2),
 };
 
 struct nvme_ctrl {
@@ -78,6 +84,7 @@ struct nvme_ctrl {
 	char serial[20];
 	char model[40];
 	char firmware_rev[8];
+	int cntlid;
 
 	u32 ctrl_config;
 
@@ -85,6 +92,7 @@ struct nvme_ctrl {
 	u32 max_hw_sectors;
 	u32 stripe_size;
 	u16 oncs;
+	u16 vid;
 	atomic_t abort_limit;
 	u8 event_limit;
 	u8 vwc;
@@ -124,6 +132,7 @@ struct nvme_ns {
 };
 
 struct nvme_ctrl_ops {
+	struct module *module;
 	int (*reg_read32)(struct nvme_ctrl *ctrl, u32 off, u32 *val);
 	int (*reg_write32)(struct nvme_ctrl *ctrl, u32 off, u32 val);
 	int (*reg_read64)(struct nvme_ctrl *ctrl, u32 off, u64 *val);
@@ -255,7 +264,8 @@ void nvme_requeue_req(struct request *req);
 int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 		void *buf, unsigned bufflen);
 int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
-		void *buffer, unsigned bufflen, u32 *result, unsigned timeout);
+		struct nvme_completion *cqe, void *buffer, unsigned bufflen,
+		unsigned timeout);
 int nvme_submit_user_cmd(struct request_queue *q, struct nvme_command *cmd,
 		void __user *ubuffer, unsigned bufflen, u32 *result,
 		unsigned timeout);
@@ -273,8 +283,6 @@ int nvme_set_features(struct nvme_ctrl *dev, unsigned fid, unsigned dword11,
 		dma_addr_t dma_addr, u32 *result);
 int nvme_set_queue_count(struct nvme_ctrl *ctrl, int *count);
 
-extern spinlock_t dev_list_lock;
-
 struct sg_io_hdr;
 
 int nvme_sg_io(struct nvme_ns *ns, struct sg_io_hdr __user *u_hdr);
[diff collapsed; contents not shown]
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -92,9 +92,9 @@ enum {
 	NVM_ADDRMODE_CHANNEL	= 1,
 
 	/* Plane programming mode for LUN */
-	NVM_PLANE_SINGLE	= 0,
-	NVM_PLANE_DOUBLE	= 1,
-	NVM_PLANE_QUAD		= 2,
+	NVM_PLANE_SINGLE	= 1,
+	NVM_PLANE_DOUBLE	= 2,
+	NVM_PLANE_QUAD		= 4,
 
 	/* Status codes */
 	NVM_RSP_SUCCESS		= 0x0,
@@ -341,8 +341,8 @@ struct nvm_dev {
 	int lps_per_blk;
 	int *lptbl;
 
-	unsigned long total_pages;
 	unsigned long total_blocks;
+	unsigned long total_secs;
 	int nr_luns;
 	unsigned max_pages_per_blk;