Commit 6f51ab94 authored by Linus Torvalds

Merge tag 'mtd/for-5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux

Pull MTD updates from Richard Weinberger:
 "MTD core changes:
   - partition parser: Support MTD names containing one or more colons.
   - mtdblock: clear cache_state to avoid writing to bad blocks
     repeatedly.

  Raw NAND core changes:
   - Stop using nand_release(), patch all drivers accordingly (the new
     removal pattern is sketched right after this summary).
   - Give more information when the chosen ECC is weaker than the
     chip's requirement.
   - MAINTAINERS updates.
   - Support emulated SLC mode on MLC NANDs.
   - Support "constrained" controllers, adapt the core and ONFI/JEDEC
     table parsing and Micron's code.
   - Take check_only into account.
   - Add an invalid ECC mode to discriminate with valid ones.
   - Return an enum from of_get_nand_ecc_algo().
   - Drop OOB_FIRST placement scheme.
   - Introduce nand_extract_bits().
   - Ensure a consistent bitflips numbering.
   - BCH lib:
      - Allow easy bit swapping.
      - Slightly rework the exported function names.
   - Fix nand_gpio_waitrdy().
   - Propagate CS selection to sub operations.
   - Add a NAND_NO_BBM_QUIRK flag.
   - Make it possible to verify that a read operation is supported.
   - Add a helper to check supported operations.
   - Avoid indirect access to ->data_buf().
   - Rename the use_bufpoi variables.
   - Fix comments about the use of bufpoi.
   - Rename a NAND chip option.
   - Reorder the nand_chip->options flags.
   - Translate obscure bitfields into readable macros.
   - Timings:
      - Fix default values.
      - Add mode information to the timings structure.

  Raw NAND controller driver changes:
   - Fix many error paths.
   - Arasan
      - New driver
   - Au1550nd:
      - Various cleanups
      - Migration to ->exec_op()
   - brcmnand:
      - Misc cleanup.
      - Support v2.1-v2.2 controllers.
      - Remove an unused include of <linux/version.h>.
      - Correctly verify erased pages.
      - Fix Hamming OOB layout.
   - Cadence
      - Make cadence_nand_attach_chip static.
   - Cafe:
      - Set the NAND_NO_BBM_QUIRK flag
   - cmx270:
      - Remove this controller driver.
   - cs553x:
      - Misc cleanup
      - Migration to ->exec_op()
   - Davinci:
      - Misc cleanup.
      - Migration to ->exec_op()
   - Denali:
      - Add more delays before latching incoming data
   - Diskonchip:
      - Misc cleanup
      - Migration to ->exec_op()
   - Fsmc:
      - Change to non-atomic bit operations.
   - GPMI:
      - Use nand_extract_bits()
      - Fix runtime PM imbalance.
   - Ingenic:
      - Migration to exec_op()
      - Fix the RB gpio active-high property on qi,lb60
      - Make qi_lb60_ooblayout_ops static.
   - Marvell:
      - Misc cleanup and small fixes
   - Nandsim:
      - Fix the error paths driver-wide.
   - Omap_elm:
      - Fix runtime PM imbalance.
   - STM32_FMC2:
      - Misc cleanups (error cases, comments, timeout values, cosmetic
        changes).

  SPI NOR core changes:
   - Add support for, update, and fix a few flashes.
   - Prepare BFPT parsing for JESD216 rev D.
   - Kernel doc fixes.

  CFI changes:
   - Support the absence of protection registers for Intel CFI flashes.
   - Replace zero-length array with flexible-arrays"
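
The recurring driver-side change in this series is the removal of nand_release() in favour of an explicit unregister/cleanup pair, visible in most hunks below. A minimal sketch of the new removal pattern, using a hypothetical driver and its private state (names are placeholders, not taken from any specific hunk):

static int example_nand_remove(struct platform_device *pdev)
{
        struct example_nand *priv = platform_get_drvdata(pdev);
        struct nand_chip *chip = &priv->chip;
        int ret;

        /* Unregister the MTD device first, and warn if that fails. */
        ret = mtd_device_unregister(nand_to_mtd(chip));
        WARN_ON(ret);

        /* Then free the NAND-level resources allocated by nand_scan(). */
        nand_cleanup(chip);

        return 0;
}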

* tag 'mtd/for-5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux: (208 commits)
  mtd: clear cache_state to avoid writing to bad blocks repeatedly
  mtd: parser: cmdline: Support MTD names containing one or more colons
  mtd: physmap_of_gemini: remove defined but not used symbol 'syscon_match'
  mtd: rawnand: Add an invalid ECC mode to discriminate with valid ones
  mtd: rawnand: Return an enum from of_get_nand_ecc_algo()
  mtd: rawnand: Drop OOB_FIRST placement scheme
  mtd: rawnand: Avoid a typedef
  mtd: Fix typo in mtd_ooblayout_set_databytes() description
  mtd: rawnand: Stop using nand_release()
  mtd: rawnand: nandsim: Reorganize ns_cleanup_module()
  mtd: rawnand: nandsim: Rename a label in ns_init_module()
  mtd: rawnand: nandsim: Manage lists on error in ns_init_module()
  mtd: rawnand: nandsim: Fix the label pointing on nand_cleanup()
  mtd: rawnand: nandsim: Free erase_block_wear on error
  mtd: rawnand: nandsim: Use an additional label when freeing the nandsim object
  mtd: rawnand: nandsim: Stop using nand_release()
  mtd: rawnand: nandsim: Free the partition names in ns_free()
  mtd: rawnand: nandsim: Free the allocated device on error in ns_init()
  mtd: rawnand: nandsim: Free partition names on error in ns_init()
  mtd: rawnand: nandsim: Fix the two ns_alloc_device() error paths
  ...
parents 6f630784 5788ccf3
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/mtd/arasan,nand-controller.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Arasan NAND Flash Controller with ONFI 3.1 support device tree bindings

allOf:
  - $ref: "nand-controller.yaml"

maintainers:
  - Naga Sureshkumar Relli <naga.sureshkumar.relli@xilinx.com>

properties:
  compatible:
    oneOf:
      - items:
          - enum:
              - xlnx,zynqmp-nand-controller
          - enum:
              - arasan,nfc-v3p10

  reg:
    maxItems: 1

  clocks:
    items:
      - description: Controller clock
      - description: NAND bus clock

  clock-names:
    items:
      - const: controller
      - const: bus

  interrupts:
    maxItems: 1

  "#address-cells": true
  "#size-cells": true

required:
  - compatible
  - reg
  - clocks
  - clock-names
  - interrupts

additionalProperties: true

examples:
  - |
    nfc: nand-controller@ff100000 {
      compatible = "xlnx,zynqmp-nand-controller", "arasan,nfc-v3p10";
      reg = <0x0 0xff100000 0x0 0x1000>;
      clock-names = "controller", "bus";
      clocks = <&clk200>, <&clk100>;
      interrupt-parent = <&gic>;
      interrupts = <0 14 4>;
      #address-cells = <1>;
      #size-cells = <0>;
    };
......@@ -20,6 +20,8 @@ Required properties:
"brcm,brcmnand" and an appropriate version compatibility
string, like "brcm,brcmnand-v7.0"
Possible values:
brcm,brcmnand-v2.1
brcm,brcmnand-v2.2
brcm,brcmnand-v4.0
brcm,brcmnand-v5.0
brcm,brcmnand-v6.0
......
......@@ -61,6 +61,9 @@ Optional properties:
clobbered.
- lock : Do not unlock the partition at initialization time (not supported on
all devices)
- slc-mode: This parameter, if present, allows one to emulate SLC mode on a
partition attached to an MLC NAND, thus making this partition immune to
paired-pages corruption
Examples:
......
......@@ -276,8 +276,10 @@ unregisters the partitions in the MTD layer.
#ifdef MODULE
static void __exit board_cleanup (void)
{
/* Release resources, unregister device */
nand_release (mtd_to_nand(board_mtd));
/* Unregister device */
WARN_ON(mtd_device_unregister(board_mtd));
/* Release resources */
nand_cleanup(mtd_to_nand(board_mtd));
/* unmap physical address */
iounmap(baseaddr);
......
......@@ -1305,6 +1305,13 @@ S: Supported
W: http://www.aquantia.com
F: drivers/net/ethernet/aquantia/atlantic/aq_ptp*
ARASAN NAND CONTROLLER DRIVER
M: Naga Sureshkumar Relli <nagasure@xilinx.com>
L: linux-mtd@lists.infradead.org
S: Maintained
F: Documentation/devicetree/bindings/mtd/arasan,nand-controller.yaml
F: drivers/mtd/nand/raw/arasan-nand-controller.c
ARC FRAMEBUFFER DRIVER
M: Jaya Kumar <jayalk@intworks.biz>
S: Maintained
......@@ -3778,9 +3785,8 @@ F: Documentation/devicetree/bindings/media/cdns,*.txt
F: drivers/media/platform/cadence/cdns-csi2*
CADENCE NAND DRIVER
M: Piotr Sroka <piotrs@cadence.com>
L: linux-mtd@lists.infradead.org
S: Maintained
S: Orphan
F: Documentation/devicetree/bindings/mtd/cadence-nand-controller.txt
F: drivers/mtd/nand/raw/cadence-nand-controller.c
......@@ -10853,9 +10859,8 @@ F: Documentation/devicetree/bindings/i2c/i2c-mt7621.txt
F: drivers/i2c/busses/i2c-mt7621.c
MEDIATEK NAND CONTROLLER DRIVER
M: Xiaolei Li <xiaolei.li@mediatek.com>
L: linux-mtd@lists.infradead.org
S: Maintained
S: Orphan
F: Documentation/devicetree/bindings/mtd/mtk-nand.txt
F: drivers/mtd/nand/raw/mtk_*
......
......@@ -420,8 +420,9 @@ read_pri_intelext(struct map_info *map, __u16 adr)
extra_size = 0;
/* Protection Register info */
extra_size += (extp->NumProtectionFields - 1) *
sizeof(struct cfi_intelext_otpinfo);
if (extp->NumProtectionFields)
extra_size += (extp->NumProtectionFields - 1) *
sizeof(struct cfi_intelext_otpinfo);
}
if (extp->MinorVersion >= '1') {
......@@ -695,14 +696,16 @@ static int cfi_intelext_partition_fixup(struct mtd_info *mtd,
*/
if (extp && extp->MajorVersion == '1' && extp->MinorVersion >= '3'
&& extp->FeatureSupport & (1 << 9)) {
int offs = 0;
struct cfi_private *newcfi;
struct flchip *chip;
struct flchip_shared *shared;
int offs, numregions, numparts, partshift, numvirtchips, i, j;
int numregions, numparts, partshift, numvirtchips, i, j;
/* Protection Register info */
offs = (extp->NumProtectionFields - 1) *
sizeof(struct cfi_intelext_otpinfo);
if (extp->NumProtectionFields)
offs = (extp->NumProtectionFields - 1) *
sizeof(struct cfi_intelext_otpinfo);
/* Burst Read info */
offs += extp->extra[offs+1]+2;
......
......@@ -647,7 +647,7 @@ static int doc_ecc_bch_fix_data(struct docg3 *docg3, void *buf, u8 *hwecc)
for (i = 0; i < DOC_ECC_BCH_SIZE; i++)
ecc[i] = bitrev8(hwecc[i]);
numerrs = decode_bch(docg3->cascade->bch, NULL,
numerrs = bch_decode(docg3->cascade->bch, NULL,
DOC_ECC_BCH_COVERED_BYTES,
NULL, ecc, NULL, errorpos);
BUG_ON(numerrs == -EINVAL);
......@@ -1984,8 +1984,8 @@ static int __init docg3_probe(struct platform_device *pdev)
return ret;
cascade->base = base;
mutex_init(&cascade->lock);
cascade->bch = init_bch(DOC_ECC_BCH_M, DOC_ECC_BCH_T,
DOC_ECC_BCH_PRIMPOLY);
cascade->bch = bch_init(DOC_ECC_BCH_M, DOC_ECC_BCH_T,
DOC_ECC_BCH_PRIMPOLY, false);
if (!cascade->bch)
return ret;
......@@ -2021,7 +2021,7 @@ static int __init docg3_probe(struct platform_device *pdev)
ret = -ENODEV;
dev_info(dev, "No supported DiskOnChip found\n");
err_probe:
free_bch(cascade->bch);
bch_free(cascade->bch);
for (floor = 0; floor < DOC_MAX_NBFLOORS; floor++)
if (cascade->floors[floor])
doc_release_device(cascade->floors[floor]);
......@@ -2045,7 +2045,7 @@ static int docg3_release(struct platform_device *pdev)
if (cascade->floors[floor])
doc_release_device(cascade->floors[floor]);
free_bch(docg3->cascade->bch);
bch_free(docg3->cascade->bch);
return 0;
}
......
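
For orientation, the BCH library renames visible in the docg3 hunks above (init_bch/decode_bch/free_bch becoming bch_init/bch_decode/bch_free, plus the new swap_bits flag) can be summed up in a short, hedged sketch; the parameters below are placeholders, not values from this diff:

#include <linux/bch.h>

static int example_bch_correct(int m, int t, unsigned int prim_poly,
                               unsigned int len, u8 *calc_ecc,
                               unsigned int *errloc)
{
        struct bch_control *bch;
        int nerr;

        /*
         * bch_init() replaces init_bch(); the new trailing swap_bits flag
         * asks the library to bit-reverse data and ECC bytes internally,
         * which is the "easy bit swapping" item from the pull summary.
         */
        bch = bch_init(m, t, prim_poly, false);
        if (!bch)
                return -EINVAL;

        /* bch_decode() replaces decode_bch(); the argument layout matches
         * the docg3 call above (no raw data, no received ECC, no
         * precomputed syndromes). */
        nerr = bch_decode(bch, NULL, len, NULL, calc_ecc, NULL, errloc);

        bch_free(bch);          /* was free_bch() */
        return nerr;
}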
......@@ -46,11 +46,6 @@
#define FLASH_PARALLEL_HIGH_PIN_CNT (1 << 20) /* else low pin cnt */
static const struct of_device_id syscon_match[] = {
{ .compatible = "cortina,gemini-syscon" },
{ },
};
struct gemini_flash {
struct device *dev;
struct pinctrl *p;
......
......@@ -89,8 +89,6 @@ static int write_cached_data (struct mtdblk_dev *mtdblk)
ret = erase_write (mtd, mtdblk->cache_offset,
mtdblk->cache_size, mtdblk->cache_data);
if (ret)
return ret;
/*
* Here we could arguably set the cache state to STATE_CLEAN.
......@@ -98,9 +96,14 @@ static int write_cached_data (struct mtdblk_dev *mtdblk)
* be notified if this content is altered on the flash by other
* means. Let's declare it empty and leave buffering tasks to
* the buffer cache instead.
*
* If this cache_offset points to a bad block, data cannot be
* written to the device. Clear cache_state to avoid writing to
* bad blocks repeatedly.
*/
mtdblk->cache_state = STATE_EMPTY;
return 0;
if (ret == 0 || ret == -EIO)
mtdblk->cache_state = STATE_EMPTY;
return ret;
}
......
......@@ -617,6 +617,19 @@ int add_mtd_device(struct mtd_info *mtd)
!(mtd->flags & MTD_NO_ERASE)))
return -EINVAL;
/*
* MTD_SLC_ON_MLC_EMULATION can only be set on partitions, when the
* master is an MLC NAND and has a proper pairing scheme defined.
* We also reject masters that implement ->_writev() for now, because
* NAND controller drivers don't implement this hook, and adding the
* SLC -> MLC address/length conversion to this path is useless if we
* don't have a user.
*/
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION &&
(!mtd_is_partition(mtd) || master->type != MTD_MLCNANDFLASH ||
!master->pairing || master->_writev))
return -EINVAL;
mutex_lock(&mtd_table_mutex);
i = idr_alloc(&mtd_idr, mtd, 0, 0, GFP_KERNEL);
......@@ -632,6 +645,14 @@ int add_mtd_device(struct mtd_info *mtd)
if (mtd->bitflip_threshold == 0)
mtd->bitflip_threshold = mtd->ecc_strength;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
int ngroups = mtd_pairing_groups(master);
mtd->erasesize /= ngroups;
mtd->size = (u64)mtd_div_by_eb(mtd->size, master) *
mtd->erasesize;
}
if (is_power_of_2(mtd->erasesize))
mtd->erasesize_shift = ffs(mtd->erasesize) - 1;
else
......@@ -1074,9 +1095,11 @@ int mtd_erase(struct mtd_info *mtd, struct erase_info *instr)
{
struct mtd_info *master = mtd_get_master(mtd);
u64 mst_ofs = mtd_get_master_ofs(mtd, 0);
struct erase_info adjinstr;
int ret;
instr->fail_addr = MTD_FAIL_ADDR_UNKNOWN;
adjinstr = *instr;
if (!mtd->erasesize || !master->_erase)
return -ENOTSUPP;
......@@ -1091,12 +1114,27 @@ int mtd_erase(struct mtd_info *mtd, struct erase_info *instr)
ledtrig_mtd_activity();
instr->addr += mst_ofs;
ret = master->_erase(master, instr);
if (instr->fail_addr != MTD_FAIL_ADDR_UNKNOWN)
instr->fail_addr -= mst_ofs;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
adjinstr.addr = (loff_t)mtd_div_by_eb(instr->addr, mtd) *
master->erasesize;
adjinstr.len = ((u64)mtd_div_by_eb(instr->addr + instr->len, mtd) *
master->erasesize) -
adjinstr.addr;
}
adjinstr.addr += mst_ofs;
ret = master->_erase(master, &adjinstr);
if (adjinstr.fail_addr != MTD_FAIL_ADDR_UNKNOWN) {
instr->fail_addr = adjinstr.fail_addr - mst_ofs;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
instr->fail_addr = mtd_div_by_eb(instr->fail_addr,
master);
instr->fail_addr *= mtd->erasesize;
}
}
instr->addr -= mst_ofs;
return ret;
}
EXPORT_SYMBOL_GPL(mtd_erase);
......@@ -1276,6 +1314,101 @@ static int mtd_check_oob_ops(struct mtd_info *mtd, loff_t offs,
return 0;
}
static int mtd_read_oob_std(struct mtd_info *mtd, loff_t from,
struct mtd_oob_ops *ops)
{
struct mtd_info *master = mtd_get_master(mtd);
int ret;
from = mtd_get_master_ofs(mtd, from);
if (master->_read_oob)
ret = master->_read_oob(master, from, ops);
else
ret = master->_read(master, from, ops->len, &ops->retlen,
ops->datbuf);
return ret;
}
static int mtd_write_oob_std(struct mtd_info *mtd, loff_t to,
struct mtd_oob_ops *ops)
{
struct mtd_info *master = mtd_get_master(mtd);
int ret;
to = mtd_get_master_ofs(mtd, to);
if (master->_write_oob)
ret = master->_write_oob(master, to, ops);
else
ret = master->_write(master, to, ops->len, &ops->retlen,
ops->datbuf);
return ret;
}
static int mtd_io_emulated_slc(struct mtd_info *mtd, loff_t start, bool read,
struct mtd_oob_ops *ops)
{
struct mtd_info *master = mtd_get_master(mtd);
int ngroups = mtd_pairing_groups(master);
int npairs = mtd_wunit_per_eb(master) / ngroups;
struct mtd_oob_ops adjops = *ops;
unsigned int wunit, oobavail;
struct mtd_pairing_info info;
int max_bitflips = 0;
u32 ebofs, pageofs;
loff_t base, pos;
ebofs = mtd_mod_by_eb(start, mtd);
base = (loff_t)mtd_div_by_eb(start, mtd) * master->erasesize;
info.group = 0;
info.pair = mtd_div_by_ws(ebofs, mtd);
pageofs = mtd_mod_by_ws(ebofs, mtd);
oobavail = mtd_oobavail(mtd, ops);
while (ops->retlen < ops->len || ops->oobretlen < ops->ooblen) {
int ret;
if (info.pair >= npairs) {
info.pair = 0;
base += master->erasesize;
}
wunit = mtd_pairing_info_to_wunit(master, &info);
pos = mtd_wunit_to_offset(mtd, base, wunit);
adjops.len = ops->len - ops->retlen;
if (adjops.len > mtd->writesize - pageofs)
adjops.len = mtd->writesize - pageofs;
adjops.ooblen = ops->ooblen - ops->oobretlen;
if (adjops.ooblen > oobavail - adjops.ooboffs)
adjops.ooblen = oobavail - adjops.ooboffs;
if (read) {
ret = mtd_read_oob_std(mtd, pos + pageofs, &adjops);
if (ret > 0)
max_bitflips = max(max_bitflips, ret);
} else {
ret = mtd_write_oob_std(mtd, pos + pageofs, &adjops);
}
if (ret < 0)
return ret;
max_bitflips = max(max_bitflips, ret);
ops->retlen += adjops.retlen;
ops->oobretlen += adjops.oobretlen;
adjops.datbuf += adjops.retlen;
adjops.oobbuf += adjops.oobretlen;
adjops.ooboffs = 0;
pageofs = 0;
info.pair++;
}
return max_bitflips;
}
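
The loop in mtd_io_emulated_slc() above relies on the generic pairing helpers to turn an offset in the shrunk SLC view into a physical write unit on the MLC master. A condensed, hedged restatement of that address translation (hypothetical helper name, same calls as above):

static loff_t example_slc_pos_to_master(struct mtd_info *mtd, loff_t start)
{
        struct mtd_info *master = mtd_get_master(mtd);
        struct mtd_pairing_info info;
        u32 ebofs = mtd_mod_by_eb(start, mtd);
        loff_t base;
        int wunit;

        /* Offset of the (shrunk) emulated eraseblock inside the master. */
        base = (loff_t)mtd_div_by_eb(start, mtd) * master->erasesize;

        /* SLC emulation only ever programs pairing group 0 of each pair. */
        info.group = 0;
        info.pair = mtd_div_by_ws(ebofs, mtd);

        /* Translate (pair, group) into a physical write unit, then into an
         * absolute offset, keeping the in-page offset. */
        wunit = mtd_pairing_info_to_wunit(master, &info);
        return mtd_wunit_to_offset(mtd, base, wunit) + mtd_mod_by_ws(ebofs, mtd);
}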
int mtd_read_oob(struct mtd_info *mtd, loff_t from, struct mtd_oob_ops *ops)
{
struct mtd_info *master = mtd_get_master(mtd);
......@@ -1294,12 +1427,10 @@ int mtd_read_oob(struct mtd_info *mtd, loff_t from, struct mtd_oob_ops *ops)
if (!master->_read_oob && (!master->_read || ops->oobbuf))
return -EOPNOTSUPP;
from = mtd_get_master_ofs(mtd, from);
if (master->_read_oob)
ret_code = master->_read_oob(master, from, ops);
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
ret_code = mtd_io_emulated_slc(mtd, from, true, ops);
else
ret_code = master->_read(master, from, ops->len, &ops->retlen,
ops->datbuf);
ret_code = mtd_read_oob_std(mtd, from, ops);
mtd_update_ecc_stats(mtd, master, &old_stats);
......@@ -1338,13 +1469,10 @@ int mtd_write_oob(struct mtd_info *mtd, loff_t to,
if (!master->_write_oob && (!master->_write || ops->oobbuf))
return -EOPNOTSUPP;
to = mtd_get_master_ofs(mtd, to);
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
return mtd_io_emulated_slc(mtd, to, false, ops);
if (master->_write_oob)
return master->_write_oob(master, to, ops);
else
return master->_write(master, to, ops->len, &ops->retlen,
ops->datbuf);
return mtd_write_oob_std(mtd, to, ops);
}
EXPORT_SYMBOL_GPL(mtd_write_oob);
......@@ -1672,7 +1800,7 @@ EXPORT_SYMBOL_GPL(mtd_ooblayout_get_databytes);
* @start: first ECC byte to set
* @nbytes: number of ECC bytes to set
*
* Works like mtd_ooblayout_get_bytes(), except it acts on free bytes.
* Works like mtd_ooblayout_set_bytes(), except it acts on free bytes.
*
* Returns zero on success, a negative error code otherwise.
*/
......@@ -1817,6 +1945,12 @@ int mtd_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
return -EINVAL;
if (!len)
return 0;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
len = (u64)mtd_div_by_eb(len, mtd) * master->erasesize;
}
return master->_lock(master, mtd_get_master_ofs(mtd, ofs), len);
}
EXPORT_SYMBOL_GPL(mtd_lock);
......@@ -1831,6 +1965,12 @@ int mtd_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
return -EINVAL;
if (!len)
return 0;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
len = (u64)mtd_div_by_eb(len, mtd) * master->erasesize;
}
return master->_unlock(master, mtd_get_master_ofs(mtd, ofs), len);
}
EXPORT_SYMBOL_GPL(mtd_unlock);
......@@ -1845,6 +1985,12 @@ int mtd_is_locked(struct mtd_info *mtd, loff_t ofs, uint64_t len)
return -EINVAL;
if (!len)
return 0;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
len = (u64)mtd_div_by_eb(len, mtd) * master->erasesize;
}
return master->_is_locked(master, mtd_get_master_ofs(mtd, ofs), len);
}
EXPORT_SYMBOL_GPL(mtd_is_locked);
......@@ -1857,6 +2003,10 @@ int mtd_block_isreserved(struct mtd_info *mtd, loff_t ofs)
return -EINVAL;
if (!master->_block_isreserved)
return 0;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
return master->_block_isreserved(master, mtd_get_master_ofs(mtd, ofs));
}
EXPORT_SYMBOL_GPL(mtd_block_isreserved);
......@@ -1869,6 +2019,10 @@ int mtd_block_isbad(struct mtd_info *mtd, loff_t ofs)
return -EINVAL;
if (!master->_block_isbad)
return 0;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
return master->_block_isbad(master, mtd_get_master_ofs(mtd, ofs));
}
EXPORT_SYMBOL_GPL(mtd_block_isbad);
......@@ -1885,6 +2039,9 @@ int mtd_block_markbad(struct mtd_info *mtd, loff_t ofs)
if (!(mtd->flags & MTD_WRITEABLE))
return -EROFS;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
ret = master->_block_markbad(master, mtd_get_master_ofs(mtd, ofs));
if (ret)
return ret;
......
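
The lock/unlock/bad-block paths above all scale offsets from the emulated SLC view back to the master geometry in the same way. A worked example with made-up numbers (an MLC master with 1 MiB blocks and two pairing groups, so the SLC view exposes 512 KiB blocks):

static loff_t example_scale_slc_ofs(struct mtd_info *mtd, loff_t ofs)
{
        struct mtd_info *master = mtd_get_master(mtd);

        /*
         * Hypothetical geometry: mtd->erasesize = 0x80000 (512 KiB SLC
         * view), master->erasesize = 0x100000 (1 MiB).  An offset of
         * 0x180000 is block 3 of the SLC view and therefore maps to
         * 3 * 0x100000 = 0x300000 on the master.
         */
        return (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
}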
......@@ -35,9 +35,12 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
const struct mtd_partition *part,
int partno, uint64_t cur_offset)
{
int wr_alignment = (parent->flags & MTD_NO_ERASE) ? parent->writesize :
parent->erasesize;
struct mtd_info *child, *master = mtd_get_master(parent);
struct mtd_info *master = mtd_get_master(parent);
int wr_alignment = (parent->flags & MTD_NO_ERASE) ?
master->writesize : master->erasesize;
u64 parent_size = mtd_is_partition(parent) ?
parent->part.size : parent->size;
struct mtd_info *child;
u32 remainder;
char *name;
u64 tmp;
......@@ -56,8 +59,9 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
/* set up the MTD object for this partition */
child->type = parent->type;
child->part.flags = parent->flags & ~part->mask_flags;
child->part.flags |= part->add_flags;
child->flags = child->part.flags;
child->size = part->size;
child->part.size = part->size;
child->writesize = parent->writesize;
child->writebufsize = parent->writebufsize;
child->oobsize = parent->oobsize;
......@@ -98,29 +102,29 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
}
if (child->part.offset == MTDPART_OFS_RETAIN) {
child->part.offset = cur_offset;
if (parent->size - child->part.offset >= child->size) {
child->size = parent->size - child->part.offset -
child->size;
if (parent_size - child->part.offset >= child->part.size) {
child->part.size = parent_size - child->part.offset -
child->part.size;
} else {
printk(KERN_ERR "mtd partition \"%s\" doesn't have enough space: %#llx < %#llx, disabled\n",
part->name, parent->size - child->part.offset,
child->size);
part->name, parent_size - child->part.offset,
child->part.size);
/* register to preserve ordering */
goto out_register;
}
}
if (child->size == MTDPART_SIZ_FULL)
child->size = parent->size - child->part.offset;
if (child->part.size == MTDPART_SIZ_FULL)
child->part.size = parent_size - child->part.offset;
printk(KERN_NOTICE "0x%012llx-0x%012llx : \"%s\"\n",
child->part.offset, child->part.offset + child->size,
child->part.offset, child->part.offset + child->part.size,
child->name);
/* let's do some sanity checks */
if (child->part.offset >= parent->size) {
if (child->part.offset >= parent_size) {
/* let's register it anyway to preserve ordering */
child->part.offset = 0;
child->size = 0;
child->part.size = 0;
/* Initialize ->erasesize to make add_mtd_device() happy. */
child->erasesize = parent->erasesize;
......@@ -128,15 +132,16 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
part->name);
goto out_register;
}
if (child->part.offset + child->size > parent->size) {
child->size = parent->size - child->part.offset;
if (child->part.offset + child->part.size > parent->size) {
child->part.size = parent_size - child->part.offset;
printk(KERN_WARNING"mtd: partition \"%s\" extends beyond the end of device \"%s\" -- size truncated to %#llx\n",
part->name, parent->name, child->size);
part->name, parent->name, child->part.size);
}
if (parent->numeraseregions > 1) {
/* Deal with variable erase size stuff */
int i, max = parent->numeraseregions;
u64 end = child->part.offset + child->size;
u64 end = child->part.offset + child->part.size;
struct mtd_erase_region_info *regions = parent->eraseregions;
/* Find the first erase regions which is part of this
......@@ -156,7 +161,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
BUG_ON(child->erasesize == 0);
} else {
/* Single erase size */
child->erasesize = parent->erasesize;
child->erasesize = master->erasesize;
}
/*
......@@ -178,7 +183,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
part->name);
}
tmp = mtd_get_master_ofs(child, 0) + child->size;
tmp = mtd_get_master_ofs(child, 0) + child->part.size;
remainder = do_div(tmp, wr_alignment);
if ((child->flags & MTD_WRITEABLE) && remainder) {
child->flags &= ~MTD_WRITEABLE;
......@@ -186,6 +191,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
part->name);
}
child->size = child->part.size;
child->ecc_step_size = parent->ecc_step_size;
child->ecc_strength = parent->ecc_strength;
child->bitflip_threshold = parent->bitflip_threshold;
......@@ -193,7 +199,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
if (master->_block_isbad) {
uint64_t offs = 0;
while (offs < child->size) {
while (offs < child->part.size) {
if (mtd_block_isreserved(child, offs))
child->ecc_stats.bbtblocks++;
else if (mtd_block_isbad(child, offs))
......@@ -234,6 +240,8 @@ int mtd_add_partition(struct mtd_info *parent, const char *name,
long long offset, long long length)
{
struct mtd_info *master = mtd_get_master(parent);
u64 parent_size = mtd_is_partition(parent) ?
parent->part.size : parent->size;
struct mtd_partition part;
struct mtd_info *child;
int ret = 0;
......@@ -244,7 +252,7 @@ int mtd_add_partition(struct mtd_info *parent, const char *name,
return -EINVAL;
if (length == MTDPART_SIZ_FULL)
length = parent->size - offset;
length = parent_size - offset;
if (length <= 0)
return -EINVAL;
......@@ -419,7 +427,7 @@ int add_mtd_partitions(struct mtd_info *parent,
/* Look for subpartitions */
parse_mtd_partitions(child, parts[i].types, NULL);
cur_offset = child->part.offset + child->size;
cur_offset = child->part.offset + child->part.size;
}
return 0;
......
......@@ -213,10 +213,6 @@ config MTD_NAND_MLC_LPC32XX
Please check the actual NAND chip connected and its support
by the MLC NAND controller.
config MTD_NAND_CM_X270
tristate "CM-X270 modules NAND controller"
depends on MACH_ARMCORE
config MTD_NAND_PASEMI
tristate "PA Semi PWRficient NAND controller"
depends on PPC_PASEMI
......@@ -457,6 +453,14 @@ config MTD_NAND_CADENCE
Enable the driver for NAND flash on platforms using a Cadence NAND
controller.
config MTD_NAND_ARASAN
tristate "Support for Arasan NAND flash controller"
depends on HAS_IOMEM && HAS_DMA
select BCH
help
Enables the driver for the Arasan NAND flash controller on
Zynq Ultrascale+ MPSoC.
comment "Misc"
config MTD_SM_COMMON
......
......@@ -25,7 +25,6 @@ obj-$(CONFIG_MTD_NAND_GPIO) += gpio.o
omap2_nand-objs := omap2.o
obj-$(CONFIG_MTD_NAND_OMAP2) += omap2_nand.o
obj-$(CONFIG_MTD_NAND_OMAP_BCH_BUILD) += omap_elm.o
obj-$(CONFIG_MTD_NAND_CM_X270) += cmx270_nand.o
obj-$(CONFIG_MTD_NAND_MARVELL) += marvell_nand.o
obj-$(CONFIG_MTD_NAND_TMIO) += tmio_nand.o
obj-$(CONFIG_MTD_NAND_PLATFORM) += plat_nand.o
......@@ -58,6 +57,7 @@ obj-$(CONFIG_MTD_NAND_TEGRA) += tegra_nand.o
obj-$(CONFIG_MTD_NAND_STM32_FMC2) += stm32_fmc2_nand.o
obj-$(CONFIG_MTD_NAND_MESON) += meson_nand.o
obj-$(CONFIG_MTD_NAND_CADENCE) += cadence-nand-controller.o
obj-$(CONFIG_MTD_NAND_ARASAN) += arasan-nand-controller.o
nand-objs := nand_base.o nand_legacy.o nand_bbt.o nand_timings.o nand_ids.o
nand-objs += nand_onfi.o
......
......@@ -387,12 +387,15 @@ static int gpio_nand_remove(struct platform_device *pdev)
{
struct gpio_nand *priv = platform_get_drvdata(pdev);
struct mtd_info *mtd = nand_to_mtd(&priv->nand_chip);
int ret;
/* Apply write protection */
gpiod_set_value(priv->gpiod_nwp, 1);
/* Unregister device */
nand_release(mtd_to_nand(mtd));
ret = mtd_device_unregister(mtd);
WARN_ON(ret);
nand_cleanup(mtd_to_nand(mtd));
return 0;
}
......
......@@ -1494,7 +1494,7 @@ static void atmel_nand_init(struct atmel_nand_controller *nc,
* suitable for DMA.
*/
if (nc->dmac)
chip->options |= NAND_USE_BOUNCE_BUFFER;
chip->options |= NAND_USES_DMA;
/* Default to HW ECC if pmecc is available. */
if (nc->pmecc)
......
......@@ -60,8 +60,12 @@ static int bcm47xxnflash_probe(struct platform_device *pdev)
static int bcm47xxnflash_remove(struct platform_device *pdev)
{
struct bcm47xxnflash *nflash = platform_get_drvdata(pdev);
struct nand_chip *chip = &nflash->nand_chip;
int ret;
nand_release(&nflash->nand_chip);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
return 0;
}
......
......@@ -4,7 +4,6 @@
*/
#include <linux/clk.h>
#include <linux/version.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/delay.h>
......@@ -264,6 +263,7 @@ struct brcmnand_controller {
const unsigned int *block_sizes;
unsigned int max_page_size;
const unsigned int *page_sizes;
unsigned int page_size_shift;
unsigned int max_oob;
u32 features;
......@@ -338,8 +338,38 @@ enum brcmnand_reg {
BRCMNAND_FC_BASE,
};
/* BRCMNAND v4.0 */
static const u16 brcmnand_regs_v40[] = {
/* BRCMNAND v2.1-v2.2 */
static const u16 brcmnand_regs_v21[] = {
[BRCMNAND_CMD_START] = 0x04,
[BRCMNAND_CMD_EXT_ADDRESS] = 0x08,
[BRCMNAND_CMD_ADDRESS] = 0x0c,
[BRCMNAND_INTFC_STATUS] = 0x5c,
[BRCMNAND_CS_SELECT] = 0x14,
[BRCMNAND_CS_XOR] = 0x18,
[BRCMNAND_LL_OP] = 0,
[BRCMNAND_CS0_BASE] = 0x40,
[BRCMNAND_CS1_BASE] = 0,
[BRCMNAND_CORR_THRESHOLD] = 0,
[BRCMNAND_CORR_THRESHOLD_EXT] = 0,
[BRCMNAND_UNCORR_COUNT] = 0,
[BRCMNAND_CORR_COUNT] = 0,
[BRCMNAND_CORR_EXT_ADDR] = 0x60,
[BRCMNAND_CORR_ADDR] = 0x64,
[BRCMNAND_UNCORR_EXT_ADDR] = 0x68,
[BRCMNAND_UNCORR_ADDR] = 0x6c,
[BRCMNAND_SEMAPHORE] = 0x50,
[BRCMNAND_ID] = 0x54,
[BRCMNAND_ID_EXT] = 0,
[BRCMNAND_LL_RDATA] = 0,
[BRCMNAND_OOB_READ_BASE] = 0x20,
[BRCMNAND_OOB_READ_10_BASE] = 0,
[BRCMNAND_OOB_WRITE_BASE] = 0x30,
[BRCMNAND_OOB_WRITE_10_BASE] = 0,
[BRCMNAND_FC_BASE] = 0x200,
};
/* BRCMNAND v3.3-v4.0 */
static const u16 brcmnand_regs_v33[] = {
[BRCMNAND_CMD_START] = 0x04,
[BRCMNAND_CMD_EXT_ADDRESS] = 0x08,
[BRCMNAND_CMD_ADDRESS] = 0x0c,
......@@ -536,6 +566,9 @@ enum {
CFG_BUS_WIDTH = BIT(CFG_BUS_WIDTH_SHIFT),
CFG_DEVICE_SIZE_SHIFT = 24,
/* Only for v2.1 */
CFG_PAGE_SIZE_SHIFT_v2_1 = 30,
/* Only for pre-v7.1 (with no CFG_EXT register) */
CFG_PAGE_SIZE_SHIFT = 20,
CFG_BLK_SIZE_SHIFT = 28,
......@@ -571,12 +604,16 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl)
{
static const unsigned int block_sizes_v6[] = { 8, 16, 128, 256, 512, 1024, 2048, 0 };
static const unsigned int block_sizes_v4[] = { 16, 128, 8, 512, 256, 1024, 2048, 0 };
static const unsigned int page_sizes[] = { 512, 2048, 4096, 8192, 0 };
static const unsigned int block_sizes_v2_2[] = { 16, 128, 8, 512, 256, 0 };
static const unsigned int block_sizes_v2_1[] = { 16, 128, 8, 512, 0 };
static const unsigned int page_sizes_v3_4[] = { 512, 2048, 4096, 8192, 0 };
static const unsigned int page_sizes_v2_2[] = { 512, 2048, 4096, 0 };
static const unsigned int page_sizes_v2_1[] = { 512, 2048, 0 };
ctrl->nand_version = nand_readreg(ctrl, 0) & 0xffff;
/* Only support v4.0+? */
if (ctrl->nand_version < 0x0400) {
/* Only support v2.1+ */
if (ctrl->nand_version < 0x0201) {
dev_err(ctrl->dev, "version %#x not supported\n",
ctrl->nand_version);
return -ENODEV;
......@@ -591,8 +628,10 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl)
ctrl->reg_offsets = brcmnand_regs_v60;
else if (ctrl->nand_version >= 0x0500)
ctrl->reg_offsets = brcmnand_regs_v50;
else if (ctrl->nand_version >= 0x0400)
ctrl->reg_offsets = brcmnand_regs_v40;
else if (ctrl->nand_version >= 0x0303)
ctrl->reg_offsets = brcmnand_regs_v33;
else if (ctrl->nand_version >= 0x0201)
ctrl->reg_offsets = brcmnand_regs_v21;
/* Chip-select stride */
if (ctrl->nand_version >= 0x0701)
......@@ -606,8 +645,9 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl)
} else {
ctrl->cs_offsets = brcmnand_cs_offsets;
/* v5.0 and earlier has a different CS0 offset layout */
if (ctrl->nand_version <= 0x0500)
/* v3.3-5.0 have a different CS0 offset layout */
if (ctrl->nand_version >= 0x0303 &&
ctrl->nand_version <= 0x0500)
ctrl->cs0_offsets = brcmnand_cs_offsets_cs0;
}
......@@ -617,14 +657,32 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl)
ctrl->max_page_size = 16 * 1024;
ctrl->max_block_size = 2 * 1024 * 1024;
} else {
ctrl->page_sizes = page_sizes;
if (ctrl->nand_version >= 0x0304)
ctrl->page_sizes = page_sizes_v3_4;
else if (ctrl->nand_version >= 0x0202)
ctrl->page_sizes = page_sizes_v2_2;
else
ctrl->page_sizes = page_sizes_v2_1;
if (ctrl->nand_version >= 0x0202)
ctrl->page_size_shift = CFG_PAGE_SIZE_SHIFT;
else
ctrl->page_size_shift = CFG_PAGE_SIZE_SHIFT_v2_1;
if (ctrl->nand_version >= 0x0600)
ctrl->block_sizes = block_sizes_v6;
else
else if (ctrl->nand_version >= 0x0400)
ctrl->block_sizes = block_sizes_v4;
else if (ctrl->nand_version >= 0x0202)
ctrl->block_sizes = block_sizes_v2_2;
else
ctrl->block_sizes = block_sizes_v2_1;
if (ctrl->nand_version < 0x0400) {
ctrl->max_page_size = 4096;
if (ctrl->nand_version < 0x0202)
ctrl->max_page_size = 2048;
else
ctrl->max_page_size = 4096;
ctrl->max_block_size = 512 * 1024;
}
}
......@@ -810,6 +868,9 @@ static void brcmnand_wr_corr_thresh(struct brcmnand_host *host, u8 val)
enum brcmnand_reg reg = BRCMNAND_CORR_THRESHOLD;
int cs = host->cs;
if (!ctrl->reg_offsets[reg])
return;
if (ctrl->nand_version == 0x0702)
bits = 7;
else if (ctrl->nand_version >= 0x0600)
......@@ -868,8 +929,10 @@ static inline u32 brcmnand_spare_area_mask(struct brcmnand_controller *ctrl)
return GENMASK(7, 0);
else if (ctrl->nand_version >= 0x0600)
return GENMASK(6, 0);
else
else if (ctrl->nand_version >= 0x0303)
return GENMASK(5, 0);
else
return GENMASK(4, 0);
}
#define NAND_ACC_CONTROL_ECC_SHIFT 16
......@@ -1100,30 +1163,30 @@ static int brcmnand_hamming_ooblayout_free(struct mtd_info *mtd, int section,
struct brcmnand_cfg *cfg = &host->hwcfg;
int sas = cfg->spare_area_size << cfg->sector_size_1k;
int sectors = cfg->page_size / (512 << cfg->sector_size_1k);
u32 next;
if (section >= sectors * 2)
if (section > sectors)
return -ERANGE;
oobregion->offset = (section / 2) * sas;
next = (section * sas);
if (section < sectors)
next += 6;
if (section & 1) {
oobregion->offset += 9;
oobregion->length = 7;
if (section) {
oobregion->offset = ((section - 1) * sas) + 9;
} else {
oobregion->length = 6;
/* First sector of each page may have BBI */
if (!section) {
/*
* Small-page NAND use byte 6 for BBI while large-page
* NAND use byte 0.
*/
if (cfg->page_size > 512)
oobregion->offset++;
oobregion->length--;
if (cfg->page_size > 512) {
/* Large page NAND uses first 2 bytes for BBI */
oobregion->offset = 2;
} else {
/* Small page NAND uses last byte before ECC for BBI */
oobregion->offset = 0;
next--;
}
}
oobregion->length = next - oobregion->offset;
return 0;
}
......@@ -2018,28 +2081,31 @@ static int brcmnand_read_by_pio(struct mtd_info *mtd, struct nand_chip *chip,
static int brcmstb_nand_verify_erased_page(struct mtd_info *mtd,
struct nand_chip *chip, void *buf, u64 addr)
{
int i, sas;
void *oob = chip->oob_poi;
struct mtd_oob_region ecc;
int i;
int bitflips = 0;
int page = addr >> chip->page_shift;
int ret;
void *ecc_bytes;
void *ecc_chunk;
if (!buf)
buf = nand_get_data_buf(chip);
sas = mtd->oobsize / chip->ecc.steps;
/* read without ecc for verification */
ret = chip->ecc.read_page_raw(chip, buf, true, page);
if (ret)
return ret;
for (i = 0; i < chip->ecc.steps; i++, oob += sas) {
for (i = 0; i < chip->ecc.steps; i++) {
ecc_chunk = buf + chip->ecc.size * i;
ret = nand_check_erased_ecc_chunk(ecc_chunk,
chip->ecc.size,
oob, sas, NULL, 0,
mtd_ooblayout_ecc(mtd, i, &ecc);
ecc_bytes = chip->oob_poi + ecc.offset;
ret = nand_check_erased_ecc_chunk(ecc_chunk, chip->ecc.size,
ecc_bytes, ecc.length,
NULL, 0,
chip->ecc.strength);
if (ret < 0)
return ret;
......@@ -2377,7 +2443,7 @@ static int brcmnand_set_cfg(struct brcmnand_host *host,
(!!(cfg->device_width == 16) << CFG_BUS_WIDTH_SHIFT) |
(device_size << CFG_DEVICE_SIZE_SHIFT);
if (cfg_offs == cfg_ext_offs) {
tmp |= (page_size << CFG_PAGE_SIZE_SHIFT) |
tmp |= (page_size << ctrl->page_size_shift) |
(block_size << CFG_BLK_SIZE_SHIFT);
nand_writereg(ctrl, cfg_offs, tmp);
} else {
......@@ -2389,9 +2455,11 @@ static int brcmnand_set_cfg(struct brcmnand_host *host,
tmp = nand_readreg(ctrl, acc_control_offs);
tmp &= ~brcmnand_ecc_level_mask(ctrl);
tmp |= cfg->ecc_level << NAND_ACC_CONTROL_ECC_SHIFT;
tmp &= ~brcmnand_spare_area_mask(ctrl);
tmp |= cfg->spare_area_size;
if (ctrl->nand_version >= 0x0302) {
tmp |= cfg->ecc_level << NAND_ACC_CONTROL_ECC_SHIFT;
tmp |= cfg->spare_area_size;
}
nand_writereg(ctrl, acc_control_offs, tmp);
brcmnand_set_sector_size_1k(host, cfg->sector_size_1k);
......@@ -2577,7 +2645,7 @@ static int brcmnand_attach_chip(struct nand_chip *chip)
* to/from, and have nand_base pass us a bounce buffer instead, as
* needed.
*/
chip->options |= NAND_USE_BOUNCE_BUFFER;
chip->options |= NAND_USES_DMA;
if (chip->bbt_options & NAND_BBT_USE_FLASH)
chip->bbt_options |= NAND_BBT_NO_OOB;
......@@ -2764,6 +2832,8 @@ const struct dev_pm_ops brcmnand_pm_ops = {
EXPORT_SYMBOL_GPL(brcmnand_pm_ops);
static const struct of_device_id brcmnand_of_match[] = {
{ .compatible = "brcm,brcmnand-v2.1" },
{ .compatible = "brcm,brcmnand-v2.2" },
{ .compatible = "brcm,brcmnand-v4.0" },
{ .compatible = "brcm,brcmnand-v5.0" },
{ .compatible = "brcm,brcmnand-v6.0" },
......@@ -3045,9 +3115,15 @@ int brcmnand_remove(struct platform_device *pdev)
{
struct brcmnand_controller *ctrl = dev_get_drvdata(&pdev->dev);
struct brcmnand_host *host;
struct nand_chip *chip;
int ret;
list_for_each_entry(host, &ctrl->host_list, node)
nand_release(&host->chip);
list_for_each_entry(host, &ctrl->host_list, node) {
chip = &host->chip;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
}
clk_disable_unprepare(ctrl->clk);
......
......@@ -2223,10 +2223,12 @@ static int cadence_nand_exec_op(struct nand_chip *chip,
const struct nand_operation *op,
bool check_only)
{
int status = cadence_nand_select_target(chip);
if (!check_only) {
int status = cadence_nand_select_target(chip);
if (status)
return status;
if (status)
return status;
}
return nand_op_parser_exec_op(chip, &cadence_nand_op_parser, op,
check_only);
......@@ -2592,7 +2594,7 @@ cadence_nand_setup_data_interface(struct nand_chip *chip, int chipnr,
return 0;
}
int cadence_nand_attach_chip(struct nand_chip *chip)
static int cadence_nand_attach_chip(struct nand_chip *chip)
{
struct cdns_nand_ctrl *cdns_ctrl = to_cdns_nand_ctrl(chip->controller);
struct cdns_nand_chip *cdns_chip = to_cdns_nand_chip(chip);
......@@ -2778,9 +2780,14 @@ static int cadence_nand_chip_init(struct cdns_nand_ctrl *cdns_ctrl,
static void cadence_nand_chips_cleanup(struct cdns_nand_ctrl *cdns_ctrl)
{
struct cdns_nand_chip *entry, *temp;
struct nand_chip *chip;
int ret;
list_for_each_entry_safe(entry, temp, &cdns_ctrl->chips, node) {
nand_release(&entry->chip);
chip = &entry->chip;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
list_del(&entry->node);
}
}
......
......@@ -546,11 +546,6 @@ static int cafe_nand_write_page_lowlevel(struct nand_chip *chip,
return nand_prog_page_end_op(chip);
}
static int cafe_nand_block_bad(struct nand_chip *chip, loff_t ofs)
{
return 0;
}
/* F_2[X]/(X**6+X+1) */
static unsigned short gf64_mul(u8 a, u8 b)
{
......@@ -718,10 +713,8 @@ static int cafe_nand_probe(struct pci_dev *pdev,
/* Enable the following for a flash based bad block table */
cafe->nand.bbt_options = NAND_BBT_USE_FLASH;
if (skipbbt) {
cafe->nand.options |= NAND_SKIP_BBTSCAN;
cafe->nand.legacy.block_bad = cafe_nand_block_bad;
}
if (skipbbt)
cafe->nand.options |= NAND_SKIP_BBTSCAN | NAND_NO_BBM_QUIRK;
if (numtimings && numtimings != 3) {
dev_warn(&cafe->pdev->dev, "%d timing register values ignored; precisely three are required\n", numtimings);
......@@ -814,11 +807,14 @@ static void cafe_nand_remove(struct pci_dev *pdev)
struct mtd_info *mtd = pci_get_drvdata(pdev);
struct nand_chip *chip = mtd_to_nand(mtd);
struct cafe_priv *cafe = nand_get_controller_data(chip);
int ret;
/* Disable NAND IRQ in global IRQ mask register */
cafe_writel(cafe, ~1 & cafe_readl(cafe, GLOBAL_IRQ_MASK), GLOBAL_IRQ_MASK);
free_irq(pdev->irq, mtd);
nand_release(chip);
ret = mtd_device_unregister(mtd);
WARN_ON(ret);
nand_cleanup(chip);
free_rs(cafe->rs);
pci_iounmap(pdev, cafe->mmio);
dma_free_coherent(&cafe->pdev->dev, 2112, cafe->dmabuf, cafe->dmaaddr);
......
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2006 Compulab, Ltd.
* Mike Rapoport <mike@compulab.co.il>
*
* Derived from drivers/mtd/nand/h1910.c (removed in v3.10)
* Copyright (C) 2002 Marius Gröger (mag@sysgo.de)
* Copyright (c) 2001 Thomas Gleixner (gleixner@autronix.de)
*
* Overview:
* This is a device driver for the NAND flash device found on the
* CM-X270 board.
*/
#include <linux/mtd/rawnand.h>
#include <linux/mtd/partitions.h>
#include <linux/slab.h>
#include <linux/gpio.h>
#include <linux/module.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/mach-types.h>
#include <mach/pxa2xx-regs.h>
#define GPIO_NAND_CS (11)
#define GPIO_NAND_RB (89)
/* MTD structure for CM-X270 board */
static struct mtd_info *cmx270_nand_mtd;
/* remaped IO address of the device */
static void __iomem *cmx270_nand_io;
/*
* Define static partitions for flash device
*/
static const struct mtd_partition partition_info[] = {
[0] = {
.name = "cmx270-0",
.offset = 0,
.size = MTDPART_SIZ_FULL
}
};
#define NUM_PARTITIONS (ARRAY_SIZE(partition_info))
static u_char cmx270_read_byte(struct nand_chip *this)
{
return (readl(this->legacy.IO_ADDR_R) >> 16);
}
static void cmx270_write_buf(struct nand_chip *this, const u_char *buf,
int len)
{
int i;
for (i=0; i<len; i++)
writel((*buf++ << 16), this->legacy.IO_ADDR_W);
}
static void cmx270_read_buf(struct nand_chip *this, u_char *buf, int len)
{
int i;
for (i=0; i<len; i++)
*buf++ = readl(this->legacy.IO_ADDR_R) >> 16;
}
static inline void nand_cs_on(void)
{
gpio_set_value(GPIO_NAND_CS, 0);
}
static void nand_cs_off(void)
{
dsb();
gpio_set_value(GPIO_NAND_CS, 1);
}
/*
* hardware specific access to control-lines
*/
static void cmx270_hwcontrol(struct nand_chip *this, int dat,
unsigned int ctrl)
{
unsigned int nandaddr = (unsigned int)this->legacy.IO_ADDR_W;
dsb();
if (ctrl & NAND_CTRL_CHANGE) {
if ( ctrl & NAND_ALE )
nandaddr |= (1 << 3);
else
nandaddr &= ~(1 << 3);
if ( ctrl & NAND_CLE )
nandaddr |= (1 << 2);
else
nandaddr &= ~(1 << 2);
if ( ctrl & NAND_NCE )
nand_cs_on();
else
nand_cs_off();
}
dsb();
this->legacy.IO_ADDR_W = (void __iomem*)nandaddr;
if (dat != NAND_CMD_NONE)
writel((dat << 16), this->legacy.IO_ADDR_W);
dsb();
}
/*
* read device ready pin
*/
static int cmx270_device_ready(struct nand_chip *this)
{
dsb();
return (gpio_get_value(GPIO_NAND_RB));
}
/*
* Main initialization routine
*/
static int __init cmx270_init(void)
{
struct nand_chip *this;
int ret;
if (!(machine_is_armcore() && cpu_is_pxa27x()))
return -ENODEV;
ret = gpio_request(GPIO_NAND_CS, "NAND CS");
if (ret) {
pr_warn("CM-X270: failed to request NAND CS gpio\n");
return ret;
}
gpio_direction_output(GPIO_NAND_CS, 1);
ret = gpio_request(GPIO_NAND_RB, "NAND R/B");
if (ret) {
pr_warn("CM-X270: failed to request NAND R/B gpio\n");
goto err_gpio_request;
}
gpio_direction_input(GPIO_NAND_RB);
/* Allocate memory for MTD device structure and private data */
this = kzalloc(sizeof(struct nand_chip), GFP_KERNEL);
if (!this) {
ret = -ENOMEM;
goto err_kzalloc;
}
cmx270_nand_io = ioremap(PXA_CS1_PHYS, 12);
if (!cmx270_nand_io) {
pr_debug("Unable to ioremap NAND device\n");
ret = -EINVAL;
goto err_ioremap;
}
cmx270_nand_mtd = nand_to_mtd(this);
/* Link the private data with the MTD structure */
cmx270_nand_mtd->owner = THIS_MODULE;
/* insert callbacks */
this->legacy.IO_ADDR_R = cmx270_nand_io;
this->legacy.IO_ADDR_W = cmx270_nand_io;
this->legacy.cmd_ctrl = cmx270_hwcontrol;
this->legacy.dev_ready = cmx270_device_ready;
/* 15 us command delay time */
this->legacy.chip_delay = 20;
this->ecc.mode = NAND_ECC_SOFT;
this->ecc.algo = NAND_ECC_HAMMING;
/* read/write functions */
this->legacy.read_byte = cmx270_read_byte;
this->legacy.read_buf = cmx270_read_buf;
this->legacy.write_buf = cmx270_write_buf;
/* Scan to find existence of the device */
ret = nand_scan(this, 1);
if (ret) {
pr_notice("No NAND device\n");
goto err_scan;
}
/* Register the partitions */
ret = mtd_device_register(cmx270_nand_mtd, partition_info,
NUM_PARTITIONS);
if (ret)
goto err_scan;
/* Return happy */
return 0;
err_scan:
iounmap(cmx270_nand_io);
err_ioremap:
kfree(this);
err_kzalloc:
gpio_free(GPIO_NAND_RB);
err_gpio_request:
gpio_free(GPIO_NAND_CS);
return ret;
}
module_init(cmx270_init);
/*
* Clean up routine
*/
static void __exit cmx270_cleanup(void)
{
/* Release resources, unregister device */
nand_release(mtd_to_nand(cmx270_nand_mtd));
gpio_free(GPIO_NAND_RB);
gpio_free(GPIO_NAND_CS);
iounmap(cmx270_nand_io);
kfree(mtd_to_nand(cmx270_nand_mtd));
}
module_exit(cmx270_cleanup);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Mike Rapoport <mike@compulab.co.il>");
MODULE_DESCRIPTION("NAND flash driver for Compulab CM-X270 Module");
......@@ -21,9 +21,9 @@
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/partitions.h>
#include <linux/iopoll.h>
#include <asm/msr.h>
#include <asm/io.h>
#define NR_CS553X_CONTROLLERS 4
......@@ -89,76 +89,151 @@
#define CS_NAND_ECC_CLRECC (1<<1)
#define CS_NAND_ECC_ENECC (1<<0)
static void cs553x_read_buf(struct nand_chip *this, u_char *buf, int len)
struct cs553x_nand_controller {
struct nand_controller base;
struct nand_chip chip;
void __iomem *mmio;
};
static struct cs553x_nand_controller *
to_cs553x(struct nand_controller *controller)
{
return container_of(controller, struct cs553x_nand_controller, base);
}
static int cs553x_write_ctrl_byte(struct cs553x_nand_controller *cs553x,
u32 ctl, u8 data)
{
u8 status;
int ret;
writeb(ctl, cs553x->mmio + MM_NAND_CTL);
writeb(data, cs553x->mmio + MM_NAND_IO);
ret = readb_poll_timeout_atomic(cs553x->mmio + MM_NAND_STS, status,
!(status & CS_NAND_CTLR_BUSY), 1,
100000);
if (ret)
return ret;
return 0;
}
static void cs553x_data_in(struct cs553x_nand_controller *cs553x, void *buf,
unsigned int len)
{
writeb(0, cs553x->mmio + MM_NAND_CTL);
while (unlikely(len > 0x800)) {
memcpy_fromio(buf, this->legacy.IO_ADDR_R, 0x800);
memcpy_fromio(buf, cs553x->mmio, 0x800);
buf += 0x800;
len -= 0x800;
}
memcpy_fromio(buf, this->legacy.IO_ADDR_R, len);
memcpy_fromio(buf, cs553x->mmio, len);
}
static void cs553x_write_buf(struct nand_chip *this, const u_char *buf, int len)
static void cs553x_data_out(struct cs553x_nand_controller *cs553x,
const void *buf, unsigned int len)
{
writeb(0, cs553x->mmio + MM_NAND_CTL);
while (unlikely(len > 0x800)) {
memcpy_toio(this->legacy.IO_ADDR_R, buf, 0x800);
memcpy_toio(cs553x->mmio, buf, 0x800);
buf += 0x800;
len -= 0x800;
}
memcpy_toio(this->legacy.IO_ADDR_R, buf, len);
memcpy_toio(cs553x->mmio, buf, len);
}
static unsigned char cs553x_read_byte(struct nand_chip *this)
static int cs553x_wait_ready(struct cs553x_nand_controller *cs553x,
unsigned int timeout_ms)
{
return readb(this->legacy.IO_ADDR_R);
u8 mask = CS_NAND_CTLR_BUSY | CS_NAND_STS_FLASH_RDY;
u8 status;
return readb_poll_timeout(cs553x->mmio + MM_NAND_STS, status,
(status & mask) == CS_NAND_STS_FLASH_RDY, 100,
timeout_ms * 1000);
}
static void cs553x_write_byte(struct nand_chip *this, u_char byte)
static int cs553x_exec_instr(struct cs553x_nand_controller *cs553x,
const struct nand_op_instr *instr)
{
int i = 100000;
unsigned int i;
int ret = 0;
switch (instr->type) {
case NAND_OP_CMD_INSTR:
ret = cs553x_write_ctrl_byte(cs553x, CS_NAND_CTL_CLE,
instr->ctx.cmd.opcode);
break;
case NAND_OP_ADDR_INSTR:
for (i = 0; i < instr->ctx.addr.naddrs; i++) {
ret = cs553x_write_ctrl_byte(cs553x, CS_NAND_CTL_ALE,
instr->ctx.addr.addrs[i]);
if (ret)
break;
}
break;
case NAND_OP_DATA_IN_INSTR:
cs553x_data_in(cs553x, instr->ctx.data.buf.in,
instr->ctx.data.len);
break;
case NAND_OP_DATA_OUT_INSTR:
cs553x_data_out(cs553x, instr->ctx.data.buf.out,
instr->ctx.data.len);
break;
while (i && readb(this->legacy.IO_ADDR_R + MM_NAND_STS) & CS_NAND_CTLR_BUSY) {
udelay(1);
i--;
case NAND_OP_WAITRDY_INSTR:
ret = cs553x_wait_ready(cs553x, instr->ctx.waitrdy.timeout_ms);
break;
}
writeb(byte, this->legacy.IO_ADDR_W + 0x801);
if (instr->delay_ns)
ndelay(instr->delay_ns);
return ret;
}
static void cs553x_hwcontrol(struct nand_chip *this, int cmd,
unsigned int ctrl)
static int cs553x_exec_op(struct nand_chip *this,
const struct nand_operation *op,
bool check_only)
{
void __iomem *mmio_base = this->legacy.IO_ADDR_R;
if (ctrl & NAND_CTRL_CHANGE) {
unsigned char ctl = (ctrl & ~NAND_CTRL_CHANGE ) ^ 0x01;
writeb(ctl, mmio_base + MM_NAND_CTL);
struct cs553x_nand_controller *cs553x = to_cs553x(this->controller);
unsigned int i;
int ret;
if (check_only)
return true;
/* De-assert the CE pin */
writeb(0, cs553x->mmio + MM_NAND_CTL);
for (i = 0; i < op->ninstrs; i++) {
ret = cs553x_exec_instr(cs553x, &op->instrs[i]);
if (ret)
break;
}
if (cmd != NAND_CMD_NONE)
cs553x_write_byte(this, cmd);
}
static int cs553x_device_ready(struct nand_chip *this)
{
void __iomem *mmio_base = this->legacy.IO_ADDR_R;
unsigned char foo = readb(mmio_base + MM_NAND_STS);
/* Re-assert the CE pin. */
writeb(CS_NAND_CTL_CE, cs553x->mmio + MM_NAND_CTL);
return (foo & CS_NAND_STS_FLASH_RDY) && !(foo & CS_NAND_CTLR_BUSY);
return ret;
}
static void cs_enable_hwecc(struct nand_chip *this, int mode)
{
void __iomem *mmio_base = this->legacy.IO_ADDR_R;
struct cs553x_nand_controller *cs553x = to_cs553x(this->controller);
writeb(0x07, mmio_base + MM_NAND_ECC_CTL);
writeb(0x07, cs553x->mmio + MM_NAND_ECC_CTL);
}
static int cs_calculate_ecc(struct nand_chip *this, const u_char *dat,
u_char *ecc_code)
{
struct cs553x_nand_controller *cs553x = to_cs553x(this->controller);
uint32_t ecc;
void __iomem *mmio_base = this->legacy.IO_ADDR_R;
ecc = readl(mmio_base + MM_NAND_STS);
ecc = readl(cs553x->mmio + MM_NAND_STS);
ecc_code[1] = ecc >> 8;
ecc_code[0] = ecc >> 16;
......@@ -166,10 +241,15 @@ static int cs_calculate_ecc(struct nand_chip *this, const u_char *dat,
return 0;
}
static struct mtd_info *cs553x_mtd[4];
static struct cs553x_nand_controller *controllers[4];
static const struct nand_controller_ops cs553x_nand_controller_ops = {
.exec_op = cs553x_exec_op,
};
static int __init cs553x_init_one(int cs, int mmio, unsigned long adr)
{
struct cs553x_nand_controller *controller;
int err = 0;
struct nand_chip *this;
struct mtd_info *new_mtd;
......@@ -183,33 +263,29 @@ static int __init cs553x_init_one(int cs, int mmio, unsigned long adr)
}
/* Allocate memory for MTD device structure and private data */
this = kzalloc(sizeof(struct nand_chip), GFP_KERNEL);
if (!this) {
controller = kzalloc(sizeof(*controller), GFP_KERNEL);
if (!controller) {
err = -ENOMEM;
goto out;
}
this = &controller->chip;
nand_controller_init(&controller->base);
controller->base.ops = &cs553x_nand_controller_ops;
this->controller = &controller->base;
new_mtd = nand_to_mtd(this);
/* Link the private data with the MTD structure */
new_mtd->owner = THIS_MODULE;
/* map physical address */
this->legacy.IO_ADDR_R = this->legacy.IO_ADDR_W = ioremap(adr, 4096);
if (!this->legacy.IO_ADDR_R) {
controller->mmio = ioremap(adr, 4096);
if (!controller->mmio) {
pr_warn("ioremap cs553x NAND @0x%08lx failed\n", adr);
err = -EIO;
goto out_mtd;
}
this->legacy.cmd_ctrl = cs553x_hwcontrol;
this->legacy.dev_ready = cs553x_device_ready;
this->legacy.read_byte = cs553x_read_byte;
this->legacy.read_buf = cs553x_read_buf;
this->legacy.write_buf = cs553x_write_buf;
this->legacy.chip_delay = 0;
this->ecc.mode = NAND_ECC_HW;
this->ecc.size = 256;
this->ecc.bytes = 3;
......@@ -232,15 +308,15 @@ static int __init cs553x_init_one(int cs, int mmio, unsigned long adr)
if (err)
goto out_free;
cs553x_mtd[cs] = new_mtd;
controllers[cs] = controller;
goto out;
out_free:
kfree(new_mtd->name);
out_ior:
iounmap(this->legacy.IO_ADDR_R);
iounmap(controller->mmio);
out_mtd:
kfree(this);
kfree(controller);
out:
return err;
}
......@@ -295,9 +371,10 @@ static int __init cs553x_init(void)
/* Register all devices together here. This means we can easily hack it to
do mtdconcat etc. if we want to. */
for (i = 0; i < NR_CS553X_CONTROLLERS; i++) {
if (cs553x_mtd[i]) {
if (controllers[i]) {
/* If any devices registered, return success. Else the last error. */
mtd_device_register(cs553x_mtd[i], NULL, 0);
mtd_device_register(nand_to_mtd(&controllers[i]->chip),
NULL, 0);
err = 0;
}
}
......@@ -312,26 +389,26 @@ static void __exit cs553x_cleanup(void)
int i;
for (i = 0; i < NR_CS553X_CONTROLLERS; i++) {
struct mtd_info *mtd = cs553x_mtd[i];
struct nand_chip *this;
void __iomem *mmio_base;
struct cs553x_nand_controller *controller = controllers[i];
struct nand_chip *this = &controller->chip;
struct mtd_info *mtd = nand_to_mtd(this);
int ret;
if (!mtd)
continue;
this = mtd_to_nand(mtd);
mmio_base = this->legacy.IO_ADDR_R;
/* Release resources, unregister device */
nand_release(this);
ret = mtd_device_unregister(mtd);
WARN_ON(ret);
nand_cleanup(this);
kfree(mtd->name);
cs553x_mtd[i] = NULL;
controllers[i] = NULL;
/* unmap physical address */
iounmap(mmio_base);
iounmap(controller->mmio);
/* Free the MTD device structure */
kfree(this);
kfree(controller);
}
}
......
......@@ -764,6 +764,7 @@ static int denali_write_page(struct nand_chip *chip, const u8 *buf,
static int denali_setup_data_interface(struct nand_chip *chip, int chipnr,
const struct nand_data_interface *conf)
{
static const unsigned int data_setup_on_host = 10000;
struct denali_controller *denali = to_denali_controller(chip);
struct denali_chip_sel *sel;
const struct nand_sdr_timings *timings;
......@@ -796,15 +797,6 @@ static int denali_setup_data_interface(struct nand_chip *chip, int chipnr,
sel = &to_denali_chip(chip)->sels[chipnr];
/* tREA -> ACC_CLKS */
acc_clks = DIV_ROUND_UP(timings->tREA_max, t_x);
acc_clks = min_t(int, acc_clks, ACC_CLKS__VALUE);
tmp = ioread32(denali->reg + ACC_CLKS);
tmp &= ~ACC_CLKS__VALUE;
tmp |= FIELD_PREP(ACC_CLKS__VALUE, acc_clks);
sel->acc_clks = tmp;
/* tRWH -> RE_2_WE */
re_2_we = DIV_ROUND_UP(timings->tRHW_min, t_x);
re_2_we = min_t(int, re_2_we, RE_2_WE__VALUE);
......@@ -862,14 +854,45 @@ static int denali_setup_data_interface(struct nand_chip *chip, int chipnr,
tmp |= FIELD_PREP(RDWR_EN_HI_CNT__VALUE, rdwr_en_hi);
sel->rdwr_en_hi_cnt = tmp;
/* tRP, tWP -> RDWR_EN_LO_CNT */
/*
* tREA -> ACC_CLKS
* tRP, tWP, tRHOH, tRC, tWC -> RDWR_EN_LO_CNT
*/
/*
* Determine the minimum of acc_clks to meet the setup timing when
* capturing the incoming data.
*
* The delay on the chip side is well-defined as tREA, but we need to
* take additional delay into account. This includes a certain degree
* of unknowledge, such as signal propagation delays on the PCB and
* in the SoC, load capacity of the I/O pins, etc.
*/
acc_clks = DIV_ROUND_UP(timings->tREA_max + data_setup_on_host, t_x);
/* Determine the minimum of rdwr_en_lo_cnt from RE#/WE# pulse width */
rdwr_en_lo = DIV_ROUND_UP(max(timings->tRP_min, timings->tWP_min), t_x);
/* Extend rdwr_en_lo to meet the data hold timing */
rdwr_en_lo = max_t(int, rdwr_en_lo,
acc_clks - timings->tRHOH_min / t_x);
/* Extend rdwr_en_lo to meet the requirement for RE#/WE# cycle time */
rdwr_en_lo_hi = DIV_ROUND_UP(max(timings->tRC_min, timings->tWC_min),
t_x);
rdwr_en_lo_hi = max_t(int, rdwr_en_lo_hi, mult_x);
rdwr_en_lo = max(rdwr_en_lo, rdwr_en_lo_hi - rdwr_en_hi);
rdwr_en_lo = min_t(int, rdwr_en_lo, RDWR_EN_LO_CNT__VALUE);
/* Center the data latch timing for extra safety */
acc_clks = (acc_clks + rdwr_en_lo +
DIV_ROUND_UP(timings->tRHOH_min, t_x)) / 2;
acc_clks = min_t(int, acc_clks, ACC_CLKS__VALUE);
tmp = ioread32(denali->reg + ACC_CLKS);
tmp &= ~ACC_CLKS__VALUE;
tmp |= FIELD_PREP(ACC_CLKS__VALUE, acc_clks);
sel->acc_clks = tmp;
tmp = ioread32(denali->reg + RDWR_EN_LO_CNT);
tmp &= ~RDWR_EN_LO_CNT__VALUE;
tmp |= FIELD_PREP(RDWR_EN_LO_CNT__VALUE, rdwr_en_lo);
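
To make the reworked acc_clks/rdwr_en_lo derivation above concrete, here is a worked example with made-up timings (picoseconds, as in struct nand_sdr_timings; the tRC/tWC extension step is left out for brevity):

/*
 * Hypothetical inputs: t_x = 10000, tREA_max = 20000,
 * data_setup_on_host = 10000, tRP_min = tWP_min = 12000, tRHOH_min = 15000.
 *
 *   acc_clks   = DIV_ROUND_UP(20000 + 10000, 10000)        = 3
 *   rdwr_en_lo = DIV_ROUND_UP(max(12000, 12000), 10000)    = 2
 *   rdwr_en_lo = max(2, 3 - 15000 / 10000) = max(2, 2)     = 2
 *   acc_clks   = (3 + 2 + DIV_ROUND_UP(15000, 10000)) / 2  = 3   (centred)
 */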
......@@ -1203,7 +1226,7 @@ int denali_chip_init(struct denali_controller *denali,
mtd->name = "denali-nand";
if (denali->dma_avail) {
chip->options |= NAND_USE_BOUNCE_BUFFER;
chip->options |= NAND_USES_DMA;
chip->buf_align = 16;
}
......@@ -1336,10 +1359,17 @@ EXPORT_SYMBOL(denali_init);
void denali_remove(struct denali_controller *denali)
{
struct denali_chip *dchip;
struct denali_chip *dchip, *tmp;
struct nand_chip *chip;
int ret;
list_for_each_entry(dchip, &denali->chips, node)
nand_release(&dchip->chip);
list_for_each_entry_safe(dchip, tmp, &denali->chips, node) {
chip = &dchip->chip;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
list_del(&dchip->node);
}
denali_disable_irq(denali);
}
......
......@@ -956,8 +956,13 @@ static int fsl_elbc_nand_remove(struct platform_device *pdev)
{
struct fsl_elbc_fcm_ctrl *elbc_fcm_ctrl = fsl_lbc_ctrl_dev->nand;
struct fsl_elbc_mtd *priv = dev_get_drvdata(&pdev->dev);
struct nand_chip *chip = &priv->chip;
int ret;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
nand_release(&priv->chip);
fsl_elbc_chip_remove(priv);
mutex_lock(&fsl_elbc_nand_mutex);
......
......@@ -1093,8 +1093,13 @@ static int fsl_ifc_nand_probe(struct platform_device *dev)
static int fsl_ifc_nand_remove(struct platform_device *dev)
{
struct fsl_ifc_mtd *priv = dev_get_drvdata(&dev->dev);
struct nand_chip *chip = &priv->chip;
int ret;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
nand_release(&priv->chip);
fsl_ifc_chip_remove(priv);
mutex_lock(&fsl_ifc_nand_mutex);
......
......@@ -317,10 +317,13 @@ static int fun_probe(struct platform_device *ofdev)
static int fun_remove(struct platform_device *ofdev)
{
struct fsl_upm_nand *fun = dev_get_drvdata(&ofdev->dev);
struct mtd_info *mtd = nand_to_mtd(&fun->chip);
int i;
struct nand_chip *chip = &fun->chip;
struct mtd_info *mtd = nand_to_mtd(chip);
int ret, i;
nand_release(&fun->chip);
ret = mtd_device_unregister(mtd);
WARN_ON(ret);
nand_cleanup(chip);
kfree(mtd->name);
for (i = 0; i < fun->mchip_count; i++) {
......
......@@ -608,6 +608,9 @@ static int fsmc_exec_op(struct nand_chip *chip, const struct nand_operation *op,
unsigned int op_id;
int i;
if (check_only)
return 0;
pr_debug("Executing operation [%d instructions]:\n", op->ninstrs);
for (op_id = 0; op_id < op->ninstrs; op_id++) {
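The early return added here (and the matching one in gpmi_nfc_exec_op() further down) implements the check_only semantics: the core may call ->exec_op() only to ask whether an operation pattern is supported, and the controller must then answer without touching the hardware. A hedged sketch of that contract, using a hypothetical my_exec_op() name:
```c
#include <linux/mtd/rawnand.h>

static int my_exec_op(struct nand_chip *chip,
		      const struct nand_operation *op, bool check_only)
{
	if (check_only)
		return 0;	/* every pattern is handled by this controller */

	/* ... otherwise issue op->instrs[0..op->ninstrs-1] to the hardware ... */
	return 0;
}
```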
......@@ -691,7 +694,7 @@ static int fsmc_read_page_hwecc(struct nand_chip *chip, u8 *buf,
for (i = 0, s = 0; s < eccsteps; s++, i += eccbytes, p += eccsize) {
nand_read_page_op(chip, page, s * eccsize, NULL, 0);
chip->ecc.hwctl(chip, NAND_ECC_READ);
ret = nand_read_data_op(chip, p, eccsize, false);
ret = nand_read_data_op(chip, p, eccsize, false, false);
if (ret)
return ret;
......@@ -809,11 +812,12 @@ static int fsmc_bch8_correct_data(struct nand_chip *chip, u8 *dat,
i = 0;
while (num_err--) {
change_bit(0, (unsigned long *)&err_idx[i]);
change_bit(1, (unsigned long *)&err_idx[i]);
err_idx[i] ^= 3;
if (err_idx[i] < chip->ecc.size * 8) {
change_bit(err_idx[i], (unsigned long *)dat);
int err = err_idx[i];
dat[err >> 3] ^= BIT(err & 7);
i++;
}
}
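The rewritten correction step avoids change_bit() on a cast pointer: XOR-ing err_idx with 3 toggles its two least-significant bits (what the two change_bit() calls did), and the byte/bit indexing flips the erroneous data bit directly. A small standalone illustration (not part of the commit), assuming BIT() behaves as in <linux/bits.h>:
```c
#include <stdint.h>
#include <stdio.h>

#define BIT(n) (1UL << (n))

int main(void)
{
	/* err_idx ^= 3 toggles bits 0 and 1 without atomic bit ops. */
	unsigned int err_idx = 10;	/* 0b1010 */
	err_idx ^= 3;			/* 0b1001 = 9 */

	/* dat[err >> 3] ^= BIT(err & 7) flips bit 'err' of a byte array
	 * without casting it to unsigned long for change_bit(). */
	uint8_t dat[2] = { 0x00, 0x00 };
	unsigned int err = err_idx;	/* bit 9 -> byte 1, bit 1 */
	dat[err >> 3] ^= BIT(err & 7);

	printf("err_idx=%u dat[1]=0x%02x\n", err_idx, dat[1]);	/* 9, 0x02 */
	return 0;
}
```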
......@@ -1132,7 +1136,12 @@ static int fsmc_nand_remove(struct platform_device *pdev)
struct fsmc_nand_data *host = platform_get_drvdata(pdev);
if (host) {
nand_release(&host->nand);
struct nand_chip *chip = &host->nand;
int ret;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
fsmc_nand_disable(host);
if (host->mode == USE_DMA_ACCESS) {
......
......@@ -190,8 +190,12 @@ gpio_nand_get_io_sync(struct platform_device *pdev)
static int gpio_nand_remove(struct platform_device *pdev)
{
struct gpiomtd *gpiomtd = platform_get_drvdata(pdev);
struct nand_chip *chip = &gpiomtd->nand_chip;
int ret;
nand_release(&gpiomtd->nand_chip);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
/* Enable write protection and disable the chip */
if (gpiomtd->nwp && !IS_ERR(gpiomtd->nwp))
......
......@@ -540,8 +540,10 @@ static int bch_set_geometry(struct gpmi_nand_data *this)
return ret;
ret = pm_runtime_get_sync(this->dev);
if (ret < 0)
if (ret < 0) {
pm_runtime_put_autosuspend(this->dev);
return ret;
}
/*
* Due to erratum #2847 of the MX23, the BCH cannot be soft reset on this
......@@ -834,158 +836,6 @@ static bool prepare_data_dma(struct gpmi_nand_data *this, const void *buf,
return false;
}
/**
* gpmi_copy_bits - copy bits from one memory region to another
* @dst: destination buffer
* @dst_bit_off: bit offset we're starting to write at
* @src: source buffer
* @src_bit_off: bit offset we're starting to read from
* @nbits: number of bits to copy
*
* This function copies bits from one memory region to another, and is used by
* the GPMI driver to copy ECC sections which are not guaranteed to be byte
* aligned.
*
* src and dst should not overlap.
*
*/
static void gpmi_copy_bits(u8 *dst, size_t dst_bit_off, const u8 *src,
size_t src_bit_off, size_t nbits)
{
size_t i;
size_t nbytes;
u32 src_buffer = 0;
size_t bits_in_src_buffer = 0;
if (!nbits)
return;
/*
* Move src and dst pointers to the closest byte pointer and store bit
* offsets within a byte.
*/
src += src_bit_off / 8;
src_bit_off %= 8;
dst += dst_bit_off / 8;
dst_bit_off %= 8;
/*
* Initialize the src_buffer value with bits available in the first
* byte of data so that we end up with a byte aligned src pointer.
*/
if (src_bit_off) {
src_buffer = src[0] >> src_bit_off;
if (nbits >= (8 - src_bit_off)) {
bits_in_src_buffer += 8 - src_bit_off;
} else {
src_buffer &= GENMASK(nbits - 1, 0);
bits_in_src_buffer += nbits;
}
nbits -= bits_in_src_buffer;
src++;
}
/* Calculate the number of bytes that can be copied from src to dst. */
nbytes = nbits / 8;
/* Try to align dst to a byte boundary. */
if (dst_bit_off) {
if (bits_in_src_buffer < (8 - dst_bit_off) && nbytes) {
src_buffer |= src[0] << bits_in_src_buffer;
bits_in_src_buffer += 8;
src++;
nbytes--;
}
if (bits_in_src_buffer >= (8 - dst_bit_off)) {
dst[0] &= GENMASK(dst_bit_off - 1, 0);
dst[0] |= src_buffer << dst_bit_off;
src_buffer >>= (8 - dst_bit_off);
bits_in_src_buffer -= (8 - dst_bit_off);
dst_bit_off = 0;
dst++;
if (bits_in_src_buffer > 7) {
bits_in_src_buffer -= 8;
dst[0] = src_buffer;
dst++;
src_buffer >>= 8;
}
}
}
if (!bits_in_src_buffer && !dst_bit_off) {
/*
* Both src and dst pointers are byte aligned, thus we can
* just use the optimized memcpy function.
*/
if (nbytes)
memcpy(dst, src, nbytes);
} else {
/*
* src buffer is not byte aligned, hence we have to copy each
* src byte to the src_buffer variable before extracting a byte
* to store in dst.
*/
for (i = 0; i < nbytes; i++) {
src_buffer |= src[i] << bits_in_src_buffer;
dst[i] = src_buffer;
src_buffer >>= 8;
}
}
/* Update dst and src pointers */
dst += nbytes;
src += nbytes;
/*
* nbits is the number of remaining bits. It should not exceed 8 as
* we've already copied as many bytes as possible.
*/
nbits %= 8;
/*
* If there are no more bits to copy to the destination and the src buffer
* was already byte aligned, then we're done.
*/
if (!nbits && !bits_in_src_buffer)
return;
/* Copy the remaining bits to src_buffer */
if (nbits)
src_buffer |= (*src & GENMASK(nbits - 1, 0)) <<
bits_in_src_buffer;
bits_in_src_buffer += nbits;
/*
* In case there were not enough bits to get a byte aligned dst buffer
* prepare the src_buffer variable to match the dst organization (shift
* src_buffer by dst_bit_off and retrieve the least significant bits
* from dst).
*/
if (dst_bit_off)
src_buffer = (src_buffer << dst_bit_off) |
(*dst & GENMASK(dst_bit_off - 1, 0));
bits_in_src_buffer += dst_bit_off;
/*
* Keep most significant bits from dst if we end up with an unaligned
* number of bits.
*/
nbytes = bits_in_src_buffer / 8;
if (bits_in_src_buffer % 8) {
src_buffer |= (dst[nbytes] &
GENMASK(7, bits_in_src_buffer % 8)) <<
(nbytes * 8);
nbytes++;
}
/* Copy the remaining bytes to dst */
for (i = 0; i < nbytes; i++) {
dst[i] = src_buffer;
src_buffer >>= 8;
}
}
/* add our owner bbt descriptor */
static uint8_t scan_ff_pattern[] = { 0xff };
static struct nand_bbt_descr gpmi_bbt_descr = {
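The open-coded gpmi_copy_bits() removed above is replaced by the generic nand_extract_bits() helper used in the hunks below. As a rough model of its semantics only, not the in-kernel implementation (which copies whole bytes where it can), the sketch copies nbits bits, LSB-first within each byte, from src at src_off to dst at dst_off:
```c
#include <stdint.h>
#include <stddef.h>

/* Naive, bit-at-a-time model of the copy performed by the helper. */
static void extract_bits_model(uint8_t *dst, size_t dst_off,
			       const uint8_t *src, size_t src_off,
			       size_t nbits)
{
	for (size_t i = 0; i < nbits; i++) {
		size_t s = src_off + i, d = dst_off + i;
		uint8_t bit = (src[s / 8] >> (s % 8)) & 1;

		dst[d / 8] &= ~(1u << (d % 8));	/* clear the target bit */
		dst[d / 8] |= bit << (d % 8);	/* copy the source bit  */
	}
}
```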
......@@ -1713,7 +1563,7 @@ static int gpmi_ecc_write_oob(struct nand_chip *chip, int page)
* inline (interleaved with payload DATA), and do not align data chunks on
* byte boundaries.
* We thus need to take care of moving the payload data and ECC bits stored in the
* page into the provided buffers, which is why we're using gpmi_copy_bits.
* page into the provided buffers, which is why we're using nand_extract_bits().
*
* See set_geometry_by_ecc_info inline comments to have a full description
* of the layout used by the GPMI controller.
......@@ -1762,9 +1612,8 @@ static int gpmi_ecc_read_page_raw(struct nand_chip *chip, uint8_t *buf,
/* Extract interleaved payload data and ECC bits */
for (step = 0; step < nfc_geo->ecc_chunk_count; step++) {
if (buf)
gpmi_copy_bits(buf, step * eccsize * 8,
tmp_buf, src_bit_off,
eccsize * 8);
nand_extract_bits(buf, step * eccsize, tmp_buf,
src_bit_off, eccsize * 8);
src_bit_off += eccsize * 8;
/* Align the last ECC block to a byte boundary */
......@@ -1773,9 +1622,8 @@ static int gpmi_ecc_read_page_raw(struct nand_chip *chip, uint8_t *buf,
eccbits += 8 - ((oob_bit_off + eccbits) % 8);
if (oob_required)
gpmi_copy_bits(oob, oob_bit_off,
tmp_buf, src_bit_off,
eccbits);
nand_extract_bits(oob, oob_bit_off, tmp_buf,
src_bit_off, eccbits);
src_bit_off += eccbits;
oob_bit_off += eccbits;
......@@ -1800,7 +1648,7 @@ static int gpmi_ecc_read_page_raw(struct nand_chip *chip, uint8_t *buf,
* inline (interleaved with payload DATA), and do not align data chunks on
* byte boundaries.
* We thus need to take care of moving the OOB area to the right place in the
* final page, which is why we're using gpmi_copy_bits.
* final page, which is why we're using nand_extract_bits().
*
* See set_geometry_by_ecc_info inline comments to have a full description
* of the layout used by the GPMI controller.
......@@ -1839,8 +1687,8 @@ static int gpmi_ecc_write_page_raw(struct nand_chip *chip, const uint8_t *buf,
/* Interleave payload data and ECC bits */
for (step = 0; step < nfc_geo->ecc_chunk_count; step++) {
if (buf)
gpmi_copy_bits(tmp_buf, dst_bit_off,
buf, step * eccsize * 8, eccsize * 8);
nand_extract_bits(tmp_buf, dst_bit_off, buf,
step * eccsize * 8, eccsize * 8);
dst_bit_off += eccsize * 8;
/* Align the last ECC block to a byte boundary */
......@@ -1849,8 +1697,8 @@ static int gpmi_ecc_write_page_raw(struct nand_chip *chip, const uint8_t *buf,
eccbits += 8 - ((oob_bit_off + eccbits) % 8);
if (oob_required)
gpmi_copy_bits(tmp_buf, dst_bit_off,
oob, oob_bit_off, eccbits);
nand_extract_bits(tmp_buf, dst_bit_off, oob,
oob_bit_off, eccbits);
dst_bit_off += eccbits;
oob_bit_off += eccbits;
......@@ -2408,6 +2256,9 @@ static int gpmi_nfc_exec_op(struct nand_chip *chip,
struct completion *completion;
unsigned long to;
if (check_only)
return 0;
this->ntransfers = 0;
for (i = 0; i < GPMI_MAX_TRANSFERS; i++)
this->transfers[i].direction = DMA_NONE;
......@@ -2658,7 +2509,7 @@ static int gpmi_nand_probe(struct platform_device *pdev)
ret = __gpmi_enable_clk(this, true);
if (ret)
goto exit_nfc_init;
goto exit_acquire_resources;
pm_runtime_set_autosuspend_delay(&pdev->dev, 500);
pm_runtime_use_autosuspend(&pdev->dev);
......@@ -2693,11 +2544,15 @@ static int gpmi_nand_probe(struct platform_device *pdev)
static int gpmi_nand_remove(struct platform_device *pdev)
{
struct gpmi_nand_data *this = platform_get_drvdata(pdev);
struct nand_chip *chip = &this->nand;
int ret;
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
nand_release(&this->nand);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
gpmi_free_dma_buffer(this);
release_resources(this);
return 0;
......
......@@ -411,6 +411,7 @@ static int elm_probe(struct platform_device *pdev)
pm_runtime_enable(&pdev->dev);
if (pm_runtime_get_sync(&pdev->dev) < 0) {
ret = -EINVAL;
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
dev_err(&pdev->dev, "can't enable clock\n");
return ret;
......
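This elm_probe() change and the earlier bch_set_geometry() hunk fix the same runtime-PM imbalance: pm_runtime_get_sync() raises the usage count even when it fails, so the error path must drop that reference before returning. A minimal sketch of the pattern, with a hypothetical example_get() helper:
```c
#include <linux/pm_runtime.h>

static int example_get(struct device *dev)
{
	int ret = pm_runtime_get_sync(dev);

	if (ret < 0) {
		/* Balance the reference taken by the failed get; the gpmi fix
		 * uses pm_runtime_put_autosuspend(), the elm fix put_sync(). */
		pm_runtime_put_sync(dev);
		return ret;
	}

	return 0;
}
```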