- 09 Sep, 2024 4 commits
-
-
Cheng Ming Lin authored
Macronix serial NAND flash with a two-plane structure requires insertion of the Plane Select bit into the column address during the write_to_cache operation. Additionally, for MX35{U,F}2G14AC and MX35LF2GE4AB, insertion of the Plane Select bit into the column address is required during the read_from_cache operation. Signed-off-by: Cheng Ming Lin <chengminglin@mxic.com.tw> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240909092643.2434479-3-linchengming884@gmail.com
-
Cheng Ming Lin authored
Add two flags for inserting the Plane Select bit into the column address during the write_to_cache and the read_from_cache operation. Add the SPINAND_HAS_PROG_PLANE_SELECT_BIT flag for serial NAND flash that require inserting the Plane Select bit into the column address during the write_to_cache operation. Add the SPINAND_HAS_READ_PLANE_SELECT_BIT flag for serial NAND flash that require inserting the Plane Select bit into the column address during the read_from_cache operation. Signed-off-by: Cheng Ming Lin <chengminglin@mxic.com.tw> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240909092643.2434479-2-linchengming884@gmail.com
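A minimal sketch of how such flags could gate the insertion of the Plane Select bit into the column address. The two SPINAND_HAS_*_PLANE_SELECT_BIT flags come from the commit above; the helper name and the exact bit position (right above the page size) are assumptions of this example, not necessarily the final code:

```c
#include <linux/bitops.h>
#include <linux/mtd/spinand.h>

/* Hypothetical helper: fold the Plane Select bit into a column address
 * when the chip declares one of the two flags described above. */
static u16 spinand_plane_column(struct spinand_device *spinand,
				const struct nand_page_io_req *req,
				u16 column, bool write)
{
	struct nand_device *nand = spinand_to_nand(spinand);
	u32 flag = write ? SPINAND_HAS_PROG_PLANE_SELECT_BIT :
			   SPINAND_HAS_READ_PLANE_SELECT_BIT;

	/* Assumption: the Plane Select bit sits just above the highest
	 * column address bit, i.e. right above the page size. */
	if (spinand->flags & flag)
		column |= req->pos.plane << fls(nanddev_page_size(nand));

	return column;
}
```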
-
Roger Quadros authored
Allow fixed-partitions to be specified through a partitions node. Fixes the below dtbs_check warning: arch/arm64/boot/dts/ti/k3-am642-evm-nand.dtb: nand@0,0: Unevaluated properties are not allowed ('partitions' was unexpected) Signed-off-by: Roger Quadros <rogerq@kernel.org> Reviewed-by: Rob Herring (Arm) <robh@kernel.org> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240903-gpmc-dtb-v2-1-8046c1915b96@kernel.org
-
Miquel Raynal authored
Reviewing a series converting the for_each_child_of_node() loops into their _scoped variants made me realize there was no cleanup of the already registered NAND devices upon error, which may leak memory on systems with more than one chip when this error occurs. We should call the _nand_chips_cleanup() function when this happens. Fixes: 1d6b1e46 ("mtd: mediatek: driver for MTK Smart Device") Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Pratyush Yadav <pratyush@kernel.org> Reviewed-by: Matthias Brugger <matthias.bgg@kernel.org> Link: https://lore.kernel.org/linux-mtd/20240826153019.67106-2-miquel.raynal@bootlin.com
-
- 06 Sep, 2024 24 commits
-
-
Miquel Raynal authored
There are some un-freed resources in one of the error paths which would benefit from a helper going through all the registered mtk chips one by one and performing all the necessary cleanup. This is precisely what the remove path does, so let's extract the logic into a helper. There is no functional change. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Pratyush Yadav <pratyush@kernel.org> Reviewed-by: Matthias Brugger <matthias.bgg@kernel.org> Link: https://lore.kernel.org/linux-mtd/20240826153019.67106-1-miquel.raynal@bootlin.com
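A minimal sketch of the kind of helper described above, assuming the mtk_nfc driver's internal per-controller chip list; struct and field names are taken loosely from that driver and should be treated as illustrative:

```c
#include <linux/list.h>
#include <linux/mtd/rawnand.h>

/* Walk the chips already registered on this controller and undo their
 * registration, exactly like the remove path does. The mtk_nfc and
 * mtk_nfc_nand_chip types are the driver's internal structures. */
static void mtk_nfc_nand_chips_cleanup(struct mtk_nfc *nfc)
{
	struct mtk_nfc_nand_chip *mtk_nand;
	struct nand_chip *chip;

	while (!list_empty(&nfc->chips)) {
		mtk_nand = list_first_entry(&nfc->chips,
					    struct mtk_nfc_nand_chip, node);
		chip = &mtk_nand->nand;
		WARN_ON(mtd_device_unregister(nand_to_mtd(chip)));
		nand_cleanup(chip);
		list_del(&mtk_nand->node);
	}
}
```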
-
Alexander Dahl authored
Like for other atmel drivers (serial, crypto, mmc, …). Signed-off-by: Alexander Dahl <ada@thorsis.com> Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240828063707.73869-1-ada@thorsis.com
-
Miquel Raynal authored
There is a reason why we sometimes write "NAND chip" with an 's': it usually means several chips can be managed by the same controller. So when initializing a single chip at a time, the wording "chip" must be used; when talking about all the chips managed by the controller, we want to use "chips". Fix the function name to clarify the meson_nfc_nand_chip*s*_cleanup() helper's intent. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Pratyush Yadav <pratyush@kernel.org> Link: https://lore.kernel.org/linux-mtd/20240826153158.67334-1-miquel.raynal@bootlin.com
-
Miquel Raynal authored
Enabling continuous read support implies several changes which must be done atomically in order to keep the code base consistent and bisectable.

1/ Retrieving bitflips differently

Improve the helper retrieving the number of bitflips to support the case where many pages have been read instead of just one. In this case, if there is one page with bitflips, we cannot know the detail and just get the maximum number of bitflips corrected in the most corrupted chunk. Compatible Macronix flashes return:
- the ECC status for the last page read (bits 0-3),
- the amount of bitflips for the whole read operation (bits 4-7).
Hence, when reading two consecutive pages, if at most 2 bits were corrected in one chunk, we return this amount times the number of read pages (an arbitrary choice). It is probably a very pessimistic calculation in most cases, but still less pessimistic than multiplying this amount by the number of chunks. Anyway, this is just for statistics; the important data is the maximum amount of bitflips, which drives wear leveling.

2/ Configuring, enabling and disabling the feature

Create an init function for allocating a vendor structure. Use this vendor structure to cache the internal continuous read state. The state is used to discriminate between the two bitflip retrieval methods. Finally, helpers for enabling and disabling sequential reads are also created.

3/ Fill the chips table

Flag all the chips supporting the feature with the ->set_cont_read() helper.

In order to validate the changes, I modified the mtd-utils test suite with extended versions of nandbiterrs, nanddump and flash_speed in order to support, test and benchmark continuous reads. I also ran all the UBI tests successfully. The nandbiterrs tool allows tracking the ECC efficiency and feedback.

Here is its default output (stripped):
Successfully corrected 0 bit errors per subpage
Read reported 1 corrected bit errors
Successfully corrected 1 bit errors per subpage
Read reported 2 corrected bit errors
Successfully corrected 2 bit errors per subpage
Read reported 3 corrected bit errors
Successfully corrected 3 bit errors per subpage
Read reported 4 corrected bit errors
Successfully corrected 4 bit errors per subpage
Read reported 5 corrected bit errors
Successfully corrected 5 bit errors per subpage
Read reported 6 corrected bit errors
Successfully corrected 6 bit errors per subpage
Read reported 7 corrected bit errors
Successfully corrected 7 bit errors per subpage
Read reported 8 corrected bit errors
Successfully corrected 8 bit errors per subpage
Failed to recover 1 bitflips
Read error after 9 bit errors per page

The output using the continuous option over two pages (the second page is kept intact):
Successfully corrected 0 bit errors per subpage
Read reported 2 corrected bit errors
Successfully corrected 1 bit errors per subpage
Read reported 4 corrected bit errors
Successfully corrected 2 bit errors per subpage
Read reported 6 corrected bit errors
Successfully corrected 3 bit errors per subpage
Read reported 8 corrected bit errors
Successfully corrected 4 bit errors per subpage
Read reported 10 corrected bit errors
Successfully corrected 5 bit errors per subpage
Read reported 12 corrected bit errors
Successfully corrected 6 bit errors per subpage
Read reported 14 corrected bit errors
Successfully corrected 7 bit errors per subpage
Read reported 16 corrected bit errors
Successfully corrected 8 bit errors per subpage
Failed to recover 1 bitflips
Read error after 9 bit errors per page

Regarding the throughput improvements, tests have been conducted in 1-1-1 and 1-1-4 modes, reading a full block X pages at a time, X ranging from 1 to 64 (the size of a block with the tested device). The percent value on the right is the comparison with the same test conducted without the continuous read feature, i.e. reading X pages in one single user request, which got naturally split by the core into single-page reads when the continuous read optimization is disabled.

* 1-1-1 result:
1 page read speed is 2634 KiB/s
2 page read speed is 2704 KiB/s (+3%)
3 page read speed is 2747 KiB/s (+5%)
4 page read speed is 2804 KiB/s (+7%)
5 page read speed is 2782 KiB/s
6 page read speed is 2826 KiB/s
7 page read speed is 2834 KiB/s
8 page read speed is 2821 KiB/s
9 page read speed is 2846 KiB/s
10 page read speed is 2819 KiB/s
11 page read speed is 2871 KiB/s (+10%)
12 page read speed is 2823 KiB/s
13 page read speed is 2880 KiB/s
14 page read speed is 2842 KiB/s
15 page read speed is 2862 KiB/s
16 page read speed is 2837 KiB/s
32 page read speed is 2879 KiB/s
64 page read speed is 2842 KiB/s

* 1-1-4 result:
1 page read speed is 7562 KiB/s
2 page read speed is 8904 KiB/s (+15%)
3 page read speed is 9655 KiB/s (+25%)
4 page read speed is 10118 KiB/s (+30%)
5 page read speed is 10084 KiB/s
6 page read speed is 10300 KiB/s
7 page read speed is 10434 KiB/s (+35%)
8 page read speed is 10406 KiB/s
9 page read speed is 10769 KiB/s (+40%)
10 page read speed is 10666 KiB/s
11 page read speed is 10757 KiB/s
12 page read speed is 10835 KiB/s
13 page read speed is 10976 KiB/s
14 page read speed is 11200 KiB/s
15 page read speed is 11009 KiB/s
16 page read speed is 11082 KiB/s
32 page read speed is 11352 KiB/s (+45%)
64 page read speed is 11403 KiB/s

This work has received support and could be achieved thanks to Alvin Zhou <alvinzhou@mxic.com.tw>. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-10-miquel.raynal@bootlin.com
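A hedged sketch of the accumulated bitflip accounting described in 1/ above. The bits 0-3 / bits 4-7 split is quoted from the commit message; the function and the way the count is returned are illustrative only:

```c
#include <linux/bitfield.h>
#include <linux/bits.h>

/* Hypothetical decode of the Macronix ECC status register after a
 * continuous read covering 'npages' pages. */
static unsigned int macronix_cont_read_bitflips(u8 eccsr, unsigned int npages)
{
	/* Bits 4-7: most bitflips corrected in a single chunk over the
	 * whole read operation (bits 0-3 only describe the last page). */
	unsigned int worst = FIELD_GET(GENMASK(7, 4), eccsr);

	/*
	 * We cannot tell which pages were affected, so pessimistically
	 * report "worst case times the number of pages read": still less
	 * pessimistic than multiplying by the number of chunks.
	 */
	return worst * npages;
}
```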
-
Miquel Raynal authored
Macronix SPI-NANDs encode the ECC status into two bits. There are three standard situations (no bitflip, bitflips, error), plus an additional possible situation which is only triggered when configuring the 0x10 configuration register, making it possible to know, if there have been bitflips, whether the maximum amount of bitflips was above a configurable threshold or not. In all cases, for now, as this configuration register is unset, it means the same as "there are bitflips". This value is maybe standard, maybe not. For now, let's define it in the Macronix driver; we can safely move it to a shared place later if that is relevant. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-9-miquel.raynal@bootlin.com
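A minimal sketch of the four-state decode described above. The STATUS_ECC_* masks are the SPI-NAND core's standard definitions; the *_THRESHOLD value and the bitflip-counting helper are named here for illustration only:

```c
#include <linux/errno.h>
#include <linux/mtd/spinand.h>

/* Assumed encoding of the fourth state (threshold exceeded); only
 * meaningful once the 0x10 configuration register is set. */
#define STATUS_ECC_HAS_BITFLIPS_THRESHOLD	(3 << 4)

/* Hypothetical helper reading the vendor ECC status register. */
int macronix_read_bitflip_count(struct spinand_device *spinand);

static int macronix_ecc_get_status_sketch(struct spinand_device *spinand,
					   u8 status)
{
	switch (status & STATUS_ECC_MASK) {
	case STATUS_ECC_NO_BITFLIPS:
		return 0;
	case STATUS_ECC_UNCOR_ERROR:
		return -EBADMSG;
	case STATUS_ECC_HAS_BITFLIPS:
	case STATUS_ECC_HAS_BITFLIPS_THRESHOLD:
		/* With the configuration register left unset, both codes
		 * just mean "bitflips were corrected". */
		return macronix_read_bitflip_count(spinand);
	default:
		return -EINVAL;
	}
}
```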
-
Miquel Raynal authored
With GET_STATUS commands, SPI-NAND devices can tell the status of the last read operation, in particular whether there were:
- no bitflips,
- corrected bitflips,
- uncorrectable bitflips.
The next step is then to read an ECC status register and retrieve the amount of bitflips, when relevant, if possible. The logic used here works well for now, but will no longer apply to continuous reads. In order to prepare the introduction of continuous reads, let's factor out the code that is specific to single-page reads. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-8-miquel.raynal@bootlin.com
-
Miquel Raynal authored
Use "macronix_" instead of "mx35lf1ge4ab_" as common prefix for the ->get_status() callback name. This callback is used by many different families, there is no variation in the implementation so far. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-7-miquel.raynal@bootlin.com
-
Miquel Raynal authored
This helper function will soon be used from a vendor driver, so let's expose its declaration in the spinand.h header. No symbol export is needed, as there is currently no reason for any module to need it. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-6-miquel.raynal@bootlin.com
-
Miquel Raynal authored
A regular page read consists of:
- asking for one page of content from the NAND array to be loaded in the chip's SRAM,
- waiting for the operation to be done,
- retrieving the data (I/O phase) from the chip's SRAM.
When reading several sequential pages, the above operation is repeated over and over. There is however a way to optimize these accesses, by enabling continuous reads. The feature requires the NAND chip to have a second internal SRAM area plus a bit of additional internal logic to trigger another internal transfer between the NAND array and the second SRAM area while the I/O phase is ongoing. Once the first I/O phase is done, the host can continue reading more data, continuously, as the chip will automatically switch to the second SRAM content (which has already been loaded) and in turn trigger the next load into the first SRAM area again. From an instruction perspective, the command op-codes are different, but the same cycles are required. The only difference is that after a continuous read (which is stopped by a CS deassert), the host must observe a delay of tRST. However, because there is no guarantee in Linux regarding the actual state of the CS pin after a transfer (in order to speed up the next transfer if targeting the same device), it was necessary to manually end the continuous read with a configuration register write operation.

Continuous reads have two main drawbacks:
* They only work on full pages (the column address is ignored).
* Only the main data area is pulled; out-of-band bytes are not accessible.
Said otherwise, the feature can only be useful with on-die ECC engines.

Performance-wise, measurements have been performed on a Zynq platform using the Macronix SPI-NAND controller with a Macronix chip (based on the flash_speed tool modified for testing sequential reads):
- 1-1-1 mode: performance improved from +3% (2 pages) up to +10% after a dozen pages.
- 1-1-4 mode: performance improved from +15% (2 pages) up to +40% after a dozen pages.

This series is based on a previous work from Macronix engineer Jaime Liao. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Pratyush Yadav <pratyush@kernel.org> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-5-miquel.raynal@bootlin.com
-
Miquel Raynal authored
There is currently only a single path for performing page reads as requested by the MTD layer. Soon there will be two:
- a "regular" page read,
- a continuous page read.
Let's extract the page read logic into a dedicated helper, so the introduction of continuous page reads will be as easy as checking whether continuous reads shall/can be used and calling one helper or the other. There is no behavioral change intended. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-4-miquel.raynal@bootlin.com
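A hedged sketch of the dispatch this refactoring prepares; the predicate and the two read-path helpers below are hypothetical names introduced for illustration, only the overall "return bitflips unless an error occurred" shape follows the MTD convention:

```c
#include <linux/mtd/mtd.h>

/* Hypothetical helpers provided elsewhere in the driver: */
bool spinand_use_cont_read(struct mtd_info *mtd, loff_t from,
			   struct mtd_oob_ops *ops);
int spinand_cont_page_read(struct mtd_info *mtd, loff_t from,
			   struct mtd_oob_ops *ops, unsigned int *max_bitflips);
int spinand_regular_page_read(struct mtd_info *mtd, loff_t from,
			      struct mtd_oob_ops *ops,
			      unsigned int *max_bitflips);

static int spinand_mtd_read_sketch(struct mtd_info *mtd, loff_t from,
				   struct mtd_oob_ops *ops)
{
	unsigned int max_bitflips = 0;
	int ret;

	/* Continuous reads are only worth it for large reads served by
	 * the on-die ECC engine; the check is assumed, not shown. */
	if (spinand_use_cont_read(mtd, from, ops))
		ret = spinand_cont_page_read(mtd, from, ops, &max_bitflips);
	else
		ret = spinand_regular_page_read(mtd, from, ops, &max_bitflips);

	return ret ? ret : max_bitflips;
}
```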
-
Miquel Raynal authored
In order to be able to iterate easily across eraseblocks rather than pages, let's introduce a block iterator inspired by the page iterator. The main usage of this iterator will be for continuous/sequential reads, where it is interesting to use a single request rather than splitting the request into smaller chunks (i.e. pages) that can hardly be optimized. So a "continuous" boolean gets added for this purpose. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-3-miquel.raynal@bootlin.com
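A minimal sketch of the underlying idea, walking a request eraseblock by eraseblock instead of page by page. It assumes a power-of-two eraseblock size and uses an illustrative function name, not the iterator actually added:

```c
#include <linux/minmax.h>
#include <linux/mtd/nand.h>

static void example_walk_blocks(struct nand_device *nand, loff_t offs,
				size_t len)
{
	size_t blksz = nanddev_eraseblock_size(nand);	/* assumed power of 2 */
	loff_t end = offs + len;

	while (offs < end) {
		/* One chunk covers at most the remainder of the current
		 * eraseblock: a continuous read can serve it in a single
		 * request instead of one request per page. */
		size_t chunk = min_t(size_t, blksz - (offs & (blksz - 1)),
				     end - offs);

		offs += chunk;
	}
}
```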
-
Miquel Raynal authored
Soon a helper for iterating over blocks will be needed (for continuous read purposes). In order to clarify the intent of this helper, let's rename it with the "page" wording inside. While at it, improve the doc and fix a typo. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Pratyush Yadav <pratyush@kernel.org> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-2-miquel.raynal@bootlin.com
-
Jinjie Ruan authored
Avoids the need for manual of_node_put() cleanup in early exits from the loop. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-11-ruanjinjie@huawei.com
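A minimal sketch of the conversion pattern applied throughout this series, with a hypothetical example_parse_one() callback: the scoped iterator releases the child reference automatically, so early returns no longer need an explicit of_node_put():

```c
#include <linux/of.h>

int example_parse_one(struct device_node *child);	/* hypothetical */

static int example_parse_children(struct device_node *np)
{
	/* for_each_child_of_node_scoped() declares 'child' itself and
	 * drops its reference when the loop scope is left. */
	for_each_child_of_node_scoped(np, child) {
		int ret = example_parse_one(child);

		if (ret)
			return ret;	/* no of_node_put(child) needed */
	}

	return 0;
}
```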
-
Jinjie Ruan authored
Avoids the need for manual of_node_put() cleanup in early exits from the loop. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-10-ruanjinjie@huawei.com
-
Jinjie Ruan authored
Avoids the need for manual of_node_put() cleanup in early exits from the loop. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-9-ruanjinjie@huawei.com
-
Jinjie Ruan authored
Avoids the need for manual of_node_put() cleanup in early exits from the loop. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-8-ruanjinjie@huawei.com
-
Jinjie Ruan authored
Avoids the need for manual of_node_put() cleanup in early exits from the loop. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-7-ruanjinjie@huawei.com
-
Jinjie Ruan authored
Avoids the need for manual of_node_put() cleanup in early exits from the loop. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-6-ruanjinjie@huawei.com
-
Jinjie Ruan authored
Avoids the need for manual of_node_put() cleanup in early exits from the loop. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-5-ruanjinjie@huawei.com
-
Jinjie Ruan authored
Avoids the need for manual of_node_put() cleanup in early exits from the loop by using for_each_child_of_node_scoped(). Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-4-ruanjinjie@huawei.com
-
Jinjie Ruan authored
Avoids the need for manual of_node_put() cleanup in early exits from the loop by using for_each_child_of_node_scoped(). Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-3-ruanjinjie@huawei.com
-
Jinjie Ruan authored
Avoids the need for manual of_node_put() cleanup in early exits from the loop. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-2-ruanjinjie@huawei.com
-
Jinjie Ruan authored
The devm_clk_get_enabled() helper:
- calls devm_clk_get(),
- calls clk_prepare_enable() and registers what is needed in order to call clk_disable_unprepare() when needed, as a managed resource.
This simplifies the code. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826080408.2522978-1-ruanjinjie@huawei.com
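A minimal sketch of the simplification in a probe path, with illustrative names; devm_clk_get_enabled() is the managed helper the commit refers to:

```c
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
	struct clk *clk;

	/* Gets and enables the clock; devres disables, unprepares and
	 * puts it automatically on probe failure or device detach. */
	clk = devm_clk_get_enabled(&pdev->dev, NULL);
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	return 0;
}
```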
-
Chen Ridong authored
The pci_release_regions() call was missing in the error case; add it. Signed-off-by: Chen Ridong <chenridong@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826024339.476921-1-chenridong@huawei.com
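A minimal sketch of the error-path shape being fixed, with an illustrative later init step; not the driver's exact code:

```c
#include <linux/pci.h>

int example_setup(struct pci_dev *pdev);	/* hypothetical later step */

static int example_pci_probe(struct pci_dev *pdev)
{
	int ret;

	ret = pci_request_regions(pdev, "example");
	if (ret)
		return ret;

	ret = example_setup(pdev);
	if (ret) {
		/* Previously missing: release what was requested above. */
		pci_release_regions(pdev);
		return ret;
	}

	return 0;
}
```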
-
- 23 Aug, 2024 3 commits
-
-
Bartosz Golaszewski authored
There are no longer any users of the platform data for davinci rawnand in board files. We can remove the public pdata headers and move the structures that are still used into the driver compilation unit while removing the rest. Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240814122120.13975-1-brgl@bgdev.pl
-
Daniel Golle authored
Reporting an unclean read from SPI-NAND only when the maximum number of correctable bitflip errors has been hit seems a bit late. UBI LEB scrubbing, which depends on the lower MTD device reporting correctable bitflips, then only kicks in when it's almost too late. Set bitflip_threshold to 75% of the ECC strength, which is also the default for raw NAND. Signed-off-by: Daniel Golle <daniel@makrotopia.org> Reviewed-by: Frieder Schrempf <frieder.schrempf@kontron.de> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/2117e387260b0a96f95b8e1652ff79e0e2d71d53.1723427450.git.daniel@makrotopia.org
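A minimal sketch of the resulting default, assuming it is applied where the SPI-NAND core fills in the mtd_info fields (the exact location is not shown in this log); the 3/4 rounding mirrors what raw NAND does:

```c
#include <linux/math.h>
#include <linux/mtd/mtd.h>

static void example_set_bitflip_threshold(struct mtd_info *mtd)
{
	/* Report an unclean read once 75% of the ECC strength is used,
	 * so UBI LEB scrubbing can kick in before data is at risk. */
	if (!mtd->bitflip_threshold)
		mtd->bitflip_threshold = DIV_ROUND_UP(mtd->ecc_strength * 3, 4);
}
```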
-
Robert Marko authored
Add support for Winbond W25N01KV 1Gbit SPI-NAND. It has 4-bit on-die ECC. Signed-off-by: Robert Marko <robimarko@gmail.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240805175125.6658-1-robimarko@gmail.com
-
- 28 Jul, 2024 9 commits
-
-
Linus Torvalds authored
-
Linus Torvalds authored
Merge tag 'kbuild-fixes-v6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild

Pull Kbuild fixes from Masahiro Yamada:
- Fix RPM package build error caused by an incorrect locale setup
- Mark modules.weakdep as ghost in RPM package
- Fix the odd combination of -S and -c in stack protector scripts, which is an error with the latest Clang

* tag 'kbuild-fixes-v6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  kbuild: Fix '-S -c' in x86 stack protector scripts
  kbuild: rpm-pkg: ghost modules.weakdep file
  kbuild: rpm-pkg: Fix C locale setup
-
Linus Torvalds authored
This simplifies the min_t() and max_t() macros by no longer making them work in the context of a C constant expression. That means that you can no longer use them for static initializers or for array sizes in type definitions, but there were only a couple of such uses, and all of them were converted (famous last words) to use MIN_T/MAX_T instead. Cc: David Laight <David.Laight@aculab.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Linus Torvalds authored
Commit 3a7e02c0 ("minmax: avoid overly complicated constant expressions in VM code") added the simpler MIN_T/MAX_T macros in order to avoid some excessive expansion from the rather complicated regular min/max macros. The complexity of those macros stems from two issues:
 (a) trying to use them in situations that require a C constant expression (in static initializers and for array sizes)
 (b) the type sanity checking
and MIN_T/MAX_T avoids both of these issues. Now, in the whole (long) discussion about all this, it was pointed out that the whole type sanity checking is entirely unnecessary for min_t/max_t which get a fixed type that the comparison is done in. But that still leaves min_t/max_t unnecessarily complicated due to worries about the C constant expression case. However, it turns out that there really aren't very many cases that use min_t/max_t for this, and we can just force-convert those. This does exactly that. Which in turn will then allow for much simpler implementations of min_t()/max_t(). All the usual "macros in all upper case will evaluate the arguments multiple times" rules apply. We should do all the same things for the regular min/max() vs MIN/MAX() cases, but that has the added complexity of various drivers defining their own local versions of MIN/MAX, so that needs another level of fixes first. Link: https://lore.kernel.org/all/b47fad1d0cf8449886ad148f8c013dae@AcuMS.aculab.com/ Cc: David Laight <David.Laight@aculab.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
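A hedged illustration of the distinction described above (the identifiers are made up): min_t() is fine in runtime expressions, but where the C standard demands a constant expression, such as an array size, only MIN_T() can be used:

```c
#include <linux/minmax.h>
#include <linux/types.h>

/* Constant-expression context: the array size must be computable at
 * compile time, which is what MIN_T()/MAX_T() allow. */
#define EXAMPLE_BUF_LEN	MIN_T(unsigned long, 64, 128)

static u8 example_buf[EXAMPLE_BUF_LEN];

/* Runtime context: the regular min_t() remains the right tool. */
static unsigned long example_clamp(unsigned long len)
{
	return min_t(unsigned long, len, sizeof(example_buf));
}
```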
-
Linus Torvalds authored
Merge tag 'ubifs-for-linus-6.11-rc1-take2' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubifs

Pull UBI and UBIFS updates from Richard Weinberger:
- Many fixes for power-cut issues by Zhihao Cheng
- Another ubiblock error path fix
- ubiblock section mismatch fix
- Misc fixes all over the place

* tag 'ubifs-for-linus-6.11-rc1-take2' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubifs:
  ubi: Fix ubi_init() ubiblock_exit() section mismatch
  ubifs: add check for crypto_shash_tfm_digest
  ubifs: Fix inconsistent inode size when powercut happens during appendant writing
  ubi: block: fix null-pointer-dereference in ubiblock_create()
  ubifs: fix kernel-doc warnings
  ubifs: correct UBIFS_DFS_DIR_LEN macro definition and improve code clarity
  mtd: ubi: Restore missing cleanup on ubi_init() failure path
  ubifs: dbg_orphan_check: Fix missed key type checking
  ubifs: Fix unattached inode when powercut happens in creating
  ubifs: Fix space leak when powercut happens in linking tmpfile
  ubifs: Move ui->data initialization after initializing security
  ubifs: Fix adding orphan entry twice for the same inode
  ubifs: Remove insert_dead_orphan from replaying orphan process
  Revert "ubifs: ubifs_symlink: Fix memleak of inode->i_link in error path"
  ubifs: Don't add xattr inode into orphan area
  ubifs: Fix unattached xattr inode if powercut happens after deleting
  mtd: ubi: avoid expensive do_div() on 32-bit machines
  mtd: ubi: make ubi_class constant
  ubi: eba: properly rollback inside self_check_eba
-
Nathan Chancellor authored
After a recent change in clang to stop consuming all instances of '-S' and '-c' [1], the stack protector scripts break due to the kernel's use of -Werror=unused-command-line-argument to catch cases where flags are not being properly consumed by the compiler driver:

  $ echo | clang -o - -x c - -S -c -Werror=unused-command-line-argument
  clang: error: argument unused during compilation: '-c' [-Werror,-Wunused-command-line-argument]

This results in CONFIG_STACKPROTECTOR getting disabled because CONFIG_CC_HAS_SANE_STACKPROTECTOR is no longer set. '-c' and '-S' both instruct the compiler to stop at different stages of the pipeline ('-S' after compiling, '-c' after assembling), so having them present together in the same command makes little sense. In this case, the test wants to stop before assembling because it is looking at the textual assembly output of the compiler for either '%fs' or '%gs', so remove '-c' from the list of arguments to resolve the error. All versions of GCC continue to work after this change, along with versions of clang that do or do not contain the change mentioned above. Cc: stable@vger.kernel.org Fixes: 4f7fd4d7 ("[PATCH] Add the -fstack-protector option to the CFLAGS") Fixes: 60a5317f ("x86: implement x86_32 stack protector") Link: https://github.com/llvm/llvm-project/commit/6461e537815f7fa68cef06842505353cf5600e9c [1] Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
-
Richard Weinberger authored
Since ubiblock_exit() is now called from an init function, the __exit section no longer makes sense. Cc: Ben Hutchings <bwh@kernel.org> Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202407131403.wZJpd8n2-lkp@intel.com/ Signed-off-by: Richard Weinberger <richard@nod.at> Reviewed-by: Zhihao Cheng <chengzhihao1@huawei.com>
-
Linus Torvalds authored
Merge tag 'v6.11-merge' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux

Pull turbostat updates from Len Brown:
- Enable turbostat extensions to add both perf and PMT (Intel Platform Monitoring Technology) counters via the cmdline
- Demonstrate PMT access with built-in support for Meteor Lake's Die C6 counter

* tag 'v6.11-merge' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux:
  tools/power turbostat: version 2024.07.26
  tools/power turbostat: Include umask=%x in perf counter's config
  tools/power turbostat: Document PMT in turbostat.8
  tools/power turbostat: Add MTL's PMT DC6 builtin counter
  tools/power turbostat: Add early support for PMT counters
  tools/power turbostat: Add selftests for added perf counters
  tools/power turbostat: Add selftests for SMI, APERF and MPERF counters
  tools/power turbostat: Move verbose counter messages to level 2
  tools/power turbostat: Move debug prints from stdout to stderr
  tools/power turbostat: Fix typo in turbostat.8
  tools/power turbostat: Add perf added counter example to turbostat.8
  tools/power turbostat: Fix formatting in turbostat.8
  tools/power turbostat: Extend --add option with perf counters
  tools/power turbostat: Group SMI counter with APERF and MPERF
  tools/power turbostat: Add ZERO_ARRAY for zero initializing builtin array
  tools/power turbostat: Replace enum rapl_source and cstate_source with counter_source
  tools/power turbostat: Remove anonymous union from rapl_counter_info_t
  tools/power/turbostat: Switch to new Intel CPU model defines
-
Linus Torvalds authored
Merge tag 'cxl-for-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl

Pull CXL updates from Dave Jiang:

 "Core:
  - A CXL maturity map has been added to the documentation to detail the current state of CXL enabling. It provides the status of the current state of various CXL features to inform current and future contributors of where things are and which areas need contribution.
  - A notifier handler has been added in order for a newly created CXL memory region to trigger the abstract distance metrics calculation. This should bring parity for CXL memory to the same level vs hotplugged DRAM for NUMA abstract distance calculation. The abstract distance reflects relative performance used for memory tiering handling.
  - An addition for XOR math has been added to address the CXL DPA to SPA translation. CXL address translation did not support address interleave math with XOR prior to this change.

 Fixes:
  - Fix to address race condition in the CXL memory hotplug notifier
  - Add missing MODULE_DESCRIPTION() for CXL modules
  - Fix incorrect vendor debug UUID define

 Misc:
  - A warning has been added to inform users of an unsupported configuration when mixing CXL VH and RCH/RCD hierarchies
  - The ENXIO error code has been replaced with EBUSY for inject poison limit reached via debugfs and cxl-test support
  - Moving the PCI config read in cxl_dvsec_rr_decode() to avoid unnecessary PCI config reads
  - A refactor to a common struct for DRAM and general media CXL events"

* tag 'cxl-for-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl:
  cxl/core/pci: Move reading of control register to immediately before usage
  cxl: Remove defunct code calculating host bridge target positions
  cxl/region: Verify target positions using the ordered target list
  cxl: Restore XOR'd position bits during address translation
  cxl/core: Fold cxl_trace_hpa() into cxl_dpa_to_hpa()
  cxl/test: Replace ENXIO with EBUSY for inject poison limit reached
  cxl/memdev: Replace ENXIO with EBUSY for inject poison limit reached
  cxl/acpi: Warn on mixed CXL VH and RCH/RCD Hierarchy
  cxl/core: Fix incorrect vendor debug UUID define
  Documentation: CXL Maturity Map
  cxl/region: Simplify cxl_region_nid()
  cxl/region: Support to calculate memory tier abstract distance
  cxl/region: Fix a race condition in memory hotplug notifier
  cxl: add missing MODULE_DESCRIPTION() macros
  cxl/events: Use a common struct for DRAM and General Media events
-