Commit dd671f16 authored by Julia Lawall, committed by Will Deacon

arm64: fix typos in comments

Various spelling mistakes in comments.
Detected with the help of Coccinelle.
Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
Link: https://lore.kernel.org/r/20220318103729.157574-10-Julia.Lawall@inria.fr
[will: Squashed in 20220318103729.157574-28-Julia.Lawall@inria.fr]
Signed-off-by: Will Deacon <will@kernel.org>
parent 5524cbb1
@@ -701,7 +701,7 @@ NOKPROBE_SYMBOL(breakpoint_handler);
  * addresses. There is no straight-forward way, short of disassembling the
  * offending instruction, to map that address back to the watchpoint. This
  * function computes the distance of the memory access from the watchpoint as a
- * heuristic for the likelyhood that a given access triggered the watchpoint.
+ * heuristic for the likelihood that a given access triggered the watchpoint.
  *
  * See Section D2.10.5 "Determining the memory location that caused a Watchpoint
  * exception" of ARMv8 Architecture Reference Manual for details.
...
@@ -220,7 +220,7 @@ static unsigned int count_plts(Elf64_Sym *syms, Elf64_Rela *rela, int num,
  * increasing the section's alignment so that the
  * resulting address of this instruction is guaranteed
  * to equal the offset in that particular bit (as well
- * as all less signficant bits). This ensures that the
+ * as all less significant bits). This ensures that the
  * address modulo 4 KB != 0xfff8 or 0xfffc (which would
  * have all ones in bits [11:3])
  */
...
@@ -140,7 +140,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
 /*
  * Restore pstate flags. OS lock and mdscr have been already
  * restored, so from this point onwards, debugging is fully
- * renabled if it was enabled when core started shutdown.
+ * reenabled if it was enabled when core started shutdown.
  */
 local_daif_restore(flags);
...
@@ -73,7 +73,7 @@ EXPORT_SYMBOL(memstart_addr);
  * In this scheme a comparatively quicker boot is observed.
  *
  * If ZONE_DMA configs are defined, crash kernel memory reservation
- * is delayed until DMA zone memory range size initilazation performed in
+ * is delayed until DMA zone memory range size initialization performed in
  * zone_sizes_init(). The defer is necessary to steer clear of DMA zone
  * memory range to avoid overlap allocation. So crash kernel memory boundaries
  * are not known when mapping all bank memory ranges, which otherwise means
@@ -81,7 +81,7 @@ EXPORT_SYMBOL(memstart_addr);
  * so page-granularity mappings are created for the entire memory range.
  * Hence a slightly slower boot is observed.
  *
- * Note: Page-granularity mapppings are necessary for crash kernel memory
+ * Note: Page-granularity mappings are necessary for crash kernel memory
  * range for shrinking its size via /sys/kernel/kexec_crash_size interface.
  */
 #if IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32)
...