- 26 Jul, 2017 23 commits
-
-
Dennis Zhou (Facebook) authored
The other patches contain a lot of information, so this information is added in a separate patch. It adds my copyright and a brief explanation of how the bitmap allocator works. A minor typo in the prior explanation is also fixed.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
Dennis Zhou (Facebook) authored
The simple, and expensive, way to find a free area is to iterate over the entire bitmap until an area is found that fits the allocation size and alignment. This patch makes use of an iterator that finds an area to check by using the block level contig hints. It will only return an area that can fit the size and alignment request. If the request can fit inside a block, it returns the first_free bit to start checking from, to see if the request can be fulfilled prior to the contig hint. The check in pcpu_alloc_area has a bound of one block size added in case the hint is wrong.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
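A rough user-space model of the hinted search (the struct and function names here are invented for illustration, and alignment handling is omitted):

    /* Illustrative model only -- not the kernel's data structures. */
    struct block_fit_model {
        int contig_hint;   /* largest free run in the block, in bits */
        int first_free;    /* first free bit in the block */
    };

    /*
     * Return the index of the first block whose contig hint can fit
     * alloc_bits, or -1 if none can.
     */
    static int find_block_fit_model(const struct block_fit_model *blocks,
                                    int nr_blocks, int alloc_bits)
    {
        for (int i = 0; i < nr_blocks; i++) {
            if (blocks[i].contig_hint < alloc_bits)
                continue;  /* the hint says this block cannot fit it */
            /*
             * A fit is possible: the caller starts checking at
             * blocks[i].first_free, since the request may be
             * fulfilled before the contig hint itself is reached.
             */
            return i;
        }
        return -1;
    }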
-
Dennis Zhou (Facebook) authored
The largest free region will either be a block level contig hint or an aggregate over the left_free and right_free areas of blocks. This is a much smaller set of free areas that need to be checked than a full traverse.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
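The recomputation this enables might look like the sketch below, with invented names and an example block size; only per-block hints and boundary-straddling runs are examined, never the full bitmap:

    #define BLOCK_BITS_M 1024  /* example block size, in bits */

    struct block_free_model {
        int contig_hint;  /* largest free run inside the block */
        int left_free;    /* free bits touching the block start */
        int right_free;   /* free bits touching the block end */
    };

    static int max_m(int a, int b) { return a > b ? a : b; }

    static int chunk_contig_hint_model(const struct block_free_model *b,
                                       int nr_blocks)
    {
        int best = 0, run = 0;  /* run: free bits crossing into block i */

        for (int i = 0; i < nr_blocks; i++) {
            best = max_m(best, b[i].contig_hint);
            best = max_m(best, run + b[i].left_free);
            if (b[i].left_free == BLOCK_BITS_M)
                run += BLOCK_BITS_M;   /* fully free block: run continues */
            else
                run = b[i].right_free; /* restart from the block's tail */
        }
        return best;
    }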
-
Dennis Zhou (Facebook) authored
The bitmap allocator must keep metadata consistent. The easiest way is to scan after every allocation for each affected block and the entire chunk. This is rather expensive.

The free path can take advantage of current contig hints to prevent scanning within the start and end block. If a scan is needed, it can be done by scanning backwards from the start and forwards from the end to identify the entire free area the freed region can be combined with. The blocks can then be updated by some basic checks rather than complete block scans.

A chunk scan happens when the freed area makes a page free, a block free, or spans across blocks. This is necessary as the contig hint at this point could span across blocks. The check uses the minimum of the page size and the block size to allow for variable sized blocks.

There is a tradeoff here with not updating after every free. It is possible a contig hint in one block can be merged with the contig hint in the next block. This means the contig hint can be off by up to a page. However, if the chunk's contig hint is contained in one block, the contig hint will be accurate.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
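A minimal sketch of the outward scan, using a byte-per-bit allocation map purely for readability:

    /*
     * Grow a just-freed bit range [*start, *end) outwards to the whole
     * free area it joins.
     */
    static void widen_free_region_model(const unsigned char *alloc_map,
                                        int nr_bits, int *start, int *end)
    {
        while (*start > 0 && !alloc_map[*start - 1])
            (*start)--;  /* scan backwards from the freed start */
        while (*end < nr_bits && !alloc_map[*end])
            (*end)++;    /* scan forwards from the freed end */
    }

The affected blocks can then be updated from the widened range with basic checks instead of complete rescans.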
-
Dennis Zhou (Facebook) authored
Metadata is kept per block to keep track of where the contig hints are. Scanning can be avoided when the contig hints are not broken. In that case, left and right contigs have to be managed manually. This patch changes the allocation path hint updating to only scan when contig hints are broken.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
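One plausible shape for this rule, with invented field names (the real block metadata differs): the edge runs are maintained manually, and a rescan is signalled only when the allocation lands on the hint itself:

    #define BLOCK_BITS_M 1024  /* example block size, in bits */

    struct block_md_model {
        int contig_hint;        /* size of the largest free run */
        int contig_hint_start;  /* offset where that run begins */
        int left_free;          /* free bits at the block start */
        int right_free;         /* free bits at the block end */
    };

    /* Returns nonzero when a full block scan is still required. */
    static int block_update_on_alloc_model(struct block_md_model *b,
                                           int bit_start, int bit_end)
    {
        /* manual bookkeeping of the edge runs */
        if (bit_start < b->left_free)
            b->left_free = bit_start;
        if (BLOCK_BITS_M - bit_end < b->right_free)
            b->right_free = BLOCK_BITS_M - bit_end;

        /* only an allocation overlapping the contig hint breaks it */
        return bit_end > b->contig_hint_start &&
               bit_start < b->contig_hint_start + b->contig_hint;
    }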
-
Dennis Zhou (Facebook) authored
This patch makes the contig hint starting offset optimization from the previous patch as honest as it can be. For both chunk and block starting offsets, it makes sure the starting offset kept is the one with the best alignment. The block skip optimization is added in a later patch, when the pcpu_find_block_fit iterator is swapped in.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
Dennis Zhou (Facebook) authored
This patch adds chunk->contig_bits_start to keep track of the contig hint's offset, and the check to skip the chunk if it does not fit. If the chunk's contig hint starting offset cannot satisfy an allocation, the allocator assumes there is enough memory pressure in this chunk to either use a different chunk or create a new one. This accepts a less tight packing for a smoother latency curve.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
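A sketch of the fit check, assuming bit-granular sizes (parameter names follow the description above, but the function itself is illustrative):

    /*
     * Can an allocation of alloc_bits with the given alignment fit in
     * the chunk's contig hint, given where that hint starts?
     */
    static int chunk_can_fit_model(int contig_bits, int contig_bits_start,
                                   int alloc_bits, int align)
    {
        /* round the hint's start up to the requested alignment */
        int aligned = (contig_bits_start + align - 1) / align * align;
        int skipped = aligned - contig_bits_start;

        return contig_bits - skipped >= alloc_bits;
    }

If this check fails, the allocator moves on rather than scanning the chunk.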
-
Dennis Zhou (Facebook) authored
This patch adds first_bit to keep track of the first free bit in the bitmap. This hint helps prevent scanning of fully allocated blocks.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
Dennis Zhou (Facebook) authored
This patch introduces the bitmap metadata blocks and adds the skeleton of the code that will be used to maintain these blocks. Each chunk's bitmap is made up of full metadata blocks. These blocks maintain basic metadata to help prevent scanning unnecessarily to update hints. Full scanning methods are used for the skeleton and will be replaced in the coming patches. A number of helper functions are added as well to do conversion of pages to blocks and manage offsets. Comments will be updated as the final version of each function is added.

There exists a relationship between PAGE_SIZE, PCPU_BITMAP_BLOCK_SIZE, the region size, and unit_size. Every chunk's region (including offsets) is page aligned at the beginning to preserve alignment. The end is aligned to LCM(PAGE_SIZE, PCPU_BITMAP_BLOCK_SIZE) to ensure that the end fits the populated page map, which is kept per page, and that every metadata block is fully accounted for. The unit_size is already page aligned, but must also be aligned with PCPU_BITMAP_BLOCK_SIZE to ensure full metadata blocks.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
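The end alignment rule can be illustrated with a small sketch; the constants here are examples, not the kernel's configuration:

    #define PAGE_SIZE_M              4096UL  /* example values only */
    #define PCPU_BITMAP_BLOCK_SIZE_M 1024UL

    static unsigned long gcd_m(unsigned long a, unsigned long b)
    {
        while (b) {
            unsigned long t = a % b;
            a = b;
            b = t;
        }
        return a;
    }

    static unsigned long lcm_m(unsigned long a, unsigned long b)
    {
        return a / gcd_m(a, b) * b;
    }

    /*
     * Rounding the region end up to LCM(PAGE_SIZE, BLOCK_SIZE) lets the
     * per-page populated map and the per-block metadata both cover the
     * region exactly.
     */
    static unsigned long region_end_model(unsigned long end)
    {
        unsigned long a = lcm_m(PAGE_SIZE_M, PCPU_BITMAP_BLOCK_SIZE_M);

        return (end + a - 1) / a * a;
    }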
-
Dennis Zhou (Facebook) authored
The percpu memory allocator is experiencing scalability issues when allocating and freeing large numbers of counters, as in BPF. Additionally, there is a corner case where iteration is triggered over all chunks if the contig_hint is the right size, but the wrong alignment.

This patch replaces the area map allocator with a basic bitmap allocator implementation. Each subsequent patch will introduce new features and replace full scanning functions with faster non-scanning options when possible.

Implementation:
This patchset removes the area map allocator in favor of a bitmap allocator backed by metadata blocks. The primary goal is to provide consistency in performance and memory footprint with a focus on small allocations (< 64 bytes). The bitmap removes the heavy memmove from the freeing critical path and provides a consistent memory footprint. The metadata blocks provide a bound on the amount of scanning required by maintaining a set of hints.

In an effort to make freeing fast, the metadata is updated on the free path if the new free area makes a page free, a block free, or spans across blocks. This causes the chunk's contig hint to potentially be smaller than what it could allocate by up to the smaller of a page or a block. If the chunk's contig hint is contained within a block, a check occurs and the hint is kept accurate. Metadata is always kept accurate on allocation, so there will not be a situation where a chunk has a later contig hint than is available.

Evaluation:
I have primarily done testing against a simple workload of allocation of 1 million objects (2^20) of varying size. Deallocation was done in order, alternating, and in reverse. These numbers were collected after rebasing on top of a80099a1. I present the worst-case numbers here:

  Area Map Allocator:

        Object Size | Alloc Time (ms) | Free Time (ms)
        ----------------------------------------------
        4B          | 310             | 4770
        16B         | 557             | 1325
        64B         | 436             | 273
        256B        | 776             | 131
        1024B       | 3280            | 122

  Bitmap Allocator:

        Object Size | Alloc Time (ms) | Free Time (ms)
        ----------------------------------------------
        4B          | 490             | 70
        16B         | 515             | 75
        64B         | 610             | 80
        256B        | 950             | 100
        1024B       | 3520            | 200

This data demonstrates the inability of the area map allocator to handle less than ideal situations. In the best case of reverse deallocation, the area map allocator performed within range of the bitmap allocator. In the worst case, freeing took nearly 5 seconds for 1 million 4-byte objects. The bitmap allocator dramatically improves the consistency of the free path. The small allocations performed nearly identically regardless of the freeing pattern.

While it does add to the allocation latency, the allocation scenario here is optimal for the area map allocator. The area map allocator runs into trouble when it is allocating in chunks where the latter half is full. It is difficult to replicate this, so I present a variant where the pages are second half filled. Freeing was done sequentially. Below are the numbers for this scenario:

  Area Map Allocator:

        Object Size | Alloc Time (ms) | Free Time (ms)
        ----------------------------------------------
        4B          | 4118            | 4892
        16B         | 1651            | 1163
        64B         | 598             | 285
        256B        | 771             | 158
        1024B       | 3034            | 160

  Bitmap Allocator:

        Object Size | Alloc Time (ms) | Free Time (ms)
        ----------------------------------------------
        4B          | 481             | 67
        16B         | 506             | 69
        64B         | 636             | 75
        256B        | 892             | 90
        1024B       | 3262            | 147

This data shows a parabolic curve of performance for the area map allocator. This is due to the memmove operation being the dominant cost at the lower object sizes, as more objects are packed in a chunk, while at higher object sizes the traversal of the chunk slots is the dominating cost. The bitmap allocator suffers this problem as well.

The above data shows the inability of the area map allocator to scale on the allocation path, and that the bitmap allocator demonstrates consistent performance in general. The second problem of additional scanning can result in the area map allocator completing in 52 minutes when trying to allocate 1 million 4-byte objects with 8-byte alignment. The same workload takes approximately 16 seconds to complete for the bitmap allocator.

V2:
- Fixed a bug in pcpu_alloc_first_chunk where end_offset was setting the bitmap using bytes instead of bits.
- Added a comment to pcpu_cnt_pop_pages to explain the use of bitmap_weight.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
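To make the contrast concrete, here is a toy first-fit over an allocation bitmap — the core mechanism of the replacement, with no hints, blocks, or real bit packing (one byte per bit purely for readability):

    #define NR_BITS_M 1024
    static unsigned char alloc_map_m[NR_BITS_M];

    /* First fit: find and claim alloc_bits free bits at some aligned
     * offset; returns the offset in allocation units, or -1. */
    static int bitmap_alloc_model(int alloc_bits, int align)
    {
        for (int off = 0; off + alloc_bits <= NR_BITS_M; off += align) {
            int i;

            for (i = 0; i < alloc_bits; i++)
                if (alloc_map_m[off + i])
                    break;
            if (i == alloc_bits) {
                for (i = 0; i < alloc_bits; i++)
                    alloc_map_m[off + i] = 1;
                return off;
            }
        }
        return -1;  /* the real allocator would try another chunk */
    }

    /* Freeing is just clearing bits -- no memmove of an area map. */
    static void bitmap_free_model(int off, int alloc_bits)
    {
        for (int i = 0; i < alloc_bits; i++)
            alloc_map_m[off + i] = 0;
    }

Freeing costs a handful of bit clears proportional to the allocation size, which is why the free path above stays flat where the area map's memmove did not.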
-
Dennis Zhou (Facebook) authored
The area map allocator only used a bitmap for the backing page state. The new bitmap allocator will use bitmaps to manage the allocation region in addition to this. This patch generalizes the bitmap iterators so they can be reused with the bitmap allocator.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
Dennis Zhou (Facebook) authored
This patch increases the minimum allocation size of percpu memory to 4 bytes. This change will help minimize the metadata overhead associated with the bitmap allocator. The assumption is that most allocations will be of objects or structs greater than 2 bytes, with integers or longs being used rather than shorts.

The first chunk regions are now aligned with the minimum allocation size. The reserved region is expected to be set as a multiple of the minimum allocation size. The static region is aligned up and the delta is removed from the dynamic size. This works because the dynamic size is increased to be page aligned. If the static size is not minimum allocation size aligned, then there must be a gap that is added to the dynamic size. The dynamic size will never be smaller than the set value.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
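A worked sketch of the static/dynamic adjustment (names suffixed _m are illustrative; PCPU_MIN_ALLOC_SIZE is the kernel's name for the new minimum):

    #define PCPU_MIN_ALLOC_SIZE_M 4  /* the new minimum, in bytes */

    static unsigned long align_up_m(unsigned long v, unsigned long a)
    {
        return (v + a - 1) / a * a;
    }

    /*
     * E.g. static_size = 4093: it is served as 4096, and the 3-byte
     * delta comes out of the dynamic size, which page alignment had
     * already padded, so dyn_size never drops below its set value.
     */
    static void first_chunk_sizes_model(unsigned long static_size,
                                        unsigned long *dyn_size)
    {
        unsigned long aligned = align_up_m(static_size,
                                           PCPU_MIN_ALLOC_SIZE_M);

        *dyn_size -= aligned - static_size;
    }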
-
Dennis Zhou (Facebook) authored
pcpu_nr_empty_pop_pages is used to ensure there are a handful of free pages around to serve atomic allocations. A new field, nr_empty_pop_pages, is added to the pcpu_chunk struct to keep track of the number of empty pages. This field is needed as the number of empty populated pages is globally tracked, and deltas are used to update it in the bitmap allocator. Pages that contain a hidden area are not considered to be empty. This new field is exposed in percpu_stats.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
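Delta-based accounting reduces to a pattern like this sketch (names are illustrative):

    struct chunk_model {
        int nr_empty_pop_pages;  /* empty populated pages in the chunk */
    };

    static int total_empty_pop_pages_m;  /* global count */

    /* The global count is adjusted by the change in the chunk's own
     * count, so it never has to be recomputed from scratch. */
    static void chunk_set_empty_pages_model(struct chunk_model *c, int nr)
    {
        total_empty_pop_pages_m += nr - c->nr_empty_pop_pages;
        c->nr_empty_pop_pages = nr;
    }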
-
Dennis Zhou (Facebook) authored
The populated bitmap represents the state of the pages the chunk serves. Previously, the bitmap was marked completely used as the first chunk was allocated and immutable. This is misleading because the first chunk may not be completely filled. Additionally, with the base_addr being moved up in the previous patch, the population check no longer corresponds to what was being checked.

This patch modifies the population map to cover only the number of pages the region serves, and makes the check correspond to the served region again. This removes any mismatch between the nominal and actual size of the populated bitmap. The work function page iterators now use nr_pages for the check rather than pcpu_unit_pages, because nr_populated is now chunk specific. Without this, the work function would try to populate the remainder of these chunks despite the chunk serving no more than nr_pages, when nr_pages is set less than pcpu_unit_pages.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
Dennis Zhou (Facebook) authored
The percpu address checks for the reserved and dynamic region chunks are now specific to each region. The address checking logic can be combined, taking advantage of the global references to the dynamic and static region chunks.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
Dennis Zhou (Facebook) authored
Originally, the first chunk was served by one or two chunks, each given a region they are responsible for. Despite this, the arithmetic was based off the true base_addr of the chunk, making it overly inclusive.

This patch moves the base_addr of chunks that are responsible for the first chunk. The base_addr must remain page aligned to keep the address alignment correct, so it is the beginning of the served region, aligned down to page size. start_offset holds where the served region begins from this new base_addr.

The corresponding percpu address checks are modified to be more specific as a result. The first chunk considers only the dynamic region, and both the first chunk and reserved chunk checks ignore the static region. The static region addresses should never be passed into the allocator. There is no impact here besides distinguishing the first chunk and making the checks specific. The percpu pointer to physical address translation is left intact, as addresses are not given out in the non-allocated portion of percpu memory.

nr_pages is added to pcpu_chunk to keep track of the size of the entire region served, containing both start_offset and end_offset. This variable will be used to manage the bitmap allocator.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
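The narrowed membership test amounts to something like this sketch (illustrative names; end_offset handling arrives in a later patch):

    #define PAGE_SIZE_M 4096UL

    /* Membership is tested against the served region, which now begins
     * start_offset bytes past the page aligned base_addr. */
    static int addr_in_chunk_model(unsigned long addr,
                                   unsigned long base_addr,
                                   unsigned long start_offset,
                                   unsigned long nr_pages)
    {
        unsigned long first = base_addr + start_offset;
        unsigned long last = base_addr + nr_pages * PAGE_SIZE_M;

        return addr >= first && addr < last;
    }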
-
Dennis Zhou (Facebook) authored
There is no need to have the static chunk and dynamic chunk be named separately, as the allocations are sequential. This preemptively solves the misnomer problem with the base_addrs being moved up in the following patch. It also removes a ternary operation deciding the first chunk.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
Dennis Zhou (Facebook) authored
The area map allocator manages the first chunk area by hiding all but the region it is responsible for serving in the area map. To align this with the populated page bitmap, end_offset is introduced to keep track of the delta needed to make the end page aligned. The area map is appended with the page aligned end when necessary, to be in line with how the bitmap allocator requires the ending to be aligned with the LCM of PAGE_SIZE and the size of each bitmap block. percpu_stats is updated to ignore this region when present.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
Dennis Zhou (Facebook) authored
Create a common allocator for first chunk initialization, pcpu_alloc_first_chunk. Comments for this function will be added in a later patch, once the bitmap allocator is added.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
Dennis Zhou (Facebook) authored
There is logic for setting variables in the static chunk init code that could be consolidated with the dynamic chunk init code. This patch combines that logic to set up for combining the allocation paths. reserved_size is used as the conditional, as a dynamic region will always exist.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
Dennis Zhou (Facebook) authored
Previously, this variable was used to manage statistics when the first chunk had a reserved region. The previous patch introduced start_offset to keep track of the offset by value rather than by a boolean. Therefore, has_reserved can be removed.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
Dennis Zhou (Facebook) authored
The reserved chunk arithmetic uses a global variable, pcpu_reserved_chunk_limit, that is set in the first chunk init code to hide a portion of the area map. The bitmap allocator to come will eventually move the base_addr up and require both the reserved chunk and static chunk to maintain this offset. pcpu_reserved_chunk_limit is therefore removed and start_offset is added.

The first chunk that is circulated, pcpu_first_chunk, serves the dynamic region, the region following the reserved region. The reserved chunk address check will temporarily use the first chunk to identify its address range. A following patch will increase the base_addr and remove this. If there is no reserved chunk, this will check the static region and return false, because those values should never be passed into the allocator.

Lastly, when linking in the first chunk, make sure to count the right free region for the number of empty populated pages.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
Dennis Zhou (Facebook) authored
The first chunk is handled as a special case, as it is composed of the static, reserved, and dynamic regions. The code handles each case individually. The next several patches will merge these code paths and lay the foundation for the bitmap allocator.

This patch modifies logic to enforce that a dynamic region exists and changes the area map to account for that. This brings the logic closer to the dynamic chunk's init logic.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 17 Jul, 2017 4 commits
-
-
Dennis Zhou (Facebook) authored
The header comment for percpu memory is a little hard to parse and is not super clear about how the first chunk is managed. This adds a little more clarity to the situation. There is also quite a bit of tricky logic in pcpu_build_alloc_info, so this restructures a comment there to add a little more information. Unfortunately, you will still have to piece together a handful of other comments too, but this should help direct you to the meaningful ones.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
Dennis Zhou (Facebook) authored
Percpu memory holds a minimum threshold of pages that are populated in order to serve atomic percpu memory requests. This change makes it easier to verify that there are a minimum number of populated pages lying around.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
Dennis Zhou (Facebook) authored
This makes the debugfs output for percpu_stats a little easier to read by changing the spacing of the output to be consistent.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
Dennis Zhou (Facebook) authored
Changes the use of a void buffer to an int buffer for clarity.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 15 Jul, 2017 13 commits
-
-
git://git.lwn.net/linux
Linus Torvalds authored
Pull documentation format standardization from Jonathan Corbet:
 "This series converts a number of top-level documents to the RST format without incorporating them into the Sphinx tree. The hope is to bring some uniformity to kernel documentation and, perhaps more importantly, have our existing docs serve as an example of the desired formatting for those that will be added later.

  Mauro has gone through and fixed up a lot of top-level documentation files to make them conform to the RST format, but without moving or renaming them in any way. This will help when we incorporate the ones we want to keep into the Sphinx doctree, but the real purpose is to bring a bit of uniformity to our documentation and let the top-level docs serve as examples for those writing new ones"

* tag 'standardize-docs' of git://git.lwn.net/linux: (84 commits)
  docs: kprobes.txt: Fix whitespacing
  tee.txt: standardize document format
  cgroup-v2.txt: standardize document format
  dell_rbu.txt: standardize document format
  zorro.txt: standardize document format
  xz.txt: standardize document format
  xillybus.txt: standardize document format
  vfio.txt: standardize document format
  vfio-mediated-device.txt: standardize document format
  unaligned-memory-access.txt: standardize document format
  this_cpu_ops.txt: standardize document format
  svga.txt: standardize document format
  static-keys.txt: standardize document format
  smsc_ece1099.txt: standardize document format
  SM501.txt: standardize document format
  siphash.txt: standardize document format
  sgi-ioc4.txt: standardize document format
  SAK.txt: standardize document format
  rpmsg.txt: standardize document format
  robust-futexes.txt: standardize document format
  ...
-
git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random
Linus Torvalds authored
Pull random updates from Ted Ts'o:
 "Add wait_for_random_bytes() and get_random_*_wait() functions so that callers can more safely get random bytes if they can block until the CRNG is initialized.

  Also print a warning if get_random_*() is called before the CRNG is initialized. By default, only one single-line warning will be printed per boot. If CONFIG_WARN_ALL_UNSEEDED_RANDOM is defined, then a warning will be printed for each function which tries to get random bytes before the CRNG is initialized. This can get spammy for certain architecture types, so it is not enabled by default"

* tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
  random: reorder READ_ONCE() in get_random_uXX
  random: suppress spammy warnings about unseeded randomness
  random: warn when kernel uses unseeded randomness
  net/route: use get_random_int for random counter
  net/neighbor: use get_random_u32 for 32-bit hash random
  rhashtable: use get_random_u32 for hash_rnd
  ceph: ensure RNG is seeded before using
  iscsi: ensure RNG is seeded before use
  cifs: use get_random_u32 for 32-bit lock random
  random: add get_random_{bytes,u32,u64,int,long,once}_wait family
  random: add wait_for_random_bytes() API
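Typical use of the new API, as sketched from the description above (a kernel-style snippet, not a standalone program; get_random_bytes_wait() from this series wraps the same pattern):

    #include <linux/random.h>

    /* Sleep (interruptibly) until the CRNG is seeded, then fill the
     * buffer. */
    static int fill_key_seeded(void *key, int len)
    {
        int ret = wait_for_random_bytes();

        if (ret)
            return ret;  /* e.g. interrupted by a signal */

        get_random_bytes(key, len);
        return 0;
    }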
-
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Linus Torvalds authored
Pull ->s_options removal from Al Viro:
 "Preparations for fsmount/fsopen stuff (coming next cycle). Everything gets moved to explicit ->show_options(), killing ->s_options off + some cosmetic bits around fs/namespace.c and friends. Basically, the stuff needed to work with fsmount series with minimum of conflicts with other work.

  It's not strictly required for this merge window, but it would reduce the PITA during the coming cycle, so it would be nice to have those bits and pieces out of the way"

* 'work.mount' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  isofs: Fix isofs_show_options()
  VFS: Kill off s_options and helpers
  orangefs: Implement show_options
  9p: Implement show_options
  isofs: Implement show_options
  afs: Implement show_options
  affs: Implement show_options
  befs: Implement show_options
  spufs: Implement show_options
  bpf: Implement show_options
  ramfs: Implement show_options
  pstore: Implement show_options
  omfs: Implement show_options
  hugetlbfs: Implement show_options
  VFS: Don't use save/replace_mount_options if not using generic_show_options
  VFS: Provide empty name qstr
  VFS: Make get_filesystem() return the affected filesystem
  VFS: Clean up whitespace in fs/namespace.c and fs/super.c
  Provide a function to create a NUL-terminated string from unterminated data
-
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Linus Torvalds authored
Pull more __copy_.._user elimination from Al Viro.

* 'work.__copy_to_user' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  drm_dp_aux_dev: switch to read_iter/write_iter
-
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Linus Torvalds authored
Pull uaccess-unaligned removal from Al Viro:
 "That stuff had just one user, and an exotic one, at that - binfmt_flat on arm and m68k"

* 'work.uaccess-unaligned' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  kill {__,}{get,put}_user_unaligned()
  binfmt_flat: flat_{get,put}_addr_from_rp() should be able to fail
-
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Linus Torvalds authored
Pull network field-by-field copy-in updates from Al Viro:
 "This part of the misc compat queue was held back for review from networking folks, and since davem has just ACKed those..."

* 'misc.compat' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  get_compat_bpf_fprog(): don't copyin field-by-field
  get_compat_msghdr(): get rid of field-by-field copyin
  copy_msghdr_from_user(): get rid of field-by-field copyin
-
git://git.linux-mips.org/pub/scm/ralf/upstream-linus
Linus Torvalds authored
Pull MIPS updates from Ralf Baechle:
 "Boston platform support:
   - Document DT bindings
   - Add CLK driver for board clocks

  CM:
   - Avoid per-core locking with CM3 & higher
   - WARN on attempt to lock invalid VP, not BUG

  CPS:
   - Select CONFIG_SYS_SUPPORTS_SCHED_SMT for MIPSr6
   - Prevent multi-core with dcache aliasing
   - Handle cores not powering down more gracefully
   - Handle spurious VP starts more gracefully

  DSP:
   - Add lwx & lhx misaligned access support

  eBPF:
   - Add MIPS support along with many supporting changes to add the required infrastructure

  Generic arch code:
   - Misc sysmips MIPS_ATOMIC_SET fixes
   - Drop duplicate HAVE_SYSCALL_TRACEPOINTS
   - Negate error syscall return in trace
   - Correct forced syscall errors
   - Traced negative syscalls should return -ENOSYS
   - Allow samples/bpf/tracex5 to access syscall arguments for sane traces
   - Cleanup from old Kconfig options in defconfigs
   - Fix PREF instruction usage by memcpy for MIPS R6
   - Fix various special cases in the FPU emulation
   - Fix some special cases in MIPS16e2 support
   - Fix MIPS I ISA /proc/cpuinfo reporting
   - Sort MIPS Kconfig alphabetically
   - Fix minimum alignment requirement of IRQ stack as required by ABI / GCC
   - Fix special cases in the module loader
   - Perform post-DMA cache flushes on systems with MAARs
   - Probe the I6500 CPU
   - Cleanup cmpxchg and add support for 1 and 2 byte operations
   - Use queued read/write locks (qrwlock)
   - Use queued spinlocks (qspinlock)
   - Add CPU shared FTLB feature detection
   - Handle tlbex-tlbp race condition
   - Allow storing pgd in C0_CONTEXT for MIPSr6
   - Use current_cpu_type() in m4kc_tlbp_war()
   - Support Boston in the generic kernel

  Generic platform:
   - yamon-dt: Pull YAMON DT shim code out of SEAD-3 board
   - yamon-dt: Support > 256MB of RAM
   - yamon-dt: Use serial* rather than uart* aliases
   - Abstract FDT fixup application
   - Set RTC_ALWAYS_BCD to 0
   - Add a MAINTAINERS entry

  core kernel:
   - qspinlock.c: include linux/prefetch.h

  Loongson 3:
   - Add support

  Perf:
   - Add I6500 support

  SEAD-3:
   - Remove GIC timer from DT
   - Set interrupt-parent per-device, not at root node
   - Fix GIC interrupt specifiers

  SMP:
   - Skip IPI setup if we only have a single CPU

  VDSO:
   - Make comment match reality
   - Improvements to time code in VDSO"

* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (86 commits)
  locking/qspinlock: Include linux/prefetch.h
  MIPS: Fix MIPS I ISA /proc/cpuinfo reporting
  MIPS: Fix minimum alignment requirement of IRQ stack
  MIPS: generic: Support MIPS Boston development boards
  MIPS: DTS: img: Don't attempt to build-in all .dtb files
  clk: boston: Add a driver for MIPS Boston board clocks
  dt-bindings: Document img,boston-clock binding
  MIPS: Traced negative syscalls should return -ENOSYS
  MIPS: Correct forced syscall errors
  MIPS: Negate error syscall return in trace
  MIPS: Drop duplicate HAVE_SYSCALL_TRACEPOINTS select
  MIPS16e2: Provide feature overrides for non-MIPS16 systems
  MIPS: MIPS16e2: Report ASE presence in /proc/cpuinfo
  MIPS: MIPS16e2: Subdecode extended LWSP/SWSP instructions
  MIPS: MIPS16e2: Identify ASE presence
  MIPS: VDSO: Fix a mismatch between comment and preprocessor constant
  MIPS: VDSO: Add implementation of gettimeofday() fallback
  MIPS: VDSO: Add implementation of clock_gettime() fallback
  MIPS: VDSO: Fix conversions in do_monotonic()/do_monotonic_coarse()
  MIPS: Use current_cpu_type() in m4kc_tlbp_war()
  ...
-
git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml
Linus Torvalds authored
Pull UML updates from Richard Weinberger:
 "Mostly fixes for UML:
   - First round of fixes for PTRACE_GETRESET/SETREGSET
   - A printf vs printk cleanup
   - Minor improvements"

* 'for-linus-4.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml:
  um: Correctly check for PTRACE_GETRESET/SETREGSET
  um: v2: Use generic NOTES macro
  um: Add kerneldoc for userspace_tramp() and start_userspace()
  um: Add kerneldoc for segv_handler
  um: stub-data.h: remove superfluous include
  um: userspace - be more verbose in ptrace set regs error
  um: add dummy ioremap and iounmap functions
  um: Allow building and running on older hosts
  um: Avoid longjmp/setjmp symbol clashes with libpthread.a
  um: console: Ignore console= option
  um: Use os_warn to print out pre-boot warning/error messages
  um: Add os_warn() for pre-boot warning/error messages
  um: Use os_info for the messages on normal path
  um: Add os_info() for pre-boot information messages
  um: Use printk instead of printf in make_uml_dir
-
git://git.infradead.org/linux-ubifs
Linus Torvalds authored
Pull UBIFS updates from Richard Weinberger:
 - Updates and fixes for the file encryption mode
 - Minor improvements
 - Random fixes

* tag 'upstream-4.13-rc1' of git://git.infradead.org/linux-ubifs:
  ubifs: Set double hash cookie also for RENAME_EXCHANGE
  ubifs: Massage assert in ubifs_xattr_set() wrt. init_xattrs
  ubifs: Don't leak kernel memory to the MTD
  ubifs: Change gfp flags in page allocation for bulk read
  ubifs: Fix oops when remounting with no_bulk_read.
  ubifs: Fail commit if TNC is obviously inconsistent
  ubifs: allow userspace to map mounts to volumes
  ubifs: Wire-up statx() support
  ubifs: Remove dead code from ubifs_get_link()
  ubifs: Massage debug prints wrt. fscrypt
  ubifs: Add assert to dent_key_init()
  ubifs: Fix unlink code wrt. double hash lookups
  ubifs: Fix data node size for truncating uncompressed nodes
  ubifs: Don't encrypt special files on creation
  ubifs: Fix memory leak in RENAME_WHITEOUT error path in do_rename
  ubifs: Fix inode data budget in ubifs_mknod
  ubifs: Correctly evict xattr inodes
  ubifs: Unexport ubifs_inode_slab
  ubifs: don't bother checking for encryption key in ->mmap()
  ubifs: require key for truncate(2) of encrypted file
-
git://git.kernel.org/pub/scm/virt/kvm/kvm
Linus Torvalds authored
Pull more KVM updates from Radim Krčmář:
 "Second batch of KVM updates for v4.13

  Common:
   - add uevents for VM creation/destruction
   - annotate and properly access RCU-protected objects

  s390:
   - rename IOCTL added in the first v4.13 merge

  x86:
   - emulate VMLOAD VMSAVE feature in SVM
   - support paravirtual asynchronous page fault while nested
   - add Hyper-V userspace interfaces for better migration
   - improve master clock corner cases
   - extend internal error reporting after EPT misconfig
   - correct single-stepping of emulated instructions in SVM
   - handle MCE during VM entry
   - fix nVMX VM entry checks and nVMX VMCS shadowing"

* tag 'kvm-4.13-2' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (28 commits)
  kvm: x86: hyperv: make VP_INDEX managed by userspace
  KVM: async_pf: Let guest support delivery of async_pf from guest mode
  KVM: async_pf: Force a nested vmexit if the injected #PF is async_pf
  KVM: async_pf: Add L1 guest async_pf #PF vmexit handler
  KVM: x86: Simplify kvm_x86_ops->queue_exception parameter list
  kvm: x86: hyperv: add KVM_CAP_HYPERV_SYNIC2
  KVM: x86: make backwards_tsc_observed a per-VM variable
  KVM: trigger uevents when creating or destroying a VM
  KVM: SVM: Enable Virtual VMLOAD VMSAVE feature
  KVM: SVM: Add Virtual VMLOAD VMSAVE feature definition
  KVM: SVM: Rename lbr_ctl field in the vmcb control area
  KVM: SVM: Prepare for new bit definition in lbr_ctl
  KVM: SVM: handle singlestep exception when skipping emulated instructions
  KVM: x86: take slots_lock in kvm_free_pit
  KVM: s390: Fix KVM_S390_GET_CMMA_BITS ioctl definition
  kvm: vmx: Properly handle machine check during VM-entry
  KVM: x86: update master clock before computing kvmclock_offset
  kvm: nVMX: Shadow "high" parts of shadowed 64-bit VMCS fields
  kvm: nVMX: Fix nested_vmx_check_msr_bitmap_controls
  kvm: nVMX: Validate the I/O bitmaps on nested VM-entry
  ...
-
Sebastian Andrzej Siewior authored
Avoid the READ_ONCE in commit 4a072c71 ("random: silence compiler warnings and fix race") if we can leave the function after arch_get_random_XXX().

Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
-
Theodore Ts'o authored
Unfortunately, on some models of some architectures getting a fully seeded CRNG is extremely difficult, and so this can result in dmesg getting spammed for a surprisingly long time. This is really bad from a security perspective, and so architecture maintainers really need to do what they can to get the CRNG seeded sooner after the system is booted. However, users can't do anything actionable to address this, and spamming the kernel messages log will only just annoy people.

For developers who want to work on improving this situation, CONFIG_WARN_UNSEEDED_RANDOM has been renamed to CONFIG_WARN_ALL_UNSEEDED_RANDOM. By default the kernel will always print the first use of unseeded randomness. This way, hopefully the security obsessed will be happy that there is _some_ indication when the kernel boots that there may be a potential issue with that architecture or subarchitecture. To see all uses of unseeded randomness, developers can enable CONFIG_WARN_ALL_UNSEEDED_RANDOM.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
-