Commit 8ede842f authored by Linus Torvalds

Merge tag 'rust-6.9' of https://github.com/Rust-for-Linux/linux

Pull Rust updates from Miguel Ojeda:
 "Another routine one in terms of features. We got two version upgrades
  this time, but in terms of lines, 'alloc' changes are not very large.

  Toolchain and infrastructure:

   - Upgrade to Rust 1.76.0

     This time around, due to how the kernel and Rust schedules have
     aligned, there are two upgrades in fact. These allow us to remove
     two more unstable features ('const_maybe_uninit_zeroed' and
     'ptr_metadata') from the list, among other improvements

   - Mark 'rustc' (and others) invocations as recursive, which fixes a
     new warning and prepares us for the future in case we eventually
     take advantage of the Make jobserver

  'kernel' crate:

   - Add the 'container_of!' macro

   - Stop using the unstable 'ptr_metadata' feature by employing the now
     stable 'byte_sub' method to implement 'Arc::from_raw()'

   - Add the 'time' module with a 'msecs_to_jiffies()' conversion
     function to begin with, to be used by Rust Binder

   - Add 'notify_sync()' and 'wait_interruptible_timeout()' methods to
     'CondVar', to be used by Rust Binder

   - Update integer types for 'CondVar'

   - Rename 'wait_list' field to 'wait_queue_head' in 'CondVar'

   - Implement 'Display' and 'Debug' for 'BStr'

   - Add the 'try_from_foreign()' method to the 'ForeignOwnable' trait

   - Add reexports for macros so that they can be used from the right
     module (in addition to the root)

   - A series of code documentation improvements, including adding
     intra-doc links, consistency improvements, typo fixes...

  'macros' crate:

   - Place generated 'init_module()' function in '.init.text'

  Documentation:

   - Add documentation on Rust doctests and how they work"

* tag 'rust-6.9' of https://github.com/Rust-for-Linux/linux: (29 commits)
  rust: upgrade to Rust 1.76.0
  kbuild: mark `rustc` (and others) invocations as recursive
  rust: add `container_of!` macro
  rust: str: implement `Display` and `Debug` for `BStr`
  rust: module: place generated init_module() function in .init.text
  rust: types: add `try_from_foreign()` method
  docs: rust: Add description of Rust documentation test as KUnit ones
  docs: rust: Move testing to a separate page
  rust: kernel: stop using ptr_metadata feature
  rust: kernel: add reexports for macros
  rust: locked_by: shorten doclink preview
  rust: kernel: remove unneeded doclink targets
  rust: kernel: add doclinks
  rust: kernel: add blank lines in front of code blocks
  rust: kernel: mark code fragments in docs with backticks
  rust: kernel: unify spelling of refcount in docs
  rust: str: move SAFETY comment in front of unsafe block
  rust: str: use `NUL` instead of 0 in doc comments
  rust: kernel: add srctree-relative doclinks
  rust: ioctl: end top-level module docs with full stop
  ...
parents 5a2a15cd 768409cf
@@ -31,7 +31,7 @@ you probably needn't concern yourself with pcmciautils.
 ====================== ===============  ========================================
 GNU C                  5.1              gcc --version
 Clang/LLVM (optional)  11.0.0           clang --version
-Rust (optional)        1.74.1           rustc --version
+Rust (optional)        1.76.0           rustc --version
 bindgen (optional)     0.65.1           bindgen --version
 GNU make               3.82             make --version
 bash                   4.2              bash --version
...
@@ -77,27 +77,3 @@ configuration:
 #[cfg(CONFIG_X="y")] // Enabled as a built-in (`y`)
 #[cfg(CONFIG_X="m")] // Enabled as a module (`m`)
 #[cfg(not(CONFIG_X))] // Disabled
-
-Testing
--------
-
-There are the tests that come from the examples in the Rust documentation
-and get transformed into KUnit tests. These can be run via KUnit. For example
-via ``kunit_tool`` (``kunit.py``) on the command line::
-
-	./tools/testing/kunit/kunit.py run --make_options LLVM=1 --arch x86_64 --kconfig_add CONFIG_RUST=y
-
-Alternatively, KUnit can run them as kernel built-in at boot. Refer to
-Documentation/dev-tools/kunit/index.rst for the general KUnit documentation
-and Documentation/dev-tools/kunit/architecture.rst for the details of kernel
-built-in vs. command line testing.
-
-Additionally, there are the ``#[test]`` tests. These can be run using
-the ``rusttest`` Make target::
-
-	make LLVM=1 rusttest
-
-This requires the kernel ``.config`` and downloads external repositories.
-It runs the ``#[test]`` tests on the host (currently) and thus is fairly
-limited in what these tests can test.
@@ -40,6 +40,7 @@ configurations.
    general-information
    coding-guidelines
    arch-support
+   testing

 .. only:: subproject and html
...
.. SPDX-License-Identifier: GPL-2.0

Testing
=======

This document contains useful information on how to test the Rust code in
the kernel.

There are two sorts of tests:

- The KUnit tests.
- The ``#[test]`` tests.

The KUnit tests
---------------

These are the tests that come from the examples in the Rust documentation. They
get transformed into KUnit tests.
Usage
*****

These tests can be run via KUnit. For example via ``kunit_tool`` (``kunit.py``)
on the command line::

	./tools/testing/kunit/kunit.py run --make_options LLVM=1 --arch x86_64 --kconfig_add CONFIG_RUST=y

Alternatively, KUnit can run them as kernel built-in at boot. Refer to
Documentation/dev-tools/kunit/index.rst for the general KUnit documentation
and Documentation/dev-tools/kunit/architecture.rst for the details of kernel
built-in vs. command line testing.

To use these KUnit doctests, the following must be enabled::

	CONFIG_KUNIT
	   Kernel hacking -> Kernel Testing and Coverage -> KUnit - Enable support for unit tests
	CONFIG_RUST_KERNEL_DOCTESTS
	   Kernel hacking -> Rust hacking -> Doctests for the `kernel` crate

in the kernel config system.
KUnit tests are documentation tests
***********************************

These documentation tests are typically examples of usage of any item (e.g.
function, struct, module...).

They are very convenient because they are just written alongside the
documentation. For instance:

.. code-block:: rust

	/// Sums two numbers.
	///
	/// ```
	/// assert_eq!(mymod::f(10, 20), 30);
	/// ```
	pub fn f(a: i32, b: i32) -> i32 {
	    a + b
	}
In userspace, the tests are collected and run via ``rustdoc``. Using the tool
as-is would be useful already, since it allows verifying that examples compile
(thus enforcing they are kept in sync with the code they document) as well as
running those that do not depend on in-kernel APIs.

For the kernel, however, these tests get transformed into KUnit test suites.
This means that doctests get compiled as Rust kernel objects, allowing them to
run against a built kernel.

A benefit of this KUnit integration is that Rust doctests get to reuse existing
testing facilities. For instance, the kernel log would look like::

	KTAP version 1
	1..1
	    KTAP version 1
	    # Subtest: rust_doctests_kernel
	    1..59
	    # rust_doctest_kernel_build_assert_rs_0.location: rust/kernel/build_assert.rs:13
	    ok 1 rust_doctest_kernel_build_assert_rs_0
	    # rust_doctest_kernel_build_assert_rs_1.location: rust/kernel/build_assert.rs:56
	    ok 2 rust_doctest_kernel_build_assert_rs_1
	    # rust_doctest_kernel_init_rs_0.location: rust/kernel/init.rs:122
	    ok 3 rust_doctest_kernel_init_rs_0
	    ...
	    # rust_doctest_kernel_types_rs_2.location: rust/kernel/types.rs:150
	    ok 59 rust_doctest_kernel_types_rs_2
	# rust_doctests_kernel: pass:59 fail:0 skip:0 total:59
	# Totals: pass:59 fail:0 skip:0 total:59
	ok 1 rust_doctests_kernel
Tests using the `? <https://doc.rust-lang.org/reference/expressions/operator-expr.html#the-question-mark-operator>`_
operator are also supported as usual, e.g.:

.. code-block:: rust

	/// ```
	/// # use kernel::{spawn_work_item, workqueue};
	/// spawn_work_item!(workqueue::system(), || pr_info!("x"))?;
	/// # Ok::<(), Error>(())
	/// ```

The tests are also compiled with Clippy under ``CLIPPY=1``, just like normal
code, thus also benefitting from extra linting.

In order for developers to easily see which line of doctest code caused a
failure, a KTAP diagnostic line is printed to the log. This contains the
location (file and line) of the original test (i.e. instead of the location in
the generated Rust file)::

	# rust_doctest_kernel_types_rs_2.location: rust/kernel/types.rs:150
Rust tests appear to assert using the usual ``assert!`` and ``assert_eq!``
macros from the Rust standard library (``core``). We provide a custom version
that forwards the call to KUnit instead. Importantly, these macros do not
require passing context, unlike those for KUnit testing (i.e.
``struct kunit *``). This makes them easier to use, and readers of the
documentation do not need to care about which testing framework is used. In
addition, it may allow us to test third-party code more easily in the future.

A current limitation is that KUnit does not support assertions in other tasks.
Thus, we presently simply print an error to the kernel log if an assertion
actually fails. Additionally, doctests are not run for nonpublic functions.
The ``#[test]`` tests
---------------------

Additionally, there are the ``#[test]`` tests. These can be run using the
``rusttest`` Make target::

	make LLVM=1 rusttest

This requires the kernel ``.config`` and downloads external repositories. It
runs the ``#[test]`` tests on the host (currently) and thus is fairly limited in
what these tests can test.
@@ -1201,7 +1201,7 @@ prepare0: archprepare
 # All the preparing..
 prepare: prepare0
 ifdef CONFIG_RUST
-	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh
+	+$(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh
 	$(Q)$(MAKE) $(build)=rust
 endif
@@ -1711,7 +1711,7 @@ $(DOC_TARGETS):
 # "Is Rust available?" target
 PHONY += rustavailable
 rustavailable:
-	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh && echo "Rust is available!"
+	+$(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh && echo "Rust is available!"
 # Documentation target
 #
...
@@ -40,7 +40,7 @@ obj-$(CONFIG_RUST_KERNEL_DOCTESTS) += doctests_kernel_generated_kunit.o
 ifdef CONFIG_RUST
 # `$(rust_flags)` is passed in case the user added `--sysroot`.
-rustc_sysroot := $(shell $(RUSTC) $(rust_flags) --print sysroot)
+rustc_sysroot := $(shell MAKEFLAGS= $(RUSTC) $(rust_flags) --print sysroot)
 rustc_host_target := $(shell $(RUSTC) --version --verbose | grep -F 'host: ' | cut -d' ' -f2)
 RUST_LIB_SRC ?= $(rustc_sysroot)/lib/rustlib/src/rust/library
@@ -108,14 +108,14 @@ rustdoc-macros: private rustdoc_host = yes
 rustdoc-macros: private rustc_target_flags = --crate-type proc-macro \
 	--extern proc_macro
 rustdoc-macros: $(src)/macros/lib.rs FORCE
-	$(call if_changed,rustdoc)
+	+$(call if_changed,rustdoc)
 rustdoc-core: private rustc_target_flags = $(core-cfgs)
 rustdoc-core: $(RUST_LIB_SRC)/core/src/lib.rs FORCE
-	$(call if_changed,rustdoc)
+	+$(call if_changed,rustdoc)
 rustdoc-compiler_builtins: $(src)/compiler_builtins.rs rustdoc-core FORCE
-	$(call if_changed,rustdoc)
+	+$(call if_changed,rustdoc)
 # We need to allow `rustdoc::broken_intra_doc_links` because some
 # `no_global_oom_handling` functions refer to non-`no_global_oom_handling`
@@ -124,7 +124,7 @@ rustdoc-compiler_builtins: $(src)/compiler_builtins.rs rustdoc-core FORCE
 rustdoc-alloc: private rustc_target_flags = $(alloc-cfgs) \
 	-Arustdoc::broken_intra_doc_links
 rustdoc-alloc: $(src)/alloc/lib.rs rustdoc-core rustdoc-compiler_builtins FORCE
-	$(call if_changed,rustdoc)
+	+$(call if_changed,rustdoc)
 rustdoc-kernel: private rustc_target_flags = --extern alloc \
 	--extern build_error --extern macros=$(objtree)/$(obj)/libmacros.so \
@@ -132,7 +132,7 @@ rustdoc-kernel: private rustc_target_flags = --extern alloc \
 rustdoc-kernel: $(src)/kernel/lib.rs rustdoc-core rustdoc-macros \
 	rustdoc-compiler_builtins rustdoc-alloc $(obj)/libmacros.so \
 	$(obj)/bindings.o FORCE
-	$(call if_changed,rustdoc)
+	+$(call if_changed,rustdoc)
 quiet_cmd_rustc_test_library = RUSTC TL $<
       cmd_rustc_test_library = \
@@ -146,18 +146,18 @@ quiet_cmd_rustc_test_library = RUSTC TL $<
 	--crate-name $(subst rusttest-,,$(subst rusttestlib-,,$@)) $<
 rusttestlib-build_error: $(src)/build_error.rs rusttest-prepare FORCE
-	$(call if_changed,rustc_test_library)
+	+$(call if_changed,rustc_test_library)
 rusttestlib-macros: private rustc_target_flags = --extern proc_macro
 rusttestlib-macros: private rustc_test_library_proc = yes
 rusttestlib-macros: $(src)/macros/lib.rs rusttest-prepare FORCE
-	$(call if_changed,rustc_test_library)
+	+$(call if_changed,rustc_test_library)
 rusttestlib-bindings: $(src)/bindings/lib.rs rusttest-prepare FORCE
-	$(call if_changed,rustc_test_library)
+	+$(call if_changed,rustc_test_library)
 rusttestlib-uapi: $(src)/uapi/lib.rs rusttest-prepare FORCE
-	$(call if_changed,rustc_test_library)
+	+$(call if_changed,rustc_test_library)
 quiet_cmd_rustdoc_test = RUSTDOC T $<
       cmd_rustdoc_test = \
@@ -189,7 +189,7 @@ quiet_cmd_rustdoc_test_kernel = RUSTDOC TK $<
 	$(src)/kernel/lib.rs $(obj)/kernel.o \
 	$(objtree)/scripts/rustdoc_test_builder \
 	$(objtree)/scripts/rustdoc_test_gen FORCE
-	$(call if_changed,rustdoc_test_kernel)
+	+$(call if_changed,rustdoc_test_kernel)
 # We cannot use `-Zpanic-abort-tests` because some tests are dynamic,
 # so for the moment we skip `-Cpanic=abort`.
@@ -254,21 +254,21 @@ quiet_cmd_rustsysroot = RUSTSYSROOT
 	$(objtree)/$(obj)/test/sysroot/lib/rustlib/$(rustc_host_target)/lib
 rusttest-prepare: FORCE
-	$(call if_changed,rustsysroot)
+	+$(call if_changed,rustsysroot)
 rusttest-macros: private rustc_target_flags = --extern proc_macro
 rusttest-macros: private rustdoc_test_target_flags = --crate-type proc-macro
 rusttest-macros: $(src)/macros/lib.rs rusttest-prepare FORCE
-	$(call if_changed,rustc_test)
-	$(call if_changed,rustdoc_test)
+	+$(call if_changed,rustc_test)
+	+$(call if_changed,rustdoc_test)
 rusttest-kernel: private rustc_target_flags = --extern alloc \
 	--extern build_error --extern macros --extern bindings --extern uapi
 rusttest-kernel: $(src)/kernel/lib.rs rusttest-prepare \
 	rusttestlib-build_error rusttestlib-macros rusttestlib-bindings \
 	rusttestlib-uapi FORCE
-	$(call if_changed,rustc_test)
-	$(call if_changed,rustc_test_library)
+	+$(call if_changed,rustc_test)
+	+$(call if_changed,rustc_test_library)
 ifdef CONFIG_CC_IS_CLANG
 bindgen_c_flags = $(c_flags)
@@ -396,7 +396,7 @@ quiet_cmd_rustc_procmacro = $(RUSTC_OR_CLIPPY_QUIET) P $@
 # Therefore, to get `libmacros.so` automatically recompiled when the compiler
 # version changes, we add `core.o` as a dependency (even if it is not needed).
 $(obj)/libmacros.so: $(src)/macros/lib.rs $(obj)/core.o FORCE
-	$(call if_changed_dep,rustc_procmacro)
+	+$(call if_changed_dep,rustc_procmacro)
 quiet_cmd_rustc_library = $(if $(skip_clippy),RUSTC,$(RUSTC_OR_CLIPPY_QUIET)) L $@
       cmd_rustc_library = \
@@ -435,36 +435,36 @@ $(obj)/core.o: private skip_flags = -Dunreachable_pub
 $(obj)/core.o: private rustc_objcopy = $(foreach sym,$(redirect-intrinsics),--redefine-sym $(sym)=__rust$(sym))
 $(obj)/core.o: private rustc_target_flags = $(core-cfgs)
 $(obj)/core.o: $(RUST_LIB_SRC)/core/src/lib.rs scripts/target.json FORCE
-	$(call if_changed_dep,rustc_library)
+	+$(call if_changed_dep,rustc_library)
 $(obj)/compiler_builtins.o: private rustc_objcopy = -w -W '__*'
 $(obj)/compiler_builtins.o: $(src)/compiler_builtins.rs $(obj)/core.o FORCE
-	$(call if_changed_dep,rustc_library)
+	+$(call if_changed_dep,rustc_library)
 $(obj)/alloc.o: private skip_clippy = 1
 $(obj)/alloc.o: private skip_flags = -Dunreachable_pub
 $(obj)/alloc.o: private rustc_target_flags = $(alloc-cfgs)
 $(obj)/alloc.o: $(src)/alloc/lib.rs $(obj)/compiler_builtins.o FORCE
-	$(call if_changed_dep,rustc_library)
+	+$(call if_changed_dep,rustc_library)
 $(obj)/build_error.o: $(src)/build_error.rs $(obj)/compiler_builtins.o FORCE
-	$(call if_changed_dep,rustc_library)
+	+$(call if_changed_dep,rustc_library)
 $(obj)/bindings.o: $(src)/bindings/lib.rs \
 	$(obj)/compiler_builtins.o \
 	$(obj)/bindings/bindings_generated.rs \
 	$(obj)/bindings/bindings_helpers_generated.rs FORCE
-	$(call if_changed_dep,rustc_library)
+	+$(call if_changed_dep,rustc_library)
 $(obj)/uapi.o: $(src)/uapi/lib.rs \
 	$(obj)/compiler_builtins.o \
 	$(obj)/uapi/uapi_generated.rs FORCE
-	$(call if_changed_dep,rustc_library)
+	+$(call if_changed_dep,rustc_library)
 $(obj)/kernel.o: private rustc_target_flags = --extern alloc \
 	--extern build_error --extern macros --extern bindings --extern uapi
 $(obj)/kernel.o: $(src)/kernel/lib.rs $(obj)/alloc.o $(obj)/build_error.o \
 	$(obj)/libmacros.so $(obj)/bindings.o $(obj)/uapi.o FORCE
-	$(call if_changed_dep,rustc_library)
+	+$(call if_changed_dep,rustc_library)
 endif # CONFIG_RUST
@@ -379,13 +379,20 @@ const fn ct_error(_: Layout) -> ! {
         panic!("allocation failed");
     }
+    #[inline]
     fn rt_error(layout: Layout) -> ! {
         unsafe {
             __rust_alloc_error_handler(layout.size(), layout.align());
         }
     }
-    unsafe { core::intrinsics::const_eval_select((layout,), ct_error, rt_error) }
+    #[cfg(not(feature = "panic_immediate_abort"))]
+    unsafe {
+        core::intrinsics::const_eval_select((layout,), ct_error, rt_error)
+    }
+
+    #[cfg(feature = "panic_immediate_abort")]
+    ct_error(layout)
 }
 // For alloc test `std::alloc::handle_alloc_error` can be used directly.
@@ -418,12 +425,14 @@ pub unsafe fn __rdl_oom(size: usize, _align: usize) -> ! {
     }
 }
+#[cfg(not(no_global_oom_handling))]
 /// Specialize clones into pre-allocated, uninitialized memory.
 /// Used by `Box::clone` and `Rc`/`Arc::make_mut`.
 pub(crate) trait WriteCloneIntoRaw: Sized {
     unsafe fn write_clone_into_raw(&self, target: *mut Self);
 }
+#[cfg(not(no_global_oom_handling))]
 impl<T: Clone> WriteCloneIntoRaw for T {
     #[inline]
     default unsafe fn write_clone_into_raw(&self, target: *mut Self) {
@@ -433,6 +442,7 @@ impl<T: Clone> WriteCloneIntoRaw for T {
     }
 }
+#[cfg(not(no_global_oom_handling))]
 impl<T: Copy> WriteCloneIntoRaw for T {
     #[inline]
     unsafe fn write_clone_into_raw(&self, target: *mut Self) {
...
@@ -161,7 +161,7 @@
 use core::marker::Unsize;
 use core::mem::{self, SizedTypeProperties};
 use core::ops::{
-    CoerceUnsized, Deref, DerefMut, DispatchFromDyn, Generator, GeneratorState, Receiver,
+    CoerceUnsized, Coroutine, CoroutineState, Deref, DerefMut, DispatchFromDyn, Receiver,
 };
 use core::pin::Pin;
 use core::ptr::{self, NonNull, Unique};
@@ -211,7 +211,7 @@ impl<T> Box<T> {
     /// ```
     /// let five = Box::new(5);
     /// ```
-    #[cfg(all(not(no_global_oom_handling)))]
+    #[cfg(not(no_global_oom_handling))]
     #[inline(always)]
     #[stable(feature = "rust1", since = "1.0.0")]
     #[must_use]
@@ -1042,10 +1042,18 @@ impl<T: ?Sized, A: Allocator> Box<T, A> {
     /// use std::ptr;
     ///
     /// let x = Box::new(String::from("Hello"));
-    /// let p = Box::into_raw(x);
+    /// let ptr = Box::into_raw(x);
     /// unsafe {
-    ///     ptr::drop_in_place(p);
-    ///     dealloc(p as *mut u8, Layout::new::<String>());
+    ///     ptr::drop_in_place(ptr);
+    ///     dealloc(ptr as *mut u8, Layout::new::<String>());
+    /// }
+    /// ```
+    /// Note: This is equivalent to the following:
+    /// ```
+    /// let x = Box::new(String::from("Hello"));
+    /// let ptr = Box::into_raw(x);
+    /// unsafe {
+    ///     drop(Box::from_raw(ptr));
     /// }
     /// ```
     ///
@@ -2110,28 +2118,28 @@ fn as_mut(&mut self) -> &mut T {
 #[stable(feature = "pin", since = "1.33.0")]
 impl<T: ?Sized, A: Allocator> Unpin for Box<T, A> where A: 'static {}
-#[unstable(feature = "generator_trait", issue = "43122")]
-impl<G: ?Sized + Generator<R> + Unpin, R, A: Allocator> Generator<R> for Box<G, A>
+#[unstable(feature = "coroutine_trait", issue = "43122")]
+impl<G: ?Sized + Coroutine<R> + Unpin, R, A: Allocator> Coroutine<R> for Box<G, A>
 where
     A: 'static,
 {
     type Yield = G::Yield;
     type Return = G::Return;
-    fn resume(mut self: Pin<&mut Self>, arg: R) -> GeneratorState<Self::Yield, Self::Return> {
+    fn resume(mut self: Pin<&mut Self>, arg: R) -> CoroutineState<Self::Yield, Self::Return> {
         G::resume(Pin::new(&mut *self), arg)
     }
 }
-#[unstable(feature = "generator_trait", issue = "43122")]
-impl<G: ?Sized + Generator<R>, R, A: Allocator> Generator<R> for Pin<Box<G, A>>
+#[unstable(feature = "coroutine_trait", issue = "43122")]
+impl<G: ?Sized + Coroutine<R>, R, A: Allocator> Coroutine<R> for Pin<Box<G, A>>
 where
     A: 'static,
 {
     type Yield = G::Yield;
     type Return = G::Return;
-    fn resume(mut self: Pin<&mut Self>, arg: R) -> GeneratorState<Self::Yield, Self::Return> {
+    fn resume(mut self: Pin<&mut Self>, arg: R) -> CoroutineState<Self::Yield, Self::Return> {
         G::resume((*self).as_mut(), arg)
     }
 }
@@ -2448,4 +2456,8 @@ fn cause(&self) -> Option<&dyn core::error::Error> {
     fn source(&self) -> Option<&(dyn core::error::Error + 'static)> {
         core::error::Error::source(&**self)
     }
+
+    fn provide<'b>(&'b self, request: &mut core::error::Request<'b>) {
+        core::error::Error::provide(&**self, request);
+    }
 }
@@ -150,6 +150,7 @@ fn fmt(
 /// An intermediate trait for specialization of `Extend`.
 #[doc(hidden)]
+#[cfg(not(no_global_oom_handling))]
 trait SpecExtend<I: IntoIterator> {
     /// Extends `self` with the contents of the given iterator.
     fn spec_extend(&mut self, iter: I);
...
@@ -80,6 +80,8 @@
     not(no_sync),
     target_has_atomic = "ptr"
 ))]
+#![doc(rust_logo)]
+#![feature(rustdoc_internals)]
 #![no_std]
 #![needs_allocator]
 // Lints:
@@ -115,7 +117,6 @@
 #![feature(const_eval_select)]
 #![feature(const_maybe_uninit_as_mut_ptr)]
 #![feature(const_maybe_uninit_write)]
-#![feature(const_maybe_uninit_zeroed)]
 #![feature(const_pin)]
 #![feature(const_refs_to_cell)]
 #![feature(const_size_of_val)]
@@ -141,7 +142,6 @@
 #![feature(maybe_uninit_uninit_array)]
 #![feature(maybe_uninit_uninit_array_transpose)]
 #![feature(pattern)]
-#![feature(pointer_byte_offsets)]
 #![feature(ptr_internals)]
 #![feature(ptr_metadata)]
 #![feature(ptr_sub_ptr)]
@@ -156,6 +156,7 @@
 #![feature(std_internals)]
 #![feature(str_internals)]
 #![feature(strict_provenance)]
+#![feature(trusted_fused)]
 #![feature(trusted_len)]
 #![feature(trusted_random_access)]
 #![feature(try_trait_v2)]
@@ -168,7 +169,7 @@
 //
 // Language features:
 // tidy-alphabetical-start
-#![cfg_attr(not(test), feature(generator_trait))]
+#![cfg_attr(not(test), feature(coroutine_trait))]
 #![cfg_attr(test, feature(panic_update_hook))]
 #![cfg_attr(test, feature(test))]
 #![feature(allocator_internals)]
@@ -276,7 +277,7 @@ pub(crate) mod test_helpers {
     /// seed not being the same for every RNG invocation too.
     pub(crate) fn test_rng() -> rand_xorshift::XorShiftRng {
         use std::hash::{BuildHasher, Hash, Hasher};
-        let mut hasher = std::collections::hash_map::RandomState::new().build_hasher();
+        let mut hasher = std::hash::RandomState::new().build_hasher();
         std::panic::Location::caller().hash(&mut hasher);
         let hc64 = hasher.finish();
         let seed_vec =
...
@@ -27,6 +27,16 @@ enum AllocInit {
     Zeroed,
 }
+
+#[repr(transparent)]
+#[cfg_attr(target_pointer_width = "16", rustc_layout_scalar_valid_range_end(0x7fff))]
+#[cfg_attr(target_pointer_width = "32", rustc_layout_scalar_valid_range_end(0x7fff_ffff))]
+#[cfg_attr(target_pointer_width = "64", rustc_layout_scalar_valid_range_end(0x7fff_ffff_ffff_ffff))]
+struct Cap(usize);
+
+impl Cap {
+    const ZERO: Cap = unsafe { Cap(0) };
+}
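The `rustc_layout_scalar_valid_range_end` attribute above is compiler-internal, but the payoff it buys — a niche the compiler can use for enum layout optimization — is observable on stable Rust with `NonZeroUsize`, whose excluded value `0` plays the same role that the excluded `> isize::MAX` range plays for `Cap`. A minimal sketch:

```rust
use std::mem::size_of;
use std::num::NonZeroUsize;

fn main() {
    // The forbidden value 0 becomes the discriminant of `None`,
    // so the `Option` costs no extra space...
    assert_eq!(size_of::<Option<NonZeroUsize>>(), size_of::<usize>());
    // ...while a full-range `usize` needs a separate discriminant word.
    assert!(size_of::<Option<usize>>() > size_of::<usize>());
}
```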
 /// A low-level utility for more ergonomically allocating, reallocating, and deallocating
 /// a buffer of memory on the heap without having to worry about all the corner cases
 /// involved. This type is excellent for building your own data structures like Vec and VecDeque.
@@ -52,7 +62,12 @@ enum AllocInit {
 #[allow(missing_debug_implementations)]
 pub(crate) struct RawVec<T, A: Allocator = Global> {
     ptr: Unique<T>,
-    cap: usize,
+    /// Never used for ZSTs; it's `capacity()`'s responsibility to return usize::MAX in that case.
+    ///
+    /// # Safety
+    ///
+    /// `cap` must be in the `0..=isize::MAX` range.
+    cap: Cap,
     alloc: A,
 }
@@ -121,7 +136,7 @@ impl<T, A: Allocator> RawVec<T, A> {
     /// the returned `RawVec`.
     pub const fn new_in(alloc: A) -> Self {
         // `cap: 0` means "unallocated". zero-sized types are ignored.
-        Self { ptr: Unique::dangling(), cap: 0, alloc }
+        Self { ptr: Unique::dangling(), cap: Cap::ZERO, alloc }
     }
     /// Like `with_capacity`, but parameterized over the choice of
@@ -203,7 +218,7 @@ fn allocate_in(capacity: usize, init: AllocInit, alloc: A) -> Self {
         // here should change to `ptr.len() / mem::size_of::<T>()`.
         Self {
             ptr: unsafe { Unique::new_unchecked(ptr.cast().as_ptr()) },
-            cap: capacity,
+            cap: unsafe { Cap(capacity) },
             alloc,
         }
     }
@@ -228,7 +243,7 @@ fn try_allocate_in(capacity: usize, init: AllocInit, alloc: A) -> Result<Self, T
         // here should change to `ptr.len() / mem::size_of::<T>()`.
         Ok(Self {
             ptr: unsafe { Unique::new_unchecked(ptr.cast().as_ptr()) },
-            cap: capacity,
+            cap: unsafe { Cap(capacity) },
             alloc,
         })
     }
@@ -240,12 +255,13 @@ fn try_allocate_in(capacity: usize, init: AllocInit, alloc: A) -> Result<Self, T
     /// The `ptr` must be allocated (via the given allocator `alloc`), and with the given
     /// `capacity`.
     /// The `capacity` cannot exceed `isize::MAX` for sized types. (only a concern on 32-bit
-    /// systems). ZST vectors may have a capacity up to `usize::MAX`.
+    /// systems). For ZSTs capacity is ignored.
     /// If the `ptr` and `capacity` come from a `RawVec` created via `alloc`, then this is
     /// guaranteed.
     #[inline]
     pub unsafe fn from_raw_parts_in(ptr: *mut T, capacity: usize, alloc: A) -> Self {
-        Self { ptr: unsafe { Unique::new_unchecked(ptr) }, cap: capacity, alloc }
+        let cap = if T::IS_ZST { Cap::ZERO } else { unsafe { Cap(capacity) } };
+        Self { ptr: unsafe { Unique::new_unchecked(ptr) }, cap, alloc }
     }
     /// Gets a raw pointer to the start of the allocation. Note that this is
@@ -261,7 +277,7 @@ pub fn ptr(&self) -> *mut T {
     /// This will always be `usize::MAX` if `T` is zero-sized.
     #[inline(always)]
     pub fn capacity(&self) -> usize {
-        if T::IS_ZST { usize::MAX } else { self.cap }
+        if T::IS_ZST { usize::MAX } else { self.cap.0 }
     }
     /// Returns a shared reference to the allocator backing this `RawVec`.
@@ -270,7 +286,7 @@ pub fn allocator(&self) -> &A {
     }
     fn current_memory(&self) -> Option<(NonNull<u8>, Layout)> {
-        if T::IS_ZST || self.cap == 0 {
+        if T::IS_ZST || self.cap.0 == 0 {
             None
         } else {
             // We could use Layout::array here which ensures the absence of isize and usize overflows
@@ -280,7 +296,7 @@ fn current_memory(&self) -> Option<(NonNull<u8>, Layout)> {
             let _: () = const { assert!(mem::size_of::<T>() % mem::align_of::<T>() == 0) };
             unsafe {
                 let align = mem::align_of::<T>();
-                let size = mem::size_of::<T>().unchecked_mul(self.cap);
+                let size = mem::size_of::<T>().unchecked_mul(self.cap.0);
                 let layout = Layout::from_size_align_unchecked(size, align);
                 Some((self.ptr.cast().into(), layout))
             }
@@ -338,10 +354,13 @@ pub fn reserve_for_push(&mut self, len: usize) {
     /// The same as `reserve`, but returns on errors instead of panicking or aborting.
     pub fn try_reserve(&mut self, len: usize, additional: usize) -> Result<(), TryReserveError> {
         if self.needs_to_grow(len, additional) {
-            self.grow_amortized(len, additional)
-        } else {
-            Ok(())
+            self.grow_amortized(len, additional)?;
         }
+        unsafe {
+            // Inform the optimizer that the reservation has succeeded or wasn't needed
+            core::intrinsics::assume(!self.needs_to_grow(len, additional));
+        }
+        Ok(())
     }
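`core::intrinsics::assume` is a perma-unstable intrinsic available only inside the standard library. The same "tell the optimizer this condition now holds" pattern can be sketched in user code with the stable `std::hint::unreachable_unchecked` (the `capacity_after_reserve` helper below is hypothetical, for illustration only — it is not the `RawVec` code):

```rust
/// # Safety
/// The caller must guarantee `cond` is true; if it is false, behavior is
/// undefined, which is precisely what lets the optimizer assume it is true.
unsafe fn assume(cond: bool) {
    if !cond {
        std::hint::unreachable_unchecked()
    }
}

// Hypothetical reservation helper: after it returns, the compiler may elide
// later `cap >= needed` checks because of the hint.
fn capacity_after_reserve(cap: usize, needed: usize) -> usize {
    let cap = cap.max(needed);
    // SAFETY: `max` guarantees `cap >= needed` here.
    unsafe { assume(cap >= needed) };
    cap
}

fn main() {
    assert_eq!(capacity_after_reserve(4, 10), 10);
    assert_eq!(capacity_after_reserve(16, 10), 16);
}
```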
     /// The same as `reserve_for_push`, but returns on errors instead of panicking or aborting.
@@ -378,7 +397,14 @@ pub fn try_reserve_exact(
         len: usize,
         additional: usize,
     ) -> Result<(), TryReserveError> {
-        if self.needs_to_grow(len, additional) { self.grow_exact(len, additional) } else { Ok(()) }
+        if self.needs_to_grow(len, additional) {
+            self.grow_exact(len, additional)?;
+        }
+        unsafe {
+            // Inform the optimizer that the reservation has succeeded or wasn't needed
+            core::intrinsics::assume(!self.needs_to_grow(len, additional));
+        }
+        Ok(())
     }
     /// Shrinks the buffer down to the specified capacity. If the given amount
@@ -404,12 +430,15 @@ fn needs_to_grow(&self, len: usize, additional: usize) -> bool {
         additional > self.capacity().wrapping_sub(len)
     }
-    fn set_ptr_and_cap(&mut self, ptr: NonNull<[u8]>, cap: usize) {
+    /// # Safety:
+    ///
+    /// `cap` must not exceed `isize::MAX`.
+    unsafe fn set_ptr_and_cap(&mut self, ptr: NonNull<[u8]>, cap: usize) {
         // Allocators currently return a `NonNull<[u8]>` whose length matches
         // the size requested. If that ever changes, the capacity here should
         // change to `ptr.len() / mem::size_of::<T>()`.
         self.ptr = unsafe { Unique::new_unchecked(ptr.cast().as_ptr()) };
-        self.cap = cap;
+        self.cap = unsafe { Cap(cap) };
     }
     // This method is usually instantiated many times. So we want it to be as
@@ -434,14 +463,15 @@ fn grow_amortized(&mut self, len: usize, additional: usize) -> Result<(), TryRes
         // This guarantees exponential growth. The doubling cannot overflow
         // because `cap <= isize::MAX` and the type of `cap` is `usize`.
-        let cap = cmp::max(self.cap * 2, required_cap);
+        let cap = cmp::max(self.cap.0 * 2, required_cap);
         let cap = cmp::max(Self::MIN_NON_ZERO_CAP, cap);
         let new_layout = Layout::array::<T>(cap);
         // `finish_grow` is non-generic over `T`.
         let ptr = finish_grow(new_layout, self.current_memory(), &mut self.alloc)?;
-        self.set_ptr_and_cap(ptr, cap);
+        // SAFETY: finish_grow would have resulted in a capacity overflow if we tried to allocate more than isize::MAX items
+        unsafe { self.set_ptr_and_cap(ptr, cap) };
         Ok(())
     }
@@ -460,7 +490,10 @@ fn grow_exact(&mut self, len: usize, additional: usize) -> Result<(), TryReserve
         // `finish_grow` is non-generic over `T`.
         let ptr = finish_grow(new_layout, self.current_memory(), &mut self.alloc)?;
-        self.set_ptr_and_cap(ptr, cap);
+        // SAFETY: finish_grow would have resulted in a capacity overflow if we tried to allocate more than isize::MAX items
+        unsafe {
+            self.set_ptr_and_cap(ptr, cap);
+        }
         Ok(())
     }
@@ -478,7 +511,7 @@ fn shrink(&mut self, cap: usize) -> Result<(), TryReserveError> {
         if cap == 0 {
             unsafe { self.alloc.deallocate(ptr, layout) };
             self.ptr = Unique::dangling();
-            self.cap = 0;
+            self.cap = Cap::ZERO;
         } else {
             let ptr = unsafe {
                 // `Layout::array` cannot overflow here because it would have
@@ -489,7 +522,10 @@ fn shrink(&mut self, cap: usize) -> Result<(), TryReserveError> {
                     .shrink(ptr, layout, new_layout)
                     .map_err(|_| AllocError { layout: new_layout, non_exhaustive: () })?
             };
-            self.set_ptr_and_cap(ptr, cap);
+            // SAFETY: if the allocation is valid, then the capacity is too
+            unsafe {
+                self.set_ptr_and_cap(ptr, cap);
+            }
         }
         Ok(())
     }
@@ -569,6 +605,7 @@ fn alloc_guard(alloc_size: usize) -> Result<(), TryReserveError> {
 // ensure that the code generation related to these panics is minimal as there's
 // only one location which panics rather than a bunch throughout the module.
 #[cfg(not(no_global_oom_handling))]
+#[cfg_attr(not(feature = "panic_immediate_abort"), inline(never))]
 fn capacity_overflow() -> ! {
     panic!("capacity overflow");
 }
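The `panic_immediate_abort` conditionals added throughout exist because the default build outlines panic paths: a `#[cold]`, `#[inline(never)]` helper keeps the hot path compact while `#[track_caller]` still reports the caller's location. A sketch of the pattern (the `index_failed`/`checked_get` names are made up for illustration):

```rust
// Cold, never-inlined panic helper: every call site compiles down to a
// single call instruction instead of inlined formatting machinery.
#[cold]
#[inline(never)]
#[track_caller]
fn index_failed(index: usize, len: usize) -> ! {
    panic!("index (is {index}) should be < len (is {len})");
}

fn checked_get(v: &[u32], index: usize) -> u32 {
    if index >= v.len() {
        // Cold path, outlined: the hot path below stays small.
        index_failed(index, v.len());
    }
    v[index]
}

fn main() {
    assert_eq!(checked_get(&[1, 2, 3], 1), 2);
}
```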
@@ -9,7 +9,8 @@
 use core::array;
 use core::fmt;
 use core::iter::{
-    FusedIterator, InPlaceIterable, SourceIter, TrustedLen, TrustedRandomAccessNoCoerce,
+    FusedIterator, InPlaceIterable, SourceIter, TrustedFused, TrustedLen,
+    TrustedRandomAccessNoCoerce,
 };
 use core::marker::PhantomData;
 use core::mem::{self, ManuallyDrop, MaybeUninit, SizedTypeProperties};
@@ -287,9 +288,7 @@ unsafe fn __iterator_get_unchecked(&mut self, i: usize) -> Self::Item
         // Also note the implementation of `Self: TrustedRandomAccess` requires
         // that `T: Copy` so reading elements from the buffer doesn't invalidate
         // them for `Drop`.
-        unsafe {
-            if T::IS_ZST { mem::zeroed() } else { ptr::read(self.ptr.add(i)) }
-        }
+        unsafe { if T::IS_ZST { mem::zeroed() } else { ptr::read(self.ptr.add(i)) } }
     }
 }
@@ -341,6 +340,10 @@ fn is_empty(&self) -> bool {
 #[stable(feature = "fused", since = "1.26.0")]
 impl<T, A: Allocator> FusedIterator for IntoIter<T, A> {}
+#[doc(hidden)]
+#[unstable(issue = "none", feature = "trusted_fused")]
+unsafe impl<T, A: Allocator> TrustedFused for IntoIter<T, A> {}
+
 #[unstable(feature = "trusted_len", issue = "37572")]
 unsafe impl<T, A: Allocator> TrustedLen for IntoIter<T, A> {}
@@ -425,7 +428,10 @@ fn drop(&mut self) {
 // also refer to the vec::in_place_collect module documentation to get an overview
 #[unstable(issue = "none", feature = "inplace_iteration")]
 #[doc(hidden)]
-unsafe impl<T, A: Allocator> InPlaceIterable for IntoIter<T, A> {}
+unsafe impl<T, A: Allocator> InPlaceIterable for IntoIter<T, A> {
+    const EXPAND_BY: Option<NonZeroUsize> = NonZeroUsize::new(1);
+    const MERGE_BY: Option<NonZeroUsize> = NonZeroUsize::new(1);
+}
 #[unstable(issue = "none", feature = "inplace_iteration")]
 #[doc(hidden)]
...
@@ -105,6 +105,7 @@
 #[cfg(not(no_global_oom_handling))]
 use self::is_zero::IsZero;
+#[cfg(not(no_global_oom_handling))]
 mod is_zero;
 #[cfg(not(no_global_oom_handling))]
@@ -123,7 +124,7 @@
 mod set_len_on_drop;
 #[cfg(not(no_global_oom_handling))]
-use self::in_place_drop::{InPlaceDrop, InPlaceDstBufDrop};
+use self::in_place_drop::{InPlaceDrop, InPlaceDstDataSrcBufDrop};
 #[cfg(not(no_global_oom_handling))]
 mod in_place_drop;
@@ -1376,7 +1377,7 @@ pub fn as_mut_slice(&mut self) -> &mut [T] {
     /// [`as_mut_ptr`]: Vec::as_mut_ptr
     /// [`as_ptr`]: Vec::as_ptr
     #[stable(feature = "vec_as_ptr", since = "1.37.0")]
-    #[cfg_attr(not(bootstrap), rustc_never_returns_null_ptr)]
+    #[rustc_never_returns_null_ptr]
     #[inline]
     pub fn as_ptr(&self) -> *const T {
         // We shadow the slice method of the same name to avoid going through
@@ -1436,7 +1437,7 @@ pub fn as_ptr(&self) -> *const T {
     /// [`as_mut_ptr`]: Vec::as_mut_ptr
     /// [`as_ptr`]: Vec::as_ptr
     #[stable(feature = "vec_as_ptr", since = "1.37.0")]
-    #[cfg_attr(not(bootstrap), rustc_never_returns_null_ptr)]
+    #[rustc_never_returns_null_ptr]
     #[inline]
     pub fn as_mut_ptr(&mut self) -> *mut T {
         // We shadow the slice method of the same name to avoid going through
@@ -1565,7 +1566,8 @@ pub unsafe fn set_len(&mut self, new_len: usize) {
     #[stable(feature = "rust1", since = "1.0.0")]
     pub fn swap_remove(&mut self, index: usize) -> T {
         #[cold]
-        #[inline(never)]
+        #[cfg_attr(not(feature = "panic_immediate_abort"), inline(never))]
+        #[track_caller]
         fn assert_failed(index: usize, len: usize) -> ! {
             panic!("swap_remove index (is {index}) should be < len (is {len})");
         }
@@ -1606,7 +1608,8 @@ fn assert_failed(index: usize, len: usize) -> ! {
     #[stable(feature = "rust1", since = "1.0.0")]
     pub fn insert(&mut self, index: usize, element: T) {
         #[cold]
-        #[inline(never)]
+        #[cfg_attr(not(feature = "panic_immediate_abort"), inline(never))]
+        #[track_caller]
         fn assert_failed(index: usize, len: usize) -> ! {
             panic!("insertion index (is {index}) should be <= len (is {len})");
         }
@@ -1667,7 +1670,7 @@ fn assert_failed(index: usize, len: usize) -> ! {
     #[track_caller]
     pub fn remove(&mut self, index: usize) -> T {
         #[cold]
-        #[inline(never)]
+        #[cfg_attr(not(feature = "panic_immediate_abort"), inline(never))]
         #[track_caller]
         fn assert_failed(index: usize, len: usize) -> ! {
             panic!("removal index (is {index}) should be < len (is {len})");
@@ -1891,7 +1894,32 @@ pub fn dedup_by<F>(&mut self, mut same_bucket: F)
             return;
         }
-        /* INVARIANT: vec.len() > read >= write > write-1 >= 0 */
+        // Check if we ever want to remove anything.
+        // This allows to use copy_non_overlapping in next cycle.
+        // And avoids any memory writes if we don't need to remove anything.
+        let mut first_duplicate_idx: usize = 1;
+        let start = self.as_mut_ptr();
+        while first_duplicate_idx != len {
+            let found_duplicate = unsafe {
+                // SAFETY: first_duplicate always in range [1..len)
+                // Note that we start iteration from 1 so we never overflow.
+                let prev = start.add(first_duplicate_idx.wrapping_sub(1));
+                let current = start.add(first_duplicate_idx);
+                // We explicitly say in docs that references are reversed.
+                same_bucket(&mut *current, &mut *prev)
+            };
+            if found_duplicate {
+                break;
+            }
+            first_duplicate_idx += 1;
+        }
+        // Don't need to remove anything.
+        // We cannot get bigger than len.
+        if first_duplicate_idx == len {
+            return;
+        }
+
+        /* INVARIANT: vec.len() > read > write > write-1 >= 0 */
         struct FillGapOnDrop<'a, T, A: core::alloc::Allocator> {
             /* Offset of the element we want to check if it is duplicate */
             read: usize,
@@ -1937,31 +1965,39 @@ fn drop(&mut self) {
             }
         }
-        let mut gap = FillGapOnDrop { read: 1, write: 1, vec: self };
-        let ptr = gap.vec.as_mut_ptr();
-
         /* Drop items while going through Vec, it should be more efficient than
          * doing slice partition_dedup + truncate */
+
+        // Construct gap first and then drop item to avoid memory corruption if `T::drop` panics.
+        let mut gap =
+            FillGapOnDrop { read: first_duplicate_idx + 1, write: first_duplicate_idx, vec: self };
+        unsafe {
+            // SAFETY: we checked that first_duplicate_idx in bounds before.
+            // If drop panics, `gap` would remove this item without drop.
+            ptr::drop_in_place(start.add(first_duplicate_idx));
+        }
+
         /* SAFETY: Because of the invariant, read_ptr, prev_ptr and write_ptr
          * are always in-bounds and read_ptr never aliases prev_ptr */
         unsafe {
             while gap.read < len {
-                let read_ptr = ptr.add(gap.read);
-                let prev_ptr = ptr.add(gap.write.wrapping_sub(1));
+                let read_ptr = start.add(gap.read);
+                let prev_ptr = start.add(gap.write.wrapping_sub(1));
-                if same_bucket(&mut *read_ptr, &mut *prev_ptr) {
+                // We explicitly say in docs that references are reversed.
+                let found_duplicate = same_bucket(&mut *read_ptr, &mut *prev_ptr);
+                if found_duplicate {
                     // Increase `gap.read` now since the drop may panic.
                     gap.read += 1;
                     /* We have found duplicate, drop it in-place */
                     ptr::drop_in_place(read_ptr);
                 } else {
-                    let write_ptr = ptr.add(gap.write);
+                    let write_ptr = start.add(gap.write);
-                    /* Because `read_ptr` can be equal to `write_ptr`, we either
-                     * have to use `copy` or conditional `copy_nonoverlapping`.
-                     * Looks like the first option is faster. */
-                    ptr::copy(read_ptr, write_ptr, 1);
+                    /* read_ptr cannot be equal to write_ptr because at this point
+                     * we guaranteed to skip at least one element (before loop starts).
+                     */
+                    ptr::copy_nonoverlapping(read_ptr, write_ptr, 1);
                     /* We have filled that place, so go further */
                     gap.write += 1;
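The rewritten `dedup_by` works in two passes: a read-only scan for the first adjacent duplicate (so a vector with no duplicates incurs no memory writes at all), then compaction where the read cursor is strictly ahead of the write cursor, which is what justifies switching `copy` to `copy_nonoverlapping`. A safe sketch of the same two-pass shape (illustrative only; the real code uses raw pointers and a panic-safety drop guard):

```rust
// Safe sketch of the prescan-then-compact dedup strategy.
fn dedup_by_sketch<T, F: FnMut(&T, &T) -> bool>(v: &mut Vec<T>, mut same: F) {
    let len = v.len();
    if len <= 1 {
        return;
    }
    // Pass 1: find the first adjacent duplicate; if none, write nothing.
    let mut first_dup = 1;
    while first_dup != len && !same(&v[first_dup], &v[first_dup - 1]) {
        first_dup += 1;
    }
    if first_dup == len {
        return;
    }
    // Pass 2: from the first duplicate on, `read` is always ahead of `write`,
    // which is why the unsafe version may use the non-overlapping copy.
    let mut write = first_dup;
    for read in (first_dup + 1)..len {
        if !same(&v[read], &v[write - 1]) {
            v.swap(read, write);
            write += 1;
        }
    }
    v.truncate(write);
}

fn main() {
    let mut v = vec![1, 1, 2, 3, 3, 3, 4];
    dedup_by_sketch(&mut v, |a, b| a == b);
    assert_eq!(v, [1, 2, 3, 4]);

    let mut untouched = vec![1, 2, 3];
    dedup_by_sketch(&mut untouched, |a, b| a == b);
    assert_eq!(untouched, [1, 2, 3]);
}
```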
@@ -2097,6 +2133,7 @@ pub fn pop(&mut self) -> Option<T> {
         } else {
             unsafe {
                 self.len -= 1;
+                core::intrinsics::assume(self.len < self.capacity());
                 Some(ptr::read(self.as_ptr().add(self.len())))
             }
         }
@@ -2299,7 +2336,8 @@ pub fn split_off(&mut self, at: usize) -> Self
         A: Clone,
     {
         #[cold]
-        #[inline(never)]
+        #[cfg_attr(not(feature = "panic_immediate_abort"), inline(never))]
+        #[track_caller]
         fn assert_failed(at: usize, len: usize) -> ! {
             panic!("`at` split index (is {at}) should be <= len (is {len})");
         }
@@ -2840,6 +2878,7 @@ pub fn from_elem_in<T: Clone, A: Allocator>(elem: T, n: usize, alloc: A) -> Vec<
     <T as SpecFromElem>::from_elem(elem, n, alloc)
 }
+#[cfg(not(no_global_oom_handling))]
 trait ExtendFromWithinSpec {
     /// # Safety
     ///
@@ -2848,6 +2887,7 @@ trait ExtendFromWithinSpec {
     unsafe fn spec_extend_from_within(&mut self, src: Range<usize>);
 }
+#[cfg(not(no_global_oom_handling))]
 impl<T: Clone, A: Allocator> ExtendFromWithinSpec for Vec<T, A> {
     default unsafe fn spec_extend_from_within(&mut self, src: Range<usize>) {
         // SAFETY:
@@ -2867,6 +2907,7 @@ impl<T: Clone, A: Allocator> ExtendFromWithinSpec for Vec<T, A> {
     }
 }
+#[cfg(not(no_global_oom_handling))]
 impl<T: Copy, A: Allocator> ExtendFromWithinSpec for Vec<T, A> {
     unsafe fn spec_extend_from_within(&mut self, src: Range<usize>) {
         let count = src.len();
@@ -2947,7 +2988,7 @@ fn clone_from(&mut self, other: &Self) {
 /// ```
 /// use std::hash::BuildHasher;
 ///
-/// let b = std::collections::hash_map::RandomState::new();
+/// let b = std::hash::RandomState::new();
 /// let v: Vec<u8> = vec![0xa8, 0x3c, 0x09];
 /// let s: &[u8] = &[0xa8, 0x3c, 0x09];
 /// assert_eq!(b.hash_one(v), b.hash_one(s));
...
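The doctest above encodes an important invariant: `Vec<T>` hashes exactly like the slice it dereferences to, so a `Vec<u8>` map key can be looked up by `&[u8]`. It runs unchanged on stable today (the older `std::collections::hash_map::RandomState` path used here still works alongside the `std::hash::RandomState` re-export stabilized in 1.76):

```rust
use std::collections::hash_map::RandomState;
use std::hash::BuildHasher;

fn main() {
    let b = RandomState::new();
    let v: Vec<u8> = vec![0xa8, 0x3c, 0x09];
    let s: &[u8] = &[0xa8, 0x3c, 0x09];
    // Same bytes, same length prefix => same hash, whatever the random seed.
    assert_eq!(b.hash_one(v), b.hash_one(s));
}
```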
@@ -9,12 +9,13 @@
 #include <kunit/test.h>
 #include <linux/errname.h>
 #include <linux/ethtool.h>
+#include <linux/jiffies.h>
 #include <linux/mdio.h>
 #include <linux/phy.h>
-#include <linux/slab.h>
 #include <linux/refcount.h>
-#include <linux/wait.h>
 #include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/wait.h>
 #include <linux/workqueue.h>
 /* `bindgen` gets confused at certain things. */
...
@@ -35,7 +35,7 @@ unsafe fn krealloc_aligned(ptr: *mut u8, new_layout: Layout, flags: bindings::gf
     // - `ptr` is either null or a pointer returned from a previous `k{re}alloc()` by the
     //   function safety requirement.
     // - `size` is greater than 0 since it's either a `layout.size()` (which cannot be zero
     //   according to the function safety requirement) or a result from `next_power_of_two()`.
     unsafe { bindings::krealloc(ptr as *const core::ffi::c_void, size, flags) as *mut u8 }
 }
...
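The SAFETY comment above leans on `next_power_of_two()` returning a non-zero size for a non-zero input. A userspace sketch of the size-rounding idea for over-aligned layouts (the `8`-byte threshold below is a stand-in for the real, arch-specific minimum slab alignment, and `krealloc_size` is a hypothetical helper, not the kernel API):

```rust
// Sketch: slab allocations of power-of-two size are naturally aligned to that
// size, so rounding the size up is one way to satisfy a large alignment.
fn krealloc_size(size: usize, align: usize) -> usize {
    if align > 8 {
        // Hypothetical threshold standing in for the minimum slab alignment.
        size.next_power_of_two()
    } else {
        size
    }
}

fn main() {
    assert_eq!(krealloc_size(24, 4), 24); // small alignment: size unchanged
    assert_eq!(krealloc_size(24, 16), 32); // rounded up to a power of two
    assert_eq!(1usize.next_power_of_two(), 1); // non-zero in, non-zero out
}
```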
@@ -264,13 +264,9 @@ pub fn to_result(err: core::ffi::c_int) -> Result {
 ///     pdev: &mut PlatformDevice,
 ///     index: u32,
 /// ) -> Result<*mut core::ffi::c_void> {
-///     // SAFETY: FFI call.
-///     unsafe {
-///         from_err_ptr(bindings::devm_platform_ioremap_resource(
-///             pdev.to_ptr(),
-///             index,
-///         ))
-///     }
+///     // SAFETY: `pdev` points to a valid platform device. There are no safety requirements
+///     // on `index`.
+///     from_err_ptr(unsafe { bindings::devm_platform_ioremap_resource(pdev.to_ptr(), index) })
 /// }
 /// ```
 // TODO: Remove `dead_code` marker once an in-kernel client is available.
...
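`from_err_ptr` follows the kernel's `ERR_PTR` convention: the top `MAX_ERRNO` bytes of the address space encode negated errnos, so a single pointer-sized return value can carry either a pointer or an error. A userspace sketch of the decoding (the constant and signature here are illustrative, not the real `kernel::error` API):

```rust
const MAX_ERRNO: usize = 4095;

// Sketch: pointers landing in the last MAX_ERRNO bytes of the address space
// are really negated errnos smuggled through the pointer type.
fn from_err_ptr(ptr: *mut u8) -> Result<*mut u8, i32> {
    let addr = ptr as usize;
    if addr > usize::MAX - MAX_ERRNO {
        // Reinterpret the address as the negative errno it encodes.
        Err(addr as isize as i32)
    } else {
        Ok(ptr)
    }
}

fn main() {
    // What C's ERR_PTR(-EINVAL) yields, reinterpreted as an address.
    let einval = -22isize as usize as *mut u8;
    assert_eq!(from_err_ptr(einval), Err(-22));

    let p = 0x1000usize as *mut u8;
    assert_eq!(from_err_ptr(p), Ok(p));
}
```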
@@ -36,7 +36,7 @@
 //!
 //! ```rust
 //! # #![allow(clippy::disallowed_names)]
-//! use kernel::{prelude::*, sync::Mutex, new_mutex};
+//! use kernel::sync::{new_mutex, Mutex};
 //! # use core::pin::Pin;
 //! #[pin_data]
 //! struct Foo {
@@ -56,7 +56,7 @@
 //!
 //! ```rust
 //! # #![allow(clippy::disallowed_names)]
-//! # use kernel::{prelude::*, sync::Mutex, new_mutex};
+//! # use kernel::sync::{new_mutex, Mutex};
 //! # use core::pin::Pin;
 //! # #[pin_data]
 //! # struct Foo {
@@ -79,7 +79,7 @@
 //! above method only works for types where you can access the fields.
 //!
 //! ```rust
-//! # use kernel::{new_mutex, sync::{Arc, Mutex}};
+//! # use kernel::sync::{new_mutex, Arc, Mutex};
 //! let mtx: Result<Arc<Mutex<usize>>> = Arc::pin_init(new_mutex!(42, "example::mtx"));
 //! ```
 //!
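The kernel doctests above need `Arc::pin_init` because kernel locks embed self-referential wait queues that must never move after initialization. Ordinary `std` types carry no such constraint, so the userspace analog of the `Arc<Mutex<usize>>` example is simply:

```rust
use std::sync::{Arc, Mutex};

fn main() {
    // No pinning needed: std's Mutex is freely movable.
    let mtx: Arc<Mutex<usize>> = Arc::new(Mutex::new(42));
    *mtx.lock().unwrap() += 1;
    assert_eq!(*mtx.lock().unwrap(), 43);
}
```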
...@@ -751,10 +751,10 @@ macro_rules! try_init { ...@@ -751,10 +751,10 @@ macro_rules! try_init {
/// ///
/// # Safety /// # Safety
/// ///
/// When implementing this type you will need to take great care. Also there are probably very few /// When implementing this trait you will need to take great care. Also there are probably very few
/// cases where a manual implementation is necessary. Use [`pin_init_from_closure`] where possible. /// cases where a manual implementation is necessary. Use [`pin_init_from_closure`] where possible.
/// ///
/// The [`PinInit::__pinned_init`] function /// The [`PinInit::__pinned_init`] function:
/// - returns `Ok(())` if it initialized every field of `slot`, /// - returns `Ok(())` if it initialized every field of `slot`,
/// - returns `Err(err)` if it encountered an error and then cleaned `slot`, this means: /// - returns `Err(err)` if it encountered an error and then cleaned `slot`, this means:
/// - `slot` can be deallocated without UB occurring, /// - `slot` can be deallocated without UB occurring,
@@ -861,10 +861,10 @@ unsafe fn __pinned_init(self, slot: *mut T) -> Result<(), E> {
///
/// # Safety
///
-/// When implementing this type you will need to take great care. Also there are probably very few
+/// When implementing this trait you will need to take great care. Also there are probably very few
/// cases where a manual implementation is necessary. Use [`init_from_closure`] where possible.
///
-/// The [`Init::__init`] function
+/// The [`Init::__init`] function:
/// - returns `Ok(())` if it initialized every field of `slot`,
/// - returns `Err(err)` if it encountered an error and then cleaned `slot`, this means:
/// - `slot` can be deallocated without UB occurring,
@@ -1013,7 +1013,7 @@ pub fn uninit<T, E>() -> impl Init<MaybeUninit<T>, E> {
///
/// ```rust
/// use kernel::{error::Error, init::init_array_from_fn};
-/// let array: Box<[usize; 1_000]>= Box::init::<Error>(init_array_from_fn(|i| i)).unwrap();
+/// let array: Box<[usize; 1_000]> = Box::init::<Error>(init_array_from_fn(|i| i)).unwrap();
/// assert_eq!(array.len(), 1_000);
/// ```
pub fn init_array_from_fn<I, const N: usize, T, E>(
@@ -1027,7 +1027,7 @@ pub fn init_array_from_fn<I, const N: usize, T, E>(
// Counts the number of initialized elements and when dropped drops that many elements from
// `slot`.
let mut init_count = ScopeGuard::new_with_data(0, |i| {
-// We now free every element that has been initialized before:
+// We now free every element that has been initialized before.
// SAFETY: The loop initialized exactly the values from 0..i and since we
// return `Err` below, the caller will consider the memory at `slot` as
// uninitialized.
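The cleanup strategy described in the comments above — count how many elements are initialized, and on the error path drop exactly those — can be sketched in standalone Rust. This is a minimal illustration of the pattern, not the kernel's `ScopeGuard`-based implementation, and `try_init_array` is a hypothetical name:

```rust
use std::mem::MaybeUninit;

// Initialize an array element by element from a fallible closure. On failure,
// drop exactly the elements initialized so far, mirroring the partial-cleanup
// logic of `init_array_from_fn`.
fn try_init_array<const N: usize>(
    mut f: impl FnMut(usize) -> Result<String, ()>,
) -> Result<[String; N], ()> {
    let mut slot: [MaybeUninit<String>; N] = std::array::from_fn(|_| MaybeUninit::uninit());
    let mut init_count = 0;
    for i in 0..N {
        match f(i) {
            Ok(v) => {
                slot[i].write(v);
                init_count += 1;
            }
            Err(e) => {
                // Free every element that has been initialized before.
                for s in &mut slot[..init_count] {
                    // SAFETY: The loop initialized exactly the first
                    // `init_count` elements.
                    unsafe { s.assume_init_drop() };
                }
                return Err(e);
            }
        }
    }
    // SAFETY: Every element was initialized in the loop above.
    Ok(slot.map(|s| unsafe { s.assume_init() }))
}
```

The counter plays the role the diff gives to `init_count` inside the `ScopeGuard` closure: it makes the error path drop-safe without ever touching uninitialized memory.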
@@ -1056,7 +1056,7 @@ pub fn init_array_from_fn<I, const N: usize, T, E>(
///
/// ```rust
/// use kernel::{sync::{Arc, Mutex}, init::pin_init_array_from_fn, new_mutex};
-/// let array: Arc<[Mutex<usize>; 1_000]>=
+/// let array: Arc<[Mutex<usize>; 1_000]> =
/// Arc::pin_init(pin_init_array_from_fn(|i| new_mutex!(i))).unwrap();
/// assert_eq!(array.len(), 1_000);
/// ```
@@ -1071,7 +1071,7 @@ pub fn pin_init_array_from_fn<I, const N: usize, T, E>(
// Counts the number of initialized elements and when dropped drops that many elements from
// `slot`.
let mut init_count = ScopeGuard::new_with_data(0, |i| {
-// We now free every element that has been initialized before:
+// We now free every element that has been initialized before.
// SAFETY: The loop initialized exactly the values from 0..i and since we
// return `Err` below, the caller will consider the memory at `slot` as
// uninitialized.
...
// SPDX-License-Identifier: GPL-2.0

-//! ioctl() number definitions
+//! `ioctl()` number definitions.
//!
//! C header: [`include/asm-generic/ioctl.h`](srctree/include/asm-generic/ioctl.h)
@@ -28,13 +28,13 @@ pub const fn _IO(ty: u32, nr: u32) -> u32 {
_IOC(uapi::_IOC_NONE, ty, nr, 0)
}

-/// Build an ioctl number for an read-only ioctl.
+/// Build an ioctl number for a read-only ioctl.
#[inline(always)]
pub const fn _IOR<T>(ty: u32, nr: u32) -> u32 {
_IOC(uapi::_IOC_READ, ty, nr, core::mem::size_of::<T>())
}

-/// Build an ioctl number for an write-only ioctl.
+/// Build an ioctl number for a write-only ioctl.
#[inline(always)]
pub const fn _IOW<T>(ty: u32, nr: u32) -> u32 {
_IOC(uapi::_IOC_WRITE, ty, nr, core::mem::size_of::<T>())
...
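The `_IOR`/`_IOW` helpers above pack a direction, a "magic" type byte, a command number, and the payload size into one `u32` via `_IOC`. A userspace sketch of that bit layout follows; the shift and direction constants are the asm-generic defaults (most architectures use them), and should be treated as assumptions of this illustration rather than a portable guarantee:

```rust
// Bit layout of an ioctl number, following include/asm-generic/ioctl.h:
// | dir (2 bits) | size (14 bits) | type (8 bits) | nr (8 bits) |
const IOC_NRSHIFT: u32 = 0;
const IOC_TYPESHIFT: u32 = 8;
const IOC_SIZESHIFT: u32 = 16;
const IOC_DIRSHIFT: u32 = 30;
const IOC_READ: u32 = 2;
const IOC_WRITE: u32 = 1;

const fn ioc(dir: u32, ty: u32, nr: u32, size: usize) -> u32 {
    (dir << IOC_DIRSHIFT)
        | ((size as u32) << IOC_SIZESHIFT)
        | (ty << IOC_TYPESHIFT)
        | (nr << IOC_NRSHIFT)
}

// Mirrors `_IOR<T>`: a read-only ioctl whose size comes from the type.
const fn ior<T>(ty: u32, nr: u32) -> u32 {
    ioc(IOC_READ, ty, nr, core::mem::size_of::<T>())
}

// Mirrors `_IOW<T>`: a write-only ioctl whose size comes from the type.
const fn iow<T>(ty: u32, nr: u32) -> u32 {
    ioc(IOC_WRITE, ty, nr, core::mem::size_of::<T>())
}
```

Taking the size from a type parameter, as the kernel helpers do, keeps the declared payload size in sync with the struct actually passed across the ioctl boundary.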
@@ -14,11 +14,9 @@
#![no_std]
#![feature(allocator_api)]
#![feature(coerce_unsized)]
-#![feature(const_maybe_uninit_zeroed)]
#![feature(dispatch_from_dyn)]
#![feature(new_uninit)]
#![feature(offset_of)]
-#![feature(ptr_metadata)]
#![feature(receiver_trait)]
#![feature(unsize)]
@@ -49,6 +47,7 @@
pub mod str;
pub mod sync;
pub mod task;
+pub mod time;
pub mod types;
pub mod workqueue;
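The new `time` module starts with a `msecs_to_jiffies()` conversion (per the tag description, for use by Rust Binder). A standalone sketch of the arithmetic such a conversion needs: divide by the tick period and round up, so a requested sleep is never shortened. `HZ` is an assumption here; the kernel takes it from its build configuration:

```rust
type Jiffies = u64;

// Assumed tick rate for this illustration only.
const HZ: u64 = 250;

fn msecs_to_jiffies(msecs: u64) -> Jiffies {
    // Round up: a partial tick must still delay for a whole tick.
    msecs.saturating_mul(HZ).div_ceil(1000)
}
```

With `HZ = 250`, one jiffy is 4 ms, so 5 ms must become 2 jiffies rather than 1 — rounding down would return control early.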
@@ -78,7 +77,7 @@ pub trait Module: Sized + Sync {
/// Equivalent to `THIS_MODULE` in the C API.
///
-/// C header: `include/linux/export.h`
+/// C header: [`include/linux/export.h`](srctree/include/linux/export.h)
pub struct ThisModule(*mut bindings::module);

// SAFETY: `THIS_MODULE` may be used from all threads within a module.
@@ -102,3 +101,35 @@ fn panic(info: &core::panic::PanicInfo<'_>) -> ! {
// SAFETY: FFI call.
unsafe { bindings::BUG() };
}
/// Produces a pointer to an object from a pointer to one of its fields.
///
/// # Safety
///
/// The pointer passed to this macro, and the pointer returned by this macro, must both be in
/// bounds of the same allocation.
///
/// # Examples
///
/// ```
/// # use kernel::container_of;
/// struct Test {
/// a: u64,
/// b: u32,
/// }
///
/// let test = Test { a: 10, b: 20 };
/// let b_ptr = &test.b;
/// // SAFETY: The pointer points at the `b` field of a `Test`, so the resulting pointer will be
/// // in-bounds of the same allocation as `b_ptr`.
/// let test_alias = unsafe { container_of!(b_ptr, Test, b) };
/// assert!(core::ptr::eq(&test, test_alias));
/// ```
#[macro_export]
macro_rules! container_of {
($ptr:expr, $type:ty, $($f:tt)*) => {{
let ptr = $ptr as *const _ as *const u8;
let offset: usize = ::core::mem::offset_of!($type, $($f)*);
ptr.sub(offset) as *const $type
}}
}
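The macro above subtracts the field's byte offset from the field pointer to recover the containing struct. The same arithmetic can be demonstrated in standalone Rust; to keep the sketch free of `offset_of!` (and of any kernel API), the offset is computed from a concrete instance, and `container_of_b` is a hypothetical helper name:

```rust
struct Test {
    a: u64,
    b: u32,
}

// Recover a pointer to the containing `Test` from a pointer to its `b` field
// by walking back the field's byte offset, as `container_of!` does.
fn container_of_b(b_ptr: *const u32, offset_of_b: usize) -> *const Test {
    (b_ptr as *const u8).wrapping_sub(offset_of_b) as *const Test
}
```

As the macro's safety comment says, this is only sound when both the input and the result stay in bounds of the same allocation — here guaranteed because the field pointer really does come from a live `Test`.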
@@ -13,9 +13,102 @@
};

/// Byte string without UTF-8 validity guarantee.
-///
-/// `BStr` is simply an alias to `[u8]`, but has a more evident semantical meaning.
-pub type BStr = [u8];
+#[repr(transparent)]
+pub struct BStr([u8]);
impl BStr {
/// Returns the length of this string.
#[inline]
pub const fn len(&self) -> usize {
self.0.len()
}
/// Returns `true` if the string is empty.
#[inline]
pub const fn is_empty(&self) -> bool {
self.len() == 0
}
/// Creates a [`BStr`] from a `[u8]`.
#[inline]
pub const fn from_bytes(bytes: &[u8]) -> &Self {
// SAFETY: `BStr` is transparent to `[u8]`.
unsafe { &*(bytes as *const [u8] as *const BStr) }
}
}
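`BStr::from_bytes` relies on `#[repr(transparent)]`: a transparent newtype over `[u8]` has the same layout as `[u8]`, so a `&[u8]` can be reinterpreted as a reference to the wrapper. A minimal standalone sketch of that cast (with a hypothetical `Wrapper` type standing in for `BStr`):

```rust
#[repr(transparent)]
struct Wrapper([u8]);

impl Wrapper {
    fn from_bytes(bytes: &[u8]) -> &Self {
        // SAFETY: `Wrapper` is `#[repr(transparent)]` over `[u8]`, so the
        // pointer cast preserves layout, size, and provenance.
        unsafe { &*(bytes as *const [u8] as *const Wrapper) }
    }
}
```

This is the standard pattern for zero-cost newtype views over slices; turning `BStr` from a type alias into such a struct is what lets the patch add inherent methods and `Display`/`Debug` impls.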
impl fmt::Display for BStr {
/// Formats printable ASCII characters, escaping the rest.
///
/// ```
/// # use kernel::{fmt, b_str, str::{BStr, CString}};
/// let ascii = b_str!("Hello, BStr!");
/// let s = CString::try_from_fmt(fmt!("{}", ascii)).unwrap();
/// assert_eq!(s.as_bytes(), "Hello, BStr!".as_bytes());
///
/// let non_ascii = b_str!("🦀");
/// let s = CString::try_from_fmt(fmt!("{}", non_ascii)).unwrap();
/// assert_eq!(s.as_bytes(), "\\xf0\\x9f\\xa6\\x80".as_bytes());
/// ```
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
for &b in &self.0 {
match b {
// Common escape codes.
b'\t' => f.write_str("\\t")?,
b'\n' => f.write_str("\\n")?,
b'\r' => f.write_str("\\r")?,
// Printable characters.
0x20..=0x7e => f.write_char(b as char)?,
_ => write!(f, "\\x{:02x}", b)?,
}
}
Ok(())
}
}
impl fmt::Debug for BStr {
/// Formats printable ASCII characters with a double quote on either end,
/// escaping the rest.
///
/// ```
/// # use kernel::{fmt, b_str, str::{BStr, CString}};
/// // Embedded double quotes are escaped.
/// let ascii = b_str!("Hello, \"BStr\"!");
/// let s = CString::try_from_fmt(fmt!("{:?}", ascii)).unwrap();
/// assert_eq!(s.as_bytes(), "\"Hello, \\\"BStr\\\"!\"".as_bytes());
///
/// let non_ascii = b_str!("😺");
/// let s = CString::try_from_fmt(fmt!("{:?}", non_ascii)).unwrap();
/// assert_eq!(s.as_bytes(), "\"\\xf0\\x9f\\x98\\xba\"".as_bytes());
/// ```
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.write_char('"')?;
for &b in &self.0 {
match b {
// Common escape codes.
b'\t' => f.write_str("\\t")?,
b'\n' => f.write_str("\\n")?,
b'\r' => f.write_str("\\r")?,
// String escape characters.
b'\"' => f.write_str("\\\"")?,
b'\\' => f.write_str("\\\\")?,
// Printable characters.
0x20..=0x7e => f.write_char(b as char)?,
_ => write!(f, "\\x{:02x}", b)?,
}
}
f.write_char('"')
}
}
impl Deref for BStr {
type Target = [u8];
#[inline]
fn deref(&self) -> &Self::Target {
&self.0
}
}
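The `Display` and `Debug` impls above share one escaping scheme: printable ASCII passes through, a few control characters get short escapes, and everything else becomes `\xNN`. The same logic can be exercised as a standalone function (a sketch mirroring the impl, not the kernel code itself):

```rust
use std::fmt::Write;

// Escape a byte string the way the new `Display for BStr` does: printable
// ASCII verbatim, common escapes for tab/newline/carriage return, and a
// `\xNN` hex escape for every other byte.
fn escape_bytes(bytes: &[u8]) -> String {
    let mut out = String::new();
    for &b in bytes {
        match b {
            b'\t' => out.push_str("\\t"),
            b'\n' => out.push_str("\\n"),
            b'\r' => out.push_str("\\r"),
            0x20..=0x7e => out.push(b as char),
            _ => write!(out, "\\x{:02x}", b).unwrap(),
        }
    }
    out
}
```

Note the output is always valid ASCII regardless of input, which is exactly why `BStr` can implement `Display` without any UTF-8 guarantee on its contents.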
/// Creates a new [`BStr`] from a string literal.
///
@@ -33,7 +126,7 @@
macro_rules! b_str {
($str:literal) => {{
const S: &'static str = $str;
-const C: &'static $crate::str::BStr = S.as_bytes();
+const C: &'static $crate::str::BStr = $crate::str::BStr::from_bytes(S.as_bytes());
C
}};
}
@@ -149,13 +242,13 @@ pub const fn as_char_ptr(&self) -> *const core::ffi::c_char {
self.0.as_ptr() as _
}

-/// Convert the string to a byte slice without the trailing 0 byte.
+/// Convert the string to a byte slice without the trailing `NUL` byte.
#[inline]
pub fn as_bytes(&self) -> &[u8] {
&self.0[..self.len()]
}

-/// Convert the string to a byte slice containing the trailing 0 byte.
+/// Convert the string to a byte slice containing the trailing `NUL` byte.
#[inline]
pub const fn as_bytes_with_nul(&self) -> &[u8] {
&self.0
@@ -191,9 +284,9 @@ pub fn to_str(&self) -> Result<&str, core::str::Utf8Error> {
/// ```
/// # use kernel::c_str;
/// # use kernel::str::CStr;
-/// let bar = c_str!("ツ");
/// // SAFETY: String literals are guaranteed to be valid UTF-8
/// // by the Rust compiler.
+/// let bar = c_str!("ツ");
/// assert_eq!(unsafe { bar.as_str_unchecked() }, "ツ");
/// ```
#[inline]
@@ -271,7 +364,7 @@ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
impl AsRef<BStr> for CStr {
#[inline]
fn as_ref(&self) -> &BStr {
-self.as_bytes()
+BStr::from_bytes(self.as_bytes())
}
}
@@ -280,7 +373,7 @@ impl Deref for CStr {
#[inline]
fn deref(&self) -> &Self::Target {
-self.as_bytes()
+self.as_ref()
}
}
@@ -327,7 +420,7 @@ impl<Idx> Index<Idx> for CStr
#[inline]
fn index(&self, index: Idx) -> &Self::Output {
-&self.as_bytes()[index]
+&self.as_ref()[index]
}
}
@@ -357,6 +450,21 @@ macro_rules! c_str {
#[cfg(test)]
mod tests {
use super::*;
use alloc::format;
const ALL_ASCII_CHARS: &'static str =
"\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09\\x0a\\x0b\\x0c\\x0d\\x0e\\x0f\
\\x10\\x11\\x12\\x13\\x14\\x15\\x16\\x17\\x18\\x19\\x1a\\x1b\\x1c\\x1d\\x1e\\x1f \
!\"#$%&'()*+,-./0123456789:;<=>?@\
ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\\x7f\
\\x80\\x81\\x82\\x83\\x84\\x85\\x86\\x87\\x88\\x89\\x8a\\x8b\\x8c\\x8d\\x8e\\x8f\
\\x90\\x91\\x92\\x93\\x94\\x95\\x96\\x97\\x98\\x99\\x9a\\x9b\\x9c\\x9d\\x9e\\x9f\
\\xa0\\xa1\\xa2\\xa3\\xa4\\xa5\\xa6\\xa7\\xa8\\xa9\\xaa\\xab\\xac\\xad\\xae\\xaf\
\\xb0\\xb1\\xb2\\xb3\\xb4\\xb5\\xb6\\xb7\\xb8\\xb9\\xba\\xbb\\xbc\\xbd\\xbe\\xbf\
\\xc0\\xc1\\xc2\\xc3\\xc4\\xc5\\xc6\\xc7\\xc8\\xc9\\xca\\xcb\\xcc\\xcd\\xce\\xcf\
\\xd0\\xd1\\xd2\\xd3\\xd4\\xd5\\xd6\\xd7\\xd8\\xd9\\xda\\xdb\\xdc\\xdd\\xde\\xdf\
\\xe0\\xe1\\xe2\\xe3\\xe4\\xe5\\xe6\\xe7\\xe8\\xe9\\xea\\xeb\\xec\\xed\\xee\\xef\
\\xf0\\xf1\\xf2\\xf3\\xf4\\xf5\\xf6\\xf7\\xf8\\xf9\\xfa\\xfb\\xfc\\xfd\\xfe\\xff";
#[test]
fn test_cstr_to_str() {
@@ -381,6 +489,69 @@ fn test_cstr_as_str_unchecked() {
let unchecked_str = unsafe { checked_cstr.as_str_unchecked() };
assert_eq!(unchecked_str, "🐧");
}
#[test]
fn test_cstr_display() {
let hello_world = CStr::from_bytes_with_nul(b"hello, world!\0").unwrap();
assert_eq!(format!("{}", hello_world), "hello, world!");
let non_printables = CStr::from_bytes_with_nul(b"\x01\x09\x0a\0").unwrap();
assert_eq!(format!("{}", non_printables), "\\x01\\x09\\x0a");
let non_ascii = CStr::from_bytes_with_nul(b"d\xe9j\xe0 vu\0").unwrap();
assert_eq!(format!("{}", non_ascii), "d\\xe9j\\xe0 vu");
let good_bytes = CStr::from_bytes_with_nul(b"\xf0\x9f\xa6\x80\0").unwrap();
assert_eq!(format!("{}", good_bytes), "\\xf0\\x9f\\xa6\\x80");
}
#[test]
fn test_cstr_display_all_bytes() {
let mut bytes: [u8; 256] = [0; 256];
// fill `bytes` with [1..=255] + [0]
for i in u8::MIN..=u8::MAX {
bytes[i as usize] = i.wrapping_add(1);
}
let cstr = CStr::from_bytes_with_nul(&bytes).unwrap();
assert_eq!(format!("{}", cstr), ALL_ASCII_CHARS);
}
#[test]
fn test_cstr_debug() {
let hello_world = CStr::from_bytes_with_nul(b"hello, world!\0").unwrap();
assert_eq!(format!("{:?}", hello_world), "\"hello, world!\"");
let non_printables = CStr::from_bytes_with_nul(b"\x01\x09\x0a\0").unwrap();
assert_eq!(format!("{:?}", non_printables), "\"\\x01\\x09\\x0a\"");
let non_ascii = CStr::from_bytes_with_nul(b"d\xe9j\xe0 vu\0").unwrap();
assert_eq!(format!("{:?}", non_ascii), "\"d\\xe9j\\xe0 vu\"");
let good_bytes = CStr::from_bytes_with_nul(b"\xf0\x9f\xa6\x80\0").unwrap();
assert_eq!(format!("{:?}", good_bytes), "\"\\xf0\\x9f\\xa6\\x80\"");
}
#[test]
fn test_bstr_display() {
let hello_world = BStr::from_bytes(b"hello, world!");
assert_eq!(format!("{}", hello_world), "hello, world!");
let escapes = BStr::from_bytes(b"_\t_\n_\r_\\_\'_\"_");
assert_eq!(format!("{}", escapes), "_\\t_\\n_\\r_\\_'_\"_");
let others = BStr::from_bytes(b"\x01");
assert_eq!(format!("{}", others), "\\x01");
let non_ascii = BStr::from_bytes(b"d\xe9j\xe0 vu");
assert_eq!(format!("{}", non_ascii), "d\\xe9j\\xe0 vu");
let good_bytes = BStr::from_bytes(b"\xf0\x9f\xa6\x80");
assert_eq!(format!("{}", good_bytes), "\\xf0\\x9f\\xa6\\x80");
}
#[test]
fn test_bstr_debug() {
let hello_world = BStr::from_bytes(b"hello, world!");
assert_eq!(format!("{:?}", hello_world), "\"hello, world!\"");
let escapes = BStr::from_bytes(b"_\t_\n_\r_\\_\'_\"_");
assert_eq!(format!("{:?}", escapes), "\"_\\t_\\n_\\r_\\\\_'_\\\"_\"");
let others = BStr::from_bytes(b"\x01");
assert_eq!(format!("{:?}", others), "\"\\x01\"");
let non_ascii = BStr::from_bytes(b"d\xe9j\xe0 vu");
assert_eq!(format!("{:?}", non_ascii), "\"d\\xe9j\\xe0 vu\"");
let good_bytes = BStr::from_bytes(b"\xf0\x9f\xa6\x80");
assert_eq!(format!("{:?}", good_bytes), "\"\\xf0\\x9f\\xa6\\x80\"");
}
}

/// Allows formatting of [`fmt::Arguments`] into a raw buffer.
@@ -449,7 +620,7 @@ pub(crate) fn pos(&self) -> *mut u8 {
self.pos as _
}

-/// Return the number of bytes written to the formatter.
+/// Returns the number of bytes written to the formatter.
pub(crate) fn bytes_written(&self) -> usize {
self.pos - self.beg
}
...
@@ -13,8 +13,9 @@
mod locked_by;

pub use arc::{Arc, ArcBorrow, UniqueArc};
-pub use condvar::CondVar;
-pub use lock::{mutex::Mutex, spinlock::SpinLock};
+pub use condvar::{new_condvar, CondVar, CondVarTimeoutResult};
+pub use lock::mutex::{new_mutex, Mutex};
+pub use lock::spinlock::{new_spinlock, SpinLock};
pub use locked_by::LockedBy;

/// Represents a lockdep class. It's a wrapper around C's `lock_class_key`.
...
@@ -30,7 +30,7 @@
mem::{ManuallyDrop, MaybeUninit},
ops::{Deref, DerefMut},
pin::Pin,
-ptr::{NonNull, Pointee},
+ptr::NonNull,
};
use macros::pin_data;
@@ -56,7 +56,7 @@
/// b: u32,
/// }
///
-/// // Create a ref-counted instance of `Example`.
+/// // Create a refcounted instance of `Example`.
/// let obj = Arc::try_new(Example { a: 10, b: 20 })?;
///
/// // Get a new pointer to `obj` and increment the refcount.
@@ -239,22 +239,20 @@ pub unsafe fn from_raw(ptr: *const T) -> Self {
// binary, so its layout is not so large that it can trigger arithmetic overflow.
let val_offset = unsafe { refcount_layout.extend(val_layout).unwrap_unchecked().1 };

-let metadata: <T as Pointee>::Metadata = core::ptr::metadata(ptr);
-// SAFETY: The metadata of `T` and `ArcInner<T>` is the same because `ArcInner` is a struct
-// with `T` as its last field.
+// Pointer casts leave the metadata unchanged. This is okay because the metadata of `T` and
+// `ArcInner<T>` is the same since `ArcInner` is a struct with `T` as its last field.
//
// This is documented at:
// <https://doc.rust-lang.org/std/ptr/trait.Pointee.html>.
-let metadata: <ArcInner<T> as Pointee>::Metadata =
-unsafe { core::mem::transmute_copy(&metadata) };
+let ptr = ptr as *const ArcInner<T>;

// SAFETY: The pointer is in-bounds of an allocation both before and after offsetting the
// pointer, since it originates from a previous call to `Arc::into_raw` and is still valid.
-let ptr = unsafe { (ptr as *mut u8).sub(val_offset) as *mut () };
-let ptr = core::ptr::from_raw_parts_mut(ptr, metadata);
+let ptr = unsafe { ptr.byte_sub(val_offset) };

// SAFETY: By the safety requirements we know that `ptr` came from `Arc::into_raw`, so the
// reference count held then will be owned by the new `Arc` object.
-unsafe { Self::from_inner(NonNull::new_unchecked(ptr)) }
+unsafe { Self::from_inner(NonNull::new_unchecked(ptr.cast_mut())) }
}
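The rewritten `from_raw` replaces the unstable metadata round-trip with a cast followed by `byte_sub`: casting the pointer keeps any metadata, and the now-stable `byte_sub` steps back to the start of the enclosing struct. A standalone sketch of the same pattern, with a hypothetical `Inner` container in place of `ArcInner<T>`:

```rust
#[repr(C)]
struct Inner {
    refcount: u64,
    data: u32,
}

/// Recovers a pointer to the containing `Inner` from a pointer to its `data`
/// field, as `Arc::from_raw` now does with `byte_sub`.
///
/// # Safety
///
/// `ptr` must point at the `data` field of a live `Inner`.
unsafe fn inner_from_data(ptr: *const u32) -> *const Inner {
    // Under `#[repr(C)]`, `data` sits right after the 8-byte `refcount`.
    let off = core::mem::size_of::<u64>();
    // SAFETY: Per the caller's contract, stepping back `off` bytes stays
    // within the same allocation.
    unsafe { ptr.cast::<Inner>().byte_sub(off) }
}
```

Casting before offsetting is the key move: `byte_sub` on the already-cast pointer avoids ever detaching and reattaching pointer metadata, which is what let the patch drop the `ptr_metadata` feature.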
/// Returns an [`ArcBorrow`] from the given [`Arc`].
@@ -365,12 +363,12 @@ fn from(item: Pin<UniqueArc<T>>) -> Self {
/// A borrowed reference to an [`Arc`] instance.
///
/// For cases when one doesn't ever need to increment the refcount on the allocation, it is simpler
-/// to use just `&T`, which we can trivially get from an `Arc<T>` instance.
+/// to use just `&T`, which we can trivially get from an [`Arc<T>`] instance.
///
/// However, when one may need to increment the refcount, it is preferable to use an `ArcBorrow<T>`
/// over `&Arc<T>` because the latter results in a double-indirection: a pointer (shared reference)
-/// to a pointer (`Arc<T>`) to the object (`T`). An [`ArcBorrow`] eliminates this double
-/// indirection while still allowing one to increment the refcount and getting an `Arc<T>` when/if
+/// to a pointer ([`Arc<T>`]) to the object (`T`). An [`ArcBorrow`] eliminates this double
+/// indirection while still allowing one to increment the refcount and getting an [`Arc<T>`] when/if
/// needed.
///
/// # Invariants
@@ -510,7 +508,7 @@ fn deref(&self) -> &Self::Target {
/// # test().unwrap();
/// ```
///
-/// In the following example we first allocate memory for a ref-counted `Example` but we don't
+/// In the following example we first allocate memory for a refcounted `Example` but we don't
/// initialise it on allocation. We do initialise it later with a call to [`UniqueArc::write`],
/// followed by a conversion to `Arc<Example>`. This is particularly useful when allocation happens
/// in one context (e.g., sleepable) and initialisation in another (e.g., atomic):
@@ -560,7 +558,7 @@ impl<T> UniqueArc<T> {
/// Tries to allocate a new [`UniqueArc`] instance.
pub fn try_new(value: T) -> Result<Self, AllocError> {
Ok(Self {
-// INVARIANT: The newly-created object has a ref-count of 1.
+// INVARIANT: The newly-created object has a refcount of 1.
inner: Arc::try_new(value)?,
})
}
@@ -574,7 +572,7 @@ pub fn try_new_uninit() -> Result<UniqueArc<MaybeUninit<T>>, AllocError> {
data <- init::uninit::<T, AllocError>(),
}? AllocError))?;
Ok(UniqueArc {
-// INVARIANT: The newly-created object has a ref-count of 1.
+// INVARIANT: The newly-created object has a refcount of 1.
// SAFETY: The pointer from the `Box` is valid.
inner: unsafe { Arc::from_inner(Box::leak(inner).into()) },
})
...
@@ -6,8 +6,18 @@
//! variable.

use super::{lock::Backend, lock::Guard, LockClassKey};
-use crate::{bindings, init::PinInit, pin_init, str::CStr, types::Opaque};
+use crate::{
+bindings,
+init::PinInit,
+pin_init,
+str::CStr,
+task::{MAX_SCHEDULE_TIMEOUT, TASK_INTERRUPTIBLE, TASK_NORMAL, TASK_UNINTERRUPTIBLE},
+time::Jiffies,
+types::Opaque,
+};
+use core::ffi::{c_int, c_long};
use core::marker::PhantomPinned;
+use core::ptr;
use macros::pin_data;

/// Creates a [`CondVar`] initialiser with the given name and a newly-created lock class.
@@ -17,6 +27,7 @@ macro_rules! new_condvar {
$crate::sync::CondVar::new($crate::optional_name!($($name)?), $crate::static_lock_class!())
};
}
+pub use new_condvar;
/// A conditional variable.
///
@@ -34,8 +45,7 @@ macro_rules! new_condvar {
/// The following is an example of using a condvar with a mutex:
///
/// ```
-/// use kernel::sync::{CondVar, Mutex};
-/// use kernel::{new_condvar, new_mutex};
+/// use kernel::sync::{new_condvar, new_mutex, CondVar, Mutex};
///
/// #[pin_data]
/// pub struct Example {
#[pin_data] #[pin_data]
pub struct CondVar { pub struct CondVar {
#[pin] #[pin]
pub(crate) wait_list: Opaque<bindings::wait_queue_head>, pub(crate) wait_queue_head: Opaque<bindings::wait_queue_head>,
/// A condvar needs to be pinned because it contains a [`struct list_head`] that is /// A condvar needs to be pinned because it contains a [`struct list_head`] that is
/// self-referential, so it cannot be safely moved once it is initialised. /// self-referential, so it cannot be safely moved once it is initialised.
///
/// [`struct list_head`]: srctree/include/linux/types.h
#[pin] #[pin]
_pin: PhantomPinned, _pin: PhantomPinned,
} }
...@@ -96,28 +108,35 @@ pub fn new(name: &'static CStr, key: &'static LockClassKey) -> impl PinInit<Self ...@@ -96,28 +108,35 @@ pub fn new(name: &'static CStr, key: &'static LockClassKey) -> impl PinInit<Self
_pin: PhantomPinned, _pin: PhantomPinned,
// SAFETY: `slot` is valid while the closure is called and both `name` and `key` have // SAFETY: `slot` is valid while the closure is called and both `name` and `key` have
// static lifetimes so they live indefinitely. // static lifetimes so they live indefinitely.
wait_list <- Opaque::ffi_init(|slot| unsafe { wait_queue_head <- Opaque::ffi_init(|slot| unsafe {
bindings::__init_waitqueue_head(slot, name.as_char_ptr(), key.as_ptr()) bindings::__init_waitqueue_head(slot, name.as_char_ptr(), key.as_ptr())
}), }),
}) })
} }
-fn wait_internal<T: ?Sized, B: Backend>(&self, wait_state: u32, guard: &mut Guard<'_, T, B>) {
+fn wait_internal<T: ?Sized, B: Backend>(
+&self,
+wait_state: c_int,
+guard: &mut Guard<'_, T, B>,
+timeout_in_jiffies: c_long,
+) -> c_long {
let wait = Opaque::<bindings::wait_queue_entry>::uninit();

// SAFETY: `wait` points to valid memory.
unsafe { bindings::init_wait(wait.get()) };

-// SAFETY: Both `wait` and `wait_list` point to valid memory.
+// SAFETY: Both `wait` and `wait_queue_head` point to valid memory.
unsafe {
-bindings::prepare_to_wait_exclusive(self.wait_list.get(), wait.get(), wait_state as _)
+bindings::prepare_to_wait_exclusive(self.wait_queue_head.get(), wait.get(), wait_state)
};

-// SAFETY: No arguments, switches to another thread.
-guard.do_unlocked(|| unsafe { bindings::schedule() });
+// SAFETY: Switches to another thread. The timeout can be any number.
+let ret = guard.do_unlocked(|| unsafe { bindings::schedule_timeout(timeout_in_jiffies) });

-// SAFETY: Both `wait` and `wait_list` point to valid memory.
-unsafe { bindings::finish_wait(self.wait_list.get(), wait.get()) };
+// SAFETY: Both `wait` and `wait_queue_head` point to valid memory.
+unsafe { bindings::finish_wait(self.wait_queue_head.get(), wait.get()) };
+
+ret
}
/// Releases the lock and waits for a notification in uninterruptible mode.
@@ -127,7 +146,7 @@ fn wait_internal<T: ?Sized, B: Backend>(&self, wait_state: u32, guard: &mut Guar
/// [`CondVar::notify_one`] or [`CondVar::notify_all`]. Note that it may also wake up
/// spuriously.
pub fn wait<T: ?Sized, B: Backend>(&self, guard: &mut Guard<'_, T, B>) {
-self.wait_internal(bindings::TASK_UNINTERRUPTIBLE, guard);
+self.wait_internal(TASK_UNINTERRUPTIBLE, guard, MAX_SCHEDULE_TIMEOUT);
}

/// Releases the lock and waits for a notification in interruptible mode.
@@ -138,29 +157,60 @@ pub fn wait<T: ?Sized, B: Backend>(&self, guard: &mut Guard<'_, T, B>) {
/// Returns whether there is a signal pending.
#[must_use = "wait_interruptible returns if a signal is pending, so the caller must check the return value"]
pub fn wait_interruptible<T: ?Sized, B: Backend>(&self, guard: &mut Guard<'_, T, B>) -> bool {
-self.wait_internal(bindings::TASK_INTERRUPTIBLE, guard);
+self.wait_internal(TASK_INTERRUPTIBLE, guard, MAX_SCHEDULE_TIMEOUT);
crate::current!().signal_pending()
}
-/// Calls the kernel function to notify the appropriate number of threads with the given flags.
-fn notify(&self, count: i32, flags: u32) {
-// SAFETY: `wait_list` points to valid memory.
+/// Releases the lock and waits for a notification in interruptible mode.
+///
+/// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
+/// thread to sleep. It wakes up when notified by [`CondVar::notify_one`] or
+/// [`CondVar::notify_all`], or when a timeout occurs, or when the thread receives a signal.
+#[must_use = "wait_interruptible_timeout returns if a signal is pending, so the caller must check the return value"]
+pub fn wait_interruptible_timeout<T: ?Sized, B: Backend>(
+&self,
+guard: &mut Guard<'_, T, B>,
+jiffies: Jiffies,
+) -> CondVarTimeoutResult {
+let jiffies = jiffies.try_into().unwrap_or(MAX_SCHEDULE_TIMEOUT);
+let res = self.wait_internal(TASK_INTERRUPTIBLE, guard, jiffies);
+match (res as Jiffies, crate::current!().signal_pending()) {
+(jiffies, true) => CondVarTimeoutResult::Signal { jiffies },
+(0, false) => CondVarTimeoutResult::Timeout,
+(jiffies, false) => CondVarTimeoutResult::Woken { jiffies },
+}
+}
+
+/// Calls the kernel function to notify the appropriate number of threads.
+fn notify(&self, count: c_int) {
+// SAFETY: `wait_queue_head` points to valid memory.
unsafe {
bindings::__wake_up(
-self.wait_list.get(),
+self.wait_queue_head.get(),
-bindings::TASK_NORMAL,
+TASK_NORMAL,
count,
-flags as _,
+ptr::null_mut(),
)
};
}
/// Calls the kernel function to notify one thread synchronously.
///
/// This method behaves like `notify_one`, except that it hints to the scheduler that the
/// current thread is about to go to sleep, so it should schedule the target thread on the same
/// CPU.
pub fn notify_sync(&self) {
// SAFETY: `wait_queue_head` points to valid memory.
unsafe { bindings::__wake_up_sync(self.wait_queue_head.get(), TASK_NORMAL) };
}
    /// Wakes a single waiter up, if any.
    ///
    /// This is not 'sticky' in the sense that if no thread is waiting, the notification is lost
    /// completely (as opposed to automatically waking up the next waiter).
    pub fn notify_one(&self) {
        self.notify(1);
    }

    /// Wakes all waiters up, if any.
@@ -168,6 +218,22 @@ pub fn notify_one(&self) {
    ///
    /// This is not 'sticky' in the sense that if no thread is waiting, the notification is lost
    /// completely (as opposed to automatically waking up the next waiter).
    pub fn notify_all(&self) {
        self.notify(0);
    }
}
/// The return type of `wait_interruptible_timeout`.
pub enum CondVarTimeoutResult {
    /// The timeout was reached.
    Timeout,
    /// Somebody woke us up.
    Woken {
        /// Remaining sleep duration.
        jiffies: Jiffies,
    },
    /// A signal occurred.
    Signal {
        /// Remaining sleep duration.
        jiffies: Jiffies,
    },
}
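A userspace sketch of how a caller is expected to consume this enum; the `Jiffies` alias, the `describe` helper, and the values below are illustrative stand-ins, not kernel API.

```rust
// Illustrative stand-ins: in the kernel, `Jiffies` comes from `kernel::time`
// and the enum from the `CondVar` module.
type Jiffies = u64;

enum CondVarTimeoutResult {
    Timeout,
    Woken { jiffies: Jiffies },
    Signal { jiffies: Jiffies },
}

// A hypothetical consumer: the caller must handle all three outcomes.
fn describe(res: CondVarTimeoutResult) -> String {
    match res {
        CondVarTimeoutResult::Timeout => "timed out".into(),
        CondVarTimeoutResult::Woken { jiffies } => format!("woken, {jiffies} jiffies left"),
        CondVarTimeoutResult::Signal { jiffies } => format!("signal, {jiffies} jiffies left"),
    }
}

fn main() {
    assert_eq!(describe(CondVarTimeoutResult::Timeout), "timed out");
    assert_eq!(
        describe(CondVarTimeoutResult::Woken { jiffies: 5 }),
        "woken, 5 jiffies left"
    );
}
```

Carrying the remaining jiffies in the `Woken` and `Signal` variants lets a caller such as Rust Binder resume waiting for only the leftover duration.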
@@ -21,14 +21,21 @@
/// # Safety
///
/// - Implementers must ensure that only one thread/CPU may access the protected data once the lock
///   is owned, that is, between calls to [`lock`] and [`unlock`].
/// - Implementers must also ensure that [`relock`] uses the same locking method as the original
///   lock operation.
///
/// [`lock`]: Backend::lock
/// [`unlock`]: Backend::unlock
/// [`relock`]: Backend::relock
pub unsafe trait Backend {
    /// The state required by the lock.
    type State;

    /// The state required to be kept between [`lock`] and [`unlock`].
    ///
    /// [`lock`]: Backend::lock
    /// [`unlock`]: Backend::unlock
    type GuardState;

    /// Initialises the lock.
@@ -139,7 +146,7 @@ pub struct Guard<'a, T: ?Sized, B: Backend> {
unsafe impl<T: Sync + ?Sized, B: Backend> Sync for Guard<'_, T, B> {}

impl<T: ?Sized, B: Backend> Guard<'_, T, B> {
    pub(crate) fn do_unlocked<U>(&mut self, cb: impl FnOnce() -> U) -> U {
        // SAFETY: The caller owns the lock, so it is safe to unlock it.
        unsafe { B::unlock(self.lock.state.get(), &self.state) };
@@ -147,7 +154,7 @@ pub(crate) fn do_unlocked(&mut self, cb: impl FnOnce()) {
        let _relock =
            ScopeGuard::new(|| unsafe { B::relock(self.lock.state.get(), &mut self.state) });

        cb()
    }
}
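The `do_unlocked` change (generic `FnOnce() -> U` instead of `FnOnce()`) lets a caller get a value back through the unlock/relock window. A minimal userspace sketch of the same guard pattern, with a hypothetical `log` in place of real lock state:

```rust
use std::cell::RefCell;

// A deliberately tiny ScopeGuard: runs its closure when dropped, mirroring
// the relock guard used by `do_unlocked`.
struct ScopeGuard<F: FnOnce()>(Option<F>);

impl<F: FnOnce()> ScopeGuard<F> {
    fn new(f: F) -> Self {
        ScopeGuard(Some(f))
    }
}

impl<F: FnOnce()> Drop for ScopeGuard<F> {
    fn drop(&mut self) {
        if let Some(f) = self.0.take() {
            f();
        }
    }
}

// "unlock", run the callback, then "relock" on scope exit; the callback's
// return value is forwarded, which is what `FnOnce() -> U` buys the caller.
fn do_unlocked<U>(log: &RefCell<Vec<&'static str>>, cb: impl FnOnce() -> U) -> U {
    log.borrow_mut().push("unlock");
    let _relock = ScopeGuard::new(|| log.borrow_mut().push("relock"));
    cb()
}

fn main() {
    let log = RefCell::new(Vec::new());
    let n = do_unlocked(&log, || 7);
    assert_eq!(n, 7);
    // The guard ran after the callback, before `do_unlocked` returned.
    assert_eq!(*log.borrow(), ["unlock", "relock"]);
}
```

The guard guarantees the "relock" step runs on every exit path, so forwarding `cb()`'s value costs nothing in safety.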
...
@@ -17,6 +17,7 @@ macro_rules! new_mutex {
        $inner, $crate::optional_name!($($name)?), $crate::static_lock_class!())
    };
}
pub use new_mutex;

/// A mutual exclusion primitive.
///
@@ -35,7 +36,7 @@ macro_rules! new_mutex {
/// contains an inner struct (`Inner`) that is protected by a mutex.
///
/// ```
/// use kernel::sync::{new_mutex, Mutex};
///
/// struct Inner {
///     a: u32,
...
@@ -17,6 +17,7 @@ macro_rules! new_spinlock {
        $inner, $crate::optional_name!($($name)?), $crate::static_lock_class!())
    };
}
pub use new_spinlock;

/// A spinlock.
///
@@ -33,7 +34,7 @@ macro_rules! new_spinlock {
/// contains an inner struct (`Inner`) that is protected by a spinlock.
///
/// ```
/// use kernel::sync::{new_spinlock, SpinLock};
///
/// struct Inner {
///     a: u32,
@@ -112,7 +113,7 @@ unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState {
    unsafe fn unlock(ptr: *mut Self::State, _guard_state: &Self::GuardState) {
        // SAFETY: The safety requirements of this function ensure that `ptr` is valid and that the
        // caller is the owner of the spinlock.
        unsafe { bindings::spin_unlock(ptr) }
    }
}
@@ -9,14 +9,17 @@
/// Allows access to some data to be serialised by a lock that does not wrap it.
///
/// In most cases, data protected by a lock is wrapped by the appropriate lock type, e.g.,
/// [`Mutex`] or [`SpinLock`]. [`LockedBy`] is meant for cases when this is not possible.
/// For example, if a container has a lock and some data in the contained elements needs
/// to be protected by the same lock.
///
/// [`LockedBy`] wraps the data in lieu of another locking primitive, and only allows access to it
/// when the caller shows evidence that the 'external' lock is locked. It panics if the evidence
/// refers to the wrong instance of the lock.
///
/// [`Mutex`]: super::Mutex
/// [`SpinLock`]: super::SpinLock
///
/// # Examples
///
/// The following is an example for illustrative purposes: `InnerDirectory::bytes_used` is an
...
@@ -5,7 +5,23 @@
//! C header: [`include/linux/sched.h`](srctree/include/linux/sched.h).

use crate::{bindings, types::Opaque};
use core::{
    ffi::{c_int, c_long, c_uint},
    marker::PhantomData,
    ops::Deref,
    ptr,
};

/// A sentinel value used for infinite timeouts.
pub const MAX_SCHEDULE_TIMEOUT: c_long = c_long::MAX;

/// Bitmask for tasks that are sleeping in an interruptible state.
pub const TASK_INTERRUPTIBLE: c_int = bindings::TASK_INTERRUPTIBLE as c_int;

/// Bitmask for tasks that are sleeping in an uninterruptible state.
pub const TASK_UNINTERRUPTIBLE: c_int = bindings::TASK_UNINTERRUPTIBLE as c_int;

/// Convenience constant for waking up tasks regardless of whether they are in interruptible or
/// uninterruptible sleep.
pub const TASK_NORMAL: c_uint = bindings::TASK_NORMAL as c_uint;
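These constants centralize the `as` casts from the `u32` values that bindgen generates to the `c_int`/`c_uint` types the scheduler APIs expect. A userspace sketch of the same pattern; the `bindings` module and its values here are hypothetical stand-ins for the generated code:

```rust
// Hypothetical stand-ins for the bindgen-generated constants; the real values
// come from the C headers via `bindings::` and are `u32` on the Rust side.
mod bindings {
    pub const TASK_INTERRUPTIBLE: u32 = 0x0001;
    pub const TASK_UNINTERRUPTIBLE: u32 = 0x0002;
}

// Casting once, at the definition, gives every caller a correctly typed
// constant instead of an `as c_int` at each call site.
pub const TASK_INTERRUPTIBLE: i32 = bindings::TASK_INTERRUPTIBLE as i32;
pub const TASK_UNINTERRUPTIBLE: i32 = bindings::TASK_UNINTERRUPTIBLE as i32;

fn main() {
    // The states remain distinct bits, so they can still be OR-ed into masks.
    assert_eq!(TASK_INTERRUPTIBLE | TASK_UNINTERRUPTIBLE, 0x0003);
}
```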
/// Returns the currently running task.
#[macro_export]
@@ -23,7 +39,7 @@ macro_rules! current {
///
/// All instances are valid tasks created by the C portion of the kernel.
///
/// Instances of this type are always refcounted, that is, a call to `get_task_struct` ensures
/// that the allocation remains valid at least until the matching call to `put_task_struct`.
///
/// # Examples
@@ -116,7 +132,7 @@ fn deref(&self) -> &Self::Target {
    /// Returns the group leader of the given task.
    pub fn group_leader(&self) -> &Task {
        // SAFETY: By the type invariant, we know that `self.0` is a valid task. Valid tasks always
        // have a valid `group_leader`.
        let ptr = unsafe { *ptr::addr_of!((*self.0.get()).group_leader) };

        // SAFETY: The lifetime of the returned task reference is tied to the lifetime of `self`,
@@ -147,7 +163,7 @@ pub fn wake_up(&self) {
    }
}

// SAFETY: The type invariants guarantee that `Task` is always refcounted.
unsafe impl crate::types::AlwaysRefCounted for Task {
    fn inc_ref(&self) {
        // SAFETY: The existence of a shared reference means that the refcount is nonzero.
...
// SPDX-License-Identifier: GPL-2.0
//! Time related primitives.
//!
//! This module contains the kernel APIs related to time and timers that
//! have been ported or wrapped for usage by Rust code in the kernel.
/// The time unit of Linux kernel. One jiffy equals (1/HZ) second.
pub type Jiffies = core::ffi::c_ulong;
/// The millisecond time unit.
pub type Msecs = core::ffi::c_uint;
/// Converts milliseconds to jiffies.
#[inline]
pub fn msecs_to_jiffies(msecs: Msecs) -> Jiffies {
    // SAFETY: The `__msecs_to_jiffies` function is always safe to call no
    // matter what the argument is.
    unsafe { bindings::__msecs_to_jiffies(msecs) }
}
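A simplified userspace model of what the conversion computes, assuming a fixed tick rate; the kernel's `__msecs_to_jiffies` additionally handles overflow saturation and the various `CONFIG_HZ` cases:

```rust
// Assumed tick rate for the sketch; real kernels use CONFIG_HZ.
const HZ: u64 = 250;

type Jiffies = u64;

fn msecs_to_jiffies(msecs: u32) -> Jiffies {
    // Round up, so a requested sleep is never shortened by the conversion.
    (u64::from(msecs) * HZ).div_ceil(1000)
}

fn main() {
    assert_eq!(msecs_to_jiffies(1000), HZ); // one second is HZ jiffies
    assert_eq!(msecs_to_jiffies(1), 1); // nonzero input never rounds to zero
}
```

Rounding up matters for callers like `wait_interruptible_timeout`: a timeout of 1 ms must map to at least one jiffy, or the wait could return immediately.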
@@ -46,6 +46,25 @@ pub trait ForeignOwnable: Sized {
    /// Additionally, all instances (if any) of values returned by [`ForeignOwnable::borrow`] for
    /// this object must have been dropped.
    unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self;

    /// Tries to convert a foreign-owned object back to a Rust-owned one.
    ///
    /// A convenience wrapper over [`ForeignOwnable::from_foreign`] that returns [`None`] if `ptr`
    /// is null.
    ///
    /// # Safety
    ///
    /// `ptr` must either be null or satisfy the safety requirements for
    /// [`ForeignOwnable::from_foreign`].
    unsafe fn try_from_foreign(ptr: *const core::ffi::c_void) -> Option<Self> {
        if ptr.is_null() {
            None
        } else {
            // SAFETY: Since `ptr` is not null here, then `ptr` satisfies the safety requirements
            // of `from_foreign` given the safety requirements of this function.
            unsafe { Some(Self::from_foreign(ptr)) }
        }
    }
}
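A userspace sketch of the `try_from_foreign` pattern: a hypothetical `Owned` type (not kernel API) that round-trips through a raw pointer, with the null check layered on top of the raw conversion exactly as the trait's default method does.

```rust
use std::ffi::c_void;

// Hypothetical owner type standing in for a `ForeignOwnable` implementer.
struct Owned(Box<i32>);

impl Owned {
    fn into_foreign(self) -> *const c_void {
        Box::into_raw(self.0) as *const c_void
    }

    unsafe fn from_foreign(ptr: *const c_void) -> Self {
        // SAFETY: The caller guarantees `ptr` came from `into_foreign`.
        Owned(unsafe { Box::from_raw(ptr as *mut i32) })
    }

    unsafe fn try_from_foreign(ptr: *const c_void) -> Option<Self> {
        if ptr.is_null() {
            None
        } else {
            // SAFETY: `ptr` is non-null, so the `from_foreign` requirements apply.
            Some(unsafe { Self::from_foreign(ptr) })
        }
    }
}

fn main() {
    let ptr = Owned(Box::new(42)).into_foreign();
    // SAFETY: `ptr` originates from `into_foreign` above.
    let back = unsafe { Owned::try_from_foreign(ptr) }.unwrap();
    assert_eq!(*back.0, 42);
    // SAFETY: Null is explicitly allowed by `try_from_foreign`.
    assert!(unsafe { Owned::try_from_foreign(std::ptr::null()) }.is_none());
}
```

Relaxing the precondition to "null or valid" is useful on C callback paths where a null pointer is a legitimate "nothing stored" state.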
impl<T: 'static> ForeignOwnable for Box<T> {
@@ -90,6 +109,7 @@ unsafe fn from_foreign(_: *const core::ffi::c_void) -> Self {}
///
/// In the example below, we have multiple exit paths and we want to log regardless of which one is
/// taken:
///
/// ```
/// # use kernel::types::ScopeGuard;
/// fn example1(arg: bool) {
@@ -108,6 +128,7 @@ unsafe fn from_foreign(_: *const core::ffi::c_void) -> Self {}
///
/// In the example below, we want to log the same message on all early exits but a different one on
/// the main exit path:
///
/// ```
/// # use kernel::types::ScopeGuard;
/// fn example2(arg: bool) {
@@ -129,6 +150,7 @@ unsafe fn from_foreign(_: *const core::ffi::c_void) -> Self {}
///
/// In the example below, we need a mutable object (the vector) to be accessible within the log
/// function, so we wrap it in the [`ScopeGuard`]:
///
/// ```
/// # use kernel::types::ScopeGuard;
/// fn example3(arg: bool) -> Result {
...
@@ -12,19 +12,19 @@
//!
//! # The raw API
//!
//! The raw API consists of the [`RawWorkItem`] trait, where the work item needs to provide an
//! arbitrary function that knows how to enqueue the work item. It should usually not be used
//! directly, but if you want to, you can use it without using the pieces from the safe API.
//!
//! # The safe API
//!
//! The safe API is used via the [`Work`] struct and [`WorkItem`] traits. Furthermore, it also
//! includes a trait called [`WorkItemPointer`], which is usually not used directly by the user.
//!
//! * The [`Work`] struct is the Rust wrapper for the C `work_struct` type.
//! * The [`WorkItem`] trait is implemented for structs that can be enqueued to a workqueue.
//! * The [`WorkItemPointer`] trait is implemented for the pointer type that points at something
//!   that implements [`WorkItem`].
//!
//! ## Example
//!
@@ -35,8 +35,7 @@
//! ```
//! use kernel::prelude::*;
//! use kernel::sync::Arc;
//! use kernel::workqueue::{self, impl_has_work, new_work, Work, WorkItem};
//!
//! #[pin_data]
//! struct MyStruct {
@@ -78,8 +77,7 @@
//! ```
//! use kernel::prelude::*;
//! use kernel::sync::Arc;
//! use kernel::workqueue::{self, impl_has_work, new_work, Work, WorkItem};
//!
//! #[pin_data]
//! struct MyStruct {
@@ -147,6 +145,7 @@ macro_rules! new_work {
        $crate::workqueue::Work::new($crate::optional_name!($($name)?), $crate::static_lock_class!())
    };
}
pub use new_work;

/// A kernel work queue.
///
@@ -168,7 +167,7 @@ impl Queue {
    /// # Safety
    ///
    /// The caller must ensure that the provided raw pointer is not dangling, that it points at a
    /// valid workqueue, and that it remains valid until the end of `'a`.
    pub unsafe fn from_raw<'a>(ptr: *const bindings::workqueue_struct) -> &'a Queue {
        // SAFETY: The `Queue` type is `#[repr(transparent)]`, so the pointer cast is valid. The
        // caller promises that the pointer is not dangling.
@@ -218,7 +217,9 @@ pub fn try_spawn<T: 'static + Send + FnOnce()>(&self, func: T) -> Result<(), All
    }
}

/// A helper type used in [`try_spawn`].
///
/// [`try_spawn`]: Queue::try_spawn
#[pin_data]
struct ClosureWork<T> {
    #[pin]
@@ -253,14 +254,16 @@ fn run(mut this: Pin<Box<Self>>) {
/// actual value of the id is not important as long as you use different ids for different fields
/// of the same struct. (Fields of different structs need not use different ids.)
///
/// Note that the id is used only to select the right method to call during compilation. It won't be
/// part of the final executable.
///
/// # Safety
///
/// Implementers must ensure that any pointers passed to a `queue_work_on` closure by [`__enqueue`]
/// remain valid for the duration specified in the guarantees section of the documentation for
/// [`__enqueue`].
///
/// [`__enqueue`]: RawWorkItem::__enqueue
pub unsafe trait RawWorkItem<const ID: u64> {
    /// The return type of [`Queue::enqueue`].
    type EnqueueOutput;
@@ -290,10 +293,11 @@ unsafe fn __enqueue<F>(self, queue_work_on: F) -> Self::EnqueueOutput

/// Defines the method that should be called directly when a work item is executed.
///
/// This trait is implemented by `Pin<Box<T>>` and [`Arc<T>`], and is mainly intended to be
/// implemented for smart pointer types. For your own structs, you would implement [`WorkItem`]
/// instead. The [`run`] method on this trait will usually just perform the appropriate
/// `container_of` translation and then call into the [`run`][WorkItem::run] method from the
/// [`WorkItem`] trait.
///
/// This trait is used when the `work_struct` field is defined using the [`Work`] helper.
///
@@ -309,8 +313,10 @@ pub unsafe trait WorkItemPointer<const ID: u64>: RawWorkItem<ID> {
    ///
    /// # Safety
    ///
    /// The provided `work_struct` pointer must originate from a previous call to [`__enqueue`]
    /// where the `queue_work_on` closure returned true, and the pointer must still be valid.
    ///
    /// [`__enqueue`]: RawWorkItem::__enqueue
    unsafe extern "C" fn run(ptr: *mut bindings::work_struct);
}
@@ -328,12 +334,14 @@ pub trait WorkItem<const ID: u64 = 0> {

/// Links for a work item.
///
/// This struct contains a function pointer to the [`run`] function from the [`WorkItemPointer`]
/// trait, and defines the linked list pointers necessary to enqueue a work item in a workqueue.
///
/// Wraps the kernel's C `struct work_struct`.
///
/// This is a helper type used to associate a `work_struct` with the [`WorkItem`] that uses it.
///
/// [`run`]: WorkItemPointer::run
#[repr(transparent)]
pub struct Work<T: ?Sized, const ID: u64 = 0> {
    work: Opaque<bindings::work_struct>,
@@ -396,9 +404,8 @@ pub unsafe fn raw_get(ptr: *const Self) -> *mut bindings::work_struct {
/// like this:
///
/// ```no_run
/// use kernel::prelude::*;
/// use kernel::workqueue::{impl_has_work, Work};
///
/// struct MyWorkItem {
///     work_field: Work<MyWorkItem, 1>,
@@ -409,28 +416,25 @@ pub unsafe fn raw_get(ptr: *const Self) -> *mut bindings::work_struct {
/// }
/// ```
///
/// Note that since the [`Work`] type is annotated with an id, you can have several `work_struct`
/// fields by using a different id for each one.
///
/// # Safety
///
/// The [`OFFSET`] constant must be the offset of a field in `Self` of type [`Work<T, ID>`]. The
/// methods on this trait must have exactly the behavior that the definitions given below have.
///
/// [`Work<T, ID>`]: Work
/// [`impl_has_work!`]: crate::impl_has_work
/// [`OFFSET`]: HasWork::OFFSET
pub unsafe trait HasWork<T, const ID: u64 = 0> {
    /// The offset of the [`Work<T, ID>`] field.
    const OFFSET: usize;

    /// Returns the offset of the [`Work<T, ID>`] field.
    ///
    /// This method exists because the [`OFFSET`] constant cannot be accessed if the type is not
    /// [`Sized`].
    ///
    /// [`OFFSET`]: HasWork::OFFSET
    #[inline]
    fn get_work_offset(&self) -> usize {
@@ -442,8 +446,6 @@ fn get_work_offset(&self) -> usize {
    /// # Safety
    ///
    /// The provided pointer must point at a valid struct of type `Self`.
    #[inline]
    unsafe fn raw_get_work(ptr: *mut Self) -> *mut Work<T, ID> {
        // SAFETY: The caller promises that the pointer is valid.
@@ -455,8 +457,6 @@ unsafe fn raw_get_work(ptr: *mut Self) -> *mut Work<T, ID> {
    /// # Safety
    ///
    /// The pointer must point at a [`Work<T, ID>`] field in a struct of type `Self`.
    #[inline]
    unsafe fn work_container_of(ptr: *mut Work<T, ID>) -> *mut Self
    where
@@ -473,9 +473,8 @@ unsafe fn work_container_of(ptr: *mut Work<T, ID>) -> *mut Self
/// # Examples
///
/// ```
/// use kernel::sync::Arc;
/// use kernel::workqueue::{self, impl_has_work, Work};
///
/// struct MyStruct {
///     work_field: Work<MyStruct, 17>,
/// }
@@ -485,8 +484,6 @@ unsafe fn work_container_of(ptr: *mut Work<T, ID>) -> *mut Self
/// impl_has_work! {
///     impl HasWork<MyStruct, 17> for MyStruct { self.work_field }
/// }
/// ```
#[macro_export]
macro_rules! impl_has_work {
    ($(impl$(<$($implarg:ident),*>)?
@@ -509,6 +506,7 @@ unsafe fn raw_get_work(ptr: *mut Self) -> *mut $crate::workqueue::Work<$work_typ
    }
    )*};
}
pub use impl_has_work;
impl_has_work! {
    impl<T> HasWork<Self> for ClosureWork<T> { self.work }
...
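The machinery `impl_has_work!` generates boils down to a `container_of`-style offset computation: given a pointer to the embedded `Work` field, walk back by its byte offset to recover the containing struct. A userspace sketch under simplified assumptions (a plain `u32` standing in for `Work<T>`, and the offset taken from a live instance rather than a generated `OFFSET` constant):

```rust
#[repr(C)]
struct MyStruct {
    value: u64,
    work_field: u32, // stands in for `Work<MyStruct>`
}

fn main() {
    let s = MyStruct { value: 9, work_field: 0 };
    let field_ptr: *const u32 = &s.work_field;

    // Offset of `work_field` within `MyStruct`, computed from the instance.
    let offset = (field_ptr as usize) - (&s as *const MyStruct as usize);

    // SAFETY: `field_ptr` really is the `work_field` of `s`, so walking back
    // `offset` bytes yields a valid pointer to `s` itself.
    let container = unsafe { &*((field_ptr as usize - offset) as *const MyStruct) };
    assert_eq!(container.value, 9);
}
```

In the kernel this is exactly what `work_container_of` does when the workqueue invokes `run` with a bare `work_struct` pointer and the wrapper must recover the enclosing work item.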
@@ -222,10 +222,15 @@ pub(crate) fn module(ts: TokenStream) -> TokenStream {
            }};

            // Loadable modules need to export the `{{init,cleanup}}_module` identifiers.
            /// # Safety
            ///
            /// This function must not be called after module initialization, because it may be
            /// freed after that completes.
            #[cfg(MODULE)]
            #[doc(hidden)]
            #[no_mangle]
            #[link_section = \".init.text\"]
            pub unsafe extern \"C\" fn init_module() -> core::ffi::c_int {{
                __init()
            }}
...
@@ -290,7 +290,7 @@ quiet_cmd_rustc_o_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@
      cmd_rustc_o_rs = $(rust_common_cmd) --emit=obj=$@ $<

$(obj)/%.o: $(src)/%.rs FORCE
	+$(call if_changed_dep,rustc_o_rs)

quiet_cmd_rustc_rsi_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@
      cmd_rustc_rsi_rs = \
@@ -298,19 +298,19 @@ quiet_cmd_rustc_rsi_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@
	command -v $(RUSTFMT) >/dev/null && $(RUSTFMT) $@

$(obj)/%.rsi: $(src)/%.rs FORCE
	+$(call if_changed_dep,rustc_rsi_rs)

quiet_cmd_rustc_s_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@
      cmd_rustc_s_rs = $(rust_common_cmd) --emit=asm=$@ $<

$(obj)/%.s: $(src)/%.rs FORCE
	+$(call if_changed_dep,rustc_s_rs)

quiet_cmd_rustc_ll_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@
      cmd_rustc_ll_rs = $(rust_common_cmd) --emit=llvm-ir=$@ $<

$(obj)/%.ll: $(src)/%.rs FORCE
	+$(call if_changed_dep,rustc_ll_rs)

# Compile assembler sources (.S)
# ---------------------------------------------------------------------------
...
@@ -156,7 +156,7 @@ quiet_cmd_host-rust = HOSTRUSTC $@
      cmd_host-rust = \
	$(HOSTRUSTC) $(hostrust_flags) --emit=link=$@ $<

$(host-rust): $(obj)/%: $(src)/%.rs FORCE
	+$(call if_changed_dep,host-rust)

targets += $(host-csingle) $(host-cmulti) $(host-cobjs) \
	   $(host-cxxmulti) $(host-cxxobjs) $(host-rust)
@@ -33,7 +33,7 @@ llvm)
	fi
	;;
rustc)
	echo 1.76.0
	;;
bindgen)
	echo 0.65.1
...