Commit 768409cf authored by Miguel Ojeda

rust: upgrade to Rust 1.76.0

This is the next upgrade to the Rust toolchain, from 1.75.0 to 1.76.0
(i.e. the latest) [1].

See the upgrade policy [2] and the comments on the first upgrade in
commit 3ed03f4d ("rust: upgrade to Rust 1.68.2").

# Unstable features

No unstable features that we use were stabilized in Rust 1.76.0.

The only unstable features allowed to be used outside the `kernel` crate
are still `new_uninit,offset_of`, though other code to be upstreamed
may increase the list.

Please see [3] for details.
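
As an illustration only (this snippet is not part of the patch, and the
type and names in it are made up), code outside the `kernel` crate that
uses these two features would look like:

    #![feature(new_uninit, offset_of)]

    use core::mem::{offset_of, MaybeUninit};

    struct Pair {
        a: u32,
        b: u64,
    }

    // `offset_of`: compute a field's byte offset at compile time.
    const B_OFFSET: usize = offset_of!(Pair, b);

    fn example() {
        // `new_uninit`: heap-allocate storage without initializing it,
        // then write the value in place.
        let mut p: Box<MaybeUninit<Pair>> = Box::new_uninit();
        p.write(Pair { a: 1, b: B_OFFSET as u64 });
    }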

# Required changes

`rustc` (and other tools) now warn when they cannot connect to the Make
jobserver, so those invocations are now marked as recursive where
needed. Please see the previous commit for details.

# Other changes

Rust 1.76.0 does not emit the `.debug_pub{names,types}` sections anymore
for DWARFv4 [4][5]. For instance, in the uncompressed debug info case,
this debug information took:

    samples/rust/rust_minimal.o   ~64 KiB (~18% of total object size)
    rust/kernel.o                 ~92 KiB (~15%)
    rust/core.o                  ~114 KiB ( ~5%)

In the compressed debug info (zlib) case:

    samples/rust/rust_minimal.o   ~11 KiB (~6%)
    rust/kernel.o                 ~17 KiB (~5%)
    rust/core.o                   ~21 KiB (~1.5%)

In addition, the `rustc_codegen_gcc` backend does not emit the
`.eh_frame` section anymore when compiling under `-Cpanic=abort` [6],
which removes the need for the patch the CI carried to compile the
kernel [7]. Moreover, it now emits the `.comment` section too [6].
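
To observe the `.eh_frame` change in isolation, one may build a minimal
`no_std` object under `-Cpanic=abort` and inspect its sections (e.g.
with `readelf -S`); the snippet below is only an illustration and not
part of the patch:

    // rustc --emit=obj -Cpanic=abort example.rs
    #![no_std]
    #![no_main]

    #[panic_handler]
    fn panic(_info: &core::panic::PanicInfo) -> ! {
        loop {}
    }

    #[no_mangle]
    pub extern "C" fn example_add(a: i32, b: i32) -> i32 {
        a + b
    }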

# `alloc` upgrade and reviewing

The vast majority of changes are due to our `alloc` fork being upgraded
at once.

There are two kinds of changes to be aware of: the ones coming from
upstream, which we should follow as closely as possible, and the updates
needed in our added fallible APIs to keep them matching the newer
infallible APIs coming from upstream.
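
As a minimal sketch of the second kind (hypothetical code, not the
kernel's actual API), a fallible method mirrors its infallible
counterpart like this, and must be updated whenever the upstream
counterpart changes:

    use std::collections::TryReserveError;

    // Infallible (upstream style): aborts the program on allocation failure.
    fn push_infallible<T>(v: &mut Vec<T>, value: T) {
        v.push(value);
    }

    // Fallible (kernel style): reports allocation failure to the caller.
    fn try_push<T>(v: &mut Vec<T>, value: T) -> Result<(), TryReserveError> {
        v.try_reserve(1)?;
        v.push(value); // Cannot fail now: capacity was just reserved.
        Ok(())
    }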

Instead of reviewing the diff of this patch directly, an alternative
approach is reviewing a diff of the changes between upstream `alloc` and
the kernel's fork. This makes it easy to inspect the kernel's additions
on their own, especially to check whether the fallible methods we
already have still match the infallible ones in the new version coming
from upstream.

Another approach is reviewing the changes introduced in the additions in
the kernel fork between the two versions. This is useful to spot
potentially unintended changes to our additions.

To apply these approaches, one may follow steps similar to the following
to generate a pair of patches that show the differences between upstream
Rust and the kernel (for the subset of `alloc` we use) before and after
applying this patch:

    # Get the difference with respect to the old version.
    git -C rust checkout $(linux/scripts/min-tool-version.sh rustc)
    git -C linux ls-tree -r --name-only HEAD -- rust/alloc |
        cut -d/ -f3- |
        grep -Fv README.md |
        xargs -IPATH cp rust/library/alloc/src/PATH linux/rust/alloc/PATH
    git -C linux diff --patch-with-stat --summary -R > old.patch
    git -C linux restore rust/alloc

    # Apply this patch.
    git -C linux am rust-upgrade.patch

    # Get the difference with respect to the new version.
    git -C rust checkout $(linux/scripts/min-tool-version.sh rustc)
    git -C linux ls-tree -r --name-only HEAD -- rust/alloc |
        cut -d/ -f3- |
        grep -Fv README.md |
        xargs -IPATH cp rust/library/alloc/src/PATH linux/rust/alloc/PATH
    git -C linux diff --patch-with-stat --summary -R > new.patch
    git -C linux restore rust/alloc

Now one may check the `new.patch` to take a look at the additions (first
approach) or at the difference between those two patches (second
approach). For the latter, a side-by-side tool is recommended.

Link: https://github.com/rust-lang/rust/blob/stable/RELEASES.md#version-1760-2024-02-08 [1]
Link: https://rust-for-linux.com/rust-version-policy [2]
Link: https://github.com/Rust-for-Linux/linux/issues/2 [3]
Link: https://github.com/rust-lang/compiler-team/issues/688 [4]
Link: https://github.com/rust-lang/rust/pull/117962 [5]
Link: https://github.com/rust-lang/rust/pull/118068 [6]
Link: https://github.com/Rust-for-Linux/ci-rustc_codegen_gcc [7]
Tested-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20240217002638.57373-2-ojeda@kernel.org
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
parent ecab4115
@@ -31,7 +31,7 @@ you probably needn't concern yourself with pcmciautils.
 ====================== ===============  ========================================
 GNU C                  5.1              gcc --version
 Clang/LLVM (optional)  11.0.0           clang --version
-Rust (optional)        1.75.0           rustc --version
+Rust (optional)        1.76.0           rustc --version
 bindgen (optional)     0.65.1           bindgen --version
 GNU make               3.82             make --version
 bash                   4.2              bash --version
...
@@ -425,12 +425,14 @@ pub unsafe fn __rdl_oom(size: usize, _align: usize) -> ! {
     }
 }
 
+#[cfg(not(no_global_oom_handling))]
 /// Specialize clones into pre-allocated, uninitialized memory.
 /// Used by `Box::clone` and `Rc`/`Arc::make_mut`.
 pub(crate) trait WriteCloneIntoRaw: Sized {
     unsafe fn write_clone_into_raw(&self, target: *mut Self);
 }
 
+#[cfg(not(no_global_oom_handling))]
 impl<T: Clone> WriteCloneIntoRaw for T {
     #[inline]
     default unsafe fn write_clone_into_raw(&self, target: *mut Self) {
@@ -440,6 +442,7 @@ impl<T: Clone> WriteCloneIntoRaw for T {
     }
 }
 
+#[cfg(not(no_global_oom_handling))]
 impl<T: Copy> WriteCloneIntoRaw for T {
     #[inline]
     unsafe fn write_clone_into_raw(&self, target: *mut Self) {
...
@@ -1042,10 +1042,18 @@ impl<T: ?Sized, A: Allocator> Box<T, A> {
     /// use std::ptr;
     ///
     /// let x = Box::new(String::from("Hello"));
-    /// let p = Box::into_raw(x);
+    /// let ptr = Box::into_raw(x);
+    /// unsafe {
+    ///     ptr::drop_in_place(ptr);
+    ///     dealloc(ptr as *mut u8, Layout::new::<String>());
+    /// }
+    /// ```
+    /// Note: This is equivalent to the following:
+    /// ```
+    /// let x = Box::new(String::from("Hello"));
+    /// let ptr = Box::into_raw(x);
     /// unsafe {
-    ///     ptr::drop_in_place(p);
-    ///     dealloc(p as *mut u8, Layout::new::<String>());
+    ///     drop(Box::from_raw(ptr));
     /// }
     /// ```
     ///
...
@@ -150,6 +150,7 @@ fn fmt(
 
 /// An intermediate trait for specialization of `Extend`.
 #[doc(hidden)]
+#[cfg(not(no_global_oom_handling))]
 trait SpecExtend<I: IntoIterator> {
     /// Extends `self` with the contents of the given iterator.
     fn spec_extend(&mut self, iter: I);
...
@@ -80,8 +80,8 @@
     not(no_sync),
     target_has_atomic = "ptr"
 ))]
-#![cfg_attr(not(bootstrap), doc(rust_logo))]
-#![cfg_attr(not(bootstrap), feature(rustdoc_internals))]
+#![doc(rust_logo)]
+#![feature(rustdoc_internals)]
 #![no_std]
 #![needs_allocator]
 // Lints:
@@ -142,7 +142,6 @@
 #![feature(maybe_uninit_uninit_array)]
 #![feature(maybe_uninit_uninit_array_transpose)]
 #![feature(pattern)]
-#![feature(ptr_addr_eq)]
 #![feature(ptr_internals)]
 #![feature(ptr_metadata)]
 #![feature(ptr_sub_ptr)]
@@ -157,6 +156,7 @@
 #![feature(std_internals)]
 #![feature(str_internals)]
 #![feature(strict_provenance)]
+#![feature(trusted_fused)]
 #![feature(trusted_len)]
 #![feature(trusted_random_access)]
 #![feature(try_trait_v2)]
@@ -277,7 +277,7 @@ pub(crate) mod test_helpers {
     /// seed not being the same for every RNG invocation too.
     pub(crate) fn test_rng() -> rand_xorshift::XorShiftRng {
         use std::hash::{BuildHasher, Hash, Hasher};
-        let mut hasher = std::collections::hash_map::RandomState::new().build_hasher();
+        let mut hasher = std::hash::RandomState::new().build_hasher();
         std::panic::Location::caller().hash(&mut hasher);
         let hc64 = hasher.finish();
         let seed_vec =
...
@@ -27,6 +27,16 @@ enum AllocInit {
     Zeroed,
 }
 
+#[repr(transparent)]
+#[cfg_attr(target_pointer_width = "16", rustc_layout_scalar_valid_range_end(0x7fff))]
+#[cfg_attr(target_pointer_width = "32", rustc_layout_scalar_valid_range_end(0x7fff_ffff))]
+#[cfg_attr(target_pointer_width = "64", rustc_layout_scalar_valid_range_end(0x7fff_ffff_ffff_ffff))]
+struct Cap(usize);
+
+impl Cap {
+    const ZERO: Cap = unsafe { Cap(0) };
+}
+
 /// A low-level utility for more ergonomically allocating, reallocating, and deallocating
 /// a buffer of memory on the heap without having to worry about all the corner cases
 /// involved. This type is excellent for building your own data structures like Vec and VecDeque.
@@ -52,7 +62,12 @@ enum AllocInit {
 #[allow(missing_debug_implementations)]
 pub(crate) struct RawVec<T, A: Allocator = Global> {
     ptr: Unique<T>,
-    cap: usize,
+    /// Never used for ZSTs; it's `capacity()`'s responsibility to return usize::MAX in that case.
+    ///
+    /// # Safety
+    ///
+    /// `cap` must be in the `0..=isize::MAX` range.
+    cap: Cap,
     alloc: A,
 }
@@ -121,7 +136,7 @@ impl<T, A: Allocator> RawVec<T, A> {
     /// the returned `RawVec`.
     pub const fn new_in(alloc: A) -> Self {
         // `cap: 0` means "unallocated". zero-sized types are ignored.
-        Self { ptr: Unique::dangling(), cap: 0, alloc }
+        Self { ptr: Unique::dangling(), cap: Cap::ZERO, alloc }
     }
 
     /// Like `with_capacity`, but parameterized over the choice of
@@ -203,7 +218,7 @@ fn allocate_in(capacity: usize, init: AllocInit, alloc: A) -> Self {
             // here should change to `ptr.len() / mem::size_of::<T>()`.
             Self {
                 ptr: unsafe { Unique::new_unchecked(ptr.cast().as_ptr()) },
-                cap: capacity,
+                cap: unsafe { Cap(capacity) },
                 alloc,
             }
         }
@@ -228,7 +243,7 @@ fn try_allocate_in(capacity: usize, init: AllocInit, alloc: A) -> Result<Self, T
             // here should change to `ptr.len() / mem::size_of::<T>()`.
             Ok(Self {
                 ptr: unsafe { Unique::new_unchecked(ptr.cast().as_ptr()) },
-                cap: capacity,
+                cap: unsafe { Cap(capacity) },
                 alloc,
             })
         }
@@ -240,12 +255,13 @@ fn try_allocate_in(capacity: usize, init: AllocInit, alloc: A) -> Result<Self, T
     /// The `ptr` must be allocated (via the given allocator `alloc`), and with the given
     /// `capacity`.
     /// The `capacity` cannot exceed `isize::MAX` for sized types. (only a concern on 32-bit
-    /// systems). ZST vectors may have a capacity up to `usize::MAX`.
+    /// systems). For ZSTs capacity is ignored.
     /// If the `ptr` and `capacity` come from a `RawVec` created via `alloc`, then this is
     /// guaranteed.
     #[inline]
     pub unsafe fn from_raw_parts_in(ptr: *mut T, capacity: usize, alloc: A) -> Self {
-        Self { ptr: unsafe { Unique::new_unchecked(ptr) }, cap: capacity, alloc }
+        let cap = if T::IS_ZST { Cap::ZERO } else { unsafe { Cap(capacity) } };
+        Self { ptr: unsafe { Unique::new_unchecked(ptr) }, cap, alloc }
     }
 
     /// Gets a raw pointer to the start of the allocation. Note that this is
@@ -261,7 +277,7 @@ pub fn ptr(&self) -> *mut T {
     /// This will always be `usize::MAX` if `T` is zero-sized.
     #[inline(always)]
     pub fn capacity(&self) -> usize {
-        if T::IS_ZST { usize::MAX } else { self.cap }
+        if T::IS_ZST { usize::MAX } else { self.cap.0 }
     }
 
     /// Returns a shared reference to the allocator backing this `RawVec`.
@@ -270,7 +286,7 @@ pub fn allocator(&self) -> &A {
     }
 
     fn current_memory(&self) -> Option<(NonNull<u8>, Layout)> {
-        if T::IS_ZST || self.cap == 0 {
+        if T::IS_ZST || self.cap.0 == 0 {
             None
         } else {
             // We could use Layout::array here which ensures the absence of isize and usize overflows
@@ -280,7 +296,7 @@ fn current_memory(&self) -> Option<(NonNull<u8>, Layout)> {
             let _: () = const { assert!(mem::size_of::<T>() % mem::align_of::<T>() == 0) };
             unsafe {
                 let align = mem::align_of::<T>();
-                let size = mem::size_of::<T>().unchecked_mul(self.cap);
+                let size = mem::size_of::<T>().unchecked_mul(self.cap.0);
                 let layout = Layout::from_size_align_unchecked(size, align);
                 Some((self.ptr.cast().into(), layout))
             }
@@ -414,12 +430,15 @@ fn needs_to_grow(&self, len: usize, additional: usize) -> bool {
         additional > self.capacity().wrapping_sub(len)
     }
 
-    fn set_ptr_and_cap(&mut self, ptr: NonNull<[u8]>, cap: usize) {
+    /// # Safety:
+    ///
+    /// `cap` must not exceed `isize::MAX`.
+    unsafe fn set_ptr_and_cap(&mut self, ptr: NonNull<[u8]>, cap: usize) {
         // Allocators currently return a `NonNull<[u8]>` whose length matches
         // the size requested. If that ever changes, the capacity here should
        // change to `ptr.len() / mem::size_of::<T>()`.
         self.ptr = unsafe { Unique::new_unchecked(ptr.cast().as_ptr()) };
-        self.cap = cap;
+        self.cap = unsafe { Cap(cap) };
     }
 
     // This method is usually instantiated many times. So we want it to be as
@@ -444,14 +463,15 @@ fn grow_amortized(&mut self, len: usize, additional: usize) -> Result<(), TryRes
        // This guarantees exponential growth. The doubling cannot overflow
        // because `cap <= isize::MAX` and the type of `cap` is `usize`.
-        let cap = cmp::max(self.cap * 2, required_cap);
+        let cap = cmp::max(self.cap.0 * 2, required_cap);
         let cap = cmp::max(Self::MIN_NON_ZERO_CAP, cap);
 
         let new_layout = Layout::array::<T>(cap);
 
         // `finish_grow` is non-generic over `T`.
         let ptr = finish_grow(new_layout, self.current_memory(), &mut self.alloc)?;
-        self.set_ptr_and_cap(ptr, cap);
+        // SAFETY: finish_grow would have resulted in a capacity overflow if we tried to allocate more than isize::MAX items
+        unsafe { self.set_ptr_and_cap(ptr, cap) };
         Ok(())
     }
@@ -470,7 +490,10 @@ fn grow_exact(&mut self, len: usize, additional: usize) -> Result<(), TryReserve
         // `finish_grow` is non-generic over `T`.
         let ptr = finish_grow(new_layout, self.current_memory(), &mut self.alloc)?;
-        self.set_ptr_and_cap(ptr, cap);
+        // SAFETY: finish_grow would have resulted in a capacity overflow if we tried to allocate more than isize::MAX items
+        unsafe {
+            self.set_ptr_and_cap(ptr, cap);
+        }
         Ok(())
     }
@@ -488,7 +511,7 @@ fn shrink(&mut self, cap: usize) -> Result<(), TryReserveError> {
         if cap == 0 {
             unsafe { self.alloc.deallocate(ptr, layout) };
             self.ptr = Unique::dangling();
-            self.cap = 0;
+            self.cap = Cap::ZERO;
         } else {
             let ptr = unsafe {
                 // `Layout::array` cannot overflow here because it would have
@@ -499,8 +522,11 @@ fn shrink(&mut self, cap: usize) -> Result<(), TryReserveError> {
                     .shrink(ptr, layout, new_layout)
                     .map_err(|_| AllocError { layout: new_layout, non_exhaustive: () })?
             };
-            self.set_ptr_and_cap(ptr, cap);
+            // SAFETY: if the allocation is valid, then the capacity is too
+            unsafe {
+                self.set_ptr_and_cap(ptr, cap);
+            }
         }
         Ok(())
     }
 }
...
@@ -9,7 +9,8 @@
 use core::array;
 use core::fmt;
 use core::iter::{
-    FusedIterator, InPlaceIterable, SourceIter, TrustedLen, TrustedRandomAccessNoCoerce,
+    FusedIterator, InPlaceIterable, SourceIter, TrustedFused, TrustedLen,
+    TrustedRandomAccessNoCoerce,
 };
 use core::marker::PhantomData;
 use core::mem::{self, ManuallyDrop, MaybeUninit, SizedTypeProperties};
@@ -287,9 +288,7 @@ unsafe fn __iterator_get_unchecked(&mut self, i: usize) -> Self::Item
         // Also note the implementation of `Self: TrustedRandomAccess` requires
         // that `T: Copy` so reading elements from the buffer doesn't invalidate
         // them for `Drop`.
-        unsafe {
-            if T::IS_ZST { mem::zeroed() } else { ptr::read(self.ptr.add(i)) }
-        }
+        unsafe { if T::IS_ZST { mem::zeroed() } else { ptr::read(self.ptr.add(i)) } }
     }
 }
@@ -341,6 +340,10 @@ fn is_empty(&self) -> bool {
 #[stable(feature = "fused", since = "1.26.0")]
 impl<T, A: Allocator> FusedIterator for IntoIter<T, A> {}
 
+#[doc(hidden)]
+#[unstable(issue = "none", feature = "trusted_fused")]
+unsafe impl<T, A: Allocator> TrustedFused for IntoIter<T, A> {}
+
 #[unstable(feature = "trusted_len", issue = "37572")]
 unsafe impl<T, A: Allocator> TrustedLen for IntoIter<T, A> {}
@@ -425,7 +428,10 @@ fn drop(&mut self) {
 // also refer to the vec::in_place_collect module documentation to get an overview
 #[unstable(issue = "none", feature = "inplace_iteration")]
 #[doc(hidden)]
-unsafe impl<T, A: Allocator> InPlaceIterable for IntoIter<T, A> {}
+unsafe impl<T, A: Allocator> InPlaceIterable for IntoIter<T, A> {
+    const EXPAND_BY: Option<NonZeroUsize> = NonZeroUsize::new(1);
+    const MERGE_BY: Option<NonZeroUsize> = NonZeroUsize::new(1);
+}
 
 #[unstable(issue = "none", feature = "inplace_iteration")]
 #[doc(hidden)]
...
@@ -105,6 +105,7 @@
 #[cfg(not(no_global_oom_handling))]
 use self::is_zero::IsZero;
 
+#[cfg(not(no_global_oom_handling))]
 mod is_zero;
 
 #[cfg(not(no_global_oom_handling))]
@@ -123,7 +124,7 @@
 mod set_len_on_drop;
 
 #[cfg(not(no_global_oom_handling))]
-use self::in_place_drop::{InPlaceDrop, InPlaceDstBufDrop};
+use self::in_place_drop::{InPlaceDrop, InPlaceDstDataSrcBufDrop};
 
 #[cfg(not(no_global_oom_handling))]
 mod in_place_drop;
@@ -1893,7 +1894,32 @@ pub fn dedup_by<F>(&mut self, mut same_bucket: F)
             return;
         }
 
-        /* INVARIANT: vec.len() > read >= write > write-1 >= 0 */
+        // Check if we ever want to remove anything.
+        // This allows to use copy_non_overlapping in next cycle.
+        // And avoids any memory writes if we don't need to remove anything.
+        let mut first_duplicate_idx: usize = 1;
+        let start = self.as_mut_ptr();
+        while first_duplicate_idx != len {
+            let found_duplicate = unsafe {
+                // SAFETY: first_duplicate always in range [1..len)
+                // Note that we start iteration from 1 so we never overflow.
+                let prev = start.add(first_duplicate_idx.wrapping_sub(1));
+                let current = start.add(first_duplicate_idx);
+                // We explicitly say in docs that references are reversed.
+                same_bucket(&mut *current, &mut *prev)
+            };
+            if found_duplicate {
+                break;
+            }
+            first_duplicate_idx += 1;
+        }
+        // Don't need to remove anything.
+        // We cannot get bigger than len.
+        if first_duplicate_idx == len {
+            return;
+        }
+
+        /* INVARIANT: vec.len() > read > write > write-1 >= 0 */
         struct FillGapOnDrop<'a, T, A: core::alloc::Allocator> {
             /* Offset of the element we want to check if it is duplicate */
             read: usize,
@@ -1939,31 +1965,39 @@ fn drop(&mut self) {
             }
         }
 
-        let mut gap = FillGapOnDrop { read: 1, write: 1, vec: self };
-        let ptr = gap.vec.as_mut_ptr();
-
         /* Drop items while going through Vec, it should be more efficient than
          * doing slice partition_dedup + truncate */
 
+        // Construct gap first and then drop item to avoid memory corruption if `T::drop` panics.
+        let mut gap =
+            FillGapOnDrop { read: first_duplicate_idx + 1, write: first_duplicate_idx, vec: self };
+        unsafe {
+            // SAFETY: we checked that first_duplicate_idx in bounds before.
+            // If drop panics, `gap` would remove this item without drop.
+            ptr::drop_in_place(start.add(first_duplicate_idx));
+        }
+
         /* SAFETY: Because of the invariant, read_ptr, prev_ptr and write_ptr
          * are always in-bounds and read_ptr never aliases prev_ptr */
         unsafe {
             while gap.read < len {
-                let read_ptr = ptr.add(gap.read);
-                let prev_ptr = ptr.add(gap.write.wrapping_sub(1));
+                let read_ptr = start.add(gap.read);
+                let prev_ptr = start.add(gap.write.wrapping_sub(1));
 
-                if same_bucket(&mut *read_ptr, &mut *prev_ptr) {
+                // We explicitly say in docs that references are reversed.
+                let found_duplicate = same_bucket(&mut *read_ptr, &mut *prev_ptr);
+                if found_duplicate {
                     // Increase `gap.read` now since the drop may panic.
                     gap.read += 1;
                     /* We have found duplicate, drop it in-place */
                     ptr::drop_in_place(read_ptr);
                 } else {
-                    let write_ptr = ptr.add(gap.write);
+                    let write_ptr = start.add(gap.write);
 
-                    /* Because `read_ptr` can be equal to `write_ptr`, we either
-                     * have to use `copy` or conditional `copy_nonoverlapping`.
-                     * Looks like the first option is faster. */
-                    ptr::copy(read_ptr, write_ptr, 1);
+                    /* read_ptr cannot be equal to write_ptr because at this point
+                     * we guaranteed to skip at least one element (before loop starts).
+                     */
+                    ptr::copy_nonoverlapping(read_ptr, write_ptr, 1);
 
                     /* We have filled that place, so go further */
                     gap.write += 1;
@@ -2844,6 +2878,7 @@ pub fn from_elem_in<T: Clone, A: Allocator>(elem: T, n: usize, alloc: A) -> Vec<
     <T as SpecFromElem>::from_elem(elem, n, alloc)
 }
 
+#[cfg(not(no_global_oom_handling))]
 trait ExtendFromWithinSpec {
     /// # Safety
     ///
@@ -2852,6 +2887,7 @@ trait ExtendFromWithinSpec {
     unsafe fn spec_extend_from_within(&mut self, src: Range<usize>);
 }
 
+#[cfg(not(no_global_oom_handling))]
 impl<T: Clone, A: Allocator> ExtendFromWithinSpec for Vec<T, A> {
     default unsafe fn spec_extend_from_within(&mut self, src: Range<usize>) {
         // SAFETY:
@@ -2871,6 +2907,7 @@ impl<T: Clone, A: Allocator> ExtendFromWithinSpec for Vec<T, A> {
     }
 }
 
+#[cfg(not(no_global_oom_handling))]
 impl<T: Copy, A: Allocator> ExtendFromWithinSpec for Vec<T, A> {
     unsafe fn spec_extend_from_within(&mut self, src: Range<usize>) {
         let count = src.len();
@@ -2951,7 +2988,7 @@ fn clone_from(&mut self, other: &Self) {
     /// ```
     /// use std::hash::BuildHasher;
     ///
-    /// let b = std::collections::hash_map::RandomState::new();
+    /// let b = std::hash::RandomState::new();
     /// let v: Vec<u8> = vec![0xa8, 0x3c, 0x09];
     /// let s: &[u8] = &[0xa8, 0x3c, 0x09];
     /// assert_eq!(b.hash_one(v), b.hash_one(s));
...
@@ -33,7 +33,7 @@ llvm)
 	fi
 	;;
 rustc)
-	echo 1.75.0
+	echo 1.76.0
 	;;
 bindgen)
 	echo 0.65.1
...