Struct BitPtr
#[repr(C, packed(1))]
pub struct BitPtr<M = Const, T = usize, O = Lsb0> { /* private fields */ }
§Single-Bit Pointer
This structure defines a pointer to exactly one bit in a memory element. It is a
structure, rather than an encoding of a *Bit raw pointer, because it contains
more information than can be packed into such a pointer. Furthermore, it can
uphold the same requirements and guarantees that the rest of the crate demands,
whereäs a raw pointer cannot.
§Original
*bool and NonNull<bool>
§API Differences
Since raw pointers are not sufficient in space or guarantees, and are limited by
not being marked #[fundamental], this is an ordinary struct. Because it
cannot use the *const/*mut distinction that raw pointers and references can,
this encodes mutability in a type parameter instead.
In order to be consistent with the rest of the crate, particularly the
*BitSlice encoding, this enforces that all T element addresses are
well-aligned to T and non-null. While this type is used in the API as an
analogue of raw pointers, it is restricted in value to only contain the values
of valid references to memory, not arbitrary pointers.
§ABI Differences
This is aligned to 1, rather than the processor word, in order to enable some
crate-internal space optimizations.
§Type Parameters
- M: Marks whether the pointer has mutability permissions to the referent memory. Only Mut pointers can be used to create &mut references.
- T: A memory type used to select both the register width and the bus behavior when performing memory accesses.
- O: The ordering of bits within a memory element.
§Usage
This structure is used as the bitvec equivalent to *bool. It is used in all
raw-pointer APIs and provides behavior to emulate raw pointers. It cannot be
directly dereferenced, as it is not a pointer; it can only be transformed back
into higher referential types, or used in functions that accept it.
These pointers can never be null or misaligned.
§Safety
Rust and LLVM do not have a concept of bit-level initialization yet.
Furthermore, the underlying foundational code that this type uses to manipulate
individual bits in memory relies on construction of shared references to
memory, which means that, unlike standard pointers, the T element to which
BitPtr values point must always be already initialized in your program
context.
bitvec is not able to detect or enforce this requirement, and is currently not
able to avoid it. See BitAccess for more information.
Implementations§
impl<M, T, O> BitPtr<M, T, O>
pub const DANGLING: BitPtr<M, T, O> = _
The canonical dangling pointer. This selects the starting bit of the
canonical dangling pointer for T
.
pub fn new(ptr: Address<M, T>, bit: BitIdx<<T as BitStore>::Mem>) -> Result<BitPtr<M, T, O>, MisalignError<T>>
Tries to construct a BitPtr
from a memory location and a bit index.
§Parameters
- ptr: The address of a memory element. Address wraps raw pointers or references, and enforces that they are not null. BitPtr additionally requires that the address be well-aligned to its type; misaligned addresses cause this to return an error.
- bit: The index of the selected bit within *ptr.
§Returns
This returns an error if ptr
is not aligned to T
; otherwise, it
returns a new bit-pointer structure to the given element and bit.
You should typically prefer to use constructors that take directly from
a memory reference or pointer, such as the TryFrom<*T>
implementations, the From<&T> and From<&mut T> implementations, or the
::from_ref(), ::from_mut(), ::from_slice(), and ::from_slice_mut()
functions.
pub unsafe fn new_unchecked(ptr: Address<M, T>, bit: BitIdx<<T as BitStore>::Mem>) -> BitPtr<M, T, O>
Constructs a BitPtr
from an address and head index, without checking
the address for validity.
§Parameters
- addr: The memory address to use in the bit-pointer. See the Safety section.
- head: The index of the bit in *addr that this bit-pointer selects.
§Returns
A new bit-pointer composed of the parameters. No validity checking is performed.
§Safety
The Address
type imposes a non-null requirement. BitPtr
additionally
requires that addr
is well-aligned for T
, and presumes that the
caller has ensured this with bv_ptr::check_alignment
. If this is
not the case, then the program is incorrect, and subsequent behavior is
not specified.
pub fn address(self) -> Address<M, T>
Gets the address of the base storage element.
pub fn bit(self) -> BitIdx<<T as BitStore>::Mem>
Gets the BitIdx
that selects the bit within the memory element.
impl<T, O> BitPtr<Const, T, O>
pub fn from_ref(elem: &T) -> BitPtr<Const, T, O>
Constructs a BitPtr
to the zeroth bit in a single element.
pub fn from_slice(slice: &[T]) -> BitPtr<Const, T, O>
Constructs a BitPtr
to the zeroth bit in the zeroth element of a
slice.
This method is distinct from Self::from_ref(&elem[0]), because it
ensures that the returned bit-pointer has provenance over the entire
slice. Indexing within a slice narrows the provenance range, and makes
departure from the subslice, even within the original slice, illegal.
impl<T, O> BitPtr<Mut, T, O>
pub fn from_mut(elem: &mut T) -> BitPtr<Mut, T, O>
Constructs a mutable BitPtr
to the zeroth bit in a single element.
pub fn from_mut_slice(slice: &mut [T]) -> BitPtr<Mut, T, O>
Constructs a BitPtr
to the zeroth bit in the zeroth element of a
mutable slice.
This method is distinct from Self::from_mut(&mut elem[0]), because it
ensures that the returned bit-pointer has provenance over the entire
slice. Indexing within a slice narrows the provenance range, and makes
departure from the subslice, even within the original slice, illegal.
pub fn from_slice_mut(slice: &mut [T]) -> BitPtr<Mut, T, O>
Constructs a mutable BitPtr
to the zeroth bit in the zeroth element of
a slice.
This method is distinct from Self::from_mut(&mut elem[0]), because it
ensures that the returned bit-pointer has provenance over the entire
slice. Indexing within a slice narrows the provenance range, and makes
departure from the subslice, even within the original slice, illegal.
impl<M, T, O> BitPtr<M, T, O>
Port of the *bool inherent API.
pub fn is_null(self) -> bool
👎Deprecated: BitPtr is never null
Tests if a bit-pointer is the null value.
This is always false, as a BitPtr is a NonNull internally. Use
Option<BitPtr> to express the potential for a null pointer.
§Original
pointer::is_null
pub fn cast<U>(self) -> BitPtr<M, U, O> where U: BitStore
Casts to a BitPtr
with a different storage parameter.
This is not free! In order to maintain value integrity, it encodes a
BitSpan
encoded descriptor with its value, casts that, then decodes
into a BitPtr
of the target type. If T
and U
have different
::Mem
associated types, then this may change the selected bit in
memory. This is an unavoidable cost of the addressing and encoding
schemes.
§Original
pointer::cast
pub fn to_raw_parts(self) -> (Address<M, T>, BitIdx<<T as BitStore>::Mem>)
Decomposes a bit-pointer into its address and head-index components.
§Original
pointer::to_raw_parts
§API Differences
The original method is unstable as of 1.54.0; however, because BitPtr
already has a similar API, the name is optimistically stabilized here.
Prefer .raw_parts()
until the original inherent stabilizes.
pub unsafe fn as_ref<'a>(self) -> Option<BitRef<'a, Const, T, O>>
Produces a proxy reference to the referent bit.
Because BitPtr
guarantees that it is non-null and well-aligned, this
never returns None. However, this is still unsafe to call on any
bit-pointers created from conjured values rather than known references.
§Original
pointer::as_ref
§API Differences
This produces a proxy type rather than a true reference. The proxy
implements Deref<Target = bool>, and can be converted to &bool with
a reborrow &*.
§Safety
Since BitPtr
does not permit null or misaligned pointers, this method
will always dereference the pointer in order to create the proxy. As
such, you must ensure the following conditions are met:
- the pointer must be dereferenceable as defined in the standard library documentation
- the pointer must point to an initialized instance of T
- you must ensure that no other pointer will race to modify the referent location while this call is reading from memory to produce the proxy
§Examples
```rust
use bitvec::prelude::*;

let data = 1u8;
let ptr = BitPtr::<_, _, Lsb0>::from_ref(&data);
let val = unsafe { ptr.as_ref() }.unwrap();
assert!(*val);
```
pub unsafe fn offset(self, count: isize) -> BitPtr<M, T, O>
Creates a new bit-pointer at a specified offset from the original.
count is in units of bits.
§Original
pointer::offset
§Safety
BitPtr
is implemented with Rust raw pointers internally, and is
subject to all of Rust’s rules about provenance and permission tracking.
You must abide by the safety rules established in the original method,
to which this internally delegates.
Additionally, bitvec
imposes its own rules: while Rust cannot observe
provenance beyond an element or byte level, bitvec
demands that
&mut BitSlice
have exclusive view over all bits it observes. You must
not produce a bit-pointer that departs a BitSlice
region and intrudes
on any &mut BitSlice
’s handle, and you must not produce a
write-capable bit-pointer that intrudes on a &BitSlice
handle that
expects its contents to be immutable.
Note that it is illegal to construct a bit-pointer that invalidates
any of these rules. If you wish to defer safety-checking to the point of
dereferencing, and allow the temporary construction but not
dereference of illegal BitPtr
s, use .wrapping_offset()
instead.
§Examples
```rust
use bitvec::prelude::*;

let data = 5u8;
let ptr = BitPtr::<_, _, Lsb0>::from_ref(&data);
unsafe {
    assert!(ptr.read());
    assert!(!ptr.offset(1).read());
    assert!(ptr.offset(2).read());
}
```
pub fn wrapping_offset(self, count: isize) -> BitPtr<M, T, O>
Creates a new bit-pointer at a specified offset from the original.
count is in units of bits.
§Original
pointer::wrapping_offset
§API Differences
bitvec
makes it explicitly illegal to wrap a pointer around the high
end of the address space, because it is incapable of representing a null
pointer.
However, <*T>::wrapping_offset
has additional properties as a result
of its tolerance for wrapping the address space: it tolerates departing
a provenance region, and is not unsafe to use to create a bit-pointer
that is outside the bounds of its original provenance.
§Safety
This function is safe to use because the bit-pointers it creates defer their provenance checks until the point of dereference. As such, you can safely use this to perform arbitrary pointer arithmetic that Rust considers illegal in ordinary arithmetic, as long as you do not dereference the bit-pointer until it has been brought in bounds of the originating provenance region.
This means that, to the Rust rule engine,
let z = x.wrapping_add(y as usize).wrapping_sub(x as usize); is not
equivalent to y, but z is safe to construct, and
z.wrapping_add(x as usize).wrapping_sub(y as usize) produces a
bit-pointer that is equivalent to x.
See the documentation of the original method for more details about provenance regions, and the distinctions that the optimizer makes about them.
§Examples
```rust
use bitvec::prelude::*;

let data = 0u32;
let mut ptr = BitPtr::<_, _, Lsb0>::from_ref(&data);
let end = ptr.wrapping_offset(32);
while ptr < end {
    println!("{}", unsafe { ptr.read() });
    ptr = ptr.wrapping_offset(3);
}
```
pub unsafe fn offset_from<U>(self, origin: BitPtr<M, U, O>) -> isize
Calculates the distance (in bits) between two bit-pointers.
This method is the inverse of .offset().
§Original
pointer::offset_from
§API Differences
The base pointer may have a different BitStore
type parameter, as long
as they share an underlying memory type. This is necessary in order to
accommodate aliasing markers introduced between when an origin pointer
was taken and when self
compared against it.
§Safety
Both self
and origin
must be drawn from the same provenance
region. This means that they must be created from the same Rust
allocation, whether with let
or the allocator API, and must be in the
(inclusive) range base ..= base + len
. The first bit past the end of
a region can be addressed, just not dereferenced.
See the original <*T>::offset_from
for more details on region safety.
§Examples
```rust
use bitvec::prelude::*;

let data = 0u32;
let base = BitPtr::<_, _, Lsb0>::from_ref(&data);
let low = unsafe { base.add(10) };
let high = unsafe { low.add(15) };
unsafe {
    assert_eq!(high.offset_from(low), 15);
    assert_eq!(low.offset_from(high), -15);
    assert_eq!(low.offset(15), high);
    assert_eq!(high.offset(-15), low);
}
```
While this method is safe to construct bit-pointers that depart a provenance region, it remains illegal to dereference those pointers!
This usage is incorrect, and a program that contains it is not well-formed.
```rust
use bitvec::prelude::*;

let a = 0u8;
let b = !0u8;
let a_ptr = BitPtr::<_, _, Lsb0>::from_ref(&a);
let b_ptr = BitPtr::<_, _, Lsb0>::from_ref(&b);
let diff = (b_ptr.pointer() as isize)
    .wrapping_sub(a_ptr.pointer() as isize)
    // Remember: raw pointers are byte-stepped,
    // but bit-pointers are bit-stepped.
    .wrapping_mul(8);
// This pointer to `b` has `a`’s provenance:
let b_ptr_2 = a_ptr.wrapping_offset(diff);
// They are *arithmetically* equal:
assert_eq!(b_ptr, b_ptr_2);
// But it is still undefined behavior to cross provenances!
assert_eq!(0, unsafe { b_ptr_2.offset_from(b_ptr) });
```
pub unsafe fn add(self, count: usize) -> BitPtr<M, T, O>
Adjusts a bit-pointer upwards in memory. This is equivalent to
.offset(count as isize). count is in units of bits.
pub unsafe fn sub(self, count: usize) -> BitPtr<M, T, O>
Adjusts a bit-pointer downwards in memory. This is equivalent to
.offset((count as isize).wrapping_neg()). count is in units of bits.
pub fn wrapping_add(self, count: usize) -> BitPtr<M, T, O>
Adjusts a bit-pointer upwards in memory, using wrapping semantics. This
is equivalent to .wrapping_offset(count as isize).
count is in units of bits.
§Original
pointer::wrapping_add
§Safety
See .wrapping_offset().
pub fn wrapping_sub(self, count: usize) -> BitPtr<M, T, O>
Adjusts a bit-pointer downwards in memory, using wrapping semantics.
This is equivalent to .wrapping_offset((count as isize).wrapping_neg()).
count is in units of bits.
§Original
pointer::wrapping_sub
§Safety
See .wrapping_offset().
pub unsafe fn read_volatile(self) -> bool
Reads the bit from *self using a volatile load.
Prefer using a crate such as voladdress
to manage volatile I/O
and use bitvec
only on the local objects it provides. Individual I/O
operations for individual bits are likely not the behavior you want.
§Original
pointer::read_volatile
§Safety
See ptr::read_volatile.
pub unsafe fn read_unaligned(self) -> bool
👎Deprecated: BitPtr does not have unaligned addresses
Reads the bit from *self, tolerating unaligned addresses. Since a
BitPtr never holds an unaligned address, this behaves identically to an
ordinary read.
pub unsafe fn copy_to<T2, O2>(self, dest: BitPtr<Mut, T2, O2>, count: usize)
Copies count bits from self to dest. The source and destination regions
are free to overlap.
§Original
pointer::copy_to
§Safety
See ptr::copy.
pub unsafe fn copy_to_nonoverlapping<T2, O2>(self, dest: BitPtr<Mut, T2, O2>, count: usize)
Copies count bits from self to dest. The source and destination
may not overlap.
§Original
pointer::copy_to_nonoverlapping
§Safety
See ptr::copy_nonoverlapping.
pub fn align_offset(self, align: usize) -> usize
Computes the offset (in bits) that needs to be applied to the bit-pointer in order to make it aligned to the given byte alignment.
“Alignment” here means that the bit-pointer selects the starting bit of a memory location whose address satisfies the requested alignment.
align is measured in bytes. If you wish to align your bit-pointer
to a specific fraction (½, ¼, or ⅛ of one byte), please file an issue
and I will work on adding this functionality.
§Original
pointer::align_offset
§Notes
If the base-element address of the bit-pointer is already aligned to
align
, then this will return the bit-offset required to select the
first bit of the successor element.
If it is not possible to align the bit-pointer, then the implementation
returns usize::MAX
.
The return value is measured in bits, not T
elements or bytes. The
only thing you can do with it is pass it into .add()
or
.wrapping_add()
.
Note from the standard library: It is permissible for the implementation
to always return usize::MAX
. Only your algorithm’s performance can
depend on getting a usable offset here; it must be correct independently
of this function providing a useful value.
§Safety
There are no guarantees whatsoëver that offsetting the bit-pointer will not overflow or go beyond the allocation that the bit-pointer selects. It is up to the caller to ensure that the returned offset is correct in all terms other than alignment.
§Panics
This method panics if align
is not a power of two.
§Examples
```rust
use bitvec::prelude::*;

let data = [0u8; 3];
let ptr = BitPtr::<_, _, Lsb0>::from_slice(&data);
let ptr = unsafe { ptr.add(2) };
let count = ptr.align_offset(2);
assert!(count >= 6);
```
impl<T, O> BitPtr<Mut, T, O>
Port of the *mut bool inherent API.
pub unsafe fn as_mut<'a>(self) -> Option<BitRef<'a, Mut, T, O>>
Produces a proxy reference to the referent bit.
Because BitPtr
guarantees that it is non-null and well-aligned, this
never returns None. However, this is still unsafe to call on any
bit-pointers created from conjured values rather than known references.
§Original
pointer::as_mut
§API Differences
This produces a proxy type rather than a true reference. The proxy
implements DerefMut<Target = bool>, and can be converted to
&mut bool with a reborrow &mut *.
Writes to the proxy are not reflected in the proxied location until the
proxy is destroyed, either through Drop
or its .commit()
method.
§Safety
Since BitPtr
does not permit null or misaligned pointers, this method
will always dereference the pointer in order to create the proxy. As
such, you must ensure the following conditions are met:
- the pointer must be dereferenceable as defined in the standard library documentation
- the pointer must point to an initialized instance of T
- you must ensure that no other pointer will race to modify the referent location while this call is reading from memory to produce the proxy
- you must ensure that no other bitvec handle targets the referent bit
§Examples
```rust
use bitvec::prelude::*;

let mut data = 0u8;
let ptr = BitPtr::<_, _, Lsb0>::from_mut(&mut data);
let mut val = unsafe { ptr.as_mut() }.unwrap();
assert!(!*val);
*val = true;
assert!(*val);
```
pub unsafe fn copy_from<T2, O2>(self, src: BitPtr<Const, T2, O2>, count: usize)
Copies count bits from the region starting at src to the region
starting at self.
The regions are free to overlap; the implementation will detect overlap and correctly avoid it.
Note: this has the opposite argument order from ptr::copy: self
is the destination, not the source.
§Original
pointer::copy_from
§Safety
See ptr::copy.
pub unsafe fn copy_from_nonoverlapping<T2, O2>(self, src: BitPtr<Const, T2, O2>, count: usize)
Copies count bits from the region starting at src to the region
starting at self.
Unlike .copy_from()
, the two regions may not overlap; this method
does not attempt to detect overlap and thus may have a slight
performance boost over the overlap-handling .copy_from()
.
Note: this has the opposite argument order from
ptr::copy_nonoverlapping: self is the destination, not the source.
§Original
pointer::copy_from_nonoverlapping
§Safety
See ptr::copy_nonoverlapping.
pub fn drop_in_place(self)
👎Deprecated: this has no effect, and should not be called
Runs the destructor of the referent value.
bool has no destructor; this function does nothing.
§Original
pointer::drop_in_place
§Safety
See ptr::drop_in_place.
pub unsafe fn write_volatile(self, value: bool)
Writes a new bit using volatile I/O operations.
Because processors do not generally have single-bit read or write instructions, this must perform a volatile read of the entire memory location, perform the write locally, then perform another volatile write to the entire location. These three steps are guaranteed to be sequential with respect to each other, but are not guaranteed to be atomic.
Volatile operations are intended to act on I/O memory, and are only guaranteed not to be elided or reördered by the compiler across other I/O operations.
You should not use bitvec to act on volatile memory. You should use a
crate specialized for volatile I/O work, such as voladdress, and use it
to explicitly manage the I/O and ask it to perform bitvec work only on
the local snapshot of a volatile location.
§Original
pointer::write_volatile
§Safety
See ptr::write_volatile.
pub unsafe fn write_unaligned(self, value: bool)
👎Deprecated: BitPtr does not have unaligned addresses
Writes a bit into memory, tolerating unaligned addresses.
BitPtr does not have unaligned addresses. BitPtr itself is capable
of operating on misaligned addresses, but elects to disallow use of them
in keeping with the rest of bitvec’s requirements.
§Original
pointer::write_unaligned
§Safety
See ptr::write_unaligned.
Trait Implementations§
impl<M, T, O> Ord for BitPtr<M, T, O>
impl<M1, M2, T1, T2, O> PartialOrd<BitPtr<M2, T2, O>> for BitPtr<M1, T1, O>
impl<M, T, O> RangeBounds<BitPtr<M, T, O>> for BitPtrRange<M, T, O>
impl<M, T, O> Copy for BitPtr<M, T, O>
impl<M, T, O> Eq for BitPtr<M, T, O>
Auto Trait Implementations§
impl<M, T, O> Freeze for BitPtr<M, T, O> where M: Freeze
impl<M, T, O> RefUnwindSafe for BitPtr<M, T, O>
impl<M = Const, T = usize, O = Lsb0> !Send for BitPtr<M, T, O>
impl<M = Const, T = usize, O = Lsb0> !Sync for BitPtr<M, T, O>
impl<M, T, O> Unpin for BitPtr<M, T, O>
impl<M, T, O> UnwindSafe for BitPtr<M, T, O>