Struct BitVec
#[repr(C)]
pub struct BitVec<T = usize, O = Lsb0> { /* private fields */ }
§Bit-Precision Dynamic Array
This is an analogue to Vec<bool> that stores its data using a compaction scheme to ensure that each bool takes exactly one bit of memory. It is similar to the C++ type std::vector<bool>, but uses bitvec’s type parameter system to provide more detailed control over the in-memory representation.
This is always a heap allocation. If you know your sizes at compile-time, you may prefer to use BitArray instead, which is able to store its data as an immediate value rather than through an indirection.
§Documentation Practices
BitVec exactly replicates the API of the standard-library Vec type, including inherent methods, trait implementations, and relationships with the BitSlice slice analogue.
Items that are either direct ports, or renamed variants, of standard-library APIs will have a ## Original section that links to their standard-library documentation. Items that map to standard-library APIs but have a different API signature will also have an ## API Differences section that describes what the difference is, why it exists, and how to transform your code to fit it. For example:
§Original
§API Differences
As with all bitvec data structures, this takes two type parameters <T, O> that govern the bit-vector’s storage representation in the underlying memory, and does not take a type parameter to govern what data type it stores (always bool).
§Suggested Uses
BitVec
is able to act as a compacted usize => bool
dictionary, and is useful
for holding large collections of truthiness. For instance, you might replace a
Vec<Option<T>>
with a (BitVec, Vec<MaybeUninit<T>>
) to cut down on the
resident size of the discriminant.
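A minimal sketch of that pairing; the SparseSlots type and its invariant here are hypothetical, not part of bitvec:
use core::mem::MaybeUninit;
use bitvec::prelude::*;

// Hypothetical pairing: `present[i]` records whether `slots[i]` has been written.
struct SparseSlots<T> {
    present: BitVec,
    slots: Vec<MaybeUninit<T>>,
}

impl<T> SparseSlots<T> {
    fn get(&self, idx: usize) -> Option<&T> {
        if *self.present.get(idx)? {
            // SAFETY: `present[idx]` is only ever set after `slots[idx]` is written.
            Some(unsafe { self.slots[idx].assume_init_ref() })
        } else {
            None
        }
    }
}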
Through the BitField trait, BitVec is also able to act as a transport buffer for data that can be marshalled as integers. Serializing data to a narrower compacted form, or deserializing data from that form, can be easily accomplished by viewing subsets of a bit-vector and storing integers into, or loading integers out of, that subset. As an example, transporting four ten-bit integers can be done in five bytes instead of eight, like so:
use bitvec::prelude::*;
let mut bv = bitvec![u8, Msb0; 0; 40];
bv[0 .. 10].store::<u16>(0x3A8);
bv[10 .. 20].store::<u16>(0x2F9);
bv[20 .. 30].store::<u16>(0x154);
bv[30 .. 40].store::<u16>(0x06D);
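Loading reverses the process; a brief self-contained sketch of a store/load round-trip over one ten-bit field:
use bitvec::prelude::*;
let mut bv = bitvec![u8, Msb0; 0; 40];
bv[0 .. 10].store::<u16>(0x3A8);
// Loading reads the same ten-bit field back out.
assert_eq!(bv[0 .. 10].load::<u16>(), 0x3A8);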
If you wish to use bit-field memory representations as struct fields rather than a transport buffer, consider BitArray instead: that type keeps its data as an immediate, and is more likely to act like a C struct with bitfields.
§Examples
BitVec has exactly the same API as Vec<bool>, and even extends it with some of Vec<T>’s behaviors. As a brief tour:
§Push and Pop
use bitvec::prelude::*;
let mut bv: BitVec = BitVec::new();
bv.push(false);
bv.push(true);
assert_eq!(bv.len(), 2);
assert_eq!(bv[0], false);
assert_eq!(bv.pop(), Some(true));
assert_eq!(bv.len(), 1);
§Writing Into a Bit-Vector
The only Vec<bool> API that BitVec does not implement is IndexMut, because that is not yet possible. Instead, .get_mut() can produce a proxy reference, or .set() can take an index and a value to write.
use bitvec::prelude::*;
let mut bv: BitVec = BitVec::new();
bv.push(false);
*bv.get_mut(0).unwrap() = true;
assert!(bv[0]);
bv.set(0, false);
assert!(!bv[0]);
§Macro Construction
Like Vec, BitVec also has a macro constructor: bitvec! takes a sequence of bit expressions and encodes them at compile-time into a suitable buffer. At run-time, this buffer is copied into the heap as a BitVec with no extra cost beyond the allocation.
use bitvec::prelude::*;
let bv = bitvec![0; 10];
let bv = bitvec![0, 1, 0, 0, 1];
let bv = bitvec![u16, Msb0; 1; 20];
§Borrowing as BitSlice
BitVec lends its buffer as a BitSlice, so you can freely give permission to view or modify the contained data without affecting the allocation:
use bitvec::prelude::*;
fn read_bitslice(bits: &BitSlice) {
// …
}
let bv = bitvec![0; 30];
read_bitslice(&bv);
let bs: &BitSlice = &bv;
§Other Notes
The default type parameters are <usize, Lsb0>. This is the most performant pair when operating on memory, but likely does not match your needs if you are using BitVec to represent a transport buffer. See the user guide for more details on how the type parameters govern memory representation.
Applications, or single-purpose libraries, built atop bitvec will likely want to create a type alias with specific type parameters for their usage. bitvec is fully generic over the ordering/storage types, but this generality is rarely useful for client crates to propagate. <usize, Lsb0> is fastest; <u8, Msb0> matches what most debugger views of memory will print, and the rest are documented in the guide.
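A sketch of such an alias (the Transport name is purely illustrative):
use bitvec::prelude::*;

// Fix the representation once, project-wide.
type Transport = BitVec<u8, Msb0>;

let frame: Transport = Transport::repeat(false, 48);
assert_eq!(frame.len(), 48);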
§Safety
Unlike the other data structures in this crate, BitVec is uniquely able to hold uninitialized memory and produce pointers into it. As described in the BitAccess documentation, this crate is categorically unable to operate on uninitialized memory in any way. In particular, you may not allocate a buffer using ::with_capacity(), then use .as_mut_bitptr() to create a pointer used to write into the uninitialized buffer.
You must always initialize the buffer contents of a BitVec before attempting to view its contents. You can accomplish this through safe APIs such as .push(), .extend(), or .reserve(). These are all guaranteed to safely initialize the memory elements underlying the BitVec buffer without incurring undefined behavior in their operation.
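A minimal sketch of that safe pattern, initializing every bit through .push() before reading:
use bitvec::prelude::*;

let mut bv: BitVec = BitVec::with_capacity(64);
// The allocation exists, but no bits are live until they are pushed.
for _ in 0 .. 64 {
    bv.push(false);
}
// Every viewed bit has now been initialized through a safe API.
assert!(bv.not_any());
assert_eq!(bv.len(), 64);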
§Implementations
§impl<T, O> BitVec<T, O>
Port of the Vec<T> inherent API.
pub fn new() -> BitVec<T, O>
Available on crate feature alloc only.
Constructs a new, empty, bit-vector.
This does not allocate until bits are .push()ed into it, or space is explicitly .reserve()d.
§Original
§Examples
use bitvec::prelude::*;
let bv = BitVec::<u8, Msb0>::new();
assert!(bv.is_empty());
pub fn with_capacity(capacity: usize) -> BitVec<T, O>
Available on crate feature alloc only.
Allocates a new, empty, bit-vector with space for at least capacity bits before reallocating.
§Original
§Panics
This panics if the requested capacity is longer than what the bit-vector can represent. See BitSlice::MAX_BITS.
§Examples
use bitvec::prelude::*;
let mut bv: BitVec = BitVec::with_capacity(128);
assert!(bv.is_empty());
assert!(bv.capacity() >= 128);
for i in 0 .. 128 {
bv.push(i & 0xC0 == i);
}
assert_eq!(bv.len(), 128);
assert!(bv.capacity() >= 128);
bv.push(false);
assert_eq!(bv.len(), 129);
assert!(bv.capacity() >= 129);
pub unsafe fn from_raw_parts(
    bitptr: BitPtr<Mut, T, O>,
    length: usize,
    capacity: usize,
) -> BitVec<T, O>
Available on crate feature alloc only.
Constructs a bit-vector handle from its constituent fields.
§Original
§Safety
The only acceptable argument values for this function are those that were previously produced by calling .into_raw_parts(). Furthermore, you may only call this at most once on any set of arguments: using the same arguments in more than one call to this function will result in a double-free or use-after-free error.
Attempting to conjure your own values and pass them into this function will break the allocator state.
§Examples
use bitvec::prelude::*;
let bv = bitvec![0, 1, 0, 0, 1];
let (bitptr, len, capa) = bv.into_raw_parts();
let bv2 = unsafe {
BitVec::from_raw_parts(bitptr, len, capa)
};
assert_eq!(bv2, bits![0, 1, 0, 0, 1]);
pub fn into_raw_parts(self) -> (BitPtr<Mut, T, O>, usize, usize)
Available on crate feature alloc only.
Decomposes a bit-vector into its constituent member fields.
This disarms the destructor. In order to prevent a memory leak, you must pass these exact values back into ::from_raw_parts().
§Original
§API Differences
This method is still unstable as of 1.54. It is provided here as a convenience, under the expectation that the standard-library method will stabilize as-is.
pub fn capacity(&self) -> usize
Available on crate feature alloc only.
Gets the allocation capacity, measured in bits.
This counts how many total bits the bit-vector can store before it must perform a reällocation to acquire more memory.
If the capacity is not a multiple of 8, you should call .force_align().
§Original
§Examples
use bitvec::prelude::*;
let bv = bitvec![0, 1, 0, 0, 1];
assert!(bv.capacity() >= 5);
pub fn reserve(&mut self, additional: usize)
Available on crate feature alloc only.
Ensures that the bit-vector has allocation capacity for at least additional more bits to be appended to it.
For convenience, this method guarantees that the underlying memory for self[.. self.len() + additional] is initialized, and may be safely accessed directly without requiring use of .push() or .extend() to initialize it.
Newly-allocated memory is always initialized to zero. It is still dead until the bit-vector is grown (by .push(), .extend(), or .set_len()), but direct access will not trigger UB.
§Original
§Panics
This panics if the new capacity exceeds the bit-vector’s maximum.
§Examples
use bitvec::prelude::*;
let mut bv: BitVec = BitVec::with_capacity(80);
assert!(bv.capacity() >= 80);
bv.reserve(800);
assert!(bv.capacity() >= 800);
pub fn reserve_exact(&mut self, additional: usize)
Available on crate feature alloc only.
Ensures that the bit-vector has allocation capacity for at least additional more bits to be appended to it.
This differs from .reserve() by requesting that the allocator provide the minimum capacity necessary, rather than a potentially larger amount that the allocator may find more convenient.
Remember that this is a request: the allocator provides what it provides, and you cannot rely on the new capacity to be exactly minimal. You should still prefer .reserve(), especially if you expect to append to the bit-vector in the future.
§Original
§Panics
This panics if the new capacity exceeds the bit-vector’s maximum.
§Examples
use bitvec::prelude::*;
let mut bv: BitVec = BitVec::with_capacity(80);
assert!(bv.capacity() >= 80);
bv.reserve_exact(800);
assert!(bv.capacity() >= 800);
pub fn shrink_to_fit(&mut self)
Available on crate feature alloc only.
Releases excess capacity back to the allocator.
Like .reserve_exact(), this is a request to the allocator, not a command. The allocator may reclaim excess memory or may not.
§Original
§Examples
use bitvec::prelude::*;
let mut bv: BitVec = BitVec::with_capacity(1000);
bv.push(true);
bv.shrink_to_fit();
assert!(bv.capacity() >= 1);
pub fn into_boxed_slice(self) -> BitBox<T, O>
Available on crate feature alloc only.
pub fn truncate(&mut self, new_len: usize)
Available on crate feature alloc only.
Shortens the bit-vector, keeping the first new_len bits and discarding the rest.
If new_len is greater than the bit-vector’s current length, this has no effect.
The .drain() method can emulate .truncate(), except that it yields the excess bits rather than discarding them.
Note that this has no effect on the allocated capacity of the bit-vector, nor does it erase truncated memory. Bits in the allocated memory that are outside of the .as_bitslice() view are always considered to have initialized, but unspecified, values, and you cannot rely on them to be zero.
§Original
§Examples
Truncating a five-bit vector to two bits:
use bitvec::prelude::*;
let mut bv = bitvec![0, 1, 0, 0, 1];
bv.truncate(2);
assert_eq!(bv.len(), 2);
assert!(bv.as_raw_slice()[0].count_ones() >= 2);
No truncation occurs when new_len is greater than the bit-vector’s current length; a brief sketch of that behavior (not part of the upstream example) follows:
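use bitvec::prelude::*;
let mut bv = bitvec![0; 5];
// Requesting a longer length than the current five bits is a no-op.
bv.truncate(10);
assert_eq!(bv.len(), 5);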
pub fn as_slice(&self) -> &BitSlice<T, O>
Deprecated: use .as_bitslice() instead. Available on crate feature alloc only.
pub fn as_mut_slice(&mut self) -> &mut BitSlice<T, O>
Deprecated: use .as_mut_bitslice() instead. Available on crate feature alloc only.
pub fn as_ptr(&self) -> BitPtr<Const, T, O>
Deprecated: use .as_bitptr() instead. Available on crate feature alloc only.
pub fn as_mut_ptr(&mut self) -> BitPtr<Mut, T, O>
Deprecated: use .as_mut_bitptr() instead. Available on crate feature alloc only.
pub unsafe fn set_len(&mut self, new_len: usize)
Available on crate feature alloc only.
Resizes a bit-vector to a new length.
§Original
§Safety
NOT ALL MEMORY IN THE ALLOCATION IS INITIALIZED!
Memory in a bit-vector’s allocation is only initialized when the bit-vector grows into it normally (through .push() or one of the various .extend*() methods). Setting the length to a value beyond what was previously initialized, but still within the allocation, is undefined behavior.
The caller is responsible for ensuring that all memory up to (but not including) the new length has already been initialized.
§Panics
This panics if new_len exceeds the capacity as reported by .capacity().
§Examples
use bitvec::prelude::*;
let mut bv = bitvec![0, 1, 0, 0, 1];
unsafe {
// The default storage type, `usize`, is at least 32 bits.
bv.set_len(32);
}
assert_eq!(bv, bits![
0, 1, 0, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
]);
// `BitVec` guarantees that newly-initialized memory is zeroed.
pub fn swap_remove(&mut self, index: usize) -> bool
Available on crate feature alloc only.
Takes a bit out of the bit-vector.
The empty slot is filled with the last bit in the bit-vector, rather than shunting index + 1 .. self.len() down by one.
§Original
§Panics
This panics if index is out of bounds (self.len() or greater).
§Examples
use bitvec::prelude::*;
let mut bv = bitvec![0, 1, 0, 0, 1];
assert!(!bv.swap_remove(2));
assert_eq!(bv, bits![0, 1, 1, 0]);
pub fn insert(&mut self, index: usize, value: bool)
Available on crate feature alloc only.
pub fn remove(&mut self, index: usize) -> bool
Available on crate feature alloc only.
pub fn retain<F>(&mut self, func: F)
Available on crate feature alloc only.
Retains only the bits that the predicate allows.
Bits are deleted from the vector when the predicate function returns false. This function is linear in self.len().
§Original
§API Differences
The predicate receives both the index of the bit as well as its value, in order to allow the predicate to have more than one bit of keep/discard information.
§Examples
use bitvec::prelude::*;
let mut bv = bitvec![0, 1, 0, 0, 1];
bv.retain(|idx, _| idx % 2 == 0);
assert_eq!(bv, bits![0, 0, 1]);
pub fn append<T2, O2>(&mut self, other: &mut BitVec<T2, O2>)
Available on crate feature alloc only.
Moves all the bits out of other into the back of self.
The other bit-vector is emptied after this occurs.
§Original
§API Differences
This permits other to have different type parameters than self, and does not require that it be literally Self.
§Panics
This panics if self.len() + other.len() exceeds the maximum capacity of a bit-vector.
§Examples
use bitvec::prelude::*;
let mut bv1 = bitvec![u16, Msb0; 0; 10];
let mut bv2 = bitvec![u32, Lsb0; 1; 10];
bv1.append(&mut bv2);
assert_eq!(bv1.count_ones(), 10);
assert_eq!(bv1.count_zeros(), 10);
assert!(bv2.is_empty());
pub fn drain<R>(&mut self, range: R) -> Drain<'_, T, O>
where
    R: RangeBounds<usize>,
Available on crate feature alloc only.
Iterates over a portion of the bit-vector, removing all yielded bits from it.
When the iterator drops, all bits in its coverage are removed from self, even if the iterator did not yield them. If the iterator is leaked or otherwise forgotten, and its destructor never runs, then the number of un-yielded bits removed from the bit-vector is not specified.
§Original
§Panics
This panics if range departs 0 .. self.len().
§Examples
use bitvec::prelude::*;
let mut bv = bitvec![0, 1, 0, 0, 1];
let bv2 = bv.drain(1 ..= 3).collect::<BitVec>();
assert_eq!(bv, bits![0, 1]);
assert_eq!(bv2, bits![1, 0, 0]);
// A full range clears the bit-vector.
bv.drain(..);
assert!(bv.is_empty());
pub fn clear(&mut self)
Available on crate feature alloc only.
pub fn is_empty(&self) -> bool
Available on crate feature alloc only.
Tests if the bit-vector is empty.
This is equivalent to BitSlice::is_empty; it is provided as an inherent method here rather than relying on Deref forwarding so that you can write BitVec::is_empty as a named function item.
§Original
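A small sketch of using the named function item (the surrounding array is illustrative):
use bitvec::prelude::*;

let bvs = [bitvec![0, 1], BitVec::new()];
// `BitVec::is_empty` can be passed by name wherever a `fn(&BitVec) -> bool` is wanted.
let empties = bvs.iter().map(BitVec::is_empty).filter(|e| *e).count();
assert_eq!(empties, 1);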
pub fn split_off(&mut self, at: usize) -> BitVec<T, O>
Available on crate feature alloc only.
pub fn resize_with<F>(&mut self, new_len: usize, func: F)
Available on crate feature alloc only.
Resizes the bit-vector to a new length, using a function to produce each inserted bit.
If new_len is less than self.len(), this is a truncate operation; if it is greater, then self is extended by repeatedly pushing func().
§Original
§API Differences
The generator function receives the index into which its bit will be placed.
§Examples
use bitvec::prelude::*;
let mut bv = bitvec![1; 2];
bv.resize_with(5, |idx| idx % 2 == 1);
assert_eq!(bv, bits![1, 1, 0, 1, 0]);
pub fn leak<'a>(self) -> &'a mut BitSlice<T, O>
Available on crate feature alloc only.
Destroys the BitVec handle without destroying the bit-vector allocation. The allocation is returned as an &mut BitSlice that lasts for the remaining program lifetime.
You may call BitBox::from_raw on this slice handle exactly once in order to reap the allocation before program exit. That function takes a mutable pointer, not a mutable reference, so you must ensure that the returned reference is never used again after restoring the allocation handle.
§Original
§Examples
use bitvec::prelude::*;
let bv = bitvec![0, 0, 1];
let static_bits: &'static mut BitSlice = bv.leak();
static_bits.set(0, true);
assert_eq!(static_bits, bits![1, 0, 1]);
let bb = unsafe { BitBox::from_raw(static_bits) };
// static_bits may no longer be used.
drop(bb); // explicitly reap memory before program exit
pub fn resize(&mut self, new_len: usize, value: bool)
Available on crate feature alloc only.
pub fn extend_from_slice<T2, O2>(&mut self, other: &BitSlice<T2, O2>)
Deprecated: use .extend_from_bitslice() or .extend_from_raw_slice() instead. Available on crate feature alloc only.
pub fn extend_from_within<R>(&mut self, src: R)
where
    R: RangeExt<usize>,
Available on crate feature alloc only.
Extends self by copying an internal range of its bit-slice as the region to append.
§Original
§Panics
This panics if src is not within 0 .. self.len().
§Examples
use bitvec::prelude::*;
let mut bv = bitvec![0, 1, 0, 0, 1];
bv.extend_from_within(1 .. 4);
assert_eq!(bv, bits![0, 1, 0, 0, 1, 1, 0, 0]);
pub fn splice<R, I>(
    &mut self,
    range: R,
    replace_with: I,
) -> Splice<'_, T, O, <I as IntoIterator>::IntoIter>
Available on crate feature alloc only.
Modifies self.drain() so that the removed bit-slice is instead replaced with the contents of another bit-stream.
As with .drain(), the specified range is always removed from the bit-vector even if the splicer is not fully consumed, and the splicer does not specify how many bits are removed if it leaks.
The replacement source is only consumed when the splicer drops; however, it may be pulled before then. The replacement source cannot assume that there will be a delay between creation of the splicer and when it must begin producing bits.
This copies the Vec::splice implementation; see its documentation for more details about how the replacement should act.
§Original
§Panics
This panics if range departs 0 .. self.len().
§Examples
use bitvec::prelude::*;
let mut bv = bitvec![0, 1, 1];
// a b c
let mut yank = bv.splice(
.. 2,
bits![static 1, 1, 0].iter().by_vals(),
// d e f
);
assert!(!yank.next().unwrap()); // a
assert!(yank.next().unwrap()); // b
drop(yank);
assert_eq!(bv, bits![1, 1, 0, 1]);
// d e f c
§impl<T, O> BitVec<T, O>
Constructors.
pub const EMPTY: BitVec<T, O>
Available on crate feature alloc only.
An empty bit-vector with no backing allocation.
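A brief illustration of the constant (assuming nothing beyond the description above):
use bitvec::prelude::*;

let bv: BitVec<u8, Msb0> = BitVec::EMPTY;
assert!(bv.is_empty());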
pub fn repeat(bit: bool, len: usize) -> BitVec<T, O>
Available on crate feature alloc only.
Creates a new bit-vector by repeating a bit for the desired length.
§Examples
use bitvec::prelude::*;
let zeros = BitVec::<u8, Msb0>::repeat(false, 50);
let ones = BitVec::<u16, Lsb0>::repeat(true, 50);
assert!(zeros.not_any() && zeros.len() == 50);
assert!(ones.all() && ones.len() == 50);
pub fn from_bitslice(slice: &BitSlice<T, O>) -> BitVec<T, O>
Available on crate feature alloc only.
Copies the contents of a bit-slice into a new heap allocation.
This copies the raw underlying elements into a new allocation, and sets the produced bit-vector to use the same memory layout as the originating bit-slice. This means that it may begin at any bit in the first element, not just the zeroth bit. If you require that the live region begin at the zeroth bit, call .force_align().
Dead bits in the copied memory elements are guaranteed to be zeroed.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let bv = BitVec::from_bitslice(bits);
assert_eq!(bv, bits);
pub fn from_element(elem: T) -> BitVec<T, O>
Available on crate feature alloc only.
Constructs a new bit-vector from a single element.
This copies elem into a new heap allocation, and sets the bit-vector to cover it entirely.
§Examples
use bitvec::prelude::*;
let bv = BitVec::<_, Msb0>::from_element(1u8);
assert!(bv[7]);
pub fn from_slice(slice: &[T]) -> BitVec<T, O>
Available on crate feature alloc only.
Constructs a new bit-vector from a slice of memory elements.
This copies slice into a new heap allocation, and sets the bit-vector to cover it entirely.
§Panics
This panics if slice exceeds bit-vector capacity.
§Examples
use bitvec::prelude::*;
let slice = &[0u8, 1, 2, 3];
let bv = BitVec::<_, Lsb0>::from_slice(slice);
assert_eq!(bv.len(), 32);
pub fn try_from_slice(slice: &[T]) -> Result<BitVec<T, O>, BitSpanError<T>>
Available on crate feature alloc only.
Fallibly constructs a new bit-vector from a slice of memory elements.
This fails early if slice exceeds bit-vector capacity. If it does not, then slice is copied into a new heap allocation and fully spanned by the returned bit-vector.
§Examples
use bitvec::prelude::*;
let slice = &[0u8, 1, 2, 3];
let bv = BitVec::<_, Lsb0>::try_from_slice(slice).unwrap();
assert_eq!(bv.len(), 32);
pub fn from_vec(vec: Vec<T>) -> BitVec<T, O>
Available on crate feature alloc only.
Converts a regular vector in-place into a bit-vector.
The produced bit-vector spans every bit in the original vector. No reällocation occurs; this is purely a transform of the handle.
§Panics
This panics if the source vector is too long to view as a bit-slice.
§Examples
use bitvec::prelude::*;
let v = vec![0u8, 1, 2, 3];
let bv = BitVec::<_, Msb0>::from_vec(v);
assert_eq!(bv.len(), 32);
pub fn try_from_vec(vec: Vec<T>) -> Result<BitVec<T, O>, Vec<T>>
Available on crate feature alloc only.
Attempts to convert a regular vector in-place into a bit-vector.
This fails if the source vector is too long to view as a bit-slice. On success, the produced bit-vector spans every bit in the original vector. No reällocation occurs; this is purely a transform of the handle.
§Examples
use bitvec::prelude::*;
let v = vec![0u8; 20];
assert_eq!(BitVec::<_, Msb0>::try_from_vec(v).unwrap().len(), 160);
It is not practical to allocate a vector that will fail this conversion.
pub fn extend_from_bitslice<T2, O2>(&mut self, other: &BitSlice<T2, O2>)
Available on crate feature alloc only.
Appends the contents of a bit-slice to a bit-vector.
This can extend from a bit-slice of any type parameters; it is not restricted to using the same parameters as self. However, when the type parameters do match, it is possible for this to use a batch-copy optimization to go faster than the individual-bit crawl that is necessary when they differ.
Until Rust provides extensive support for specialization in trait implementations, you should use this method whenever you are extending from a BitSlice proper, and only use the general .extend() implementation if you are required to use a generic bool source.
§Original
§Examples
use bitvec::prelude::*;
let mut bv = bitvec![0, 1];
bv.extend_from_bitslice(bits![0, 1, 0, 0, 1]);
assert_eq!(bv, bits![0, 1, 0, 1, 0, 0, 1]);
pub fn extend_from_raw_slice(&mut self, slice: &[T])
Available on crate feature alloc only.
Appends a slice of T elements to a bit-vector.
The slice is viewed as a BitSlice<T, O>, then appended directly to the bit-vector.
§Original
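A short illustrative example, not taken from the upstream docs: appending one raw u8 element contributes its eight bits in the bit-vector’s ordering.
use bitvec::prelude::*;

let mut bv = bitvec![u8, Msb0; 0, 1];
bv.extend_from_raw_slice(&[0x80u8]);
// The appended element contributes eight bits: 1000_0000 in Msb0 order.
assert_eq!(bv.len(), 10);
assert!(bv[2]);
assert_eq!(bv.count_ones(), 2);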
§impl<T, O> BitVec<T, O>
Converters.
pub fn as_bitslice(&self) -> &BitSlice<T, O>
Available on crate feature alloc only.
Explicitly views the bit-vector as a bit-slice.
pub fn as_mut_bitslice(&mut self) -> &mut BitSlice<T, O>
Available on crate feature alloc only.
Explicitly views the bit-vector as a mutable bit-slice.
pub fn as_raw_slice(&self) -> &[T]
Available on crate feature alloc only.
Views the bit-vector as a slice of its underlying memory elements.
pub fn as_raw_mut_slice(&mut self) -> &mut [T]
Available on crate feature alloc only.
Views the bit-vector as a mutable slice of its underlying memory elements.
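A brief sketch contrasting the bit-level and element-level views:
use bitvec::prelude::*;

let bv = bitvec![u8, Msb0; 1, 0, 0, 0];
// The bit-level view sees exactly the four live bits…
assert_eq!(bv.as_bitslice(), bits![1, 0, 0, 0]);
// …while the raw view sees the single u8 element holding them.
assert_eq!(bv.as_raw_slice().len(), 1);
assert_eq!(bv.as_raw_slice()[0] & 0xF0, 0x80);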
pub fn as_bitptr(&self) -> BitPtr<Const, T, O>
Available on crate feature alloc only.
pub fn as_mut_bitptr(&mut self) -> BitPtr<Mut, T, O>
Available on crate feature alloc only.
pub fn into_boxed_bitslice(self) -> BitBox<T, O>
Available on crate feature alloc only.
pub fn into_vec(self) -> Vec<T>
Available on crate feature alloc only.
Converts a bit-vector into a Vec of its underlying storage.
The produced vector contains all elements that contained live bits. Dead bits have an unspecified value; you should call .set_uninitialized() before converting into a vector.
This does not affect the allocated memory; it is purely a conversion of the handle.
§Examples
use bitvec::prelude::*;
let bv = bitvec![u8, Msb0; 0, 1, 0, 0, 1];
let v = bv.into_vec();
assert_eq!(v[0] & 0xF8, 0b01001_000);
§impl<T, O> BitVec<T, O>
Utilities.
pub fn set_elements(&mut self, element: <T as BitStore>::Mem)
Available on crate feature alloc only.
Overwrites each element (visible in .as_raw_mut_slice()) with a new bit-pattern.
This unconditionally writes element into each element in the backing slice, without altering the bit-vector’s length or capacity.
This guarantees that dead bits visible in .as_raw_slice() but not .as_bitslice() are initialized according to the bit-pattern of element.
The elements not visible in the raw slice, but present in the allocation, do not specify a value. You may not rely on them being zeroed or being set to the element bit-pattern.
§Parameters
&mut self
element: The bit-pattern with which each live element in the backing store is initialized.
§Examples
use bitvec::prelude::*;
let mut bv = bitvec![u8, Msb0; 0; 20];
assert_eq!(bv.as_raw_slice(), [0; 3]);
bv.set_elements(0xA5);
assert_eq!(bv.as_raw_slice(), [0xA5; 3]);
pub fn set_uninitialized(&mut self, value: bool)
Available on crate feature alloc only.
Sets the uninitialized bits of a bit-vector to a known value.
This method modifies all bits that are observable in .as_raw_slice() but not observable in .as_bitslice() to a known value.
Memory beyond the raw-slice view, but still within the allocation, is considered fully dead and will never be seen.
This can be used to zero the unused memory so that when viewed as a raw slice, unused bits have a consistent and predictable value.
§Examples
use bitvec::prelude::*;
let mut bv = 0b1101_1100u8.view_bits::<Lsb0>().to_bitvec();
assert_eq!(bv.as_raw_slice()[0], 0b1101_1100u8);
bv.truncate(4);
assert_eq!(bv.count_ones(), 2);
assert_eq!(bv.as_raw_slice()[0], 0b1101_1100u8);
bv.set_uninitialized(false);
assert_eq!(bv.as_raw_slice()[0], 0b0000_1100u8);
bv.set_uninitialized(true);
assert_eq!(bv.as_raw_slice()[0], 0b1111_1100u8);
pub fn force_align(&mut self)
Available on crate feature alloc only.
Ensures that the live region of the bit-vector’s contents begins at the front edge of the buffer.
BitVec has performance optimizations where it moves its view of its buffer contents in order to avoid needless moves of its data within the buffer. This can lead to unexpected contents of the raw memory values, so this method ensures that the semantic contents of the bit-vector match its in-memory storage.
§Examples
use bitvec::prelude::*;
let data = 0b00_1111_00u8;
let bits = data.view_bits::<Msb0>();
let mut bv = bits[2 .. 6].to_bitvec();
assert_eq!(bv, bits![1; 4]);
assert_eq!(bv.as_raw_slice()[0], data);
bv.force_align();
assert_eq!(bv, bits![1; 4]);
// BitVec does not specify the value of dead bits in its buffer.
assert_eq!(bv.as_raw_slice()[0] & 0xF0, 0xF0);
§Methods from Deref<Target = BitSlice<T, O>>
pub fn len(&self) -> usize
pub fn is_empty(&self) -> bool
pub fn first(&self) -> Option<BitRef<'_, Const, T, O>>
Gets a reference to the first bit of the bit-slice, or None if it is empty.
§Original
§API Differences
bitvec uses a custom structure for both read-only and mutable references to bool.
§Examples
use bitvec::prelude::*;
let bits = bits![1, 0, 0];
assert_eq!(bits.first().as_deref(), Some(&true));
assert!(bits![].first().is_none());
pub fn first_mut(&mut self) -> Option<BitRef<'_, Mut, T, O>>
Gets a mutable reference to the first bit of the bit-slice, or None if it is empty.
§Original
§API Differences
bitvec uses a custom structure for both read-only and mutable references to bool. This must be bound as mut in order to write through it.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
if let Some(mut first) = bits.first_mut() {
*first = true;
}
assert_eq!(bits, bits![1, 0, 0]);
assert!(bits![mut].first_mut().is_none());
pub fn split_first(&self) -> Option<(BitRef<'_, Const, T, O>, &BitSlice<T, O>)>
Splits the bit-slice into a reference to its first bit, and the rest of the bit-slice. Returns None when empty.
§Original
§API Differences
bitvec uses a custom structure for both read-only and mutable references to bool.
§Examples
use bitvec::prelude::*;
let bits = bits![1, 0, 0];
let (first, rest) = bits.split_first().unwrap();
assert_eq!(first, &true);
assert_eq!(rest, bits![0; 2]);
pub fn split_first_mut(
    &mut self,
) -> Option<(BitRef<'_, Mut, <T as BitStore>::Alias, O>, &mut BitSlice<<T as BitStore>::Alias, O>)>
Splits the bit-slice into mutable references of its first bit, and the rest of the bit-slice. Returns None when empty.
§Original
§API Differences
bitvec uses a custom structure for both read-only and mutable references to bool. This must be bound as mut in order to write through it.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
if let Some((mut first, rest)) = bits.split_first_mut() {
*first = true;
assert_eq!(rest, bits![0; 2]);
}
assert_eq!(bits, bits![1, 0, 0]);
pub fn split_last(&self) -> Option<(BitRef<'_, Const, T, O>, &BitSlice<T, O>)>
Splits the bit-slice into a reference to its last bit, and the rest of the bit-slice. Returns None when empty.
§Original
§API Differences
bitvec uses a custom structure for both read-only and mutable references to bool.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1];
let (last, rest) = bits.split_last().unwrap();
assert_eq!(last, &true);
assert_eq!(rest, bits![0; 2]);
pub fn split_last_mut(
    &mut self,
) -> Option<(BitRef<'_, Mut, <T as BitStore>::Alias, O>, &mut BitSlice<<T as BitStore>::Alias, O>)>
Splits the bit-slice into mutable references to its last bit, and the rest of the bit-slice. Returns None when empty.
§Original
§API Differences
bitvec uses a custom structure for both read-only and mutable references to bool. This must be bound as mut in order to write through it.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
if let Some((mut last, rest)) = bits.split_last_mut() {
*last = true;
assert_eq!(rest, bits![0; 2]);
}
assert_eq!(bits, bits![0, 0, 1]);
pub fn last(&self) -> Option<BitRef<'_, Const, T, O>>
Gets a reference to the last bit of the bit-slice, or None if it is empty.
§Original
§API Differences
bitvec uses a custom structure for both read-only and mutable references to bool.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1];
assert_eq!(bits.last().as_deref(), Some(&true));
assert!(bits![].last().is_none());
pub fn last_mut(&mut self) -> Option<BitRef<'_, Mut, T, O>>
Gets a mutable reference to the last bit of the bit-slice, or None if it is empty.
§Original
§API Differences
bitvec uses a custom structure for both read-only and mutable references to bool. This must be bound as mut in order to write through it.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
if let Some(mut last) = bits.last_mut() {
*last = true;
}
assert_eq!(bits, bits![0, 0, 1]);
assert!(bits![mut].last_mut().is_none());
pub fn get<'a, I>(
    &'a self,
    index: I,
) -> Option<<I as BitSliceIndex<'a, T, O>>::Immut>
where
    I: BitSliceIndex<'a, T, O>,
Gets a reference to a single bit or a subsection of the bit-slice, depending on the type of index.
- If given a usize, this produces a reference structure to the bool at the position.
- If given any form of range, this produces a smaller bit-slice.
This returns None if the index departs the bounds of self.
§Original
§API Differences
BitSliceIndex uses discrete types for immutable and mutable references, rather than a single referent type.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0];
assert_eq!(bits.get(1).as_deref(), Some(&true));
assert_eq!(bits.get(0 .. 2), Some(bits![0, 1]));
assert!(bits.get(3).is_none());
assert!(bits.get(0 .. 4).is_none());
pub fn get_mut<'a, I>(
    &'a mut self,
    index: I,
) -> Option<<I as BitSliceIndex<'a, T, O>>::Mut>
where
    I: BitSliceIndex<'a, T, O>,
Gets a mutable reference to a single bit or a subsection of the bit-slice, depending on the type of index.
- If given a usize, this produces a reference structure to the bool at the position.
- If given any form of range, this produces a smaller bit-slice.
This returns None if the index departs the bounds of self.
§Original
§API Differences
BitSliceIndex uses discrete types for immutable and mutable references, rather than a single referent type.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
*bits.get_mut(0).unwrap() = true;
bits.get_mut(1 ..).unwrap().fill(true);
assert_eq!(bits, bits![1; 3]);
pub unsafe fn get_unchecked<'a, I>(
    &'a self,
    index: I,
) -> <I as BitSliceIndex<'a, T, O>>::Immut
where
    I: BitSliceIndex<'a, T, O>,
Gets a reference to a single bit or to a subsection of the bit-slice, without bounds checking.
This has the same arguments and behavior as .get(), except that it does not check that index is in bounds.
§Original
§Safety
You must ensure that index is within bounds (within the range 0 .. self.len()), or this method will introduce memory unsafety and/or undefined behavior.
It is library-level undefined behavior to index beyond the length of any bit-slice, even if you know that the offset remains within an allocation as measured by Rust or LLVM.
§Examples
use bitvec::prelude::*;
let data = 0b0001_0010u8;
let bits = &data.view_bits::<Lsb0>()[.. 3];
unsafe {
assert!(bits.get_unchecked(1));
assert!(bits.get_unchecked(4));
}
pub unsafe fn get_unchecked_mut<'a, I>(
    &'a mut self,
    index: I,
) -> <I as BitSliceIndex<'a, T, O>>::Mut
where
    I: BitSliceIndex<'a, T, O>,
Gets a mutable reference to a single bit or a subsection of the bit-slice, depending on the type of index.
This has the same arguments and behavior as .get_mut(), except that it does not check that index is in bounds.
§Original
§Safety
You must ensure that index is within bounds (within the range 0 .. self.len()), or this method will introduce memory unsafety and/or undefined behavior.
It is library-level undefined behavior to index beyond the length of any bit-slice, even if you know that the offset remains within an allocation as measured by Rust or LLVM.
§Examples
use bitvec::prelude::*;
let mut data = 0u8;
let bits = &mut data.view_bits_mut::<Lsb0>()[.. 3];
unsafe {
bits.get_unchecked_mut(1).commit(true);
bits.get_unchecked_mut(4 .. 6).fill(true);
}
assert_eq!(data, 0b0011_0010);
pub fn as_ptr(&self) -> BitPtr<Const, T, O>
Deprecated: use .as_bitptr() instead.
pub fn as_mut_ptr(&mut self) -> BitPtr<Mut, T, O>
Deprecated: use .as_mut_bitptr() instead.
pub fn as_ptr_range(&self) -> Range<BitPtr<Const, T, O>>
Produces a range of bit-pointers to each bit in the bit-slice.
This is a standard-library range, which has no real functionality for pointer types. You should prefer .as_bitptr_range() instead, as it produces a custom structure that provides expected ranging functionality.
§Original
pub fn as_mut_ptr_range(&mut self) -> Range<BitPtr<Mut, T, O>>
Produces a range of mutable bit-pointers to each bit in the bit-slice.
This is a standard-library range, which has no real functionality for pointer types. You should prefer .as_mut_bitptr_range() instead, as it produces a custom structure that provides expected ranging functionality.
§Original
pub fn swap(&mut self, a: usize, b: usize)
pub fn reverse(&mut self)
pub fn iter(&self) -> Iter<'_, T, O>
Produces an iterator over each bit in the bit-slice.
§Original
§API Differences
This iterator yields proxy-reference structures, not &bool. It can be adapted to yield &bool with the .by_refs() method, or bool with .by_vals().
This iterator, and its adapters, are fast. Do not try to be more clever than them by abusing .as_bitptr_range().
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 1];
let mut iter = bits.iter();
assert!(!iter.next().unwrap());
assert!( iter.next().unwrap());
assert!( iter.next_back().unwrap());
assert!(!iter.next_back().unwrap());
assert!( iter.next().is_none());
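A small sketch of the .by_vals() adapter mentioned above:
use bitvec::prelude::*;

let bits = bits![0, 1, 1, 0];
// Yield plain `bool`s instead of proxy references.
let ones = bits.iter().by_vals().filter(|bit| *bit).count();
assert_eq!(ones, 2);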
pub fn iter_mut(&mut self) -> IterMut<'_, T, O>
Produces a mutable iterator over each bit in the bit-slice.
§Original
§API Differences
This iterator yields proxy-reference structures, not &mut bool. In addition, it marks each proxy as alias-tainted.
If you are using this in an ordinary loop and not keeping multiple yielded proxy-references alive at the same scope, you may use the .remove_alias() adapter to undo the alias marking.
This iterator is fast. Do not try to be more clever than it by abusing .as_mut_bitptr_range().
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 4];
let mut iter = bits.iter_mut();
iter.nth(1).unwrap().commit(true); // index 1
iter.next_back().unwrap().commit(true); // index 3
assert!(iter.next().is_some()); // index 2
assert!(iter.next().is_none()); // complete
assert_eq!(bits, bits![0, 1, 0, 1]);
pub fn windows(&self, size: usize) -> Windows<'_, T, O>
Iterates over consecutive windowing subslices in a bit-slice.
Windows are overlapping views of the bit-slice. Each window advances one bit from the previous, so in a bit-slice [A, B, C, D, E], calling .windows(3) will yield [A, B, C], [B, C, D], and [C, D, E].
§Original
§Panics
This panics if size is 0.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.windows(3);
assert_eq!(iter.next(), Some(bits![0, 1, 0]));
assert_eq!(iter.next(), Some(bits![1, 0, 0]));
assert_eq!(iter.next(), Some(bits![0, 0, 1]));
assert!(iter.next().is_none());
pub fn chunks(&self, chunk_size: usize) -> Chunks<'_, T, O>
Iterates over non-overlapping subslices of a bit-slice.
Unlike .windows(), the subslices this yields do not overlap with each other. If self.len() is not an even multiple of chunk_size, then the last chunk yielded will be shorter.
§Original
§Sibling Methods
- .chunks_mut() has the same division logic, but each yielded bit-slice is mutable.
- .chunks_exact() does not yield the final chunk if it is shorter than chunk_size.
- .rchunks() iterates from the back of the bit-slice to the front, with the final, possibly-shorter, segment at the front edge.
§Panics
This panics if chunk_size is 0.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.chunks(2);
assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![0, 0]));
assert_eq!(iter.next(), Some(bits![1]));
assert!(iter.next().is_none());
pub fn chunks_mut(&mut self, chunk_size: usize) -> ChunksMut<'_, T, O>
Iterates over non-overlapping mutable subslices of a bit-slice.
Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.
§Original
§Sibling Methods
- .chunks() has the same division logic, but each yielded bit-slice is immutable.
- .chunks_exact_mut() does not yield the final chunk if it is shorter than chunk_size.
- .rchunks_mut() iterates from the back of the bit-slice to the front, with the final, possibly-shorter, segment at the front edge.
§Panics
This panics if chunk_size is 0.
§Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 5];
for (idx, chunk) in unsafe {
bits.chunks_mut(2).remove_alias()
}.enumerate() {
chunk.store(idx + 1);
}
assert_eq!(bits, bits![0, 1, 1, 0, 1]);
// ^^^^ ^^^^ ^
pub fn chunks_exact(&self, chunk_size: usize) -> ChunksExact<'_, T, O>
Iterates over non-overlapping subslices of a bit-slice.
If self.len() is not an even multiple of chunk_size, then the last few bits are not yielded by the iterator at all. They can be accessed with the .remainder() method if the iterator is bound to a name.
§Original
§Sibling Methods
- .chunks() yields any leftover bits at the end as a shorter chunk during iteration.
- .chunks_exact_mut() has the same division logic, but each yielded bit-slice is mutable.
- .rchunks_exact() iterates from the back of the bit-slice to the front, with the unyielded remainder segment at the front edge.
§Panics
This panics if chunk_size is 0.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.chunks_exact(2);
assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![0, 0]));
assert!(iter.next().is_none());
assert_eq!(iter.remainder(), bits![1]);
pub fn chunks_exact_mut(
    &mut self,
    chunk_size: usize,
) -> ChunksExactMut<'_, T, O>
Iterates over non-overlapping mutable subslices of a bit-slice.
If self.len() is not an even multiple of chunk_size, then the last few bits are not yielded by the iterator at all. They can be accessed with the .into_remainder() method if the iterator is bound to a name.
Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.
§Original
§Sibling Methods
- .chunks_mut() yields any leftover bits at the end as a shorter chunk during iteration.
- .chunks_exact() has the same division logic, but each yielded bit-slice is immutable.
- .rchunks_exact_mut() iterates from the back of the bit-slice forwards, with the unyielded remainder segment at the front edge.
§Panics
This panics if chunk_size is 0.
§Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 5];
let mut iter = bits.chunks_exact_mut(2);
for (idx, chunk) in iter.by_ref().enumerate() {
chunk.store(idx + 1);
}
iter.into_remainder().store(1u8);
assert_eq!(bits, bits![0, 1, 1, 0, 1]);
// remainder ^
pub fn rchunks(&self, chunk_size: usize) -> RChunks<'_, T, O>
Iterates over non-overlapping subslices of a bit-slice, from the back edge.
Unlike .chunks(), this aligns its chunks to the back edge of self. If self.len() is not an even multiple of chunk_size, then the leftover partial chunk is self[0 .. len % chunk_size].
§Original
§Sibling Methods
- .rchunks_mut() has the same division logic, but each yielded bit-slice is mutable.
- .rchunks_exact() does not yield the final chunk if it is shorter than chunk_size.
- .chunks() iterates from the front of the bit-slice to the back, with the final, possibly-shorter, segment at the back edge.
§Panics
This panics if chunk_size is 0.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.rchunks(2);
assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![1, 0]));
assert_eq!(iter.next(), Some(bits![0]));
assert!(iter.next().is_none());
pub fn rchunks_mut(&mut self, chunk_size: usize) -> RChunksMut<'_, T, O>
Iterates over non-overlapping mutable subslices of a bit-slice, from the back edge.
Unlike .chunks_mut(), this aligns its chunks to the back edge of self. If self.len() is not an even multiple of chunk_size, then the leftover partial chunk is self[0 .. len % chunk_size].
Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded values for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.
§Original
§Sibling Methods
- .rchunks() has the same division logic, but each yielded bit-slice is immutable.
- .rchunks_exact_mut() does not yield the final chunk if it is shorter than chunk_size.
- .chunks_mut() iterates from the front of the bit-slice to the back, with the final, possibly-shorter, segment at the back edge.
§Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 5];
for (idx, chunk) in unsafe {
bits.rchunks_mut(2).remove_alias()
}.enumerate() {
chunk.store(idx + 1);
}
assert_eq!(bits, bits![1, 1, 0, 0, 1]);
// remainder ^ ^^^^ ^^^^
pub fn rchunks_exact(&self, chunk_size: usize) -> RChunksExact<'_, T, O>
Iterates over non-overlapping subslices of a bit-slice, from the back edge.
If self.len() is not an even multiple of chunk_size, then the first few bits are not yielded by the iterator at all. They can be accessed with the .remainder() method if the iterator is bound to a name.
§Original
§Sibling Methods
- .rchunks() yields any leftover bits at the front as a shorter chunk during iteration.
- .rchunks_exact_mut() has the same division logic, but each yielded bit-slice is mutable.
- .chunks_exact() iterates from the front of the bit-slice to the back, with the unyielded remainder segment at the back edge.
§Panics
This panics if chunk_size is 0.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.rchunks_exact(2);
assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![1, 0]));
assert!(iter.next().is_none());
assert_eq!(iter.remainder(), bits![0]);
pub fn rchunks_exact_mut(
    &mut self,
    chunk_size: usize,
) -> RChunksExactMut<'_, T, O>
Iterates over non-overlapping mutable subslices of a bit-slice, from the back edge.
If self.len() is not an even multiple of chunk_size, then the first few bits are not yielded by the iterator at all. They can be accessed with the .into_remainder() method if the iterator is bound to a name.
Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.
§Sibling Methods
- .rchunks_mut() yields any leftover bits at the front as a shorter chunk during iteration.
- .rchunks_exact() has the same division logic, but each yielded bit-slice is immutable.
- .chunks_exact_mut() iterates from the front of the bit-slice backwards, with the unyielded remainder segment at the back edge.
§Panics
This panics if chunk_size is 0.
§Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 5];
let mut iter = bits.rchunks_exact_mut(2);
for (idx, chunk) in iter.by_ref().enumerate() {
chunk.store(idx + 1);
}
iter.into_remainder().store(1u8);
assert_eq!(bits, bits![1, 1, 0, 0, 1]);
// remainder ^
pub fn split_at(&self, mid: usize) -> (&BitSlice<T, O>, &BitSlice<T, O>)
Splits a bit-slice in two parts at an index.
The returned bit-slices are self[.. mid] and self[mid ..]. mid is included in the right bit-slice, not the left.
If mid is 0 then the left bit-slice is empty; if it is self.len() then the right bit-slice is empty.
This method guarantees that even when either partition is empty, the encoded bit-pointer values of the bit-slice references are &self[0] and &self[mid].
§Original
§Panics
This panics if mid is greater than self.len(). It is allowed to be equal to the length, in which case the right bit-slice is simply empty.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 0, 1, 1, 1];
let base = bits.as_bitptr();
let (a, b) = bits.split_at(0);
assert_eq!(unsafe { a.as_bitptr().offset_from(base) }, 0);
assert_eq!(unsafe { b.as_bitptr().offset_from(base) }, 0);
let (a, b) = bits.split_at(6);
assert_eq!(unsafe { b.as_bitptr().offset_from(base) }, 6);
let (a, b) = bits.split_at(3);
assert_eq!(a, bits![0; 3]);
assert_eq!(b, bits![1; 3]);
pub fn split_at_mut(
    &mut self,
    mid: usize,
) -> (&mut BitSlice<<T as BitStore>::Alias, O>, &mut BitSlice<<T as BitStore>::Alias, O>)
Splits a mutable bit-slice in two parts at an index.
The returned bit-slices are self[.. mid] and self[mid ..]. mid is included in the right bit-slice, not the left.
If mid is 0 then the left bit-slice is empty; if it is self.len() then the right bit-slice is empty.
This method guarantees that even when either partition is empty, the encoded bit-pointer values of the bit-slice references are &self[0] and &self[mid].
§Original
§API Differences
The end bits of the left half and the start bits of the right half might be stored in the same memory element. In order to avoid breaking bitvec’s memory-safety guarantees, both bit-slices are marked as T::Alias. This marking allows them to be used without interfering with each other when they interact with memory.
§Panics
This panics if mid is greater than self.len(). It is allowed to be equal to the length, in which case the right bit-slice is simply empty.
§Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 6];
let base = bits.as_mut_bitptr();
let (a, b) = bits.split_at_mut(0);
assert_eq!(unsafe { a.as_mut_bitptr().offset_from(base) }, 0);
assert_eq!(unsafe { b.as_mut_bitptr().offset_from(base) }, 0);
let (a, b) = bits.split_at_mut(6);
assert_eq!(unsafe { b.as_mut_bitptr().offset_from(base) }, 6);
let (a, b) = bits.split_at_mut(3);
a.store(3);
b.store(5);
assert_eq!(bits, bits![0, 1, 1, 1, 0, 1]);
pub fn split<F>(&self, pred: F) -> Split<'_, T, O, F>
Iterates over subslices separated by bits that match a predicate. The matched bit is not contained in the yielded bit-slices.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
- .split_mut() has the same splitting logic, but each yielded bit-slice is mutable.
- .split_inclusive() includes the matched bit in the yielded bit-slice.
- .rsplit() iterates from the back of the bit-slice instead of the front.
- .splitn() times out after n yields.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 1, 0];
// ^
let mut iter = bits.split(|pos, _bit| pos % 3 == 2);
assert_eq!(iter.next().unwrap(), bits![0, 1]);
assert_eq!(iter.next().unwrap(), bits![0]);
assert!(iter.next().is_none());
If the first bit is matched, then an empty bit-slice will be the first item yielded by the iterator. Similarly, if the last bit in the bit-slice matches, then an empty bit-slice will be the last item yielded.
use bitvec::prelude::*;
let bits = bits![0, 0, 1];
// ^
let mut iter = bits.split(|_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), bits![0; 2]);
assert!(iter.next().unwrap().is_empty());
assert!(iter.next().is_none());
If two matched bits are directly adjacent, then an empty bit-slice will be yielded between them:
use bitvec::prelude::*;
let bits = bits![1, 0, 0, 1];
// ^ ^
let mut iter = bits.split(|_pos, bit| !*bit);
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().unwrap().is_empty());
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().is_none());
pub fn split_mut<F>(&mut self, pred: F) -> SplitMut<'_, T, O, F> ⓘ
pub fn split_mut<F>(&mut self, pred: F) -> SplitMut<'_, T, O, F> ⓘ
Iterates over mutable subslices separated by bits that match a predicate. The matched bit is not contained in the yielded bit-slices.
Iterators do not require that each yielded item is destroyed before the
next is produced. This means that each bit-slice yielded must be marked
as aliased. If you are using this in a loop that does not collect
multiple yielded subslices for the same scope, then you can remove the
alias marking by calling the (unsafe
) method .remove_alias()
on
the iterator.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
.split() has the same splitting logic, but each yielded bit-slice is immutable.
.split_inclusive_mut() includes the matched bit in the yielded bit-slice.
.rsplit_mut() iterates from the back of the bit-slice instead of the front.
.splitn_mut() times out after n yields.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 1, 0];
// ^ ^
for group in bits.split_mut(|_pos, bit| *bit) {
group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 1, 1, 1]);
pub fn split_inclusive<F>(&self, pred: F) -> SplitInclusive<'_, T, O, F> ⓘ
pub fn split_inclusive<F>(&self, pred: F) -> SplitInclusive<'_, T, O, F> ⓘ
Iterates over subslices separated by bits that match a predicate. Unlike
.split()
, this does include the matching bit as the last bit in the
yielded bit-slice.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
.split_inclusive_mut() has the same splitting logic, but each yielded bit-slice is mutable.
.split() does not include the matched bit in the yielded bit-slice.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 0, 1];
// ^ ^
let mut iter = bits.split_inclusive(|_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), bits![0, 0, 1]);
assert_eq!(iter.next().unwrap(), bits![0, 1]);
assert!(iter.next().is_none());
pub fn split_inclusive_mut<F>(
&mut self,
pred: F,
) -> SplitInclusiveMut<'_, T, O, F> ⓘ
pub fn split_inclusive_mut<F>( &mut self, pred: F, ) -> SplitInclusiveMut<'_, T, O, F> ⓘ
Iterates over mutable subslices separated by bits that match a
predicate. Unlike .split_mut()
, this does include the matching bit
as the last bit in the bit-slice.
Iterators do not require that each yielded item is destroyed before the
next is produced. This means that each bit-slice yielded must be marked
as aliased. If you are using this in a loop that does not collect
multiple yielded subslices for the same scope, then you can remove the
alias marking by calling the (unsafe
) method .remove_alias()
on
the iterator.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
.split_inclusive() has the same splitting logic, but each yielded bit-slice is immutable.
.split_mut() does not include the matched bit in the yielded bit-slice.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 0, 0, 0];
// ^
for group in bits.split_inclusive_mut(|pos, _bit| pos % 3 == 2) {
group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 0, 1, 0]);
pub fn rsplit<F>(&self, pred: F) -> RSplit<'_, T, O, F> ⓘ
pub fn rsplit<F>(&self, pred: F) -> RSplit<'_, T, O, F> ⓘ
Iterates over subslices separated by bits that match a predicate, from the back edge. The matched bit is not contained in the yielded bit-slices.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
.rsplit_mut() has the same splitting logic, but each yielded bit-slice is mutable.
.split() iterates from the front of the bit-slice instead of the back.
.rsplitn() times out after n yields.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 1, 0];
// ^
let mut iter = bits.rsplit(|pos, _bit| pos % 3 == 2);
assert_eq!(iter.next().unwrap(), bits![0]);
assert_eq!(iter.next().unwrap(), bits![0, 1]);
assert!(iter.next().is_none());
If the last bit is matched, then an empty bit-slice will be the first item yielded by the iterator. Similarly, if the first bit in the bit-slice matches, then an empty bit-slice will be the last item yielded.
use bitvec::prelude::*;
let bits = bits![0, 0, 1];
// ^
let mut iter = bits.rsplit(|_pos, bit| *bit);
assert!(iter.next().unwrap().is_empty());
assert_eq!(iter.next().unwrap(), bits![0; 2]);
assert!(iter.next().is_none());
If two matched bits are directly adjacent, then an empty bit-slice will be yielded between them:
use bitvec::prelude::*;
let bits = bits![1, 0, 0, 1];
// ^ ^
let mut iter = bits.rsplit(|_pos, bit| !*bit);
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().unwrap().is_empty());
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().is_none());
pub fn rsplit_mut<F>(&mut self, pred: F) -> RSplitMut<'_, T, O, F> ⓘ
pub fn rsplit_mut<F>(&mut self, pred: F) -> RSplitMut<'_, T, O, F> ⓘ
Iterates over mutable subslices separated by bits that match a predicate, from the back. The matched bit is not contained in the yielded bit-slices.
Iterators do not require that each yielded item is destroyed before the
next is produced. This means that each bit-slice yielded must be marked
as aliased. If you are using this in a loop that does not collect
multiple yielded subslices for the same scope, then you can remove the
alias marking by calling the (unsafe
) method .remove_alias()
on
the iterator.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
.rsplit() has the same splitting logic, but each yielded bit-slice is immutable.
.split_mut() iterates from the front of the bit-slice instead of the back.
.rsplitn_mut() times out after n yields.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 1, 0];
// ^ ^
for group in bits.rsplit_mut(|_pos, bit| *bit) {
group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 1, 1, 1]);
pub fn splitn<F>(&self, n: usize, pred: F) -> SplitN<'_, T, O, F> ⓘ
pub fn splitn<F>(&self, n: usize, pred: F) -> SplitN<'_, T, O, F> ⓘ
Iterates over subslices separated by bits that match a predicate, giving
up after yielding n
times. The n
th yield contains the rest of the
bit-slice. As with .split()
, the yielded bit-slices do not contain the
matched bit.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
.splitn_mut() has the same splitting logic, but each yielded bit-slice is mutable.
.rsplitn() iterates from the back of the bit-slice instead of the front.
.split() has the same splitting logic, but never times out.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 0, 1, 0];
let mut iter = bits.splitn(2, |_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), bits![0, 0]);
assert_eq!(iter.next().unwrap(), bits![0, 1, 0]);
assert!(iter.next().is_none());
pub fn splitn_mut<F>(&mut self, n: usize, pred: F) -> SplitNMut<'_, T, O, F> ⓘ
pub fn splitn_mut<F>(&mut self, n: usize, pred: F) -> SplitNMut<'_, T, O, F> ⓘ
Iterates over mutable subslices separated by bits that match a
predicate, giving up after yielding n
times. The n
th yield contains
the rest of the bit-slice. As with .split_mut()
, the yielded
bit-slices do not contain the matched bit.
Iterators do not require that each yielded item is destroyed before the
next is produced. This means that each bit-slice yielded must be marked
as aliased. If you are using this in a loop that does not collect
multiple yielded subslices for the same scope, then you can remove the
alias marking by calling the (unsafe
) method .remove_alias()
on
the iterator.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
.splitn() has the same splitting logic, but each yielded bit-slice is immutable.
.rsplitn_mut() iterates from the back of the bit-slice instead of the front.
.split_mut() has the same splitting logic, but never times out.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 1, 0];
for group in bits.splitn_mut(2, |_pos, bit| *bit) {
group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 1, 1, 0]);
pub fn rsplitn<F>(&self, n: usize, pred: F) -> RSplitN<'_, T, O, F> ⓘ
pub fn rsplitn<F>(&self, n: usize, pred: F) -> RSplitN<'_, T, O, F> ⓘ
Iterates over subslices separated by bits that match a predicate, from the back edge, giving up after yielding n times. The nth yield contains the rest of the bit-slice. As with .rsplit(), the yielded bit-slices do not contain the matched bit.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
.rsplitn_mut() has the same splitting logic, but each yielded bit-slice is mutable.
.splitn() iterates from the front of the bit-slice instead of the back.
.rsplit() has the same splitting logic, but never times out.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 1, 0];
// ^
let mut iter = bits.rsplitn(2, |_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), bits![0]);
assert_eq!(iter.next().unwrap(), bits![0, 0, 1]);
assert!(iter.next().is_none());
pub fn rsplitn_mut<F>(&mut self, n: usize, pred: F) -> RSplitNMut<'_, T, O, F> ⓘ
pub fn rsplitn_mut<F>(&mut self, n: usize, pred: F) -> RSplitNMut<'_, T, O, F> ⓘ
Iterates over mutable subslices separated by bits that match a
predicate from the back edge, giving up after yielding n
times. The
n
th yield contains the rest of the bit-slice. As with .split_mut()
,
the yielded bit-slices do not contain the matched bit.
Iterators do not require that each yielded item is destroyed before the
next is produced. This means that each bit-slice yielded must be marked
as aliased. If you are using this in a loop that does not collect
multiple yielded subslices for the same scope, then you can remove the
alias marking by calling the (unsafe
) method .remove_alias()
on
the iterator.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
.rsplitn() has the same splitting logic, but each yielded bit-slice is immutable.
.splitn_mut() iterates from the front of the bit-slice instead of the back.
.rsplit_mut() has the same splitting logic, but never times out.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 0, 1, 0, 0, 0];
for group in bits.rsplitn_mut(2, |_idx, bit| *bit) {
group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 0, 0, 1, 1, 0, 0]);
// ^ group 2 ^ group 1
pub fn contains<T2, O2>(&self, other: &BitSlice<T2, O2>) -> bool
pub fn contains<T2, O2>(&self, other: &BitSlice<T2, O2>) -> bool
Tests if the bit-slice contains the given sequence anywhere within it.
This scans over self.windows(other.len())
until one of the windows
matches. The search key does not need to share type parameters with the
bit-slice being tested, as the comparison is bit-wise. However, sharing
type parameters will accelerate the comparison.
§Original
§Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 0, 1, 1, 0, 0];
assert!( bits.contains(bits![0, 1, 1, 0]));
assert!(!bits.contains(bits![1, 0, 0, 1]));
pub fn starts_with<T2, O2>(&self, needle: &BitSlice<T2, O2>) -> bool
pub fn starts_with<T2, O2>(&self, needle: &BitSlice<T2, O2>) -> bool
Tests if the bit-slice begins with the given sequence.
The search key does not need to share type parameters with the bit-slice being tested, as the comparison is bit-wise. However, sharing type parameters will accelerate the comparison.
§Original
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 1, 0];
assert!( bits.starts_with(bits![0, 1]));
assert!(!bits.starts_with(bits![1, 0]));
This always returns true
if the needle is empty:
use bitvec::prelude::*;
let bits = bits![0, 1, 0];
let empty = bits![];
assert!(bits.starts_with(empty));
assert!(empty.starts_with(empty));
pub fn ends_with<T2, O2>(&self, needle: &BitSlice<T2, O2>) -> bool
pub fn ends_with<T2, O2>(&self, needle: &BitSlice<T2, O2>) -> bool
Tests if the bit-slice ends with the given sequence.
The search key does not need to share type parameters with the bit-slice being tested, as the comparison is bit-wise. However, sharing type parameters will accelerate the comparison.
§Original
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 1, 0];
assert!( bits.ends_with(bits![1, 0]));
assert!(!bits.ends_with(bits![0, 1]));
This always returns true
if the needle is empty:
use bitvec::prelude::*;
let bits = bits![0, 1, 0];
let empty = bits![];
assert!(bits.ends_with(empty));
assert!(empty.ends_with(empty));
pub fn strip_prefix<T2, O2>(
&self,
prefix: &BitSlice<T2, O2>,
) -> Option<&BitSlice<T, O>>
pub fn strip_prefix<T2, O2>( &self, prefix: &BitSlice<T2, O2>, ) -> Option<&BitSlice<T, O>>
Removes a prefix bit-slice, if present.
Like .starts_with(), the search key does not need to share type parameters with the bit-slice being stripped. If self.starts_with(prefix), then this returns Some(&self[prefix.len() ..]), otherwise it returns None.
§Original
§API Differences
BitSlice
does not support pattern searches; instead, it permits self
and prefix
to differ in type parameters.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1, 0, 1, 1, 0];
assert_eq!(bits.strip_prefix(bits![0, 1]).unwrap(), bits[2 ..]);
assert_eq!(bits.strip_prefix(bits![0, 1, 0, 0,]).unwrap(), bits[4 ..]);
assert!(bits.strip_prefix(bits![1, 0]).is_none());
pub fn strip_suffix<T2, O2>(
&self,
suffix: &BitSlice<T2, O2>,
) -> Option<&BitSlice<T, O>>
pub fn strip_suffix<T2, O2>( &self, suffix: &BitSlice<T2, O2>, ) -> Option<&BitSlice<T, O>>
Removes a suffix bit-slice, if present.
Like .ends_with()
, the search key does not need to share type
parameters with the bit-slice being stripped. If
self.ends_with(suffix)
, then this returns Some(&self[.. self.len() - suffix.len()])
, otherwise it returns None
.
§Original
§API Differences
BitSlice
does not support pattern searches; instead, it permits self
and suffix
to differ in type parameters.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1, 0, 1, 1, 0];
assert_eq!(bits.strip_suffix(bits![1, 0]).unwrap(), bits[.. 7]);
assert_eq!(bits.strip_suffix(bits![0, 1, 1, 0]).unwrap(), bits[.. 5]);
assert!(bits.strip_suffix(bits![0, 1]).is_none());
pub fn rotate_left(&mut self, by: usize)
pub fn rotate_left(&mut self, by: usize)
Rotates the contents of a bit-slice to the left (towards the zero index).
This essentially splits the bit-slice at by, then exchanges the two pieces. self[by ..] becomes the first section, and is then followed by self[.. by].
The implementation is batch-accelerated where possible. It should have a
runtime complexity much lower than O(by)
.
§Original
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 1, 0];
// split occurs here ^
bits.rotate_left(2);
assert_eq!(bits, bits![1, 0, 1, 0, 0, 0]);
pub fn rotate_right(&mut self, by: usize)
pub fn rotate_right(&mut self, by: usize)
Rotates the contents of a bit-slice to the right (away from the zero index).
This essentially splits the bit-slice at self.len() - by
, then
exchanges the two pieces. self[len - by ..]
becomes the first section,
and is then followed by self[.. len - by]
.
The implementation is batch-accelerated where possible. It should have a
runtime complexity much lower than O(by)
.
§Original
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 1, 1, 0];
// split occurs here ^
bits.rotate_right(2);
assert_eq!(bits, bits![1, 0, 0, 0, 1, 1]);
pub fn fill(&mut self, value: bool)
pub fn fill(&mut self, value: bool)
Fills the bit-slice with a given bit.
This is a recent stabilization in the standard library. bitvec
previously offered this behavior as the novel API .set_all()
. That
method name is now removed in favor of this standard-library analogue.
§Original
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 5];
bits.fill(true);
assert_eq!(bits, bits![1; 5]);
pub fn fill_with<F>(&mut self, func: F)
pub fn fill_with<F>(&mut self, func: F)
Fills the bit-slice with bits produced by a generator function.
§Original
§API Differences
The generator function receives the index of the bit being initialized as an argument.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 5];
bits.fill_with(|idx| idx % 2 == 0);
assert_eq!(bits, bits![1, 0, 1, 0, 1]);
pub fn clone_from_slice<T2, O2>(&mut self, src: &BitSlice<T2, O2>)
Deprecated: use .clone_from_bitslice() instead.
pub fn copy_from_slice(&mut self, src: &BitSlice<T, O>)
Deprecated: use .copy_from_bitslice() instead.
pub fn copy_within<R>(&mut self, src: R, dest: usize)where
R: RangeExt<usize>,
pub fn copy_within<R>(&mut self, src: R, dest: usize)where
R: RangeExt<usize>,
Copies a span of bits to another location in the bit-slice.
src is the range of bit-indices in the bit-slice to copy, and dest is the starting index of the destination range.
src and dest .. dest + src.len() are permitted to overlap; the copy will automatically detect and manage this. However, both src and dest .. dest + src.len() must fall within the bounds of self.
§Original
§Panics
This panics if either the source or destination range exceed
self.len()
.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0];
bits.copy_within(1 .. 5, 8);
// v v v v
assert_eq!(bits, bits![1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0]);
// ^ ^ ^ ^
pub fn swap_with_slice<T2, O2>(&mut self, other: &mut BitSlice<T2, O2>)
Deprecated: use .swap_with_bitslice() instead.
pub unsafe fn align_to<U>(
&self,
) -> (&BitSlice<T, O>, &BitSlice<U, O>, &BitSlice<T, O>)where
U: BitStore,
pub unsafe fn align_to<U>(
&self,
) -> (&BitSlice<T, O>, &BitSlice<U, O>, &BitSlice<T, O>)where
U: BitStore,
Produces bit-slice view(s) with different underlying storage types.
This may have unexpected effects, and you cannot assume that
before[idx] == after[idx]
! Consult the tables in the manual
for information about memory layouts.
§Original
§Notes
Unlike the standard library documentation, this explicitly guarantees that the middle bit-slice will have maximal size. You may rely on this property.
§Safety
You may not use this to cast away alias protections. Rust does not have
support for higher-kinded types, so this cannot express the relation
Outer<T> -> Outer<U> where Outer: BitStoreContainer
, but memory safety
does require that you respect this rule. Reälign integers to integers,
Cell
s to Cell
s, and atomics to atomics, but do not cross these
boundaries.
§Examples
use bitvec::prelude::*;
let bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7];
let bits = bytes.view_bits::<Lsb0>();
let (pfx, mid, sfx) = unsafe {
bits.align_to::<u16>()
};
assert!(pfx.len() <= 8);
assert_eq!(mid.len(), 48);
assert!(sfx.len() <= 8);
pub unsafe fn align_to_mut<U>(
&mut self,
) -> (&mut BitSlice<T, O>, &mut BitSlice<U, O>, &mut BitSlice<T, O>)where
U: BitStore,
pub unsafe fn align_to_mut<U>(
&mut self,
) -> (&mut BitSlice<T, O>, &mut BitSlice<U, O>, &mut BitSlice<T, O>)where
U: BitStore,
Produces bit-slice view(s) with different underlying storage types.
This may have unexpected effects, and you cannot assume that
before[idx] == after[idx]
! Consult the tables in the manual
for information about memory layouts.
§Original
§Notes
Unlike the standard library documentation, this explicitly guarantees that the middle bit-slice will have maximal size. You may rely on this property.
§Safety
You may not use this to cast away alias protections. Rust does not have
support for higher-kinded types, so this cannot express the relation
Outer<T> -> Outer<U> where Outer: BitStoreContainer
, but memory safety
does require that you respect this rule. Reälign integers to integers,
Cell
s to Cell
s, and atomics to atomics, but do not cross these
boundaries.
§Examples
use bitvec::prelude::*;
let mut bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7];
let bits = bytes.view_bits_mut::<Lsb0>();
let (pfx, mid, sfx) = unsafe {
bits.align_to_mut::<u16>()
};
assert!(pfx.len() <= 8);
assert_eq!(mid.len(), 48);
assert!(sfx.len() <= 8);
pub fn to_vec(&self) -> BitVec<<T as BitStore>::Unalias, O> ⓘ
Deprecated: use .to_bitvec() instead. Available on crate feature alloc only.
pub fn repeat(&self, n: usize) -> BitVec<<T as BitStore>::Unalias, O>
Available on crate feature alloc only.
Creates a bit-vector by repeating a bit-slice n times.
§Original
§Panics
This method panics if self.len() * n
exceeds the BitVec
capacity.
§Examples
use bitvec::prelude::*;
assert_eq!(bits![0, 1].repeat(3), bitvec![0, 1, 0, 1, 0, 1]);
This panics by exceeding bit-vector maximum capacity:
use bitvec::prelude::*;
bits![0, 1].repeat(BitSlice::<usize, Lsb0>::MAX_BITS);
pub fn as_bitptr(&self) -> BitPtr<Const, T, O>
pub fn as_bitptr(&self) -> BitPtr<Const, T, O>
pub fn as_mut_bitptr(&mut self) -> BitPtr<Mut, T, O>
pub fn as_mut_bitptr(&mut self) -> BitPtr<Mut, T, O>
pub fn as_bitptr_range(&self) -> BitPtrRange<Const, T, O> ⓘ
pub fn as_bitptr_range(&self) -> BitPtrRange<Const, T, O> ⓘ
Views the bit-slice as a half-open range of bit-pointers, to its first bit in the bit-slice and first bit beyond it.
§Original
§API Differences
This is renamed to indicate that it returns a bitvec
structure, rather
than an ordinary Range
.
§Notes
BitSlice
does define a .as_ptr_range()
, which returns a
Range<BitPtr>
. BitPtrRange
has additional capabilities that
Range<*const T>
and Range<BitPtr>
do not.
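A brief sketch (not taken from the upstream documentation) of walking the returned range and reading each bit back through its pointer:
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
// Each yielded bit-pointer can be unsafely read back to recover its bit.
let values: Vec<bool> = bits.as_bitptr_range()
    .map(|ptr| unsafe { ptr.read() })
    .collect();
assert_eq!(values, [false, true, false, false, true]);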
pub fn as_mut_bitptr_range(&mut self) -> BitPtrRange<Mut, T, O> ⓘ
pub fn as_mut_bitptr_range(&mut self) -> BitPtrRange<Mut, T, O> ⓘ
Views the bit-slice as a half-open range of write-capable bit-pointers, to its first bit in the bit-slice and the first bit beyond it.
§Original
§API Differences
This is renamed to indicate that it returns a bitvec
structure, rather
than an ordinary Range
.
§Notes
BitSlice
does define a .as_mut_ptr_range(), which returns a
Range<BitPtr>
. BitPtrRange
has additional capabilities that
Range<*mut T>
and Range<BitPtr>
do not.
pub fn clone_from_bitslice<T2, O2>(&mut self, src: &BitSlice<T2, O2>)
pub fn clone_from_bitslice<T2, O2>(&mut self, src: &BitSlice<T2, O2>)
Copies the bits from src
into self
.
self
and src
must have the same length.
§Performance
If src
has the same type arguments as self
, it will use the same
implementation as .copy_from_bitslice()
; if you know that this will
always be the case, you should prefer to use that method directly.
Only .copy_from_bitslice()
is able to perform acceleration; this
method is always required to perform a bit-by-bit crawl over both
bit-slices.
§Original
§API Differences
This is renamed to reflect that it copies from another bit-slice, not from an element slice.
In order to support general usage, it allows src
to have different
type parameters than self
, at the cost of performance optimizations.
§Panics
This panics if the two bit-slices have different lengths.
§Examples
use bitvec::prelude::*;
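// Illustrative sketch; the upstream example is not reproduced here.
// The source and destination may use different storage and ordering parameters.
let dst = bits![mut u8, Msb0; 0; 4];
let src = bits![u16, Lsb0; 1, 0, 1, 1];
dst.clone_from_bitslice(src);
assert_eq!(dst, bits![1, 0, 1, 1]);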
pub fn copy_from_bitslice(&mut self, src: &BitSlice<T, O>)
pub fn copy_from_bitslice(&mut self, src: &BitSlice<T, O>)
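Copies the bits from src into self, where both bit-slices share the same type parameters; this restriction is what enables the batch acceleration described under .clone_from_bitslice(). A minimal sketch, not taken from the upstream docs:
use bitvec::prelude::*;
let dst = bits![mut 0; 4];
let src = bits![1, 0, 1, 1];
// Both operands have the same <T, O> parameters, so the copy can be batched.
dst.copy_from_bitslice(src);
assert_eq!(dst, bits![1, 0, 1, 1]);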
pub fn swap_with_bitslice<T2, O2>(&mut self, other: &mut BitSlice<T2, O2>)
pub fn swap_with_bitslice<T2, O2>(&mut self, other: &mut BitSlice<T2, O2>)
Swaps the contents of two bit-slices.
self
and other
must have the same length.
§Original
§API Differences
This method is renamed, as it takes a bit-slice rather than an element slice.
§Panics
This panics if the two bit-slices have different lengths.
§Examples
use bitvec::prelude::*;
let mut one = [0xA5u8, 0x69];
let mut two = 0x1234u16;
let one_bits = one.view_bits_mut::<Msb0>();
let two_bits = two.view_bits_mut::<Lsb0>();
one_bits.swap_with_bitslice(two_bits);
assert_eq!(one, [0x2C, 0x48]);
assert_eq!(two, 0x96A5);
pub fn set(&mut self, index: usize, value: bool)
pub fn set(&mut self, index: usize, value: bool)
Writes a new value into a single bit.
This is the replacement for *slice[index] = value;
, as bitvec
is not
able to express that under the current IndexMut
API signature.
§Parameters
&mut self
index: The bit-index to set. It must be in 0 .. self.len().
value: The new bit-value to write into the bit at index.
§Panics
This panics if index
is out of bounds.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 1];
bits.set(0, true);
bits.set(1, false);
assert_eq!(bits, bits![1, 0]);
pub unsafe fn set_unchecked(&mut self, index: usize, value: bool)
pub unsafe fn set_unchecked(&mut self, index: usize, value: bool)
Writes a new value into a single bit, without bounds checking.
§Parameters
&mut self
index: The bit-index to set. It must be in 0 .. self.len().
value: The new bit-value to write into the bit at index.
§Safety
You must ensure that index
is in the range 0 .. self.len()
.
This performs bit-pointer offset arithmetic without doing any bounds
checks. If index
is out of bounds, then this will issue an
out-of-bounds access and will trigger memory unsafety.
§Examples
use bitvec::prelude::*;
let mut data = 0u8;
let bits = &mut data.view_bits_mut::<Lsb0>()[.. 2];
assert_eq!(bits.len(), 2);
unsafe {
bits.set_unchecked(3, true);
}
assert_eq!(data, 8);
pub unsafe fn replace_unchecked(&mut self, index: usize, value: bool) -> bool
pub unsafe fn replace_unchecked(&mut self, index: usize, value: bool) -> bool
Writes a new value into a bit, returning the previous value, without bounds checking.
§Safety
index
must be less than self.len()
.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0];
let old = unsafe {
let a = &mut bits[.. 1];
a.replace_unchecked(1, true)
};
assert!(!old);
assert!(bits[1]);
pub unsafe fn swap_unchecked(&mut self, a: usize, b: usize)
pub unsafe fn swap_unchecked(&mut self, a: usize, b: usize)
Swaps two bits in a bit-slice, without bounds checking.
See .swap()
for documentation.
§Safety
You must ensure that a
and b
are both in the range 0 .. self.len()
.
This method performs bit-pointer offset arithmetic without doing any
bounds checks. If a
or b
are out of bounds, then this will issue an
out-of-bounds access and will trigger memory unsafety.
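A minimal sketch (not from the upstream docs) of an in-bounds use:
use bitvec::prelude::*;
let bits = bits![mut 0, 1];
// Both indices are known to be within `0 .. bits.len()`.
unsafe {
    bits.swap_unchecked(0, 1);
}
assert_eq!(bits, bits![1, 0]);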
pub unsafe fn split_at_unchecked(
&self,
mid: usize,
) -> (&BitSlice<T, O>, &BitSlice<T, O>)
pub unsafe fn split_at_unchecked( &self, mid: usize, ) -> (&BitSlice<T, O>, &BitSlice<T, O>)
Splits a bit-slice at an index, without bounds checking.
See .split_at()
for documentation.
§Safety
You must ensure that mid
is in the range 0 ..= self.len()
.
This method produces new bit-slice references. If mid
is out of
bounds, its behavior is library-level undefined. You must
conservatively assume that an out-of-bounds split point produces
compiler-level UB.
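A minimal sketch (not from the upstream docs), using a split point that is known to be in bounds:
use bitvec::prelude::*;
let bits = bits![0, 0, 0, 1, 1, 1];
// 3 is within `0 ..= bits.len()`, so the call is sound.
let (left, right) = unsafe { bits.split_at_unchecked(3) };
assert_eq!(left, bits![0; 3]);
assert_eq!(right, bits![1; 3]);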
pub unsafe fn split_at_unchecked_mut(
&mut self,
mid: usize,
) -> (&mut BitSlice<<T as BitStore>::Alias, O>, &mut BitSlice<<T as BitStore>::Alias, O>)
pub unsafe fn split_at_unchecked_mut( &mut self, mid: usize, ) -> (&mut BitSlice<<T as BitStore>::Alias, O>, &mut BitSlice<<T as BitStore>::Alias, O>)
Splits a mutable bit-slice at an index, without bounds checking.
See .split_at_mut()
for documentation.
§Safety
You must ensure that mid
is in the range 0 ..= self.len()
.
This method produces new bit-slice references. If mid
is out of
bounds, its behavior is library-level undefined. You must
conservatively assume that an out-of-bounds split point produces
compiler-level UB.
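A minimal sketch (not from the upstream docs); as with .split_at_mut(), both halves are alias-marked and may be written independently:
use bitvec::prelude::*;
let bits = bits![mut 0; 4];
// 2 is within `0 ..= bits.len()`, so the call is sound.
let (left, right) = unsafe { bits.split_at_unchecked_mut(2) };
left.set(0, true);
right.set(1, true);
assert_eq!(bits, bits![1, 0, 0, 1]);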
pub unsafe fn copy_within_unchecked<R>(&mut self, src: R, dest: usize)where
R: RangeExt<usize>,
pub unsafe fn copy_within_unchecked<R>(&mut self, src: R, dest: usize)where
R: RangeExt<usize>,
Copies bits from one region of the bit-slice to another region of itself, without doing bounds checks.
The regions are allowed to overlap.
§Parameters
&mut self
src: The range within self from which to copy.
dest: The starting index within self at which to paste.
§Effects
self[src]
is copied to self[dest .. dest + src.len()]
. The bits of
self[src]
are in an unspecified, but initialized, state.
§Safety
src.end()
and dest + src.len()
must be entirely within bounds.
§Examples
use bitvec::prelude::*;
let mut data = 0b1011_0000u8;
let bits = data.view_bits_mut::<Msb0>();
unsafe {
bits.copy_within_unchecked(.. 4, 2);
}
assert_eq!(data, 0b1010_1100);
pub fn bit_domain(&self) -> BitDomain<'_, Const, T, O>
Partitions a bit-slice into maybe-contended and known-uncontended parts.
The documentation of BitDomain
goes into this in more detail. In
short, this produces a &BitSlice
that is as large as possible without
requiring alias protection, as well as any bits that were not able to be
included in the unaliased bit-slice.
pub fn bit_domain_mut(&mut self) -> BitDomain<'_, Mut, T, O>
Partitions a mutable bit-slice into maybe-contended and known-uncontended parts.
The documentation of BitDomain
goes into this in more detail. In
short, this produces a &mut BitSlice
that is as large as possible
without requiring alias protection, as well as any bits that were not
able to be included in the unaliased bit-slice.
pub fn domain(&self) -> Domain<'_, Const, T, O> ⓘ
Views the underlying memory of a bit-slice, removing alias protections where possible.
The documentation of Domain
goes into this in more detail. In short,
this produces a &[T]
slice with alias protections removed, covering
all elements that self
completely fills. Partially-used elements on
either the front or back edge of the slice are returned separately.
pub fn domain_mut(&mut self) -> Domain<'_, Mut, T, O>
Views the underlying memory of a bit-slice, removing alias protections where possible.
The documentation of Domain
goes into this in more detail. In short,
this produces a &mut [T]
slice with alias protections removed,
covering all elements that self
completely fills. Partially-used
elements on the front or back edge of the slice are returned separately.
pub fn count_ones(&self) -> usize
pub fn count_ones(&self) -> usize
Counts the number of bits set to 1
in the bit-slice contents.
§Examples
use bitvec::prelude::*;
let bits = bits![1, 1, 0, 0];
assert_eq!(bits[.. 2].count_ones(), 2);
assert_eq!(bits[2 ..].count_ones(), 0);
assert_eq!(bits![].count_ones(), 0);
pub fn count_zeros(&self) -> usize
pub fn count_zeros(&self) -> usize
Counts the number of bits cleared to 0
in the bit-slice contents.
§Examples
use bitvec::prelude::*;
let bits = bits![1, 1, 0, 0];
assert_eq!(bits[.. 2].count_zeros(), 0);
assert_eq!(bits[2 ..].count_zeros(), 2);
assert_eq!(bits![].count_zeros(), 0);
pub fn iter_ones(&self) -> IterOnes<'_, T, O> ⓘ
pub fn iter_ones(&self) -> IterOnes<'_, T, O> ⓘ
Enumerates the index of each bit in a bit-slice set to 1
.
This is a shorthand for a .enumerate().filter_map()
iterator that
selects the index of each true
bit; however, its implementation is
eligible for optimizations that the individual-bit iterator is not.
Specializations for the Lsb0
and Msb0
orderings allow processors
with instructions that seek particular bits within an element to operate
on whole elements, rather than on each bit individually.
§Examples
This example uses .iter_ones()
, a .filter_map()
that finds the index
of each set bit, and the known indices, in order to show that they have
equivalent behavior.
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1, 0, 0, 0, 1];
let iter_ones = bits.iter_ones();
let known_indices = [1, 4, 8].iter().copied();
let filter = bits.iter()
.by_vals()
.enumerate()
.filter_map(|(idx, bit)| if bit { Some(idx) } else { None });
let all = iter_ones.zip(known_indices).zip(filter);
for ((iter_one, known), filtered) in all {
assert_eq!(iter_one, known);
assert_eq!(known, filtered);
}
pub fn iter_zeros(&self) -> IterZeros<'_, T, O> ⓘ
pub fn iter_zeros(&self) -> IterZeros<'_, T, O> ⓘ
Enumerates the index of each bit in a bit-slice cleared to 0
.
This is a shorthand for a .enumerate().filter_map()
iterator that
selects the index of each false
bit; however, its implementation is
eligible for optimizations that the individual-bit iterator is not.
Specializations for the Lsb0
and Msb0
orderings allow processors
with instructions that seek particular bits within an element to operate
on whole elements, rather than on each bit individually.
§Examples
This example uses .iter_zeros()
, a .filter_map()
that finds the
index of each cleared bit, and the known indices, in order to show that
they have equivalent behavior.
use bitvec::prelude::*;
let bits = bits![1, 0, 1, 1, 0, 1, 1, 1, 0];
let iter_zeros = bits.iter_zeros();
let known_indices = [1, 4, 8].iter().copied();
let filter = bits.iter()
.by_vals()
.enumerate()
.filter_map(|(idx, bit)| if !bit { Some(idx) } else { None });
let all = iter_zeros.zip(known_indices).zip(filter);
for ((iter_zero, known), filtered) in all {
assert_eq!(iter_zero, known);
assert_eq!(known, filtered);
}
pub fn first_one(&self) -> Option<usize>
pub fn first_one(&self) -> Option<usize>
Finds the index of the first bit in the bit-slice set to 1
.
Returns None
if there is no true
bit in the bit-slice.
§Examples
use bitvec::prelude::*;
assert!(bits![].first_one().is_none());
assert!(bits![0].first_one().is_none());
assert_eq!(bits![0, 1].first_one(), Some(1));
pub fn first_zero(&self) -> Option<usize>
pub fn first_zero(&self) -> Option<usize>
Finds the index of the first bit in the bit-slice cleared to 0
.
Returns None
if there is no false
bit in the bit-slice.
§Examples
use bitvec::prelude::*;
assert!(bits![].first_zero().is_none());
assert!(bits![1].first_zero().is_none());
assert_eq!(bits![1, 0].first_zero(), Some(1));
pub fn last_one(&self) -> Option<usize>
pub fn last_one(&self) -> Option<usize>
Finds the index of the last bit in the bit-slice set to 1
.
Returns None
if there is no true
bit in the bit-slice.
§Examples
use bitvec::prelude::*;
assert!(bits![].last_one().is_none());
assert!(bits![0].last_one().is_none());
assert_eq!(bits![1, 0].last_one(), Some(0));
pub fn last_zero(&self) -> Option<usize>
pub fn last_zero(&self) -> Option<usize>
Finds the index of the last bit in the bit-slice cleared to 0
.
Returns None
if there is no false
bit in the bit-slice.
§Examples
use bitvec::prelude::*;
assert!(bits![].last_zero().is_none());
assert!(bits![1].last_zero().is_none());
assert_eq!(bits![0, 1].last_zero(), Some(0));
pub fn leading_ones(&self) -> usize
pub fn leading_ones(&self) -> usize
Counts the number of bits from the start of the bit-slice to the first
bit set to 0
.
This returns 0
if the bit-slice is empty.
§Examples
use bitvec::prelude::*;
assert_eq!(bits![].leading_ones(), 0);
assert_eq!(bits![0].leading_ones(), 0);
assert_eq!(bits![1, 0].leading_ones(), 1);
pub fn leading_zeros(&self) -> usize
pub fn leading_zeros(&self) -> usize
Counts the number of bits from the start of the bit-slice to the first
bit set to 1
.
This returns 0
if the bit-slice is empty.
§Examples
use bitvec::prelude::*;
assert_eq!(bits![].leading_zeros(), 0);
assert_eq!(bits![1].leading_zeros(), 0);
assert_eq!(bits![0, 1].leading_zeros(), 1);
pub fn trailing_ones(&self) -> usize
pub fn trailing_ones(&self) -> usize
Counts the number of bits from the end of the bit-slice to the last bit
set to 0
.
This returns 0
if the bit-slice is empty.
§Examples
use bitvec::prelude::*;
assert_eq!(bits![].trailing_ones(), 0);
assert_eq!(bits![0].trailing_ones(), 0);
assert_eq!(bits![0, 1].trailing_ones(), 1);
pub fn trailing_zeros(&self) -> usize
pub fn trailing_zeros(&self) -> usize
Counts the number of bits from the end of the bit-slice to the last bit
set to 1
.
This returns 0
if the bit-slice is empty.
§Examples
use bitvec::prelude::*;
assert_eq!(bits![].trailing_zeros(), 0);
assert_eq!(bits![1].trailing_zeros(), 0);
assert_eq!(bits![1, 0].trailing_zeros(), 1);
pub fn any(&self) -> bool
pub fn any(&self) -> bool
Tests if there is at least one bit set to 1
in the bit-slice.
Returns false
when self
is empty.
§Examples
use bitvec::prelude::*;
assert!(!bits![].any());
assert!(!bits![0].any());
assert!(bits![0, 1].any());
pub fn all(&self) -> bool
pub fn all(&self) -> bool
Tests if every bit is set to 1
in the bit-slice.
Returns true
when self
is empty.
§Examples
use bitvec::prelude::*;
assert!( bits![].all());
assert!(!bits![0].all());
assert!( bits![1].all());
pub fn not_any(&self) -> bool
pub fn not_any(&self) -> bool
Tests if every bit is cleared to 0
in the bit-slice.
Returns true
when self
is empty.
§Examples
use bitvec::prelude::*;
assert!( bits![].not_any());
assert!(!bits![1].not_any());
assert!( bits![0].not_any());
pub fn not_all(&self) -> bool
pub fn not_all(&self) -> bool
Tests if at least one bit is cleared to 0
in the bit-slice.
Returns false
when self
is empty.
§Examples
use bitvec::prelude::*;
assert!(!bits![].not_all());
assert!(!bits![1].not_all());
assert!( bits![0].not_all());
pub fn some(&self) -> bool
pub fn some(&self) -> bool
Tests if at least one bit is set to 1
, and at least one bit is cleared
to 0
, in the bit-slice.
Returns false
when self
is empty.
§Examples
use bitvec::prelude::*;
assert!(!bits![].some());
assert!(!bits![0].some());
assert!(!bits![1].some());
assert!( bits![0, 1].some());
pub fn shift_left(&mut self, by: usize)
pub fn shift_left(&mut self, by: usize)
Shifts the contents of a bit-slice “left” (towards the zero-index),
clearing the “right” bits to 0
.
This is a strictly-worse analogue to taking bits = &bits[by ..]
: it
has to modify the entire memory region that bits
governs, and destroys
contained information. Unless the actual memory layout and contents of
your bit-slice matters to your program, you should probably prefer to
munch your way forward through a bit-slice handle.
Note also that the “left” here is semantic only, and does not necessarily correspond to a left-shift instruction applied to the underlying integer storage.
This has no effect when by
is 0
. When by
is self.len()
, the
bit-slice is entirely cleared to 0
.
§Panics
This panics if by
is not less than self.len()
.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1];
// these bits are retained ^--------------------------^
bits.shift_left(2);
assert_eq!(bits, bits![1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0]);
// and move here ^--------------------------^
let bits = bits![mut 1; 2];
bits.shift_left(2);
assert_eq!(bits, bits![0; 2]);
pub fn shift_right(&mut self, by: usize)
pub fn shift_right(&mut self, by: usize)
Shifts the contents of a bit-slice “right” (away from the zero-index),
clearing the “left” bits to 0
.
This is a strictly-worse analogue to taking bits = &bits[.. bits.len() - by]: it must modify the entire memory region that bits governs, and destroys contained information. Unless the actual memory layout and contents of your bit-slice matters to your program, you should probably prefer to munch your way backward through a bit-slice handle.
Note also that the “right” here is semantic only, and does not necessarily correspond to a right-shift instruction applied to the underlying integer storage.
This has no effect when by
is 0
. When by
is self.len()
, the
bit-slice is entirely cleared to 0
.
§Panics
This panics if by
is not less than self.len()
.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1];
// these bits stay ^--------------------------^
bits.shift_right(2);
assert_eq!(bits, bits![0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1]);
// and move here ^--------------------------^
let bits = bits![mut 1; 2];
bits.shift_right(2);
assert_eq!(bits, bits![0; 2]);
pub fn set_aliased(&self, index: usize, value: bool)
pub fn set_aliased(&self, index: usize, value: bool)
Writes a new value into a single bit, using alias-safe operations.
This is equivalent to .set()
, except that it does not require an
&mut
reference, and allows bit-slices with alias-safe storage to share
write permissions.
§Parameters
&self: This method only exists on bit-slices with alias-safe storage, and so does not require exclusive access.
index: The bit index to set. It must be in 0 .. self.len().
value: The new bit-value to write into the bit at index.
§Panics
This panics if index
is out of bounds.
§Examples
use bitvec::prelude::*;
use core::cell::Cell;
let bits: &BitSlice<_, _> = bits![Cell<usize>, Lsb0; 0, 1];
bits.set_aliased(0, true);
bits.set_aliased(1, false);
assert_eq!(bits, bits![1, 0]);
pub unsafe fn set_aliased_unchecked(&self, index: usize, value: bool)
pub unsafe fn set_aliased_unchecked(&self, index: usize, value: bool)
Writes a new value into a single bit, using alias-safe operations and without bounds checking.
This is equivalent to .set_unchecked()
, except that it does not
require an &mut
reference, and allows bit-slices with alias-safe
storage to share write permissions.
§Parameters
&self: This method only exists on bit-slices with alias-safe storage, and so does not require exclusive access.
index: The bit index to set. It must be in 0 .. self.len().
value: The new bit-value to write into the bit at index.
§Safety
The caller must ensure that index
is not out of bounds.
§Examples
use bitvec::prelude::*;
use core::cell::Cell;
let data = Cell::new(0u8);
let bits = &data.view_bits::<Lsb0>()[.. 2];
unsafe {
bits.set_aliased_unchecked(3, true);
}
assert_eq!(data.get(), 8);
pub const MAX_BITS: usize = 2_305_843_009_213_693_951usize
pub const MAX_ELTS: usize = BitSpan<Const, T, O>::REGION_MAX_ELTS
pub fn to_bitvec(&self) -> BitVec<<T as BitStore>::Unalias, O> ⓘ
Available on crate feature alloc
only.
pub fn to_bitvec(&self) -> BitVec<<T as BitStore>::Unalias, O> ⓘ
alloc
only.Copies a bit-slice into an owned bit-vector.
Since the new vector is freshly owned, this gets marked as ::Unalias
to remove any guards that may have been inserted by the bit-slice’s
history.
This does not change the underlying memory type: a BitSlice<Cell<_>, _> will produce a BitVec<Cell<_>, _>.
§Original
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 1];
let bv = bits.to_bitvec();
assert_eq!(bits, bv);
Trait Implementations§
§impl<T, O> BitAndAssign<&BitVec<T, O>> for BitSlice<T, O>
fn bitand_assign(&mut self, rhs: &BitVec<T, O>)
Performs the &= operation.
§impl<T, O> BitAndAssign<BitVec<T, O>> for BitSlice<T, O>
fn bitand_assign(&mut self, rhs: BitVec<T, O>)
Performs the &= operation.
§impl<T, O, Rhs> BitAndAssign<Rhs> for BitVec<T, O>
fn bitand_assign(&mut self, rhs: Rhs)
Performs the &= operation.
§impl<T, O> BitField for BitVec<T, O>
Available on crate feature alloc only.
§impl<T, O> BitOrAssign<&BitVec<T, O>> for BitSlice<T, O>
fn bitor_assign(&mut self, rhs: &BitVec<T, O>)
Performs the |= operation.
§impl<T, O> BitOrAssign<BitVec<T, O>> for BitSlice<T, O>
fn bitor_assign(&mut self, rhs: BitVec<T, O>)
Performs the |= operation.
§impl<T, O, Rhs> BitOrAssign<Rhs> for BitVec<T, O>
fn bitor_assign(&mut self, rhs: Rhs)
Performs the |= operation.
§impl<T, O> BitXorAssign<&BitVec<T, O>> for BitSlice<T, O>
fn bitxor_assign(&mut self, rhs: &BitVec<T, O>)
Performs the ^= operation.
§impl<T, O> BitXorAssign<BitVec<T, O>> for BitSlice<T, O>
fn bitxor_assign(&mut self, rhs: BitVec<T, O>)
Performs the ^= operation.
§impl<T, O, Rhs> BitXorAssign<Rhs> for BitVec<T, O>
fn bitxor_assign(&mut self, rhs: Rhs)
Performs the ^= operation.
§impl<T, O> BorrowMut<BitSlice<T, O>> for BitVec<T, O>
fn borrow_mut(&mut self) -> &mut BitSlice<T, O>
Mutably borrows from an owned value.
§impl<'de, T, O> Deserialize<'de> for BitVec<T, O>
Available on crate feature alloc
only.
impl<'de, T, O> Deserialize<'de> for BitVec<T, O>
alloc
only.§fn deserialize<D>(
deserializer: D,
) -> Result<BitVec<T, O>, <D as Deserializer<'de>>::Error>where
D: Deserializer<'de>,
fn deserialize<D>(
deserializer: D,
) -> Result<BitVec<T, O>, <D as Deserializer<'de>>::Error>where
D: Deserializer<'de>,
§impl<'a, T, O> Extend<&'a T> for BitVec<T, O>
Available on non-tarpaulin_include
only.
impl<'a, T, O> Extend<&'a T> for BitVec<T, O>
tarpaulin_include
only.§fn extend<I>(&mut self, iter: I)where
I: IntoIterator<Item = &'a T>,
fn extend<I>(&mut self, iter: I)where
I: IntoIterator<Item = &'a T>,
Source§fn extend_one(&mut self, item: A)
fn extend_one(&mut self, item: A)
extend_one
#72631)§impl<'a, T, O> Extend<&'a bool> for BitVec<T, O>
Available on non-tarpaulin_include
only.
impl<'a, T, O> Extend<&'a bool> for BitVec<T, O>
tarpaulin_include
only.§fn extend<I>(&mut self, iter: I)where
I: IntoIterator<Item = &'a bool>,
fn extend<I>(&mut self, iter: I)where
I: IntoIterator<Item = &'a bool>,
Source§fn extend_one(&mut self, item: A)
fn extend_one(&mut self, item: A)
(nightly-only experimental API: extend_one, #72631)
§impl<'a, M, T1, T2, O1, O2> Extend<BitRef<'a, M, T2, O2>> for BitVec<T1, O1>
§Bit-Vector Extension by Proxy References
DO NOT use this. You clearly have a bit-slice. Use .extend_from_bitslice() instead!
Iterating over a bit-slice requires loading from memory and constructing a proxy reference for each bit. This is needlessly slow; the specialized method is able to avoid this per-bit cost and possibly even use batched operations.
§fn extend<I>(&mut self, iter: I)where
I: IntoIterator<Item = BitRef<'a, M, T2, O2>>,
fn extend<I>(&mut self, iter: I)where
I: IntoIterator<Item = BitRef<'a, M, T2, O2>>,
Source§fn extend_one(&mut self, item: A)
fn extend_one(&mut self, item: A)
extend_one
#72631)§impl<T, O> Extend<T> for BitVec<T, O>
impl<T, O> Extend<T> for BitVec<T, O>
§fn extend<I>(&mut self, iter: I)where
I: IntoIterator<Item = T>,
fn extend<I>(&mut self, iter: I)where
I: IntoIterator<Item = T>,
Source§fn extend_one(&mut self, item: A)
fn extend_one(&mut self, item: A)
(nightly-only experimental API: extend_one, #72631)
§impl<T, O> Extend<bool> for BitVec<T, O>
§Bit-Vector Extension
This extends a bit-vector from anything that produces individual bits.
§Original
§Notes
This .extend()
call is the second-slowest possible way to append bits into a
bit-vector, faster only than calling iter.for_each(|bit| bv.push(bit))
.
DO NOT use this if you have any other choice.
If you are extending a bit-vector from the contents of a bit-slice, then you
should use .extend_from_bitslice()
instead. That method is specialized to
perform upfront allocation and, where possible, use a batch copy rather than
copying each bit individually from the source into the bit-vector.
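A minimal sketch (not from the upstream docs) of the basic usage this impl enables:
use bitvec::prelude::*;
let mut bv = bitvec![0, 1];
// Extending from an iterator of plain `bool`s appends one bit at a time.
bv.extend([true, false].iter().copied());
assert_eq!(bits![0, 1, 1, 0], bv);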
§fn extend<I>(&mut self, iter: I)where
I: IntoIterator<Item = bool>,
fn extend<I>(&mut self, iter: I)where
I: IntoIterator<Item = bool>,
Source§fn extend_one(&mut self, item: A)
fn extend_one(&mut self, item: A)
extend_one
#72631)§impl<A, O> From<BitArray<A, O>> for BitVec<<A as BitView>::Store, O>where
O: BitOrder,
A: BitViewSized,
Available on non-tarpaulin_include
only.
impl<A, O> From<BitArray<A, O>> for BitVec<<A as BitView>::Store, O>where
O: BitOrder,
A: BitViewSized,
tarpaulin_include
only.§impl<'a, T, O> From<Cow<'a, BitSlice<T, O>>> for BitVec<T, O>
Available on non-tarpaulin_include
only.
impl<'a, T, O> From<Cow<'a, BitSlice<T, O>>> for BitVec<T, O>
tarpaulin_include
only.§impl<'a, T, O> FromIterator<&'a T> for BitVec<T, O>
Available on non-tarpaulin_include
only.
impl<'a, T, O> FromIterator<&'a T> for BitVec<T, O>
tarpaulin_include
only.§impl<'a, T, O> FromIterator<&'a bool> for BitVec<T, O>
Available on non-tarpaulin_include
only.
impl<'a, T, O> FromIterator<&'a bool> for BitVec<T, O>
tarpaulin_include
only.§impl<'a, M, T1, T2, O1, O2> FromIterator<BitRef<'a, M, T2, O2>> for BitVec<T1, O1>
§Bit-Vector Collection from Proxy References
DO NOT use this. You clearly have a bit-slice. Use ::from_bitslice() instead!
Iterating over a bit-slice requires loading from memory and constructing a proxy reference for each bit. This is needlessly slow; the specialized method is able to avoid this per-bit cost and possibly even use batched operations.
§impl<T, O> FromIterator<T> for BitVec<T, O>
§impl<T, O> FromIterator<bool> for BitVec<T, O>
§Bit-Vector Collection
This collects a bit-vector from anything that produces individual bits.
§Original
impl<T> FromIterator<T> for Vec<T>
§Notes
This .collect()
call is the second-slowest possible way to collect bits into a
bit-vector, faster only than calling iter.for_each(|bit| bv.push(bit))
.
DO NOT use this if you have any other choice.
If you are collecting a bit-vector from the contents of a bit-slice, then you
should use ::from_bitslice()
instead. That method is specialized to
perform upfront allocation and, where possible, use a batch copy rather than
copying each bit individually from the source into the bit-vector.
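A minimal sketch (not from the upstream docs) of collecting individual bools:
use bitvec::prelude::*;
// The default type parameters produce a BitVec<usize, Lsb0>.
let bv: BitVec = [false, true, true, false].iter().copied().collect();
assert_eq!(bits![0, 1, 1, 0], bv);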
§impl<'a, T, O> IntoIterator for &'a BitVec<T, O>
Available on non-tarpaulin_include
only.
impl<'a, T, O> IntoIterator for &'a BitVec<T, O>
tarpaulin_include
only.§type IntoIter = <&'a BitSlice<T, O> as IntoIterator>::IntoIter
type IntoIter = <&'a BitSlice<T, O> as IntoIterator>::IntoIter
§type Item = <&'a BitSlice<T, O> as IntoIterator>::Item
type Item = <&'a BitSlice<T, O> as IntoIterator>::Item
§impl<'a, T, O> IntoIterator for &'a mut BitVec<T, O>
Available on non-tarpaulin_include
only.
impl<'a, T, O> IntoIterator for &'a mut BitVec<T, O>
tarpaulin_include
only.§type IntoIter = <&'a mut BitSlice<T, O> as IntoIterator>::IntoIter
type IntoIter = <&'a mut BitSlice<T, O> as IntoIterator>::IntoIter
§type Item = <&'a mut BitSlice<T, O> as IntoIterator>::Item
type Item = <&'a mut BitSlice<T, O> as IntoIterator>::Item
§impl<T, O> IntoIterator for BitVec<T, O>
§Bit-Vector Iteration
Bit-vectors have the advantage that iteration consumes the whole structure, so they can simply freeze the allocation into a bit-box, then use its iteration and destructor.
§Original
§type IntoIter = <BitBox<T, O> as IntoIterator>::IntoIter
type IntoIter = <BitBox<T, O> as IntoIterator>::IntoIter
§type Item = <BitBox<T, O> as IntoIterator>::Item
type Item = <BitBox<T, O> as IntoIterator>::Item
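A minimal sketch (not from the upstream docs); by-value iteration yields bool:
use bitvec::prelude::*;
let bv = bitvec![0, 1, 1];
// The whole allocation is consumed and each bit is produced as a `bool`.
let bools: Vec<bool> = bv.into_iter().collect();
assert_eq!(bools, [false, true, true]);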
§impl<T, O> Not for BitVec<T, O>
This implementation inverts all elements in the live buffer. You cannot rely on the value of bits in the buffer that are outside the domain of BitVec::as_mut_bitslice.
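A minimal sketch (not from the upstream docs); the negation consumes the bit-vector and returns it with every live bit flipped:
use bitvec::prelude::*;
let bv = bitvec![u8, Msb0; 0, 1, 0, 0];
let flipped = !bv;
assert_eq!(bits![1, 0, 1, 1], flipped);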
§impl<T, O> Ord for BitVec<T, O>
Available on non-tarpaulin_include
only.
impl<T, O> Ord for BitVec<T, O>
tarpaulin_include
only.§impl<T1, T2, O1, O2> PartialEq<BitVec<T2, O2>> for &BitSlice<T1, O1>
Available on non-tarpaulin_include
only.
impl<T1, T2, O1, O2> PartialEq<BitVec<T2, O2>> for &BitSlice<T1, O1>
tarpaulin_include
only.§impl<T1, T2, O1, O2> PartialEq<BitVec<T2, O2>> for &mut BitSlice<T1, O1>
Available on non-tarpaulin_include
only.
impl<T1, T2, O1, O2> PartialEq<BitVec<T2, O2>> for &mut BitSlice<T1, O1>
tarpaulin_include
only.§impl<T1, T2, O1, O2> PartialEq<BitVec<T2, O2>> for BitSlice<T1, O1>
Available on non-tarpaulin_include
only.
impl<T1, T2, O1, O2> PartialEq<BitVec<T2, O2>> for BitSlice<T1, O1>
tarpaulin_include
only.§impl<'a, T1, T2, O1, O2> PartialOrd<BitVec<T2, O2>> for &'a BitSlice<T1, O1>
Available on non-tarpaulin_include
only.
impl<'a, T1, T2, O1, O2> PartialOrd<BitVec<T2, O2>> for &'a BitSlice<T1, O1>
tarpaulin_include
only.§impl<'a, T1, T2, O1, O2> PartialOrd<BitVec<T2, O2>> for &'a mut BitSlice<T1, O1>
Available on non-tarpaulin_include
only.
impl<'a, T1, T2, O1, O2> PartialOrd<BitVec<T2, O2>> for &'a mut BitSlice<T1, O1>
tarpaulin_include
only.§impl<T1, T2, O1, O2> PartialOrd<BitVec<T2, O2>> for BitSlice<T1, O1>
Available on non-tarpaulin_include
only.
impl<T1, T2, O1, O2> PartialOrd<BitVec<T2, O2>> for BitSlice<T1, O1>
tarpaulin_include
only.§impl<T, O, Rhs> PartialOrd<Rhs> for BitVec<T, O>
Available on non-tarpaulin_include
only.
impl<T, O, Rhs> PartialOrd<Rhs> for BitVec<T, O>
§impl<T, O> Read for BitVec<T, O>
§Reading From a Bit-Vector
The implementation loads bytes out of the reference bit-vector until either the
destination buffer is filled or the source has no more bytes to provide. When
.read()
returns, the provided bit-vector will have its contents shifted down
so that it begins at the first bit after the last byte copied out into buf
.
Note that the return value of .read()
is always the number of bytes of buf
filled!
§API Differences
The standard library does not impl Read for Vec<u8>
. It is provided here as a
courtesy.
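A minimal sketch of draining bytes through the standard Read trait, assuming the crate's std feature (which supplies the I/O implementations) is enabled:
use bitvec::prelude::*;
use std::io::Read;

let mut bv = bitvec![u8, Msb0; 0; 16];
bv[0 .. 8].store_be::<u8>(0xA5);
bv[8 .. 16].store_be::<u8>(0x3C);

let mut buf = [0u8; 1];
let n = bv.read(&mut buf).unwrap();
// the return value counts the filled bytes of `buf`
assert_eq!(n, 1);
assert_eq!(buf[0], 0xA5);
// per the behavior described above, `bv` now begins at the first bit
// after the byte that was copied out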
fn read(&mut self, buf: &mut [u8]) -> Result<usize, Error>
fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> Result<usize, Error>
Like read, except that it reads into a slice of buffers.
fn is_read_vectored(&self) -> bool
Determines if this Reader has an efficient read_vectored implementation. Unstable (can_vector, #69941).
fn read_to_end(&mut self, buf: &mut Vec<u8>) -> Result<usize, Error>
Reads all bytes until EOF in this source, placing them into buf.
fn read_to_string(&mut self, buf: &mut String) -> Result<usize, Error>
Reads all bytes until EOF in this source, appending them to buf.
fn read_exact(&mut self, buf: &mut [u8]) -> Result<(), Error>
Reads the exact number of bytes required to fill buf.
fn read_buf(&mut self, buf: BorrowedCursor<'_>) -> Result<(), Error>
Pulls some bytes from this source into the specified buffer. Unstable (read_buf, #78485).
fn read_buf_exact(&mut self, cursor: BorrowedCursor<'_>) -> Result<(), Error>
Reads the exact number of bytes required to fill cursor. Unstable (read_buf, #78485).
fn by_ref(&mut self) -> &mut Self where Self: Sized
Creates a "by reference" adaptor for this instance of Read.
§impl<T, O> Serialize for BitVec<T, O>
Available on crate feature alloc only.
fn serialize<S>(&self, serializer: S) -> Result<<S as Serializer>::Ok, <S as Serializer>::Error> where S: Serializer
§impl<T, O> Write for BitVec<T, O>
§Writing Into a Bit-Vector
The implementation appends bytes to the referenced bit-vector until the source buffer is exhausted.
Note that the return value of .write() is always the number of bytes of buf consumed!
The implementation uses BitField::store_be to fill bytes. Note that unlike the standard library, it is implemented on bit-vectors of any underlying element type. However, using a BitVec<u8, _> is still likely to be fastest.
§Original
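A minimal sketch of appending bytes through the standard Write trait, again assuming the crate's std feature:
use bitvec::prelude::*;
use std::io::Write;

let mut bv = BitVec::<u8, Msb0>::new();
let n = bv.write(&[0xC3, 0x5A]).unwrap();
// the return value counts the consumed bytes of the source buffer
assert_eq!(n, 2);
assert_eq!(bv.len(), 16);
// each byte was stored big-endian into the next eight bits
assert_eq!(bv[0 .. 8].load_be::<u8>(), 0xC3);
assert_eq!(bv[8 .. 16].load_be::<u8>(), 0x5A);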
fn write(&mut self, buf: &[u8]) -> Result<usize, Error>
fn flush(&mut self) -> Result<(), Error>
fn is_write_vectored(&self) -> bool
Determines if this Writer has an efficient write_vectored implementation. Unstable (can_vector, #69941).
fn write_all(&mut self, buf: &[u8]) -> Result<(), Error>
Attempts to write an entire buffer into this writer.
fn write_all_vectored(&mut self, bufs: &mut [IoSlice<'_>]) -> Result<(), Error>
Attempts to write multiple buffers into this writer. Unstable (write_all_vectored, #70436).
impl<T, O> Eq for BitVec<T, O>
impl<T, O> Send for BitVec<T, O>
impl<T, O> Sync for BitVec<T, O>
impl<T, O> Unpin for BitVec<T, O>
§Auto Trait Implementations
impl<T, O> Freeze for BitVec<T, O>
impl<T, O> RefUnwindSafe for BitVec<T, O> where O: RefUnwindSafe, T: RefUnwindSafe
impl<T, O> UnwindSafe for BitVec<T, O> where O: UnwindSafe, T: RefUnwindSafe
§Blanket Implementations
§impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
§impl<T> CloneToUninit for T where T: Clone
§impl<Q, K> Comparable<K> for Q
§impl<T> Conv for T
§impl<Q, K> Equivalent<K> for Q
fn equivalent(&self, key: &K) -> bool
Compare self to key and return true if they are equal.
§impl<Q, K> Equivalent<K> for Q
fn equivalent(&self, key: &K) -> bool
§impl<T> FmtForward for T
fn fmt_binary(self) -> FmtBinary<Self> where Self: Binary
Causes self to use its Binary implementation when Debug-formatted.
fn fmt_display(self) -> FmtDisplay<Self> where Self: Display
Causes self to use its Display implementation when Debug-formatted.
fn fmt_lower_exp(self) -> FmtLowerExp<Self> where Self: LowerExp
Causes self to use its LowerExp implementation when Debug-formatted.
fn fmt_lower_hex(self) -> FmtLowerHex<Self> where Self: LowerHex
Causes self to use its LowerHex implementation when Debug-formatted.
fn fmt_octal(self) -> FmtOctal<Self> where Self: Octal
Causes self to use its Octal implementation when Debug-formatted.
fn fmt_pointer(self) -> FmtPointer<Self> where Self: Pointer
Causes self to use its Pointer implementation when Debug-formatted.
fn fmt_upper_exp(self) -> FmtUpperExp<Self> where Self: UpperExp
Causes self to use its UpperExp implementation when Debug-formatted.
fn fmt_upper_hex(self) -> FmtUpperHex<Self> where Self: UpperHex
Causes self to use its UpperHex implementation when Debug-formatted.
fn fmt_list(self) -> FmtList<Self> where &'a Self: for<'a> IntoIterator
§impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
§impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.
§impl<T> Joiner for T
§impl<T> Pipe for T where T: ?Sized
fn pipe<R>(self, func: impl FnOnce(Self) -> R) -> R where Self: Sized
fn pipe_ref<'a, R>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R where R: 'a
Borrows self and passes that borrow into the pipe function.
fn pipe_ref_mut<'a, R>(&'a mut self, func: impl FnOnce(&'a mut Self) -> R) -> R where R: 'a
Mutably borrows self and passes that borrow into the pipe function.
fn pipe_borrow<'a, B, R>(&'a self, func: impl FnOnce(&'a B) -> R) -> R
fn pipe_borrow_mut<'a, B, R>(&'a mut self, func: impl FnOnce(&'a mut B) -> R) -> R
fn pipe_as_ref<'a, U, R>(&'a self, func: impl FnOnce(&'a U) -> R) -> R
Borrows self, then passes self.as_ref() into the pipe function.
fn pipe_as_mut<'a, U, R>(&'a mut self, func: impl FnOnce(&'a mut U) -> R) -> R
Mutably borrows self, then passes self.as_mut() into the pipe function.
fn pipe_deref<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R
Borrows self, then passes self.deref() into the pipe function.
§impl<T> Pointable for T
§impl<R> ReadBytesExt for R
fn read_u8(&mut self) -> Result<u8, Error>
fn read_i8(&mut self) -> Result<i8, Error>
fn read_u16<T>(&mut self) -> Result<u16, Error> where T: ByteOrder
fn read_i16<T>(&mut self) -> Result<i16, Error> where T: ByteOrder
fn read_u24<T>(&mut self) -> Result<u32, Error> where T: ByteOrder
fn read_i24<T>(&mut self) -> Result<i32, Error> where T: ByteOrder
fn read_u32<T>(&mut self) -> Result<u32, Error> where T: ByteOrder
fn read_i32<T>(&mut self) -> Result<i32, Error> where T: ByteOrder
fn read_u48<T>(&mut self) -> Result<u64, Error> where T: ByteOrder
fn read_i48<T>(&mut self) -> Result<i64, Error> where T: ByteOrder
fn read_u64<T>(&mut self) -> Result<u64, Error> where T: ByteOrder
fn read_i64<T>(&mut self) -> Result<i64, Error> where T: ByteOrder
fn read_u128<T>(&mut self) -> Result<u128, Error> where T: ByteOrder
fn read_i128<T>(&mut self) -> Result<i128, Error> where T: ByteOrder
fn read_uint<T>(&mut self, nbytes: usize) -> Result<u64, Error> where T: ByteOrder
fn read_int<T>(&mut self, nbytes: usize) -> Result<i64, Error> where T: ByteOrder
fn read_uint128<T>(&mut self, nbytes: usize) -> Result<u128, Error> where T: ByteOrder
fn read_int128<T>(&mut self, nbytes: usize) -> Result<i128, Error> where T: ByteOrder
fn read_f32<T>(&mut self) -> Result<f32, Error> where T: ByteOrder
fn read_f64<T>(&mut self) -> Result<f64, Error> where T: ByteOrder
fn read_u16_into<T>(&mut self, dst: &mut [u16]) -> Result<(), Error> where T: ByteOrder
fn read_u32_into<T>(&mut self, dst: &mut [u32]) -> Result<(), Error> where T: ByteOrder
fn read_u64_into<T>(&mut self, dst: &mut [u64]) -> Result<(), Error> where T: ByteOrder
fn read_u128_into<T>(&mut self, dst: &mut [u128]) -> Result<(), Error> where T: ByteOrder
fn read_i8_into(&mut self, dst: &mut [i8]) -> Result<(), Error>
fn read_i16_into<T>(&mut self, dst: &mut [i16]) -> Result<(), Error> where T: ByteOrder
fn read_i32_into<T>(&mut self, dst: &mut [i32]) -> Result<(), Error> where T: ByteOrder
fn read_i64_into<T>(&mut self, dst: &mut [i64]) -> Result<(), Error> where T: ByteOrder
fn read_i128_into<T>(&mut self, dst: &mut [i128]) -> Result<(), Error> where T: ByteOrder
fn read_f32_into<T>(&mut self, dst: &mut [f32]) -> Result<(), Error> where T: ByteOrder
fn read_f32_into_unchecked<T>(&mut self, dst: &mut [f32]) -> Result<(), Error> where T: ByteOrder
Deprecated: use read_f32_into instead.
§impl<T> Tap for T
fn tap_borrow<B>(self, func: impl FnOnce(&B)) -> Self
Immutable access to the Borrow<B> of a value.
fn tap_borrow_mut<B>(self, func: impl FnOnce(&mut B)) -> Self
Mutable access to the BorrowMut<B> of a value.
fn tap_ref<R>(self, func: impl FnOnce(&R)) -> Self
Immutable access to the AsRef<R> view of a value.
fn tap_ref_mut<R>(self, func: impl FnOnce(&mut R)) -> Self
Mutable access to the AsMut<R> view of a value.
fn tap_deref<T>(self, func: impl FnOnce(&T)) -> Self
Immutable access to the Deref::Target of a value.
fn tap_deref_mut<T>(self, func: impl FnOnce(&mut T)) -> Self
Mutable access to the Deref::Target of a value.
fn tap_dbg(self, func: impl FnOnce(&Self)) -> Self
Calls .tap() only in debug builds, and is erased in release builds.
fn tap_mut_dbg(self, func: impl FnOnce(&mut Self)) -> Self
Calls .tap_mut() only in debug builds, and is erased in release builds.
fn tap_borrow_dbg<B>(self, func: impl FnOnce(&B)) -> Self
Calls .tap_borrow() only in debug builds, and is erased in release builds.
fn tap_borrow_mut_dbg<B>(self, func: impl FnOnce(&mut B)) -> Self
Calls .tap_borrow_mut() only in debug builds, and is erased in release builds.
fn tap_ref_dbg<R>(self, func: impl FnOnce(&R)) -> Self
Calls .tap_ref() only in debug builds, and is erased in release builds.
fn tap_ref_mut_dbg<R>(self, func: impl FnOnce(&mut R)) -> Self
Calls .tap_ref_mut() only in debug builds, and is erased in release builds.
fn tap_deref_dbg<T>(self, func: impl FnOnce(&T)) -> Self
Calls .tap_deref() only in debug builds, and is erased in release builds.
§impl<T> TryConv for T
§impl<T> WithSubscriber for T
fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
fn with_current_subscriber(self) -> WithDispatch<Self>
§impl<W> WriteBytesExt for W
fn write_u8(&mut self, n: u8) -> Result<(), Error>
fn write_i8(&mut self, n: i8) -> Result<(), Error>
fn write_u16<T>(&mut self, n: u16) -> Result<(), Error> where T: ByteOrder
fn write_i16<T>(&mut self, n: i16) -> Result<(), Error> where T: ByteOrder
fn write_u24<T>(&mut self, n: u32) -> Result<(), Error> where T: ByteOrder
fn write_i24<T>(&mut self, n: i32) -> Result<(), Error> where T: ByteOrder
fn write_u32<T>(&mut self, n: u32) -> Result<(), Error> where T: ByteOrder
fn write_i32<T>(&mut self, n: i32) -> Result<(), Error> where T: ByteOrder
fn write_u48<T>(&mut self, n: u64) -> Result<(), Error> where T: ByteOrder
fn write_i48<T>(&mut self, n: i64) -> Result<(), Error> where T: ByteOrder
fn write_u64<T>(&mut self, n: u64) -> Result<(), Error> where T: ByteOrder
fn write_i64<T>(&mut self, n: i64) -> Result<(), Error> where T: ByteOrder
fn write_u128<T>(&mut self, n: u128) -> Result<(), Error> where T: ByteOrder
fn write_i128<T>(&mut self, n: i128) -> Result<(), Error> where T: ByteOrder
fn write_uint<T>(&mut self, n: u64, nbytes: usize) -> Result<(), Error> where T: ByteOrder
fn write_int<T>(&mut self, n: i64, nbytes: usize) -> Result<(), Error> where T: ByteOrder
fn write_uint128<T>(&mut self, n: u128, nbytes: usize) -> Result<(), Error> where T: ByteOrder
fn write_int128<T>(&mut self, n: i128, nbytes: usize) -> Result<(), Error> where T: ByteOrder
impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>
impl<T> ErasedDestructor for T where T: 'static
impl<T> MaybeDebug for T where T: Debug
impl<T> MaybeSendSync for T
impl<T> MaybeSerde for T where T: Serialize + for<'de> Deserialize<'de>
impl<T> NippyJarHeader for T
impl<T> RpcObject for T where T: RpcParam + RpcReturn
impl<T> RpcParam for T
impl<T> RpcReturn for T
§Layout
Note: Most layout information is completely unstable and may even differ between compilations. The only exception is types with certain repr(...) attributes. Please see the Rust Reference's "Type Layout" chapter for details on type layout guarantees.
Size: 24 bytes (three words, on the 64-bit target this documentation was built for)