reth_node_core::primitives::revm_primitives::bitvec::prelude

Struct BitSlice

pub struct BitSlice<T = usize, O = Lsb0>
where T: BitStore, O: BitOrder,
{ /* private fields */ }
Expand description

§Bit-Addressable Memory

A slice of individual bits, anywhere in memory.

BitSlice<T, O> is an unsized region type; you interact with it through &BitSlice<T, O> and &mut BitSlice<T, O> references, which work exactly like all other Rust references. As with the standard slice’s relationship to arrays and vectors, this is bitvec’s primary working type, but you will probably hold it through one of the provided BitArray, BitBox, or BitVec containers.

BitSlice is conceptually a [bool] slice, and provides a nearly complete mirror of [bool]’s API.

Every bit-vector crate can give you an opaque type that hides shift/mask calculations from you. BitSlice does far more than this: it offers you the full Rust guarantees about reference behavior, including lifetime tracking, mutability and aliasing awareness, and explicit memory control, as well as the full set of tools and APIs available to the standard [bool] slice type. BitSlice can arbitrarily split and subslice, just like [bool]. You can write a linear consuming function and keep the patterns you already know.

For example, to trim all the bits off either edge that match a condition, you could write

use bitvec::prelude::*;

fn trim<T: BitStore, O: BitOrder>(
  bits: &BitSlice<T, O>,
  to_trim: bool,
) -> &BitSlice<T, O> {
  let stop = |b: bool| b != to_trim;
  let front = bits.iter()
    .by_vals()
    .position(stop)
    .unwrap_or(0);
  let back = bits.iter()
    .by_vals()
    .rposition(stop)
    .map_or(0, |p| p + 1);
  &bits[front .. back]
}

to get behavior something like trim(&BitSlice[0, 0, 1, 1, 0, 1, 0], false) == &BitSlice[1, 1, 0, 1].

§Documentation

All APIs that mirror something in the standard library will have an Original section linking to the corresponding item. All APIs that have a different signature or behavior than the original will have an API Differences section explaining what has changed, and how to adapt your existing code to the change.

These sections look like this:

§Original

[bool]

§API Differences

The slice type [bool] has no type parameters. BitSlice<T, O> has two: one for the integer type used as backing storage, and one for the order of bits within that integer type.

&BitSlice<T, O> is capable of producing &bool references to read bits out of its memory, but is not capable of producing &mut bool references to write bits into its memory. Any [bool] API that would produce a &mut bool will instead produce a BitRef<Mut, T, O> proxy reference.
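For a concrete sketch of this difference (using the .first() and .last_mut() accessors documented further down this page), reads can surface ordinary &bool values, while writes go through the proxy:

use bitvec::prelude::*;

let bits = bits![mut 0, 1, 0];

// Reads can be surfaced as ordinary `&bool` values ...
assert_eq!(bits.first().as_deref(), Some(&false));

// ... but writes go through a `BitRef<Mut, _, _>` proxy, which must be bound
// as `mut` and which commits its value when it goes out of scope.
if let Some(mut last) = bits.last_mut() {
  *last = true;
}
assert_eq!(bits, bits![0, 1, 1]);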

§Behavior

BitSlice is a wrapper over [T]. It describes a region of memory, and must be handled indirectly. This is most commonly done through the reference types &BitSlice and &mut BitSlice, which borrow memory owned by some other value in the program. These buffers can be directly owned by the sibling types BitBox, which behaves like Box<[T]>, and BitVec, which behaves like Vec<T>. It cannot be used as the type parameter to a pointer type such as Box, Rc, Arc, or any other indirection.

The BitSlice region provides access to each individual bit in the region, as if each bit had a memory address that you could use to dereference it. It packs each logical bit into exactly one bit of storage memory, just like std::bitset and std::vector<bool> in C++.
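As a rough illustration of that packing, eight bool values occupy eight bytes, while the same eight flags fit into a single byte when that byte is viewed as a bit-slice:

use bitvec::prelude::*;
use core::mem;

// Eight bools occupy eight bytes ...
assert_eq!(mem::size_of::<[bool; 8]>(), 8);

// ... while the same eight flags fit into a single byte when viewed as bits.
let byte = 0u8;
assert_eq!(mem::size_of_val(&byte), 1);
assert_eq!(byte.view_bits::<Lsb0>().len(), 8);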

§Type Parameters

BitSlice has two type parameters which propagate through nearly every public API in the crate. These are very important to its operation, and your choice of type arguments informs nearly every part of this library’s behavior.

§T: BitStore

BitStore is the simpler of the two parameters. It refers to the integer type used to hold bits. It must be one of the Rust unsigned integer fundamentals: u8, u16, u32, usize, and on 64-bit systems only, u64. In addition, it can also be an alias-safe wrapper over them (see the access module) in order to permit bit-slices to share underlying memory without interfering with each other.

BitSlice references can only be constructed over the integers, not over their aliasing wrappers. BitSlice will only use aliasing types in its T slots when you invoke APIs that produce them, such as .split_at_mut().

The default type argument is usize.

The argument you choose is used as the basis of a [T] slice, over which the BitSlice view is produced. BitSlice<T, _> is subject to all of the rules about alignment that [T] is. If you are working with in-memory representation formats, chances are that you already have a T type with which you’ve been working, and should use it here.

If you are only using this crate to discard the seven wasted bits per bool in a collection of bools, and are not too concerned about the in-memory representation, then you should use the default type argument of usize. This is because most processors work best when moving an entire usize between memory and the processor itself, and using a smaller type may cause it to slow down. Additionally, processor instructions are typically optimized for the whole register, and the processor might need to do additional clearing work for narrower types.
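An illustrative sketch of that guidance: a byte-oriented format keeps its native u8 element type, while a plain collection of flags can stay with the default usize storage used by the construction macros:

use bitvec::prelude::*;

// When modelling an existing byte-oriented format, keep the format's element
// type so that the in-memory layout matches it.
let raw = [0x12u8, 0x34, 0x56];
let as_bytes = raw.view_bits::<Msb0>();
assert_eq!(as_bytes.len(), 24);

// For a plain "compact collection of bools", the default usize storage of the
// construction macros is generally the better choice.
let flags = bitvec![0; 100];
assert_eq!(flags.len(), 100);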

§O: BitOrder

BitOrder is the more complex parameter. It has a default argument which, like usize, is a good baseline choice when you do not explicitly need to control the representation of bits in memory.

This parameter determines how bitvec indexes the bits within a single T memory element. Computers all agree that in a slice of T elements, the element with the lower index has a lower memory address than the element with the higher index. But the individual bits within an element do not have addresses, and so there is no uniform standard of which bit is the zeroth, which is the first, which is the penultimate, and which is the last.

To make matters even more confusing, there are two predominant ideas of in-element ordering that often correlate with the in-element byte ordering of integer types, but are in fact wholly unrelated! bitvec provides these two main orderings as types for you, and if you need a different one, it also provides the tools you need to write your own.

§Least Significant Bit Comes First

This ordering, named the Lsb0 type, indexes bits within an element by placing the 0 index at the least significant bit (numeric value 1) and the final index at the most significant bit (numeric value T::MIN for signed integers on most machines).

For example, this is the ordering used by most C compilers to lay out bit-field struct members on little-endian byte-ordered machines.

§Most Significant Bit Comes First

This ordering, named the Msb0 type, indexes bits within an element by placing the 0 index at the most significant bit (numeric value T::MIN for most signed integers) and the final index at the least significant bit (numeric value 1).

For example, this is the ordering used by the TCP wire format, and by most C compilers to lay out bit-field struct members on big-endian byte-ordered machines.
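For a concrete comparison, the same byte can be viewed under either ordering; only the mapping from index to bit position changes:

use bitvec::prelude::*;

let byte = 1u8; // only the least significant bit is set

// Lsb0 places index 0 at the least significant bit ...
assert!( byte.view_bits::<Lsb0>()[0]);
assert!(!byte.view_bits::<Lsb0>()[7]);

// ... while Msb0 places index 0 at the most significant bit.
assert!(!byte.view_bits::<Msb0>()[0]);
assert!( byte.view_bits::<Msb0>()[7]);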

§Default Ordering

The default ordering is Lsb0, as it typically produces shorter object code than Msb0 does. If you are implementing a collection, then Lsb0 will likely give you better performance; if you are implementing a buffer protocol, then your choice of ordering is dictated by the protocol definition.

§Safety

BitSlice is designed to never introduce new memory unsafety that you did not provide yourself, either before or during the use of this crate. However, safety bugs have been identified before, and you are welcome to submit any discovered flaws as a defect report.

The &BitSlice reference type uses a private encoding scheme to hold all of the information needed in its stack value. This encoding is not part of the public API of the library, and is not binary-compatible with &[T]. Furthermore, in order to satisfy Rust’s requirements about alias conditions, BitSlice performs type transformations on the T parameter to ensure that it never creates the potential for undefined behavior or data races.

You must never attempt to type-cast a reference to BitSlice in any way. You must not use mem::transmute with BitSlice anywhere in its type arguments. You must not use as-casting to convert between *BitSlice and any other type. You must not attempt to modify the binary representation of a &BitSlice reference value. These actions will all lead to runtime memory unsafety, are (hopefully) likely to induce a program crash, and may possibly cause undefined behavior at compile-time.

Everything in the BitSlice public API, even the unsafe parts, is guaranteed to have no more unsafety than the equivalent items in the standard library. All unsafe APIs will have documentation explicitly detailing what the API requires you to uphold in order for it to function safely and correctly. All safe APIs will do so themselves.

§Performance

Like the standard library’s [T] slice, BitSlice is designed to be very easy to use safely, while supporting unsafe usage when necessary. Rust has a powerful optimizing engine, and BitSlice will frequently be compiled to have zero runtime cost. Where it is slower, it will not be significantly slower than a manual replacement.

As the machine instructions operate on registers rather than bits, your choice of T: BitStore type parameter can influence your bit-slice’s performance. Using larger register types means that bit-slices can gallop over completely-used interior elements faster, while narrower register types permit more graceful handling of subslicing and aliased splits.

§Construction

BitSlice views of memory can be constructed over borrowed data in a number of ways. As this is a reference-only type, it can only ever be built by borrowing an existing memory buffer and taking temporary control of your program’s view of the region.

§Macro Constructor

BitSlice buffers can be constructed at compile-time through the bits! macro. This macro accepts a superset of the vec! arguments, and creates an appropriate buffer in the local scope. The macro expands to a borrowed BitArray temporary, which will live for the duration of the bound name.

use bitvec::prelude::*;

let immut = bits![u8, Lsb0; 0, 1, 0, 0, 1, 0, 0, 1];
let mutable: &mut BitSlice<_, _> = bits![mut u8, Msb0; 0; 8];

assert_ne!(immut, mutable);
mutable.clone_from_bitslice(immut);
assert_eq!(immut, mutable);

§Borrowing Constructors

You may borrow existing elements or slices with the following functions:

  • from_element() and from_element_mut(),
  • from_slice() and from_slice_mut(),
  • try_from_slice() and try_from_slice_mut().

These take references to existing memory and construct BitSlice references from them. These are the most basic ways to borrow memory and view it as bits; however, you should prefer the BitView trait methods instead.

use bitvec::prelude::*;

let data = [0u16; 3];
let local_borrow = BitSlice::<_, Lsb0>::from_slice(&data);

let mut data = [0u8; 5];
let local_mut = BitSlice::<_, Lsb0>::from_slice_mut(&mut data);

§Trait Method Constructors

The BitView trait implements .view_bits::<O>() and .view_bits_mut::<O>() methods on elements, arrays, and slices. This trait, imported in the crate prelude, is probably the easiest way for you to borrow memory as bits.

use bitvec::prelude::*;

let data = [0u32; 5];
let trait_view = data.view_bits::<Lsb0>();

let mut data = 0usize;
let trait_mut = data.view_bits_mut::<Msb0>();

§Owned Bit Slices

If you wish to take ownership of a memory region and enforce that it is always viewed as a BitSlice by default, you can use one of the BitArray, BitBox, or BitVec types, rather than pairing ordinary buffer types with the borrowing constructors.

use bitvec::prelude::*;

let slice = bits![0; 27];
let array = bitarr![u8, LocalBits; 0; 10];
let boxed = bitbox![0; 10];
let vec = bitvec![0; 20];

// arrays always round up
assert_eq!(array.as_bitslice(), slice[.. 16]);
assert_eq!(boxed.as_bitslice(), slice[.. 10]);
assert_eq!(vec.as_bitslice(), slice[.. 20]);

§Usage

BitSlice implements the full standard-library [bool] API. The documentation for these API surfaces is intentionally sparse, and forwards to the standard library rather than trying to replicate it.

BitSlice also has a great deal of novel API surface, broken into separate impl blocks below. A short summary:

  • Since there is no BitSlice literal, the constructor functions ::empty(), ::from_element(), ::from_slice(), and ::try_from_slice(), and their _mut counterparts, create bit-slices as needed.
  • Since bits[idx] = value does not exist, you can use .set() or .replace() (as well as their _unchecked and _aliased counterparts) to write into a bit-slice.
  • Raw memory can be inspected with .domain() and .domain_mut(), and a bit-slice can be split on aliasing lines with .bit_domain() and .bit_domain_mut().
  • The population can be queried for which indices have 0 or 1 bits by iterating across all such indices, counting them, or counting leading or trailing blocks. Additionally, .any(), .all(), .not_any(), .not_all(), and .some() test whether bit-slices satisfy aggregate Boolean qualities.
  • Buffer contents can be relocated internally by shifting or rotating to the left or right (a brief sketch of several of these novel APIs follows this list).
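A brief combined sketch of several of these novel APIs (each is documented in full in the impl blocks below):

use bitvec::prelude::*;

let bits = bits![mut 0; 8];

// Writing single bits, since `bits[idx] = value` does not exist.
bits.set(1, true);
bits.set(6, true);

// Aggregate queries over the whole bit-slice.
assert!( bits.any());
assert!(!bits.all());
assert_eq!(bits.count_ones(), 2);

// Relocating buffer contents in place.
bits.rotate_left(1);
assert_eq!(bits, bits![1, 0, 0, 0, 0, 1, 0, 0]);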

§Trait Implementations

BitSlice adds trait implementations that [bool] and [T] do not necessarily have, including numeric formatting and Boolean arithmetic operators. Additionally, the BitField trait allows bit-slices to act as a buffer for wide-value storage.
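As a rough sketch of the BitField behavior mentioned above (the trait is exported in the prelude; its exact bounds and layout rules are covered in its own documentation), a bit-slice region can store an integer value and load it back:

use bitvec::prelude::*;

let bits = bits![mut u8, Msb0; 0; 8];

// Store a wide value into a sub-region, then read it back out.
bits[2 ..].store::<u8>(0b10_1101);
assert_eq!(bits[2 ..].load::<u8>(), 0b10_1101);

// Numeric formatting traits are implemented as well, so the bit pattern can
// be rendered directly.
println!("{:b}", bits);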

Implementations§

§

impl<T, O> BitSlice<T, O>
where T: BitStore, O: BitOrder,

Port of the [T] inherent API.

pub fn len(&self) -> usize

Gets the number of bits in the bit-slice.

§Original

slice::len

§Examples
use bitvec::prelude::*;

assert_eq!(bits![].len(), 0);
assert_eq!(bits![0; 10].len(), 10);

pub fn is_empty(&self) -> bool

Tests if the bit-slice is empty (length zero).

§Original

slice::is_empty

§Examples
use bitvec::prelude::*;

assert!(bits![].is_empty());
assert!(!bits![0; 10].is_empty());

pub fn first(&self) -> Option<BitRef<'_, Const, T, O>>

Gets a reference to the first bit of the bit-slice, or None if it is empty.

§Original

slice::first

§API Differences

bitvec uses a custom structure for both read-only and mutable references to bool.

§Examples
use bitvec::prelude::*;

let bits = bits![1, 0, 0];
assert_eq!(bits.first().as_deref(), Some(&true));

assert!(bits![].first().is_none());

pub fn first_mut(&mut self) -> Option<BitRef<'_, Mut, T, O>>

Gets a mutable reference to the first bit of the bit-slice, or None if it is empty.

§Original

slice::first_mut

§API Differences

bitvec uses a custom structure for both read-only and mutable references to bool. This must be bound as mut in order to write through it.

§Examples
use bitvec::prelude::*;

let bits = bits![mut 0; 3];
if let Some(mut first) = bits.first_mut() {
  *first = true;
}
assert_eq!(bits, bits![1, 0, 0]);

assert!(bits![mut].first_mut().is_none());

pub fn split_first(&self) -> Option<(BitRef<'_, Const, T, O>, &BitSlice<T, O>)>

Splits the bit-slice into a reference to its first bit, and the rest of the bit-slice. Returns None when empty.

§Original

slice::split_first

§API Differences

bitvec uses a custom structure for both read-only and mutable references to bool.

§Examples
use bitvec::prelude::*;

let bits = bits![1, 0, 0];
let (first, rest) = bits.split_first().unwrap();
assert_eq!(first, &true);
assert_eq!(rest, bits![0; 2]);

pub fn split_first_mut( &mut self, ) -> Option<(BitRef<'_, Mut, <T as BitStore>::Alias, O>, &mut BitSlice<<T as BitStore>::Alias, O>)>

Splits the bit-slice into mutable references of its first bit, and the rest of the bit-slice. Returns None when empty.

§Original

slice::split_first_mut

§API Differences

bitvec uses a custom structure for both read-only and mutable references to bool. This must be bound as mut in order to write through it.

§Examples
use bitvec::prelude::*;

let bits = bits![mut 0; 3];
if let Some((mut first, rest)) = bits.split_first_mut() {
  *first = true;
  assert_eq!(rest, bits![0; 2]);
}
assert_eq!(bits, bits![1, 0, 0]);

pub fn split_last(&self) -> Option<(BitRef<'_, Const, T, O>, &BitSlice<T, O>)>

Splits the bit-slice into a reference to its last bit, and the rest of the bit-slice. Returns None when empty.

§Original

slice::split_last

§API Differences

bitvec uses a custom structure for both read-only and mutable references to bool.

§Examples
use bitvec::prelude::*;

let bits = bits![0, 0, 1];
let (last, rest) = bits.split_last().unwrap();
assert_eq!(last, &true);
assert_eq!(rest, bits![0; 2]);

pub fn split_last_mut( &mut self, ) -> Option<(BitRef<'_, Mut, <T as BitStore>::Alias, O>, &mut BitSlice<<T as BitStore>::Alias, O>)>

Splits the bit-slice into mutable references to its last bit, and the rest of the bit-slice. Returns None when empty.

§Original

slice::split_last_mut

§API Differences

bitvec uses a custom structure for both read-only and mutable references to bool. This must be bound as mut in order to write through it.

§Examples
use bitvec::prelude::*;

let bits = bits![mut 0; 3];
if let Some((mut last, rest)) = bits.split_last_mut() {
  *last = true;
  assert_eq!(rest, bits![0; 2]);
}
assert_eq!(bits, bits![0, 0, 1]);

pub fn last(&self) -> Option<BitRef<'_, Const, T, O>>

Gets a reference to the last bit of the bit-slice, or None if it is empty.

§Original

slice::last

§API Differences

bitvec uses a custom structure for both read-only and mutable references to bool.

§Examples
use bitvec::prelude::*;

let bits = bits![0, 0, 1];
assert_eq!(bits.last().as_deref(), Some(&true));

assert!(bits![].last().is_none());

pub fn last_mut(&mut self) -> Option<BitRef<'_, Mut, T, O>>

Gets a mutable reference to the last bit of the bit-slice, or None if it is empty.

§Original

slice::last_mut

§API Differences

bitvec uses a custom structure for both read-only and mutable references to bool. This must be bound as mut in order to write through it.

§Examples
use bitvec::prelude::*;

let bits = bits![mut 0; 3];
if let Some(mut last) = bits.last_mut() {
  *last = true;
}
assert_eq!(bits, bits![0, 0, 1]);

assert!(bits![mut].last_mut().is_none());

pub fn get<'a, I>( &'a self, index: I, ) -> Option<<I as BitSliceIndex<'a, T, O>>::Immut>
where I: BitSliceIndex<'a, T, O>,

Gets a reference to a single bit or a subsection of the bit-slice, depending on the type of index.

  • If given a usize, this produces a reference structure to the bool at the position.
  • If given any form of range, this produces a smaller bit-slice.

This returns None if the index departs the bounds of self.

§Original

slice::get

§API Differences

BitSliceIndex uses discrete types for immutable and mutable references, rather than a single referent type.

§Examples
use bitvec::prelude::*;

let bits = bits![0, 1, 0];
assert_eq!(bits.get(1).as_deref(), Some(&true));
assert_eq!(bits.get(0 .. 2), Some(bits![0, 1]));
assert!(bits.get(3).is_none());
assert!(bits.get(0 .. 4).is_none());

pub fn get_mut<'a, I>( &'a mut self, index: I, ) -> Option<<I as BitSliceIndex<'a, T, O>>::Mut>
where I: BitSliceIndex<'a, T, O>,

Gets a mutable reference to a single bit or a subsection of the bit-slice, depending on the type of index.

  • If given a usize, this produces a reference structure to the bool at the position.
  • If given any form of range, this produces a smaller bit-slice.

This returns None if the index departs the bounds of self.

§Original

slice::get_mut

§API Differences

BitSliceIndex uses discrete types for immutable and mutable references, rather than a single referent type.

§Examples
use bitvec::prelude::*;

let bits = bits![mut 0; 3];

*bits.get_mut(0).unwrap() = true;
bits.get_mut(1 ..).unwrap().fill(true);
assert_eq!(bits, bits![1; 3]);

pub unsafe fn get_unchecked<'a, I>( &'a self, index: I, ) -> <I as BitSliceIndex<'a, T, O>>::Immut
where I: BitSliceIndex<'a, T, O>,

Gets a reference to a single bit or to a subsection of the bit-slice, without bounds checking.

This has the same arguments and behavior as .get(), except that it does not check that index is in bounds.

§Original

slice::get_unchecked

§Safety

You must ensure that index is within bounds (within the range 0 .. self.len()), or this method will introduce memory safety and/or undefined behavior.

It is library-level undefined behavior to index beyond the length of any bit-slice, even if you know that the offset remains within an allocation as measured by Rust or LLVM.

§Examples
use bitvec::prelude::*;

let data = 0b0001_0010u8;
let bits = &data.view_bits::<Lsb0>()[.. 3];

unsafe {
  assert!(bits.get_unchecked(1));
  assert!(bits.get_unchecked(4));
}

pub unsafe fn get_unchecked_mut<'a, I>( &'a mut self, index: I, ) -> <I as BitSliceIndex<'a, T, O>>::Mut
where I: BitSliceIndex<'a, T, O>,

Gets a mutable reference to a single bit or a subsection of the bit-slice, depending on the type of index.

This has the same arguments and behavior as .get_mut(), except that it does not check that index is in bounds.

§Original

slice::get_unchecked_mut

§Safety

You must ensure that index is within bounds (within the range 0 .. self.len()), or this method will introduce memory safety and/or undefined behavior.

It is library-level undefined behavior to index beyond the length of any bit-slice, even if you know that the offset remains within an allocation as measured by Rust or LLVM.

§Examples
use bitvec::prelude::*;

let mut data = 0u8;
let bits = &mut data.view_bits_mut::<Lsb0>()[.. 3];

unsafe {
  bits.get_unchecked_mut(1).commit(true);
  bits.get_unchecked_mut(4 .. 6).fill(true);
}
assert_eq!(data, 0b0011_0010);

pub fn as_ptr(&self) -> BitPtr<Const, T, O>

👎Deprecated: use .as_bitptr() instead

pub fn as_mut_ptr(&mut self) -> BitPtr<Mut, T, O>

👎Deprecated: use .as_mut_bitptr() instead

pub fn as_ptr_range(&self) -> Range<BitPtr<Const, T, O>>


Produces a range of bit-pointers to each bit in the bit-slice.

This is a standard-library range, which has no real functionality for pointer types. You should prefer .as_bitptr_range() instead, as it produces a custom structure that provides expected ranging functionality.

§Original

slice::as_ptr_range

pub fn as_mut_ptr_range(&mut self) -> Range<BitPtr<Mut, T, O>>


Produces a range of mutable bit-pointers to each bit in the bit-slice.

This is a standard-library range, which has no real functionality for pointer types. You should prefer .as_mut_bitptr_range() instead, as it produces a custom structure that provides expected ranging functionality.

§Original

slice::as_mut_ptr_range

pub fn swap(&mut self, a: usize, b: usize)

Exchanges the bit values at two indices.

§Original

slice::swap

§Panics

This panics if either a or b are out of bounds.

§Examples
use bitvec::prelude::*;

let bits = bits![mut 0, 1];
bits.swap(0, 1);
assert_eq!(bits, bits![1, 0]);

pub fn reverse(&mut self)

Reverses the order of bits in a bit-slice.

§Original

slice::reverse

§Examples
use bitvec::prelude::*;

let bits = bits![mut 0, 0, 1, 0, 1, 1, 0, 0, 1];
bits.reverse();
assert_eq!(bits, bits![1, 0, 0, 1, 1, 0, 1, 0, 0]);

pub fn iter(&self) -> Iter<'_, T, O>

Produces an iterator over each bit in the bit-slice.

§Original

slice::iter

§API Differences

This iterator yields proxy-reference structures, not &bool. It can be adapted to yield &bool with the .by_refs() method, or bool with .by_vals().

This iterator, and its adapters, are fast. Do not try to be more clever than them by abusing .as_bitptr_range().

§Examples
use bitvec::prelude::*;

let bits = bits![0, 1, 0, 1];
let mut iter = bits.iter();

assert!(!iter.next().unwrap());
assert!( iter.next().unwrap());
assert!( iter.next_back().unwrap());
assert!(!iter.next_back().unwrap());
assert!( iter.next().is_none());

pub fn iter_mut(&mut self) -> IterMut<'_, T, O>

Produces a mutable iterator over each bit in the bit-slice.

§Original

slice::iter_mut

§API Differences

This iterator yields proxy-reference structures, not &mut bool. In addition, it marks each proxy as alias-tainted.

If you are using this in an ordinary loop and not keeping multiple yielded proxy-references alive at the same scope, you may use the .remove_alias() adapter to undo the alias marking.

This iterator is fast. Do not try to be more clever than it by abusing .as_mut_bitptr_range().

§Examples
use bitvec::prelude::*;

let bits = bits![mut 0; 4];
let mut iter = bits.iter_mut();

iter.nth(1).unwrap().commit(true); // index 1
iter.next_back().unwrap().commit(true); // index 3

assert!(iter.next().is_some()); // index 2
assert!(iter.next().is_none()); // complete
assert_eq!(bits, bits![0, 1, 0, 1]);

pub fn windows(&self, size: usize) -> Windows<'_, T, O>

Iterates over consecutive windowing subslices in a bit-slice.

Windows are overlapping views of the bit-slice. Each window advances one bit from the previous, so in a bit-slice [A, B, C, D, E], calling .windows(3) will yield [A, B, C], [B, C, D], and [C, D, E].

§Original

slice::windows

§Panics

This panics if size is 0.

§Examples
use bitvec::prelude::*;

let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.windows(3);

assert_eq!(iter.next(), Some(bits![0, 1, 0]));
assert_eq!(iter.next(), Some(bits![1, 0, 0]));
assert_eq!(iter.next(), Some(bits![0, 0, 1]));
assert!(iter.next().is_none());

pub fn chunks(&self, chunk_size: usize) -> Chunks<'_, T, O>

Iterates over non-overlapping subslices of a bit-slice.

Unlike .windows(), the subslices this yields do not overlap with each other. If self.len() is not an even multiple of chunk_size, then the last chunk yielded will be shorter.

§Original

slice::chunks

§Sibling Methods
  • .chunks_mut() has the same division logic, but each yielded bit-slice is mutable.
  • .chunks_exact() does not yield the final chunk if it is shorter than chunk_size.
  • .rchunks() iterates from the back of the bit-slice to the front, with the final, possibly-shorter, segment at the front edge.
§Panics

This panics if chunk_size is 0.

§Examples
use bitvec::prelude::*;

let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.chunks(2);

assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![0, 0]));
assert_eq!(iter.next(), Some(bits![1]));
assert!(iter.next().is_none());

pub fn chunks_mut(&mut self, chunk_size: usize) -> ChunksMut<'_, T, O>

Iterates over non-overlapping mutable subslices of a bit-slice.

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

§Original

slice::chunks_mut

§Sibling Methods
  • .chunks() has the same division logic, but each yielded bit-slice is immutable.
  • .chunks_exact_mut() does not yield the final chunk if it is shorter than chunk_size.
  • .rchunks_mut() iterates from the back of the bit-slice to the front, with the final, possibly-shorter, segment at the front edge.
§Panics

This panics if chunk_size is 0.

§Examples
use bitvec::prelude::*;

let bits = bits![mut u8, Msb0; 0; 5];

for (idx, chunk) in unsafe {
  bits.chunks_mut(2).remove_alias()
}.enumerate() {
  chunk.store(idx + 1);
}
assert_eq!(bits, bits![0, 1, 1, 0, 1]);
//                     ^^^^  ^^^^  ^

pub fn chunks_exact(&self, chunk_size: usize) -> ChunksExact<'_, T, O>

Iterates over non-overlapping subslices of a bit-slice.

If self.len() is not an even multiple of chunk_size, then the last few bits are not yielded by the iterator at all. They can be accessed with the .remainder() method if the iterator is bound to a name.

§Original

slice::chunks_exact

§Sibling Methods
  • .chunks() yields any leftover bits at the end as a shorter chunk during iteration.
  • .chunks_exact_mut() has the same division logic, but each yielded bit-slice is mutable.
  • .rchunks_exact() iterates from the back of the bit-slice to the front, with the unyielded remainder segment at the front edge.
§Panics

This panics if chunk_size is 0.

§Examples
use bitvec::prelude::*;

let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.chunks_exact(2);

assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![0, 0]));
assert!(iter.next().is_none());
assert_eq!(iter.remainder(), bits![1]);

pub fn chunks_exact_mut( &mut self, chunk_size: usize, ) -> ChunksExactMut<'_, T, O>

Iterates over non-overlapping mutable subslices of a bit-slice.

If self.len() is not an even multiple of chunk_size, then the last few bits are not yielded by the iterator at all. They can be accessed with the .into_remainder() method if the iterator is bound to a name.

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

§Original

slice::chunks_exact_mut

§Sibling Methods
  • .chunks_mut() yields any leftover bits at the end as a shorter chunk during iteration.
  • .chunks_exact() has the same division logic, but each yielded bit-slice is immutable.
  • .rchunks_exact_mut() iterates from the back of the bit-slice forwards, with the unyielded remainder segment at the front edge.
§Panics

This panics if chunk_size is 0.

§Examples
use bitvec::prelude::*;

let bits = bits![mut u8, Msb0; 0; 5];
let mut iter = bits.chunks_exact_mut(2);

for (idx, chunk) in iter.by_ref().enumerate() {
  chunk.store(idx + 1);
}
iter.into_remainder().store(1u8);

assert_eq!(bits, bits![0, 1, 1, 0, 1]);
//                       remainder ^

pub fn rchunks(&self, chunk_size: usize) -> RChunks<'_, T, O>

Iterates over non-overlapping subslices of a bit-slice, from the back edge.

Unlike .chunks(), this aligns its chunks to the back edge of self. If self.len() is not an even multiple of chunk_size, then the leftover partial chunk is self[0 .. len % chunk_size].

§Original

slice::rchunks

§Sibling Methods
  • .rchunks_mut() has the same division logic, but each yielded bit-slice is mutable.
  • .rchunks_exact() does not yield the final chunk if it is shorter than chunk_size.
  • .chunks() iterates from the front of the bit-slice to the back, with the final, possibly-shorter, segment at the back edge.
§Panics

This panics if chunk_size is 0.

§Examples
use bitvec::prelude::*;

let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.rchunks(2);

assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![1, 0]));
assert_eq!(iter.next(), Some(bits![0]));
assert!(iter.next().is_none());

pub fn rchunks_mut(&mut self, chunk_size: usize) -> RChunksMut<'_, T, O>

Iterates over non-overlapping mutable subslices of a bit-slice, from the back edge.

Unlike .chunks_mut(), this aligns its chunks to the back edge of self. If self.len() is not an even multiple of chunk_size, then the leftover partial chunk is self[0 .. len % chunk_size].

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded values for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

§Original

slice::rchunks_mut

§Sibling Methods
  • .rchunks() has the same division logic, but each yielded bit-slice is immutable.
  • .rchunks_exact_mut() does not yield the final chunk if it is shorter than chunk_size.
  • .chunks_mut() iterates from the front of the bit-slice to the back, with the final, possibly-shorter, segment at the back edge.
§Examples
use bitvec::prelude::*;

let bits = bits![mut u8, Msb0; 0; 5];
for (idx, chunk) in unsafe {
  bits.rchunks_mut(2).remove_alias()
}.enumerate() {
  chunk.store(idx + 1);
}
assert_eq!(bits, bits![1, 1, 0, 0, 1]);
//           remainder ^  ^^^^  ^^^^

pub fn rchunks_exact(&self, chunk_size: usize) -> RChunksExact<'_, T, O>

Iterates over non-overlapping subslices of a bit-slice, from the back edge.

If self.len() is not an even multiple of chunk_size, then the first few bits are not yielded by the iterator at all. They can be accessed with the .remainder() method if the iterator is bound to a name.

§Original

slice::rchunks_exact

§Sibling Methods
  • .rchunks() yields any leftover bits at the front as a shorter chunk during iteration.
  • .rchunks_exact_mut() has the same division logic, but each yielded bit-slice is mutable.
  • .chunks_exact() iterates from the front of the bit-slice to the back, with the unyielded remainder segment at the back edge.
§Panics

This panics if chunk_size is 0.

§Examples
use bitvec::prelude::*;

let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.rchunks_exact(2);

assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![1, 0]));
assert!(iter.next().is_none());
assert_eq!(iter.remainder(), bits![0]);

pub fn rchunks_exact_mut( &mut self, chunk_size: usize, ) -> RChunksExactMut<'_, T, O>

Iterates over non-overlapping mutable subslices of a bit-slice, from the back edge.

If self.len() is not an even multiple of chunk_size, then the first few bits are not yielded by the iterator at all. They can be accessed with the .into_remainder() method if the iterator is bound to a name.

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

§Sibling Methods
  • .rchunks_mut() yields any leftover bits at the front as a shorter chunk during iteration.
  • .rchunks_exact() has the same division logic, but each yielded bit-slice is immutable.
  • .chunks_exact_mut() iterates from the front of the bit-slice backwards, with the unyielded remainder segment at the back edge.
§Panics

This panics if chunk_size is 0.

§Examples
use bitvec::prelude::*;

let bits = bits![mut u8, Msb0; 0; 5];
let mut iter = bits.rchunks_exact_mut(2);

for (idx, chunk) in iter.by_ref().enumerate() {
  chunk.store(idx + 1);
}
iter.into_remainder().store(1u8);

assert_eq!(bits, bits![1, 1, 0, 0, 1]);
//           remainder ^

pub fn split_at(&self, mid: usize) -> (&BitSlice<T, O>, &BitSlice<T, O>)

Splits a bit-slice in two parts at an index.

The returned bit-slices are self[.. mid] and self[mid ..]. mid is included in the right bit-slice, not the left.

If mid is 0 then the left bit-slice is empty; if it is self.len() then the right bit-slice is empty.

This method guarantees that even when either partition is empty, the encoded bit-pointer values of the bit-slice references are &self[0] and &self[mid].

§Original

slice::split_at

§Panics

This panics if mid is greater than self.len(). It is allowed to be equal to the length, in which case the right bit-slice is simply empty.

§Examples
use bitvec::prelude::*;

let bits = bits![0, 0, 0, 1, 1, 1];
let base = bits.as_bitptr();

let (a, b) = bits.split_at(0);
assert_eq!(unsafe { a.as_bitptr().offset_from(base) }, 0);
assert_eq!(unsafe { b.as_bitptr().offset_from(base) }, 0);

let (a, b) = bits.split_at(6);
assert_eq!(unsafe { b.as_bitptr().offset_from(base) }, 6);

let (a, b) = bits.split_at(3);
assert_eq!(a, bits![0; 3]);
assert_eq!(b, bits![1; 3]);

pub fn split_at_mut( &mut self, mid: usize, ) -> (&mut BitSlice<<T as BitStore>::Alias, O>, &mut BitSlice<<T as BitStore>::Alias, O>)

Splits a mutable bit-slice in two parts at an index.

The returned bit-slices are self[.. mid] and self[mid ..]. mid is included in the right bit-slice, not the left.

If mid is 0 then the left bit-slice is empty; if it is self.len() then the right bit-slice is empty.

This method guarantees that even when either partition is empty, the encoded bit-pointer values of the bit-slice references are &self[0] and &self[mid].

§Original

slice::split_at_mut

§API Differences

The end bits of the left half and the start bits of the right half might be stored in the same memory element. In order to avoid breaking bitvec’s memory-safety guarantees, both bit-slices are marked as T::Alias. This marking allows them to be used without interfering with each other when they interact with memory.

§Panics

This panics if mid is greater than self.len(). It is allowed to be equal to the length, in which case the right bit-slice is simply empty.

§Examples
use bitvec::prelude::*;

let bits = bits![mut u8, Msb0; 0; 6];
let base = bits.as_mut_bitptr();

let (a, b) = bits.split_at_mut(0);
assert_eq!(unsafe { a.as_mut_bitptr().offset_from(base) }, 0);
assert_eq!(unsafe { b.as_mut_bitptr().offset_from(base) }, 0);

let (a, b) = bits.split_at_mut(6);
assert_eq!(unsafe { b.as_mut_bitptr().offset_from(base) }, 6);

let (a, b) = bits.split_at_mut(3);
a.store(3);
b.store(5);

assert_eq!(bits, bits![0, 1, 1, 1, 0, 1]);

pub fn split<F>(&self, pred: F) -> Split<'_, T, O, F>
where F: FnMut(usize, &bool) -> bool,

Iterates over subslices separated by bits that match a predicate. The matched bit is not contained in the yielded bit-slices.

§Original

slice::split

§API Differences

The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

§Sibling Methods
  • .split_mut() has the same splitting logic, but each yielded bit-slice is mutable.
  • .split_inclusive() includes the matched bit in the yielded bit-slice.
  • .rsplit() iterates from the back of the bit-slice instead of the front.
  • .splitn() times out after n yields.
§Examples
use bitvec::prelude::*;

let bits = bits![0, 1, 1, 0];
//                     ^
let mut iter = bits.split(|pos, _bit| pos % 3 == 2);

assert_eq!(iter.next().unwrap(), bits![0, 1]);
assert_eq!(iter.next().unwrap(), bits![0]);
assert!(iter.next().is_none());

If the first bit is matched, then an empty bit-slice will be the first item yielded by the iterator. Similarly, if the last bit in the bit-slice matches, then an empty bit-slice will be the last item yielded.

use bitvec::prelude::*;

let bits = bits![0, 0, 1];
//                     ^
let mut iter = bits.split(|_pos, bit| *bit);

assert_eq!(iter.next().unwrap(), bits![0; 2]);
assert!(iter.next().unwrap().is_empty());
assert!(iter.next().is_none());

If two matched bits are directly adjacent, then an empty bit-slice will be yielded between them:

use bitvec::prelude::*;

let bits = bits![1, 0, 0, 1];
//                  ^  ^
let mut iter = bits.split(|_pos, bit| !*bit);

assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().unwrap().is_empty());
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().is_none());

pub fn split_mut<F>(&mut self, pred: F) -> SplitMut<'_, T, O, F>
where F: FnMut(usize, &bool) -> bool,

Iterates over mutable subslices separated by bits that match a predicate. The matched bit is not contained in the yielded bit-slices.

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

§Original

slice::split_mut

§API Differences

The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

§Sibling Methods
  • .split() has the same splitting logic, but each yielded bit-slice is immutable.
  • .split_inclusive_mut() includes the matched bit in the yielded bit-slice.
  • .rsplit_mut() iterates from the back of the bit-slice instead of the front.
  • .splitn_mut() times out after n yields.
§Examples
use bitvec::prelude::*;

let bits = bits![mut 0, 0, 1, 0, 1, 0];
//                         ^     ^
for group in bits.split_mut(|_pos, bit| *bit) {
  group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 1, 1, 1]);

pub fn split_inclusive<F>(&self, pred: F) -> SplitInclusive<'_, T, O, F>
where F: FnMut(usize, &bool) -> bool,

Iterates over subslices separated by bits that match a predicate. Unlike .split(), this does include the matching bit as the last bit in the yielded bit-slice.

§Original

slice::split_inclusive

§API Differences

The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

§Sibling Methods
  • .split_inclusive_mut() has the same splitting logic, but each yielded bit-slice is mutable.
  • .split() does not include the matched bit in the yielded bit-slice.
§Examples
use bitvec::prelude::*;

let bits = bits![0, 0, 1, 0, 1];
//                     ^     ^
let mut iter = bits.split_inclusive(|_pos, bit| *bit);

assert_eq!(iter.next().unwrap(), bits![0, 0, 1]);
assert_eq!(iter.next().unwrap(), bits![0, 1]);
assert!(iter.next().is_none());

pub fn split_inclusive_mut<F>( &mut self, pred: F, ) -> SplitInclusiveMut<'_, T, O, F>
where F: FnMut(usize, &bool) -> bool,

Iterates over mutable subslices separated by bits that match a predicate. Unlike .split_mut(), this does include the matching bit as the last bit in the bit-slice.

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

§Original

slice::split_inclusive_mut

§API Differences

The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

§Sibling Methods
  • .split_inclusive() has the same splitting logic, but each yielded bit-slice is immutable.
  • .split_mut() does not include the matched bit in the yielded bit-slice.
§Examples
use bitvec::prelude::*;

let bits = bits![mut 0, 0, 0, 0, 0];
//                         ^
for group in bits.split_inclusive_mut(|pos, _bit| pos % 3 == 2) {
  group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 0, 1, 0]);

pub fn rsplit<F>(&self, pred: F) -> RSplit<'_, T, O, F>
where F: FnMut(usize, &bool) -> bool,

Iterates over subslices separated by bits that match a predicate, from the back edge. The matched bit is not contained in the yielded bit-slices.

§Original

slice::rsplit

§API Differences

The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

§Sibling Methods
  • .rsplit_mut() has the same splitting logic, but each yielded bit-slice is mutable.
  • .split() iterates from the front of the bit-slice instead of the back.
  • .rsplitn() times out after n yields.
§Examples
use bitvec::prelude::*;

let bits = bits![0, 1, 1, 0];
//                     ^
let mut iter = bits.rsplit(|pos, _bit| pos % 3 == 2);

assert_eq!(iter.next().unwrap(), bits![0]);
assert_eq!(iter.next().unwrap(), bits![0, 1]);
assert!(iter.next().is_none());

If the last bit is matched, then an empty bit-slice will be the first item yielded by the iterator. Similarly, if the first bit in the bit-slice matches, then an empty bit-slice will be the last item yielded.

use bitvec::prelude::*;

let bits = bits![0, 0, 1];
//                     ^
let mut iter = bits.rsplit(|_pos, bit| *bit);

assert!(iter.next().unwrap().is_empty());
assert_eq!(iter.next().unwrap(), bits![0; 2]);
assert!(iter.next().is_none());

If two matched bits are directly adjacent, then an empty bit-slice will be yielded between them:

use bitvec::prelude::*;

let bits = bits![1, 0, 0, 1];
//                  ^  ^
let mut iter = bits.rsplit(|_pos, bit| !*bit);

assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().unwrap().is_empty());
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().is_none());

pub fn rsplit_mut<F>(&mut self, pred: F) -> RSplitMut<'_, T, O, F>
where F: FnMut(usize, &bool) -> bool,

Iterates over mutable subslices separated by bits that match a predicate, from the back. The matched bit is not contained in the yielded bit-slices.

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

§Original

slice::rsplit_mut

§API Differences

The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

§Sibling Methods
  • .rsplit() has the same splitting logic, but each yielded bit-slice is immutable.
  • .split_mut() iterates from the front of the bit-slice to the back.
  • .rsplitn_mut() times out after n yields.
§Examples
use bitvec::prelude::*;

let bits = bits![mut 0, 0, 1, 0, 1, 0];
//                         ^     ^
for group in bits.rsplit_mut(|_pos, bit| *bit) {
  group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 1, 1, 1]);

pub fn splitn<F>(&self, n: usize, pred: F) -> SplitN<'_, T, O, F>
where F: FnMut(usize, &bool) -> bool,

Iterates over subslices separated by bits that match a predicate, giving up after yielding n times. The nth yield contains the rest of the bit-slice. As with .split(), the yielded bit-slices do not contain the matched bit.

§Original

slice::splitn

§API Differences

The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

§Sibling Methods
  • .splitn_mut() has the same splitting logic, but each yielded bit-slice is mutable.
  • .rsplitn() iterates from the back of the bit-slice instead of the front.
  • .split() has the same splitting logic, but never times out.
§Examples
use bitvec::prelude::*;

let bits = bits![0, 0, 1, 0, 1, 0];
let mut iter = bits.splitn(2, |_pos, bit| *bit);

assert_eq!(iter.next().unwrap(), bits![0, 0]);
assert_eq!(iter.next().unwrap(), bits![0, 1, 0]);
assert!(iter.next().is_none());

pub fn splitn_mut<F>(&mut self, n: usize, pred: F) -> SplitNMut<'_, T, O, F>
where F: FnMut(usize, &bool) -> bool,

Iterates over mutable subslices separated by bits that match a predicate, giving up after yielding n times. The nth yield contains the rest of the bit-slice. As with .split_mut(), the yielded bit-slices do not contain the matched bit.

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

§Original

slice::splitn_mut

§API Differences

The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

§Sibling Methods
  • .splitn() has the same splitting logic, but each yielded bit-slice is immutable.
  • .rsplitn_mut() iterates from the back of the bit-slice instead of the front.
  • .split_mut() has the same splitting logic, but never times out.
§Examples
use bitvec::prelude::*;

let bits = bits![mut 0, 0, 1, 0, 1, 0];
for group in bits.splitn_mut(2, |_pos, bit| *bit) {
  group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 1, 1, 0]);

pub fn rsplitn<F>(&self, n: usize, pred: F) -> RSplitN<'_, T, O, F>
where F: FnMut(usize, &bool) -> bool,

Iterates over subslices separated by bits that match a predicate, from the back edge, giving up after yielding n times. The nth yield contains the rest of the bit-slice. As with .rsplit(), the yielded bit-slices do not contain the matched bit.

§Original

slice::rsplitn

§API Differences

The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

§Sibling Methods
  • .rsplitn_mut() has the same splitting logic, but each yielded bit-slice is mutable.
  • .splitn() iterates from the front of the bit-slice instead of the back.
  • .rsplit() has the same splitting logic, but never times out.
§Examples
use bitvec::prelude::*;

let bits = bits![0, 0, 1, 1, 0];
//                        ^
let mut iter = bits.rsplitn(2, |_pos, bit| *bit);

assert_eq!(iter.next().unwrap(), bits![0]);
assert_eq!(iter.next().unwrap(), bits![0, 0, 1]);
assert!(iter.next().is_none());

pub fn rsplitn_mut<F>(&mut self, n: usize, pred: F) -> RSplitNMut<'_, T, O, F>
where F: FnMut(usize, &bool) -> bool,

Iterates over mutable subslices separated by bits that match a predicate from the back edge, giving up after yielding n times. The nth yield contains the rest of the bit-slice. As with .split_mut(), the yielded bit-slices do not contain the matched bit.

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

§Original

slice::rsplitn_mut

§API Differences

The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

§Sibling Methods
  • .rsplitn() has the same splitting logic, but each yielded bit-slice is immutable.
  • .splitn_mut() iterates from the front of the bit-slice instead of the back.
  • .rsplit_mut() has the same splitting logic, but never times out.
§Examples
use bitvec::prelude::*;

let bits = bits![mut 0, 0, 1, 0, 0, 1, 0, 0, 0];
for group in bits.rsplitn_mut(2, |_idx, bit| *bit) {
  group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 0, 0, 1, 1, 0, 0]);
//                     ^ group 2         ^ group 1

pub fn contains<T2, O2>(&self, other: &BitSlice<T2, O2>) -> bool
where T2: BitStore, O2: BitOrder,

Tests if the bit-slice contains the given sequence anywhere within it.

This scans over self.windows(other.len()) until one of the windows matches. The search key does not need to share type parameters with the bit-slice being tested, as the comparison is bit-wise. However, sharing type parameters will accelerate the comparison.

§Original

slice::contains

§Examples
use bitvec::prelude::*;

let bits = bits![0, 0, 1, 0, 1, 1, 0, 0];
assert!( bits.contains(bits![0, 1, 1, 0]));
assert!(!bits.contains(bits![1, 0, 0, 1]));

pub fn starts_with<T2, O2>(&self, needle: &BitSlice<T2, O2>) -> bool
where T2: BitStore, O2: BitOrder,

Tests if the bit-slice begins with the given sequence.

The search key does not need to share type parameters with the bit-slice being tested, as the comparison is bit-wise. However, sharing type parameters will accelerate the comparison.

§Original

slice::starts_with

§Examples
use bitvec::prelude::*;

let bits = bits![0, 1, 1, 0];
assert!( bits.starts_with(bits![0, 1]));
assert!(!bits.starts_with(bits![1, 0]));

This always returns true if the needle is empty:

use bitvec::prelude::*;

let bits = bits![0, 1, 0];
let empty = bits![];
assert!(bits.starts_with(empty));
assert!(empty.starts_with(empty));

pub fn ends_with<T2, O2>(&self, needle: &BitSlice<T2, O2>) -> bool
where T2: BitStore, O2: BitOrder,

Tests if the bit-slice ends with the given sequence.

The search key does not need to share type parameters with the bit-slice being tested, as the comparison is bit-wise. However, sharing type parameters will accelerate the comparison.

§Original

slice::ends_with

§Examples
use bitvec::prelude::*;

let bits = bits![0, 1, 1, 0];
assert!( bits.ends_with(bits![1, 0]));
assert!(!bits.ends_with(bits![0, 1]));

This always returns true if the needle is empty:

use bitvec::prelude::*;

let bits = bits![0, 1, 0];
let empty = bits![];
assert!(bits.ends_with(empty));
assert!(empty.ends_with(empty));

pub fn strip_prefix<T2, O2>( &self, prefix: &BitSlice<T2, O2>, ) -> Option<&BitSlice<T, O>>
where T2: BitStore, O2: BitOrder,

Removes a prefix bit-slice, if present.

Like .starts_with(), the search key does not need to share type parameters with the bit-slice being stripped. If self.starts_with(prefix), then this returns Some(&self[prefix.len() ..]), otherwise it returns None.

§Original

slice::strip_prefix

§API Differences

BitSlice does not support pattern searches; instead, it permits self and prefix to differ in type parameters.

§Examples
use bitvec::prelude::*;

let bits = bits![0, 1, 0, 0, 1, 0, 1, 1, 0];
assert_eq!(bits.strip_prefix(bits![0, 1]).unwrap(), bits[2 ..]);
assert_eq!(bits.strip_prefix(bits![0, 1, 0, 0,]).unwrap(), bits[4 ..]);
assert!(bits.strip_prefix(bits![1, 0]).is_none());

pub fn strip_suffix<T2, O2>( &self, suffix: &BitSlice<T2, O2>, ) -> Option<&BitSlice<T, O>>
where T2: BitStore, O2: BitOrder,

Removes a suffix bit-slice, if present.

Like .ends_with(), the search key does not need to share type parameters with the bit-slice being stripped. If self.ends_with(suffix), then this returns Some(&self[.. self.len() - suffix.len()]), otherwise it returns None.

§Original

slice::strip_suffix

§API Differences

BitSlice does not support pattern searches; instead, it permits self and suffix to differ in type parameters.

§Examples
use bitvec::prelude::*;

let bits = bits![0, 1, 0, 0, 1, 0, 1, 1, 0];
assert_eq!(bits.strip_suffix(bits![1, 0]).unwrap(), bits[.. 7]);
assert_eq!(bits.strip_suffix(bits![0, 1, 1, 0]).unwrap(), bits[.. 5]);
assert!(bits.strip_suffix(bits![0, 1]).is_none());

pub fn rotate_left(&mut self, by: usize)

Rotates the contents of a bit-slice to the left (towards the zero index).

This essentially splits the bit-slice at by, then exchanges the two pieces. self[by ..] becomes the first section, and is then followed by self[.. by].

The implementation is batch-accelerated where possible. It should have a runtime complexity much lower than O(by).

§Original

slice::rotate_left

§Examples
use bitvec::prelude::*;

let bits = bits![mut 0, 0, 1, 0, 1, 0];
//      split occurs here ^
bits.rotate_left(2);
assert_eq!(bits, bits![1, 0, 1, 0, 0, 0]);

pub fn rotate_right(&mut self, by: usize)

Rotates the contents of a bit-slice to the right (away from the zero index).

This essentially splits the bit-slice at self.len() - by, then exchanges the two pieces. self[len - by ..] becomes the first section, and is then followed by self[.. len - by].

The implementation is batch-accelerated where possible. It should have a runtime complexity much lower than O(by).

§Original

slice::rotate_right

§Examples
use bitvec::prelude::*;

let bits = bits![mut 0, 0, 1, 1, 1, 0];
//            split occurs here ^
bits.rotate_right(2);
assert_eq!(bits, bits![1, 0, 0, 0, 1, 1]);

pub fn fill(&mut self, value: bool)

Fills the bit-slice with a given bit.

This is a recent stabilization in the standard library. bitvec previously offered this behavior as the novel API .set_all(). That method name is now removed in favor of this standard-library analogue.

§Original

slice::fill

§Examples
use bitvec::prelude::*;

let bits = bits![mut 0; 5];
bits.fill(true);
assert_eq!(bits, bits![1; 5]);

pub fn fill_with<F>(&mut self, func: F)
where F: FnMut(usize) -> bool,

Fills the bit-slice with bits produced by a generator function.

§Original

slice::fill_with

§API Differences

The generator function receives the index of the bit being initialized as an argument.

§Examples
use bitvec::prelude::*;

let bits = bits![mut 0; 5];
bits.fill_with(|idx| idx % 2 == 0);
assert_eq!(bits, bits![1, 0, 1, 0, 1]);

pub fn clone_from_slice<T2, O2>(&mut self, src: &BitSlice<T2, O2>)
where T2: BitStore, O2: BitOrder,

👎Deprecated: use .clone_from_bitslice() instead

pub fn copy_from_slice(&mut self, src: &BitSlice<T, O>)

👎Deprecated: use .copy_from_bitslice() instead

pub fn copy_within<R>(&mut self, src: R, dest: usize)
where R: RangeExt<usize>,

Copies a span of bits to another location in the bit-slice.

src is the range of bit-indices in the bit-slice to copy, and dest is the starting index of the destination range. src and dest .. dest + src.len() are permitted to overlap; the copy will automatically detect and manage this. However, both src and dest .. dest + src.len() must fall within the bounds of self.

§Original

slice::copy_within

§Panics

This panics if either the source or destination range exceeds self.len().

§Examples
use bitvec::prelude::*;

let bits = bits![mut 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0];
bits.copy_within(1 .. 5, 8);
//                        v  v  v  v
assert_eq!(bits, bits![1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0]);
//                                             ^  ^  ^  ^

pub fn swap_with_slice<T2, O2>(&mut self, other: &mut BitSlice<T2, O2>)
where T2: BitStore, O2: BitOrder,

👎Deprecated: use .swap_with_bitslice() instead

pub unsafe fn align_to<U>( &self, ) -> (&BitSlice<T, O>, &BitSlice<U, O>, &BitSlice<T, O>)
where U: BitStore,

Produces bit-slice view(s) with different underlying storage types.

This may have unexpected effects, and you cannot assume that before[idx] == after[idx]! Consult the tables in the manual for information about memory layouts.

§Original

slice::align_to

§Notes

Unlike the standard library documentation, this explicitly guarantees that the middle bit-slice will have maximal size. You may rely on this property.

§Safety

You may not use this to cast away alias protections. Rust does not have support for higher-kinded types, so this cannot express the relation Outer<T> -> Outer<U> where Outer: BitStoreContainer, but memory safety does require that you respect this rule. Reälign integers to integers, Cells to Cells, and atomics to atomics, but do not cross these boundaries.

§Examples
use bitvec::prelude::*;

let bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7];
let bits = bytes.view_bits::<Lsb0>();
let (pfx, mid, sfx) = unsafe {
  bits.align_to::<u16>()
};
assert!(pfx.len() <= 8);
assert_eq!(mid.len(), 48);
assert!(sfx.len() <= 8);

pub unsafe fn align_to_mut<U>( &mut self, ) -> (&mut BitSlice<T, O>, &mut BitSlice<U, O>, &mut BitSlice<T, O>)
where U: BitStore,

Produces bit-slice view(s) with different underlying storage types.

This may have unexpected effects, and you cannot assume that before[idx] == after[idx]! Consult the tables in the manual for information about memory layouts.

§Original

slice::align_to_mut

§Notes

Unlike the standard library documentation, this explicitly guarantees that the middle bit-slice will have maximal size. You may rely on this property.

§Safety

You may not use this to cast away alias protections. Rust does not have support for higher-kinded types, so this cannot express the relation Outer<T> -> Outer<U> where Outer: BitStoreContainer, but memory safety does require that you respect this rule. Reälign integers to integers, Cells to Cells, and atomics to atomics, but do not cross these boundaries.

§Examples
use bitvec::prelude::*;

let mut bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7];
let bits = bytes.view_bits_mut::<Lsb0>();
let (pfx, mid, sfx) = unsafe {
  bits.align_to_mut::<u16>()
};
assert!(pfx.len() <= 8);
assert_eq!(mid.len(), 48);
assert!(sfx.len() <= 8);
§

impl<T, O> BitSlice<T, O>
where T: BitStore, O: BitOrder,

pub fn to_vec(&self) -> BitVec<<T as BitStore>::Unalias, O>

👎Deprecated: use .to_bitvec() instead
Available on crate feature alloc only.

pub fn repeat(&self, n: usize) -> BitVec<<T as BitStore>::Unalias, O>

Available on crate feature alloc only.

Creates a bit-vector by repeating a bit-slice n times.

§Original

slice::repeat

§Panics

This method panics if self.len() * n exceeds the BitVec capacity.

§Examples
use bitvec::prelude::*;

assert_eq!(bits![0, 1].repeat(3), bitvec![0, 1, 0, 1, 0, 1]);

This panics by exceeding bit-vector maximum capacity:

use bitvec::prelude::*;

bits![0, 1].repeat(BitSlice::<usize, Lsb0>::MAX_BITS);
§

impl<T, O> BitSlice<T, O>
where T: BitStore, O: BitOrder,

Constructors.

pub fn empty<'a>() -> &'a BitSlice<T, O>

Produces an empty bit-slice with an arbitrary lifetime.

§Original

This is equivalent to the &[] literal.

§Examples
use bitvec::prelude::*;

assert!(BitSlice::<u16, LocalBits>::empty().is_empty());
assert_eq!(bits![], BitSlice::<u8, Msb0>::empty());

pub fn empty_mut<'a>() -> &'a mut BitSlice<T, O>

Produces an empty bit-slice with an arbitrary lifetime.

§Original

This is equivalent to the &mut [] literal.

§Examples
use bitvec::prelude::*;

assert!(BitSlice::<u16, LocalBits>::empty_mut().is_empty());
assert_eq!(bits![mut], BitSlice::<u8, Msb0>::empty_mut());

pub fn from_element(elem: &T) -> &BitSlice<T, O>

Constructs a shared &BitSlice reference over a shared element.

The BitView trait, implemented on all BitStore implementors, provides a .view_bits::<O>() method which delegates to this function and may be more convenient for you to write.

§Parameters
  • elem: A shared reference to a memory element.
§Returns

A shared &BitSlice over elem.

§Examples
use bitvec::prelude::*;

let elem = 0u8;
let bits = BitSlice::<_, Lsb0>::from_element(&elem);
assert_eq!(bits.len(), 8);

let bits = elem.view_bits::<Lsb0>();

pub fn from_element_mut(elem: &mut T) -> &mut BitSlice<T, O>

Constructs an exclusive &mut BitSlice reference over an element.

The BitView trait, implemented on all BitStore implementors, provides a .view_bits_mut::<O>() method which delegates to this function and may be more convenient for you to write.

§Parameters
  • elem: An exclusive reference to a memory element.
§Returns

An exclusive &mut BitSlice over elem.

Note that the original elem reference will be inaccessible for the duration of the returned bit-slice handle’s lifetime.

§Examples
use bitvec::prelude::*;

let mut elem = 0u8;
let bits = BitSlice::<_, Lsb0>::from_element_mut(&mut elem);
bits.set(1, true);
assert!(bits[1]);
assert_eq!(elem, 2);

let bits = elem.view_bits_mut::<Lsb0>();

pub fn from_slice(slice: &[T]) -> &BitSlice<T, O>

Constructs a shared &BitSlice reference over a slice of elements.

The BitView trait, implemented on all [T] slices, provides a .view_bits::<O>() method which delegates to this function and may be more convenient for you to write.

§Parameters
  • slice: A shared reference to a slice of memory elements.
§Returns

A shared BitSlice reference over all of slice.

§Panics

This will panic if slice is too long to encode as a bit-slice view.

§Examples
use bitvec::prelude::*;

let data = [0u16, 1];
let bits = BitSlice::<_, Lsb0>::from_slice(&data);
assert!(bits[16]);

let bits = data.view_bits::<Lsb0>();

pub fn try_from_slice(slice: &[T]) -> Result<&BitSlice<T, O>, BitSpanError<T>>

Attempts to construct a shared &BitSlice reference over a slice of elements.

The BitView, implemented on all [T] slices, provides a .try_view_bits::<O>() method which delegates to this function and may be more convenient for you to write.

This is very hard, if not impossible, to cause to fail. Rust will not create excessive arrays on 64-bit architectures.

§Parameters
  • slice: A shared reference to a slice of memory elements.
§Returns

A shared &BitSlice over slice. If slice is longer than can be encoded into a &BitSlice (see MAX_ELTS), this will fail and return the original slice as an error.

§Examples
use bitvec::prelude::*;

let data = [0u8, 1];
let bits = BitSlice::<_, Msb0>::try_from_slice(&data).unwrap();
assert!(bits[15]);

let bits = data.try_view_bits::<Msb0>().unwrap();

pub fn from_slice_mut(slice: &mut [T]) -> &mut BitSlice<T, O>

Constructs an exclusive &mut BitSlice reference over a slice of elements.

The BitView trait, implemented on all [T] slices, provides a .view_bits_mut::<O>() method which delegates to this function and may be more convenient for you to write.

§Parameters
  • slice: An exclusive reference to a slice of memory elements.
§Returns

An exclusive &mut BitSlice over all of slice.

§Panics

This panics if slice is too long to encode as a bit-slice view.

§Examples
use bitvec::prelude::*;

let mut data = [0u16; 2];
let bits = BitSlice::<_, Lsb0>::from_slice_mut(&mut data);
bits.set(0, true);
bits.set(17, true);
assert_eq!(data, [1, 2]);

let bits = data.view_bits_mut::<Lsb0>();

pub fn try_from_slice_mut( slice: &mut [T], ) -> Result<&mut BitSlice<T, O>, BitSpanError<T>>

Attempts to construct an exclusive &mut BitSlice reference over a slice of elements.

The BitView trait, implemented on all [T] slices, provides a .try_view_bits_mut::<O>() method which delegates to this function and may be more convenient for you to write.

§Parameters
  • slice: An exclusive reference to a slice of memory elements.
§Returns

An exclusive &mut BitSlice over slice. If slice is longer than can be encoded into a &mut BitSlice (see MAX_ELTS), this will fail and return the original slice as an error.

§Examples
use bitvec::prelude::*;

let mut data = [0u8; 2];
let bits = BitSlice::<_, Msb0>::try_from_slice_mut(&mut data).unwrap();
bits.set(7, true);
bits.set(15, true);
assert_eq!(data, [1; 2]);

let bits = data.try_view_bits_mut::<Msb0>().unwrap();

pub unsafe fn from_slice_unchecked(slice: &[T]) -> &BitSlice<T, O>

Constructs a shared &BitSlice over an element slice, without checking its length.

If slice is too long to encode into a &BitSlice, then the produced bit-slice’s length is unspecified.

§Safety

You must ensure that slice.len() < BitSlice::MAX_ELTS.

Calling this function with an over-long slice is library-level undefined behavior. You may not assume anything about its implementation or behavior, and must conservatively assume that over-long slices cause compiler UB.
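
As a sketch (not one of the crate's own doctests), the call behaves exactly like .from_slice() whenever the length precondition is upheld:

use bitvec::prelude::*;

let data = [5u8, 10];
// Sound: a two-element slice is far below BitSlice::<u8, Lsb0>::MAX_ELTS.
let bits = unsafe { BitSlice::<_, Lsb0>::from_slice_unchecked(&data) };
assert_eq!(bits.len(), 16);
assert!(bits[0] && bits[2]);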

pub unsafe fn from_slice_unchecked_mut(slice: &mut [T]) -> &mut BitSlice<T, O>

Constructs an exclusive &mut BitSlice over an element slice, without checking its length.

If slice is too long to encode into a &mut BitSlice, then the produced bit-slice’s length is unspecified.

§Safety

You must ensure that slice.len() < BitSlice::MAX_ELTS.

Calling this function with an over-long slice is library-level undefined behavior. You may not assume anything about its implementation or behavior, and must conservatively assume that over-long slices cause compiler UB.

§

impl<T, O> BitSlice<T, O>
where T: BitStore, O: BitOrder,

Alternates of standard APIs.

pub fn as_bitptr(&self) -> BitPtr<Const, T, O>

Gets a raw pointer to the zeroth bit of the bit-slice.

§Original

slice::as_ptr

§API Differences

This is renamed in order to indicate that it is returning a bitvec structure, not a raw pointer.
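
As an illustrative sketch (not from the crate's own documentation), the returned bit-pointer can be read much like a raw pointer, assuming BitPtr's unsafe .read() accessor:

use bitvec::prelude::*;

let bits = bits![0, 1, 0];
let head = bits.as_bitptr();
// Reading through a bit-pointer is unsafe, exactly as with raw pointers.
assert!(!unsafe { head.read() });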

pub fn as_mut_bitptr(&mut self) -> BitPtr<Mut, T, O>

Gets a raw, write-capable pointer to the zeroth bit of the bit-slice.

§Original

slice::as_mut_ptr

§API Differences

This is renamed in order to indicate that it is returning a bitvec structure, not a raw pointer.
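
A matching sketch (again assuming BitPtr's unsafe .write() accessor; not a crate doctest):

use bitvec::prelude::*;

let bits = bits![mut 0, 0];
let head = bits.as_mut_bitptr();
// Writing through a bit-pointer is unsafe, exactly as with raw pointers.
unsafe { head.write(true); }
assert_eq!(bits, bits![1, 0]);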

pub fn as_bitptr_range(&self) -> BitPtrRange<Const, T, O>

Views the bit-slice as a half-open range of bit-pointers, to its first bit in the bit-slice and first bit beyond it.

§Original

slice::as_ptr_range

§API Differences

This is renamed to indicate that it returns a bitvec structure, rather than an ordinary Range.

§Notes

BitSlice does define a .as_ptr_range(), which returns a Range<BitPtr>. BitPtrRange has additional capabilities that Range<*const T> and Range<BitPtr> do not.
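
One of those capabilities is iteration; a hedged sketch, assuming BitPtrRange implements Iterator over bit-pointers:

use bitvec::prelude::*;

let bits = bits![0, 1, 1];
// Walk the bit-pointers covering the bit-slice and count the set bits.
let ones = bits.as_bitptr_range()
  .map(|ptr| unsafe { ptr.read() })
  .filter(|bit| *bit)
  .count();
assert_eq!(ones, 2);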

pub fn as_mut_bitptr_range(&mut self) -> BitPtrRange<Mut, T, O>

Views the bit-slice as a half-open range of write-capable bit-pointers, to its first bit in the bit-slice and the first bit beyond it.

§Original

slice::as_mut_ptr_range

§API Differences

This is renamed to indicate that it returns a bitvec structure, rather than an ordinary Range.

§Notes

BitSlice does define a .as_mut_ptr_range(), which returns a Range<BitPtr>. BitPtrRange has additional capabilities that Range<*mut T> and Range<BitPtr> do not.

pub fn clone_from_bitslice<T2, O2>(&mut self, src: &BitSlice<T2, O2>)
where T2: BitStore, O2: BitOrder,

Copies the bits from src into self.

self and src must have the same length.

§Performance

If src has the same type arguments as self, it will use the same implementation as .copy_from_bitslice(); if you know that this will always be the case, you should prefer to use that method directly.

Only .copy_from_bitslice() is able to perform acceleration; this method is always required to perform a bit-by-bit crawl over both bit-slices.

§Original

slice::clone_from_slice

§API Differences

This is renamed to reflect that it copies from another bit-slice, not from an element slice.

In order to support general usage, it allows src to have different type parameters than self, at the cost of performance optimizations.

§Panics

This panics if the two bit-slices have different lengths.

§Examples
use bitvec::prelude::*;
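
// A hypothetical demonstration, not the crate's own doctest: the source may
// use different storage and ordering parameters than the destination.
let dst = bits![mut 0; 4];
let src = bits![u8, Msb0; 1, 0, 1, 1];
dst.clone_from_bitslice(src);
assert_eq!(dst, bits![1, 0, 1, 1]);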

pub fn copy_from_bitslice(&mut self, src: &BitSlice<T, O>)

Copies all bits from src into self, using batched acceleration when possible.

self and src must have the same length.

§Original

slice::copy_from_slice

§Panics

This panics if the two bit-slices have different lengths.

§Examples
use bitvec::prelude::*;
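
// A hypothetical demonstration, not the crate's own doctest: both bit-slices
// must share type parameters and have equal lengths.
let dst = bits![mut 0; 4];
let src = bits![1, 0, 1, 1];
dst.copy_from_bitslice(src);
assert_eq!(dst, src);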

pub fn swap_with_bitslice<T2, O2>(&mut self, other: &mut BitSlice<T2, O2>)
where T2: BitStore, O2: BitOrder,

Swaps the contents of two bit-slices.

self and other must have the same length.

§Original

slice::swap_with_slice

§API Differences

This method is renamed, as it takes a bit-slice rather than an element slice.

§Panics

This panics if the two bit-slices have different lengths.

§Examples
use bitvec::prelude::*;

let mut one = [0xA5u8, 0x69];
let mut two = 0x1234u16;
let one_bits = one.view_bits_mut::<Msb0>();
let two_bits = two.view_bits_mut::<Lsb0>();

one_bits.swap_with_bitslice(two_bits);

assert_eq!(one, [0x2C, 0x48]);
assert_eq!(two, 0x96A5);
§

impl<T, O> BitSlice<T, O>
where T: BitStore, O: BitOrder,

Extensions of standard APIs.

pub fn set(&mut self, index: usize, value: bool)

Writes a new value into a single bit.

This is the replacement for *slice[index] = value;, as bitvec is not able to express that under the current IndexMut API signature.

§Parameters
  • &mut self
  • index: The bit-index to set. It must be in 0 .. self.len().
  • value: The new bit-value to write into the bit at index.
§Panics

This panics if index is out of bounds.

§Examples
use bitvec::prelude::*;

let bits = bits![mut 0, 1];
bits.set(0, true);
bits.set(1, false);

assert_eq!(bits, bits![1, 0]);

pub unsafe fn set_unchecked(&mut self, index: usize, value: bool)

Writes a new value into a single bit, without bounds checking.

§Parameters
  • &mut self
  • index: The bit-index to set. It must be in 0 .. self.len().
  • value: The new bit-value to write into the bit at index.
§Safety

You must ensure that index is in the range 0 .. self.len().

This performs bit-pointer offset arithmetic without doing any bounds checks. If index is out of bounds, then this will issue an out-of-bounds access and will trigger memory unsafety.

§Examples
use bitvec::prelude::*;

let mut data = 0u8;
let bits = &mut data.view_bits_mut::<Lsb0>()[.. 2];
assert_eq!(bits.len(), 2);
unsafe {
  bits.set_unchecked(3, true);
}
assert_eq!(data, 8);

pub fn replace(&mut self, index: usize, value: bool) -> bool

Writes a new value into a bit, and returns its previous value.

§Panics

This panics if index is not less than self.len().

§Examples
use bitvec::prelude::*;

let bits = bits![mut 0];
assert!(!bits.replace(0, true));
assert!(bits[0]);

pub unsafe fn replace_unchecked(&mut self, index: usize, value: bool) -> bool

Writes a new value into a bit, returning the previous value, without bounds checking.

§Safety

index must be less than self.len().

§Examples
use bitvec::prelude::*;

let bits = bits![mut 0, 0];
let old = unsafe {
  let a = &mut bits[.. 1];
  a.replace_unchecked(1, true)
};
assert!(!old);
assert!(bits[1]);

pub unsafe fn swap_unchecked(&mut self, a: usize, b: usize)

Swaps two bits in a bit-slice, without bounds checking.

See .swap() for documentation.

§Safety

You must ensure that a and b are both in the range 0 .. self.len().

This method performs bit-pointer offset arithmetic without doing any bounds checks. If a or b are out of bounds, then this will issue an out-of-bounds access and will trigger memory unsafety.
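
A brief sketch (not a crate doctest), mirroring .swap() with the caller upholding the bounds:

use bitvec::prelude::*;

let bits = bits![mut 0, 1];
// Both indices are known to be in bounds, so the unchecked call is sound.
unsafe {
  bits.swap_unchecked(0, 1);
}
assert_eq!(bits, bits![1, 0]);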

pub unsafe fn split_at_unchecked( &self, mid: usize, ) -> (&BitSlice<T, O>, &BitSlice<T, O>)

Splits a bit-slice at an index, without bounds checking.

See .split_at() for documentation.

§Safety

You must ensure that mid is in the range 0 ..= self.len().

This method produces new bit-slice references. If mid is out of bounds, its behavior is library-level undefined. You must conservatively assume that an out-of-bounds split point produces compiler-level UB.
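
A brief sketch (not a crate doctest), mirroring .split_at() with the caller upholding the bounds:

use bitvec::prelude::*;

let bits = bits![0, 0, 1, 1];
// 2 lies within 0 ..= 4, so the unchecked split is sound.
let (left, right) = unsafe { bits.split_at_unchecked(2) };
assert_eq!(left, bits![0, 0]);
assert_eq!(right, bits![1, 1]);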

pub unsafe fn split_at_unchecked_mut( &mut self, mid: usize, ) -> (&mut BitSlice<<T as BitStore>::Alias, O>, &mut BitSlice<<T as BitStore>::Alias, O>)

Splits a mutable bit-slice at an index, without bounds checking.

See .split_at_mut() for documentation.

§Safety

You must ensure that mid is in the range 0 ..= self.len().

This method produces new bit-slice references. If mid is out of bounds, its behavior is library-level undefined. You must conservatively assume that an out-of-bounds split point produces compiler-level UB.
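
A brief sketch (not a crate doctest), mirroring .split_at_mut() with the caller upholding the bounds; note that both halves carry the ::Alias marker:

use bitvec::prelude::*;

let bits = bits![mut 0, 0, 1, 1];
// 2 lies within 0 ..= 4, so the unchecked split is sound.
let (left, right) = unsafe { bits.split_at_unchecked_mut(2) };
left.set(0, true);
right.set(1, false);
assert_eq!(bits, bits![1, 0, 1, 0]);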

pub unsafe fn copy_within_unchecked<R>(&mut self, src: R, dest: usize)
where R: RangeExt<usize>,

Copies bits from one region of the bit-slice to another region of itself, without doing bounds checks.

The regions are allowed to overlap.

§Parameters
  • &mut self
  • src: The range within self from which to copy.
  • dest: The starting index within self at which to paste.
§Effects

self[src] is copied to self[dest .. dest + src.len()]. The bits of self[src] are in an unspecified, but initialized, state.

§Safety

src.end() and dest + src.len() must be entirely within bounds.

§Examples
use bitvec::prelude::*;

let mut data = 0b1011_0000u8;
let bits = data.view_bits_mut::<Msb0>();

unsafe {
  bits.copy_within_unchecked(.. 4, 2);
}
assert_eq!(data, 0b1010_1100);
§

impl<T, O> BitSlice<T, O>
where T: BitStore, O: BitOrder,

Views of underlying memory.

pub fn bit_domain(&self) -> BitDomain<'_, Const, T, O>

Available on non-tarpaulin_include only.

Partitions a bit-slice into maybe-contended and known-uncontended parts.

The documentation of BitDomain goes into this in more detail. In short, this produces a &BitSlice that is as large as possible without requiring alias protection, as well as any bits that were not able to be included in the unaliased bit-slice.

pub fn bit_domain_mut(&mut self) -> BitDomain<'_, Mut, T, O>

Available on non-tarpaulin_include only.

Partitions a mutable bit-slice into maybe-contended and known-uncontended parts.

The documentation of BitDomain goes into this in more detail. In short, this produces a &mut BitSlice that is as large as possible without requiring alias protection, as well as any bits that were not able to be included in the unaliased bit-slice.

pub fn domain(&self) -> Domain<'_, Const, T, O>

Available on non-tarpaulin_include only.

Views the underlying memory of a bit-slice, removing alias protections where possible.

The documentation of Domain goes into this in more detail. In short, this produces a &[T] slice with alias protections removed, covering all elements that self completely fills. Partially-used elements on either the front or back edge of the slice are returned separately.

pub fn domain_mut(&mut self) -> Domain<'_, Mut, T, O>

Available on non-tarpaulin_include only.

Views the underlying memory of a bit-slice, removing alias protections where possible.

The documentation of Domain goes into this in more detail. In short, this produces a &mut [T] slice with alias protections removed, covering all elements that self completely fills. Partially-used elements on the front or back edge of the slice are returned separately.

§

impl<T, O> BitSlice<T, O>
where T: BitStore, O: BitOrder,

Bit-value queries.

pub fn count_ones(&self) -> usize

Counts the number of bits set to 1 in the bit-slice contents.

§Examples
use bitvec::prelude::*;

let bits = bits![1, 1, 0, 0];
assert_eq!(bits[.. 2].count_ones(), 2);
assert_eq!(bits[2 ..].count_ones(), 0);
assert_eq!(bits![].count_ones(), 0);

pub fn count_zeros(&self) -> usize

Counts the number of bits cleared to 0 in the bit-slice contents.

§Examples
use bitvec::prelude::*;

let bits = bits![1, 1, 0, 0];
assert_eq!(bits[.. 2].count_zeros(), 0);
assert_eq!(bits[2 ..].count_zeros(), 2);
assert_eq!(bits![].count_zeros(), 0);

pub fn iter_ones(&self) -> IterOnes<'_, T, O>

Enumerates the index of each bit in a bit-slice set to 1.

This is a shorthand for a .enumerate().filter_map() iterator that selects the index of each true bit; however, its implementation is eligible for optimizations that the individual-bit iterator is not.

Specializations for the Lsb0 and Msb0 orderings allow processors with instructions that seek particular bits within an element to operate on whole elements, rather than on each bit individually.

§Examples

This example uses .iter_ones(), a .filter_map() that finds the index of each set bit, and the known indices, in order to show that they have equivalent behavior.

use bitvec::prelude::*;

let bits = bits![0, 1, 0, 0, 1, 0, 0, 0, 1];

let iter_ones = bits.iter_ones();
let known_indices = [1, 4, 8].iter().copied();
let filter = bits.iter()
  .by_vals()
  .enumerate()
  .filter_map(|(idx, bit)| if bit { Some(idx) } else { None });
let all = iter_ones.zip(known_indices).zip(filter);

for ((iter_one, known), filtered) in all {
  assert_eq!(iter_one, known);
  assert_eq!(known, filtered);
}

pub fn iter_zeros(&self) -> IterZeros<'_, T, O>

Enumerates the index of each bit in a bit-slice cleared to 0.

This is a shorthand for a .enumerate().filter_map() iterator that selects the index of each false bit; however, its implementation is eligible for optimizations that the individual-bit iterator is not.

Specializations for the Lsb0 and Msb0 orderings allow processors with instructions that seek particular bits within an element to operate on whole elements, rather than on each bit individually.

§Examples

This example uses .iter_zeros(), a .filter_map() that finds the index of each cleared bit, and the known indices, in order to show that they have equivalent behavior.

use bitvec::prelude::*;

let bits = bits![1, 0, 1, 1, 0, 1, 1, 1, 0];

let iter_zeros = bits.iter_zeros();
let known_indices = [1, 4, 8].iter().copied();
let filter = bits.iter()
  .by_vals()
  .enumerate()
  .filter_map(|(idx, bit)| if !bit { Some(idx) } else { None });
let all = iter_zeros.zip(known_indices).zip(filter);

for ((iter_zero, known), filtered) in all {
  assert_eq!(iter_zero, known);
  assert_eq!(known, filtered);
}

pub fn first_one(&self) -> Option<usize>

Finds the index of the first bit in the bit-slice set to 1.

Returns None if there is no true bit in the bit-slice.

§Examples
use bitvec::prelude::*;

assert!(bits![].first_one().is_none());
assert!(bits![0].first_one().is_none());
assert_eq!(bits![0, 1].first_one(), Some(1));

pub fn first_zero(&self) -> Option<usize>

Finds the index of the first bit in the bit-slice cleared to 0.

Returns None if there is no false bit in the bit-slice.

§Examples
use bitvec::prelude::*;

assert!(bits![].first_zero().is_none());
assert!(bits![1].first_zero().is_none());
assert_eq!(bits![1, 0].first_zero(), Some(1));

pub fn last_one(&self) -> Option<usize>

Finds the index of the last bit in the bit-slice set to 1.

Returns None if there is no true bit in the bit-slice.

§Examples
use bitvec::prelude::*;

assert!(bits![].last_one().is_none());
assert!(bits![0].last_one().is_none());
assert_eq!(bits![1, 0].last_one(), Some(0));

pub fn last_zero(&self) -> Option<usize>

Finds the index of the last bit in the bit-slice cleared to 0.

Returns None if there is no false bit in the bit-slice.

§Examples
use bitvec::prelude::*;

assert!(bits![].last_zero().is_none());
assert!(bits![1].last_zero().is_none());
assert_eq!(bits![0, 1].last_zero(), Some(0));

pub fn leading_ones(&self) -> usize

Counts the number of bits from the start of the bit-slice to the first bit set to 0.

This returns 0 if the bit-slice is empty.

§Examples
use bitvec::prelude::*;

assert_eq!(bits![].leading_ones(), 0);
assert_eq!(bits![0].leading_ones(), 0);
assert_eq!(bits![1, 0].leading_ones(), 1);

pub fn leading_zeros(&self) -> usize

Counts the number of bits from the start of the bit-slice to the first bit set to 1.

This returns 0 if the bit-slice is empty.

§Examples
use bitvec::prelude::*;

assert_eq!(bits![].leading_zeros(), 0);
assert_eq!(bits![1].leading_zeros(), 0);
assert_eq!(bits![0, 1].leading_zeros(), 1);

pub fn trailing_ones(&self) -> usize

Counts the number of bits from the end of the bit-slice to the last bit set to 0.

This returns 0 if the bit-slice is empty.

§Examples
use bitvec::prelude::*;

assert_eq!(bits![].trailing_ones(), 0);
assert_eq!(bits![0].trailing_ones(), 0);
assert_eq!(bits![0, 1].trailing_ones(), 1);

pub fn trailing_zeros(&self) -> usize

Counts the number of bits from the end of the bit-slice to the last bit set to 1.

This returns 0 if the bit-slice is empty.

§Examples
use bitvec::prelude::*;

assert_eq!(bits![].trailing_zeros(), 0);
assert_eq!(bits![1].trailing_zeros(), 0);
assert_eq!(bits![1, 0].trailing_zeros(), 1);

pub fn any(&self) -> bool

Tests if there is at least one bit set to 1 in the bit-slice.

Returns false when self is empty.

§Examples
use bitvec::prelude::*;

assert!(!bits![].any());
assert!(!bits![0].any());
assert!(bits![0, 1].any());

pub fn all(&self) -> bool

Tests if every bit is set to 1 in the bit-slice.

Returns true when self is empty.

§Examples
use bitvec::prelude::*;

assert!( bits![].all());
assert!(!bits![0].all());
assert!( bits![1].all());

pub fn not_any(&self) -> bool

Tests if every bit is cleared to 0 in the bit-slice.

Returns true when self is empty.

§Examples
use bitvec::prelude::*;

assert!( bits![].not_any());
assert!(!bits![1].not_any());
assert!( bits![0].not_any());

pub fn not_all(&self) -> bool

Tests if at least one bit is cleared to 0 in the bit-slice.

Returns false when self is empty.

§Examples
use bitvec::prelude::*;

assert!(!bits![].not_all());
assert!(!bits![1].not_all());
assert!( bits![0].not_all());

pub fn some(&self) -> bool

Tests if at least one bit is set to 1, and at least one bit is cleared to 0, in the bit-slice.

Returns false when self is empty.

§Examples
use bitvec::prelude::*;

assert!(!bits![].some());
assert!(!bits![0].some());
assert!(!bits![1].some());
assert!( bits![0, 1].some());
§

impl<T, O> BitSlice<T, O>
where T: BitStore, O: BitOrder,

Buffer manipulation.

pub fn shift_left(&mut self, by: usize)

Shifts the contents of a bit-slice “left” (towards the zero-index), clearing the “right” bits to 0.

This is a strictly-worse analogue to taking bits = &bits[by ..]: it has to modify the entire memory region that bits governs, and destroys contained information. Unless the actual memory layout and contents of your bit-slice matters to your program, you should probably prefer to munch your way forward through a bit-slice handle.

Note also that the “left” here is semantic only, and does not necessarily correspond to a left-shift instruction applied to the underlying integer storage.

This has no effect when by is 0. When by is self.len(), the bit-slice is entirely cleared to 0.

§Panics

This panics if by is greater than self.len().

§Examples
use bitvec::prelude::*;

let bits = bits![mut 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1];
// these bits are retained ^--------------------------^
bits.shift_left(2);
assert_eq!(bits, bits![1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0]);
// and move here       ^--------------------------^

let bits = bits![mut 1; 2];
bits.shift_left(2);
assert_eq!(bits, bits![0; 2]);

pub fn shift_right(&mut self, by: usize)

Shifts the contents of a bit-slice “right” (away from the zero-index), clearing the “left” bits to 0.

This is a strictly-worse analogue to taking bits = &bits[.. bits.len() - by]: it must modify the entire memory region that bits governs, and destroys contained information. Unless the actual memory layout and contents of your bit-slice matters to your program, you should probably prefer to munch your way backward through a bit-slice handle.

Note also that the “right” here is semantic only, and does not necessarily correspond to a right-shift instruction applied to the underlying integer storage.

This has no effect when by is 0. When by is self.len(), the bit-slice is entirely cleared to 0.

§Panics

This panics if by is greater than self.len().

§Examples
use bitvec::prelude::*;

let bits = bits![mut 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1];
// these bits stay   ^--------------------------^
bits.shift_right(2);
assert_eq!(bits, bits![0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1]);
// and move here             ^--------------------------^

let bits = bits![mut 1; 2];
bits.shift_right(2);
assert_eq!(bits, bits![0; 2]);
§

impl<T, O> BitSlice<T, O>
where T: BitStore, O: BitOrder,

This impl block contains no items.

Crate internals.

§

impl<T, O> BitSlice<T, O>
where T: BitStore + Radium, O: BitOrder,

Methods available only when T allows shared mutability.

pub fn set_aliased(&self, index: usize, value: bool)

Writes a new value into a single bit, using alias-safe operations.

This is equivalent to .set(), except that it does not require an &mut reference, and allows bit-slices with alias-safe storage to share write permissions.

§Parameters
  • &self: This method only exists on bit-slices with alias-safe storage, and so does not require exclusive access.
  • index: The bit index to set. It must be in 0 .. self.len().
  • value: The new bit-value to write into the bit at index.
§Panics

This panics if index is out of bounds.

§Examples
use bitvec::prelude::*;
use core::cell::Cell;

let bits: &BitSlice<_, _> = bits![Cell<usize>, Lsb0; 0, 1];
bits.set_aliased(0, true);
bits.set_aliased(1, false);

assert_eq!(bits, bits![1, 0]);

pub unsafe fn set_aliased_unchecked(&self, index: usize, value: bool)

Writes a new value into a single bit, using alias-safe operations and without bounds checking.

This is equivalent to .set_unchecked(), except that it does not require an &mut reference, and allows bit-slices with alias-safe storage to share write permissions.

§Parameters
  • &self: This method only exists on bit-slices with alias-safe storage, and so does not require exclusive access.
  • index: The bit index to set. It must be in 0 .. self.len().
  • value: The new bit-value to write into the bit at index.
§Safety

The caller must ensure that index is not out of bounds.

§Examples
use bitvec::prelude::*;
use core::cell::Cell;

let data = Cell::new(0u8);
let bits = &data.view_bits::<Lsb0>()[.. 2];
unsafe {
  bits.set_aliased_unchecked(3, true);
}
assert_eq!(data.get(), 8);
§

impl<T, O> BitSlice<T, O>
where T: BitStore, O: BitOrder,

Miscellaneous information.

pub const MAX_BITS: usize = 2_305_843_009_213_693_951usize

The inclusive maximum length of a BitSlice<T, _>.

As BitSlice is zero-indexed, the largest possible index is one less than this value.

CPU word width | Value
32 bits        | 0x1fff_ffff
64 bits        | 0x1fff_ffff_ffff_ffff
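
As a quick consistency check against the table above (a sketch, not a crate doctest):

use bitvec::prelude::*;

// usize::MAX >> 3 is 0x1fff_ffff on 32-bit targets and
// 0x1fff_ffff_ffff_ffff on 64-bit targets, matching both rows.
assert_eq!(BitSlice::<usize, Lsb0>::MAX_BITS, usize::MAX >> 3);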

pub const MAX_ELTS: usize = BitSpan<Const, T, O>::REGION_MAX_ELTS

The inclusive maximum length that a [T] slice can be for BitSlice<T, _> to cover it.

A BitSlice<T, _> that begins in the interior of an element and contains the maximum number of bits will extend one element past the cutoff that would occur if the bit-slice began at the zeroth bit. Such a bit-slice is difficult to manually construct, but would not otherwise fail.

Type Bits | Max Elements (32-bit) | Max Elements (64-bit)
8         | 0x0400_0001           | 0x0400_0000_0000_0001
16        | 0x0200_0001           | 0x0200_0000_0000_0001
32        | 0x0100_0001           | 0x0100_0000_0000_0001
64        | 0x0080_0001           | 0x0080_0000_0000_0001
§

impl<T, O> BitSlice<T, O>
where T: BitStore, O: BitOrder,

pub fn to_bitvec(&self) -> BitVec<<T as BitStore>::Unalias, O>

Available on crate feature alloc only.

Copies a bit-slice into an owned bit-vector.

Since the new vector is freshly owned, this gets marked as ::Unalias to remove any guards that may have been inserted by the bit-slice’s history.

This does not change the underlying memory type: a BitSlice<Cell<_>, _> will produce a BitVec<Cell<_>, _>.

§Original

slice::to_vec

§Examples
use bitvec::prelude::*;

let bits = bits![0, 1, 0, 1];
let bv = bits.to_bitvec();
assert_eq!(bits, bv);

Trait Implementations§

§

impl<A, O> AsMut<BitSlice<<A as BitView>::Store, O>> for BitArray<A, O>
where A: BitViewSized, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn as_mut(&mut self) -> &mut BitSlice<<A as BitView>::Store, O>

Converts this type into a mutable reference of the (usually inferred) input type.
§

impl<T, O> AsMut<BitSlice<T, O>> for BitBox<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn as_mut(&mut self) -> &mut BitSlice<T, O>

Converts this type into a mutable reference of the (usually inferred) input type.
§

impl<T, O> AsMut<BitSlice<T, O>> for BitSlice<T, O>
where T: BitStore, O: BitOrder,

§

fn as_mut(&mut self) -> &mut BitSlice<T, O>

Converts this type into a mutable reference of the (usually inferred) input type.
§

impl<T, O> AsMut<BitSlice<T, O>> for BitVec<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn as_mut(&mut self) -> &mut BitSlice<T, O>

Converts this type into a mutable reference of the (usually inferred) input type.
§

impl<A, O> AsRef<BitSlice<<A as BitView>::Store, O>> for BitArray<A, O>
where A: BitViewSized, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn as_ref(&self) -> &BitSlice<<A as BitView>::Store, O>

Converts this type into a shared reference of the (usually inferred) input type.
§

impl<T, O> AsRef<BitSlice<<T as BitStore>::Alias, O>> for IterMut<'_, T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn as_ref(&self) -> &BitSlice<<T as BitStore>::Alias, O>

Converts this type into a shared reference of the (usually inferred) input type.
§

impl<T, O> AsRef<BitSlice<T, O>> for BitBox<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn as_ref(&self) -> &BitSlice<T, O>

Converts this type into a shared reference of the (usually inferred) input type.
§

impl<T, O> AsRef<BitSlice<T, O>> for BitSlice<T, O>
where T: BitStore, O: BitOrder,

§

fn as_ref(&self) -> &BitSlice<T, O>

Converts this type into a shared reference of the (usually inferred) input type.
§

impl<T, O> AsRef<BitSlice<T, O>> for BitVec<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn as_ref(&self) -> &BitSlice<T, O>

Converts this type into a shared reference of the (usually inferred) input type.
§

impl<T, O> AsRef<BitSlice<T, O>> for Drain<'_, T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn as_ref(&self) -> &BitSlice<T, O>

Converts this type into a shared reference of the (usually inferred) input type.
§

impl<T, O> AsRef<BitSlice<T, O>> for IntoIter<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn as_ref(&self) -> &BitSlice<T, O>

Converts this type into a shared reference of the (usually inferred) input type.
§

impl<T, O> AsRef<BitSlice<T, O>> for Iter<'_, T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn as_ref(&self) -> &BitSlice<T, O>

Converts this type into a shared reference of the (usually inferred) input type.
§

impl<T, O> Binary for BitSlice<T, O>
where T: BitStore, O: BitOrder,

§Bit-Slice Rendering

This implementation prints the contents of a &BitSlice in one of binary, octal, or hexadecimal. It is important to note that this does not render the raw underlying memory! It renders the semantically-ordered contents of the bit-slice as numerals. This distinction matters if you use type parameters that differ from those presumed by your debugger (which is usually <u8, Msb0>).

The output separates the T elements as individual list items, and renders each element as a base- 2, 8, or 16 numeric string. When walking an element, the bits traversed by the bit-slice are considered to be stored in most-significant-bit-first ordering. This means that index [0] is the high bit of the left-most digit, and index [n] is the low bit of the right-most digit, in a given printed word.

In order to render according to expectations of the Arabic numeral system, an element being transcribed is chunked into digits from the least-significant end of its rendered form. This is most noticeable in octal, which will always have a smaller ceiling on the left-most digit in a printed word, while the right-most digit in that word is able to use the full 0 ..= 7 numeral range.

§Examples
use bitvec::prelude::*;

let data = [
  0b000000_10u8,
// digits print LTR
  0b10_001_101,
// significance is computed RTL
  0b01_000000,
];
let bits = &data.view_bits::<Msb0>()[6 .. 18];

assert_eq!(format!("{:b}", bits), "[10, 10001101, 01]");
assert_eq!(format!("{:o}", bits), "[2, 215, 1]");
assert_eq!(format!("{:X}", bits), "[2, 8D, 1]");

The {:#} format modifier causes the standard 0b, 0o, or 0x prefix to be applied to each printed word. The other format specifiers are not interpreted by this implementation, and apply to the entire rendered text, not to individual words.

§

fn fmt(&self, fmt: &mut Formatter<'_>) -> Result<(), Error>

Formats the value using the given formatter. Read more
§

impl<A, O> BitAndAssign<&BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>
where A: BitViewSized, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitand_assign(&mut self, rhs: &BitArray<A, O>)

Performs the &= operation. Read more
§

impl<T, O> BitAndAssign<&BitBox<T, O>> for BitSlice<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitand_assign(&mut self, rhs: &BitBox<T, O>)

Performs the &= operation. Read more
§

impl<T1, T2, O1, O2> BitAndAssign<&BitSlice<T2, O2>> for BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

§

fn bitand_assign(&mut self, rhs: &BitSlice<T2, O2>)

§Boolean Arithmetic

This merges another bit-slice into self with a Boolean arithmetic operation. If the other bit-slice is shorter than self, it is zero-extended. For BitAnd, this clears all excess bits of self to 0; for BitOr and BitXor, it leaves them untouched.

§Behavior

The Boolean operation proceeds across each bit-slice in iteration order. This is 3O(n) in the length of the shorter of self and rhs. However, it can be accelerated if rhs has the same type parameters as self, and both are using one of the orderings provided by bitvec. In this case, the implementation specializes to use BitField batch operations to operate on the slices one word at a time, rather than one bit.

Acceleration is not currently provided for custom bit-orderings that use the same storage type.

§Pre-1.0 Behavior

In the 0.x development series, Boolean arithmetic was implemented against all I: Iterator<Item = bool>. This allowed code such as bits |= [false, true];, but forbade acceleration in the most common use case (combining two bit-slices), because BitSlice is not such an iterator.

Usage surveys indicate that it is better for the arithmetic operators to operate on bit-slices, and to allow the possibility of specialized acceleration, rather than to allow folding against any iterator of bools.

If pre-1.0 code relies on this behavior specifically, and has non-BitSlice arguments to the Boolean sigils, then they will need to be replaced with the equivalent loop.

§Examples
use bitvec::prelude::*;

let a = bits![mut 0, 0, 1, 1];
let b = bits![    0, 1, 0, 1];

*a ^= b;
assert_eq!(a, bits![0, 1, 1, 0]);

let c = bits![mut 0, 0, 1, 1];
let d = [false, true, false, true];

// no longer allowed
// c &= d.into_iter().by_vals();
for (mut c, d) in c.iter_mut().zip(d.into_iter())
{
  *c ^= d;
}
assert_eq!(c, bits![0, 1, 1, 0]);
§

impl<T, O> BitAndAssign<&BitVec<T, O>> for BitSlice<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitand_assign(&mut self, rhs: &BitVec<T, O>)

Performs the &= operation. Read more
§

impl<A, O> BitAndAssign<BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>
where A: BitViewSized, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitand_assign(&mut self, rhs: BitArray<A, O>)

Performs the &= operation. Read more
§

impl<T, O> BitAndAssign<BitBox<T, O>> for BitSlice<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitand_assign(&mut self, rhs: BitBox<T, O>)

Performs the &= operation. Read more
§

impl<T, O> BitAndAssign<BitVec<T, O>> for BitSlice<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitand_assign(&mut self, rhs: BitVec<T, O>)

Performs the &= operation. Read more
§

impl<T> BitField for BitSlice<T>
where T: BitStore,

§Lsb0 Bit-Field Behavior

BitField has no requirements about the in-memory representation or layout of stored integers within a bit-slice, only that round-tripping an integer through a store and a load of the same element suffix on the same bit-slice is idempotent (with respect to sign truncation).

Lsb0 provides a contiguous translation from bit-index to real memory: for any given bit index n and its position P(n), P(n + 1) is P(n) + 1. This allows it to provide batched behavior: since the section of contiguous indices used within an element translates to a section of contiguous bits in real memory, the transaction is always a single shift/mask operation.

Each implemented method contains documentation and examples showing exactly how the abstract integer space is mapped to real memory.
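
A small sketch of the round-tripping guarantee (not from the crate's documentation):

use bitvec::prelude::*;

let mut raw = 0u16;
// Store and load through the same seven-bit region; the value survives.
raw.view_bits_mut::<Lsb0>()[3 .. 10].store::<u8>(0x4E);
assert_eq!(raw.view_bits::<Lsb0>()[3 .. 10].load::<u8>(), 0x4E);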

§

fn load_le<I>(&self) -> I
where I: Integral,

§Lsb0 Little-Endian Integer Loading

This implementation uses the Lsb0 bit-ordering to determine which bits in a partially-occupied memory element contain the contents of an integer to be loaded, using little-endian element ordering.

See the trait method definition for an overview of what element ordering means.

§Signed-Integer Loading

As described in the trait definition, when loading as a signed integer, the most significant bit loaded from memory is sign-extended to the full width of the returned type. In this method, that means the most-significant loaded bit of the final element.

§Examples

In each memory element, the Lsb0 ordering counts indices leftward from the right edge:

use bitvec::prelude::*;

let raw = 0b00_10110_0u8;
//           76 54321 0
//              ^ sign bit
assert_eq!(
  raw.view_bits::<Lsb0>()
     [1 .. 6]
     .load_le::<u8>(),
  0b000_10110,
);
assert_eq!(
  raw.view_bits::<Lsb0>()
     [1 .. 6]
     .load_le::<i8>(),
  0b111_10110u8 as i8,
);

In bit-slices that span multiple elements, the little-endian element ordering means that the slice index increases with numerical significance:

use bitvec::prelude::*;

let raw = [
  0x8_Fu8,
//  7 0
  0x0_1u8,
// 15 8
  0b1111_0010u8,
//       ^ sign bit
// 23       16
];
assert_eq!(
  raw.view_bits::<Lsb0>()
     [4 .. 20]
     .load_le::<u16>(),
  0x2018u16,
);

Note that while these examples use u8 storage for convenience in displaying the literals, BitField operates identically with any storage type. As most machines use little-endian byte ordering within wider element types, and bitvec exclusively operates on elements, the actual bytes of memory may rapidly start to behave oddly when translating between numeric literals and in-memory representation.

The user guide has a chapter that translates bit indices into memory positions for each combination of <T: BitStore, O: BitOrder>, and may be of additional use when choosing a combination of type parameters and load functions.

§

fn load_be<I>(&self) -> I
where I: Integral,

§Lsb0 Big-Endian Integer Loading

This implementation uses the Lsb0 bit-ordering to determine which bits in a partially-occupied memory element contain the contents of an integer to be loaded, using big-endian element ordering.

See the trait method definition for an overview of what element ordering means.

§Signed-Integer Loading

As described in the trait definition, when loading as a signed integer, the most significant bit loaded from memory is sign-extended to the full width of the returned type. In this method, that means the most-significant loaded bit of the first element.

§Examples

In each memory element, the Lsb0 ordering counts indices leftward from the right edge:

use bitvec::prelude::*;

let raw = 0b00_10110_0u8;
//           76 54321 0
//              ^ sign bit
assert_eq!(
  raw.view_bits::<Lsb0>()
     [1 .. 6]
     .load_be::<u8>(),
  0b000_10110,
);
assert_eq!(
  raw.view_bits::<Lsb0>()
     [1 .. 6]
     .load_be::<i8>(),
  0b111_10110u8 as i8,
);

In bit-slices that span multiple elements, the big-endian element ordering means that the slice index increases while numeric significance decreases:

use bitvec::prelude::*;

let raw = [
  0b0010_1111u8,
//  ^ sign bit
//  7       0
  0x0_1u8,
// 15 8
  0xF_8u8,
// 23 16
];
assert_eq!(
  raw.view_bits::<Lsb0>()
     [4 .. 20]
     .load_be::<u16>(),
  0x2018u16,
);

Note that while these examples use u8 storage for convenience in displaying the literals, BitField operates identically with any storage type. As most machines use little-endian byte ordering within wider element types, and bitvec exclusively operates on elements, the actual bytes of memory may rapidly start to behave oddly when translating between numeric literals and in-memory representation.

The user guide has a chapter that translates bit indices into memory positions for each combination of <T: BitStore, O: BitOrder>, and may be of additional use when choosing a combination of type parameters and load functions.

§

fn store_le<I>(&mut self, value: I)
where I: Integral,

§Lsb0 Little-Endian Integer Storing

This implementation uses the Lsb0 bit-ordering to determine which bits in a partially-occupied memory element are used for storage, using little-endian element ordering.

See the trait method definition for an overview of what element ordering means.

§Narrowing Behavior

Integers are truncated from the high end. When storing into a bit-slice of length n, the n least numerically significant bits are stored, and any remaining high bits are ignored.

Be aware of this behavior if you are storing signed integers! The signed integer -14i8 (bit pattern 0b1111_0010u8) will, when stored into and loaded back from a 4-bit slice, become the value 2i8.

§Examples
use bitvec::prelude::*;

let mut raw = 0u8;
raw.view_bits_mut::<Lsb0>()
   [1 .. 6]
   .store_le(22u8);
assert_eq!(raw, 0b00_10110_0);
//                 76 54321 0
raw.view_bits_mut::<Lsb0>()
   [1 .. 6]
   .store_le(-10i8);
assert_eq!(raw, 0b00_10110_0);

In bit-slices that span multiple elements, the little-endian element ordering means that the slice index increases with numerical significance:

use bitvec::prelude::*;

let mut raw = [!0u8; 3];
raw.view_bits_mut::<Lsb0>()
   [4 .. 20]
   .store_le(0x2018u16);
assert_eq!(raw, [
  0x8_F,
//  7 0
  0x0_1,
// 15 8
  0xF_2,
// 23 16
]);

Note that while these examples use u8 storage for convenience in displaying the literals, BitField operates identically with any storage type. As most machines use little-endian byte ordering within wider element types, and bitvec exclusively operates on elements, the actual bytes of memory may rapidly start to behave oddly when translating between numeric literals and in-memory representation.

The user guide has a chapter that translates bit indices into memory positions for each combination of <T: BitStore, O: BitOrder>, and may be of additional use when choosing a combination of type parameters and store functions.

§

fn store_be<I>(&mut self, value: I)
where I: Integral,

§Lsb0 Big-Endian Integer Storing

This implementation uses the Lsb0 bit-ordering to determine which bits in a partially-occupied memory element are used for storage, using big-endian element ordering.

See the trait method definition for an overview of what element ordering means.

§Narrowing Behavior

Integers are truncated from the high end. When storing into a bit-slice of length n, the n least numerically significant bits are stored, and any remaining high bits are ignored.

Be aware of this behavior if you are storing signed integers! The signed integer -14i8 (bit pattern 0b1111_0010u8) will, when stored into and loaded back from a 4-bit slice, become the value 2i8.

§Examples
use bitvec::prelude::*;

let mut raw = 0u8;
raw.view_bits_mut::<Lsb0>()
   [1 .. 6]
   .store_be(22u8);
assert_eq!(raw, 0b00_10110_0);
//                 76 54321 0
raw.view_bits_mut::<Lsb0>()
   [1 .. 6]
   .store_be(-10i8);
assert_eq!(raw, 0b00_10110_0);

In bit-slices that span multiple elements, the big-endian element ordering means that the slice index increases while numerical significance decreases:

use bitvec::prelude::*;

let mut raw = [!0u8; 3];
raw.view_bits_mut::<Lsb0>()
   [4 .. 20]
   .store_be(0x2018u16);
assert_eq!(raw, [
  0x2_F,
//  7 0
  0x0_1,
// 15 8
  0xF_8,
// 23 16
]);

Note that while these examples use u8 storage for convenience in displaying the literals, BitField operates identically with any storage type. As most machines use little-endian byte ordering within wider element types, and bitvec exclusively operates on elements, the actual bytes of memory may rapidly start to behave oddly when translating between numeric literals and in-memory representation.

The user guide has a chapter that translates bit indices into memory positions for each combination of <T: BitStore, O: BitOrder>, and may be of additional use when choosing a combination of type parameters and store functions.

§

fn load<I>(&self) -> I
where I: Integral,

Available on non-tarpaulin_include only.
Integer Loading Read more
§

fn store<I>(&mut self, value: I)
where I: Integral,

Available on non-tarpaulin_include only.
Integer Storing Read more
§

impl<T> BitField for BitSlice<T, Msb0>
where T: BitStore,

§Msb0 Bit-Field Behavior

BitField has no requirements about the in-memory representation or layout of stored integers within a bit-slice, only that round-tripping an integer through a store and a load of the same element suffix on the same bit-slice is idempotent (with respect to sign truncation).

Msb0 provides a contiguous translation from bit-index to real memory: for any given bit index n and its position P(n), P(n + 1) is P(n) - 1. This allows it to provide batched behavior: since the section of contiguous indices used within an element translates to a section of contiguous bits in real memory, the transaction is always a single shift-mask operation.

Each implemented method contains documentation and examples showing exactly how the abstract integer space is mapped to real memory.

§Notes

In particular, note that while Msb0 indexes bits from the most significant down to the least, and integers index from the least up to the most, this does not reörder any bits of the integer value! This ordering only finds a region in real memory; it does not affect the partial-integer contents stored in that region.
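
As with the Lsb0 implementation, a small sketch of the round-tripping guarantee (not from the crate's documentation): the Msb0 ordering selects a different region of real memory, but the loaded value matches the stored one.

use bitvec::prelude::*;

let mut raw = 0u16;
// Store and load through the same seven-bit region; the value survives.
raw.view_bits_mut::<Msb0>()[3 .. 10].store::<u8>(0x4E);
assert_eq!(raw.view_bits::<Msb0>()[3 .. 10].load::<u8>(), 0x4E);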

§

fn load_le<I>(&self) -> I
where I: Integral,

§Msb0 Little-Endian Integer Loading

This implementation uses the Msb0 bit-ordering to determine which bits in a partially-occupied memory element contain the contents of an integer to be loaded, using little-endian element ordering.

See the trait method definition for an overview of what element ordering means.

§Signed-Integer Loading

As described in the trait definition, when loading as a signed integer, the most significant bit loaded from memory is sign-extended to the full width of the returned type. In this method, that means the most-significant loaded bit of the final element.

§Examples

In each memory element, the Msb0 ordering counts indices rightward from the left edge:

use bitvec::prelude::*;

let raw = 0b00_10110_0u8;
//           01 23456 7
//              ^ sign bit
assert_eq!(
  raw.view_bits::<Msb0>()
     [2 .. 7]
     .load_le::<u8>(),
  0b000_10110,
);
assert_eq!(
  raw.view_bits::<Msb0>()
     [2 .. 7]
     .load_le::<i8>(),
  0b111_10110u8 as i8,
);

In bit-slices that span multiple elements, the little-endian element ordering means that the slice index increases with numerical significance:

use bitvec::prelude::*;

let raw = [
  0xF_8u8,
//  0 7
  0x0_1u8,
//  8 15
  0b0010_1111u8,
//  ^ sign bit
// 16       23
];
assert_eq!(
  raw.view_bits::<Msb0>()
     [4 .. 20]
     .load_le::<u16>(),
  0x2018u16,
);

Note that while these examples use u8 storage for convenience in displaying the literals, BitField operates identically with any storage type. As most machines use little-endian byte ordering within wider element types, and bitvec exclusively operates on elements, the actual bytes of memory may rapidly start to behave oddly when translating between numeric literals and in-memory representation.

The user guide has a chapter that translates bit indices into memory positions for each combination of <T: BitStore, O: BitOrder>, and may be of additional use when choosing a combination of type parameters and load functions.

§

fn load_be<I>(&self) -> I
where I: Integral,

§Msb0 Big-Endian Integer Loading

This implementation uses the Msb0 bit-ordering to determine which bits in a partially-occupied memory element contain the contents of an integer to be loaded, using big-endian element ordering.

See the trait method definition for an overview of what element ordering means.

§Signed-Integer Loading

As described in the trait definition, when loading as a signed integer, the most significant bit loaded from memory is sign-extended to the full width of the returned type. In this method, that means the most-significant loaded bit of the first element.

§Examples

In each memory element, the Msb0 ordering counts indices rightward from the left edge:

use bitvec::prelude::*;

let raw = 0b00_10110_0u8;
//           01 23456 7
//              ^ sign bit
assert_eq!(
  raw.view_bits::<Msb0>()
     [2 .. 7]
     .load_be::<u8>(),
  0b000_10110,
);
assert_eq!(
  raw.view_bits::<Msb0>()
     [2 .. 7]
     .load_be::<i8>(),
  0b111_10110u8 as i8,
);

In bit-slices that span multiple elements, the big-endian element ordering means that the slice index increases while numerical significance decreases:

use bitvec::prelude::*;

let raw = [
  0b1111_0010u8,
//       ^ sign bit
//  0       7
  0x0_1u8,
//  8 15
  0x8_Fu8,
// 16 23
];
assert_eq!(
  raw.view_bits::<Msb0>()
     [4 .. 20]
     .load_be::<u16>(),
  0x2018u16,
);

Note that while these examples use u8 storage for convenience in displaying the literals, BitField operates identically with any storage type. As most machines use little-endian byte ordering within wider element types, and bitvec exclusively operates on elements, the actual bytes of memory may rapidly start to behave oddly when translating between numeric literals and in-memory representation.

The user guide has a chapter that translates bit indices into memory positions for each combination of <T: BitStore, O: BitOrder>, and may be of additional use when choosing a combination of type parameters and load functions.

§

fn store_le<I>(&mut self, value: I)
where I: Integral,

§Msb0 Little-Endian Integer Storing

This implementation uses the Msb0 bit-ordering to determine which bits in a partially-occupied memory element are used for storage, using little-endian element ordering.

See the trait method definition for an overview of what element ordering means.

§Narrowing Behavior

Integers are truncated from the high end. When storing into a bit-slice of length n, the n least numerically significant bits are stored, and any remaining high bits are ignored.

Be aware of this behavior if you are storing signed integers! The signed integer -14i8 (bit pattern 0b1111_0010u8) will, when stored into and loaded back from a 4-bit slice, become the value 2i8.

§Examples
use bitvec::prelude::*;

let mut raw = 0u8;
raw.view_bits_mut::<Msb0>()
   [2 .. 7]
   .store_le(22u8);
assert_eq!(raw, 0b00_10110_0);
//                 01 23456 7
raw.view_bits_mut::<Msb0>()
   [2 .. 7]
   .store_le(-10i8);
assert_eq!(raw, 0b00_10110_0);

In bit-slices that span multiple elements, the little-endian element ordering means that the slice index increases with numerical significance:

use bitvec::prelude::*;

let mut raw = [!0u8; 3];
raw.view_bits_mut::<Msb0>()
   [4 .. 20]
   .store_le(0x2018u16);
assert_eq!(raw, [
  0xF_8,
//  0 7
  0x0_1,
//  8 15
  0x2_F,
// 16 23
]);

Note that while these examples use u8 storage for convenience in displaying the literals, BitField operates identically with any storage type. As most machines use little-endian byte ordering within wider element types, and bitvec exclusively operates on elements, the actual bytes of memory may rapidly start to behave oddly when translating between numeric literals and in-memory representation.

The user guide has a chapter that translates bit indices into memory positions for each combination of <T: BitStore, O: BitOrder>, and may be of additional use when choosing a combination of type parameters and store functions.

§

fn store_be<I>(&mut self, value: I)
where I: Integral,

§Msb0 Big-Endian Integer Storing

This implementation uses the Msb0 bit-ordering to determine which bits in a partially-occupied memory element are used for storage, using big-endian element ordering.

See the trait method definition for an overview of what element ordering means.

§Narrowing Behavior

Integers are truncated from the high end. When storing into a bit-slice of length n, the n least numerically significant bits are stored, and any remaining high bits are ignored.

Be aware of this behavior if you are storing signed integers! The signed integer -14i8 (bit pattern 0b1111_0010u8) will, when stored into and loaded back from a 4-bit slice, become the value 2i8.

§Examples
use bitvec::prelude::*;

let mut raw = 0u8;
raw.view_bits_mut::<Msb0>()
   [2 .. 7]
   .store_be(22u8);
assert_eq!(raw, 0b00_10110_0);
//                 01 23456 7
raw.view_bits_mut::<Msb0>()
   [2 .. 7]
   .store_be(-10i8);
assert_eq!(raw, 0b00_10110_0);

In bit-slices that span multiple elements, the big-endian element ordering means that the slice index increases while numerical significance decreases:

use bitvec::prelude::*;

let mut raw = [!0u8; 3];
raw.view_bits_mut::<Msb0>()
   [4 .. 20]
   .store_be(0x2018u16);
assert_eq!(raw, [
  0xF_2,
//  0 7
  0x0_1,
//  8 15
  0x8_F,
// 16 23
]);

Note that while these examples use u8 storage for convenience in displaying the literals, BitField operates identically with any storage type. As most machines use little-endian byte ordering within wider element types, and bitvec exclusively operates on elements, the actual bytes of memory may rapidly start to behave oddly when translating between numeric literals and in-memory representation.

The user guide has a chapter that translates bit indices into memory positions for each combination of <T: BitStore, O: BitOrder>, and may be of additional use when choosing a combination of type parameters and store functions.

§

fn load<I>(&self) -> I
where I: Integral,

Available on non-tarpaulin_include only.
Integer Loading Read more
§

fn store<I>(&mut self, value: I)
where I: Integral,

Available on non-tarpaulin_include only.
Integer Storing Read more
§

impl<A, O> BitOrAssign<&BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>
where A: BitViewSized, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitor_assign(&mut self, rhs: &BitArray<A, O>)

Performs the |= operation. Read more
§

impl<T, O> BitOrAssign<&BitBox<T, O>> for BitSlice<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitor_assign(&mut self, rhs: &BitBox<T, O>)

Performs the |= operation. Read more
§

impl<T1, T2, O1, O2> BitOrAssign<&BitSlice<T2, O2>> for BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

§

fn bitor_assign(&mut self, rhs: &BitSlice<T2, O2>)

§Boolean Arithmetic

This merges another bit-slice into self with a Boolean arithmetic operation. If the other bit-slice is shorter than self, it is zero-extended: for BitAnd, this clears all excess bits of self to 0; for BitOr and BitXor, it leaves them untouched.
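
For example, a minimal sketch of the zero-extension rule with |= and a shorter right-hand operand:

use bitvec::prelude::*;

let a = bits![mut 1, 0, 1, 1];
let b = bits![    0, 1];

// `b` is zero-extended to [0, 1, 0, 0]; the trailing bits of `a` are untouched.
*a |= b;
assert_eq!(a, bits![1, 1, 1, 1]);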

§Behavior

The Boolean operation proceeds across each bit-slice in iteration order. This is O(n) in the length of the shorter of self and rhs. However, it can be accelerated if rhs has the same type parameters as self and both use one of the orderings provided by bitvec. In this case, the implementation specializes to use BitField batch operations, working on the slices one word at a time rather than one bit at a time.

Acceleration is not currently provided for custom bit-orderings that use the same storage type.

§Pre-1.0 Behavior

In the 0.x development series, Boolean arithmetic was implemented against all I: Iterator<Item = bool>. This allowed code such as bits |= [false, true];, but forbade acceleration in the most common use case (combining two bit-slices), because BitSlice is not such an iterator.

Usage surveys indicate that it is better for the arithmetic operators to operate on bit-slices, and to allow the possibility of specialized acceleration, rather than to allow folding against any iterator of bools.

If pre-1.0 code relies on this behavior specifically, and passes non-BitSlice arguments to the Boolean sigils, those call sites will need to be rewritten as the equivalent loop.

§Examples
use bitvec::prelude::*;

let a = bits![mut 0, 0, 1, 1];
let b = bits![    0, 1, 0, 1];

*a ^= b;
assert_eq!(a, bits![0, 1, 1, 0]);

let c = bits![mut 0, 0, 1, 1];
let d = [false, true, false, true];

// no longer allowed
// c &= d.into_iter().by_vals();
for (mut c, d) in c.iter_mut().zip(d.into_iter())
{
  *c ^= d;
}
assert_eq!(c, bits![0, 1, 1, 0]);
§

impl<T, O> BitOrAssign<&BitVec<T, O>> for BitSlice<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitor_assign(&mut self, rhs: &BitVec<T, O>)

Performs the |= operation. Read more
§

impl<A, O> BitOrAssign<BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>
where A: BitViewSized, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitor_assign(&mut self, rhs: BitArray<A, O>)

Performs the |= operation. Read more
§

impl<T, O> BitOrAssign<BitBox<T, O>> for BitSlice<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitor_assign(&mut self, rhs: BitBox<T, O>)

Performs the |= operation. Read more
§

impl<T, O> BitOrAssign<BitVec<T, O>> for BitSlice<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitor_assign(&mut self, rhs: BitVec<T, O>)

Performs the |= operation. Read more
§

impl<A, O> BitXorAssign<&BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>
where A: BitViewSized, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitxor_assign(&mut self, rhs: &BitArray<A, O>)

Performs the ^= operation. Read more
§

impl<T, O> BitXorAssign<&BitBox<T, O>> for BitSlice<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitxor_assign(&mut self, rhs: &BitBox<T, O>)

Performs the ^= operation. Read more
§

impl<T1, T2, O1, O2> BitXorAssign<&BitSlice<T2, O2>> for BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

§

fn bitxor_assign(&mut self, rhs: &BitSlice<T2, O2>)

§Boolean Arithmetic

This merges another bit-slice into self with a Boolean arithmetic operation. If the other bit-slice is shorter than self, it is zero-extended: for BitAnd, this clears all excess bits of self to 0; for BitOr and BitXor, it leaves them untouched.

§Behavior

The Boolean operation proceeds across each bit-slice in iteration order. This is O(n) in the length of the shorter of self and rhs. However, it can be accelerated if rhs has the same type parameters as self and both use one of the orderings provided by bitvec. In this case, the implementation specializes to use BitField batch operations, working on the slices one word at a time rather than one bit at a time.

Acceleration is not currently provided for custom bit-orderings that use the same storage type.

§Pre-1.0 Behavior

In the 0.x development series, Boolean arithmetic was implemented against all I: Iterator<Item = bool>. This allowed code such as bits |= [false, true];, but forbade acceleration in the most common use case (combining two bit-slices), because BitSlice is not such an iterator.

Usage surveys indicate that it is better for the arithmetic operators to operate on bit-slices, and to allow the possibility of specialized acceleration, rather than to allow folding against any iterator of bools.

If pre-1.0 code relies on this behavior specifically, and passes non-BitSlice arguments to the Boolean sigils, those call sites will need to be rewritten as the equivalent loop.

§Examples
use bitvec::prelude::*;

let a = bits![mut 0, 0, 1, 1];
let b = bits![    0, 1, 0, 1];

*a ^= b;
assert_eq!(a, bits![0, 1, 1, 0]);

let c = bits![mut 0, 0, 1, 1];
let d = [false, true, false, true];

// no longer allowed
// c &= d.into_iter().by_vals();
for (mut c, d) in c.iter_mut().zip(d.into_iter())
{
  *c ^= d;
}
assert_eq!(c, bits![0, 1, 1, 0]);
§

impl<T, O> BitXorAssign<&BitVec<T, O>> for BitSlice<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitxor_assign(&mut self, rhs: &BitVec<T, O>)

Performs the ^= operation. Read more
§

impl<A, O> BitXorAssign<BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>
where A: BitViewSized, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitxor_assign(&mut self, rhs: BitArray<A, O>)

Performs the ^= operation. Read more
§

impl<T, O> BitXorAssign<BitBox<T, O>> for BitSlice<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitxor_assign(&mut self, rhs: BitBox<T, O>)

Performs the ^= operation. Read more
§

impl<T, O> BitXorAssign<BitVec<T, O>> for BitSlice<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn bitxor_assign(&mut self, rhs: BitVec<T, O>)

Performs the ^= operation. Read more
§

impl<A, O> Borrow<BitSlice<<A as BitView>::Store, O>> for BitArray<A, O>
where A: BitViewSized, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn borrow(&self) -> &BitSlice<<A as BitView>::Store, O>

Immutably borrows from an owned value. Read more
§

impl<T, O> Borrow<BitSlice<T, O>> for BitBox<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn borrow(&self) -> &BitSlice<T, O>

Immutably borrows from an owned value. Read more
§

impl<T, O> Borrow<BitSlice<T, O>> for BitVec<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn borrow(&self) -> &BitSlice<T, O>

Immutably borrows from an owned value. Read more
§

impl<A, O> BorrowMut<BitSlice<<A as BitView>::Store, O>> for BitArray<A, O>
where A: BitViewSized, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn borrow_mut(&mut self) -> &mut BitSlice<<A as BitView>::Store, O>

Mutably borrows from an owned value. Read more
§

impl<T, O> BorrowMut<BitSlice<T, O>> for BitBox<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn borrow_mut(&mut self) -> &mut BitSlice<T, O>

Mutably borrows from an owned value. Read more
§

impl<T, O> BorrowMut<BitSlice<T, O>> for BitVec<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn borrow_mut(&mut self) -> &mut BitSlice<T, O>

Mutably borrows from an owned value. Read more
§

impl<T, O> Debug for BitSlice<T, O>
where T: BitStore, O: BitOrder,

§

fn fmt(&self, fmt: &mut Formatter<'_>) -> Result<(), Error>

Formats the value using the given formatter. Read more
§

impl<T, O> Default for &BitSlice<T, O>
where T: BitStore, O: BitOrder,

§

fn default() -> &BitSlice<T, O>

Returns the “default value” for a type. Read more
§

impl<T, O> Default for &mut BitSlice<T, O>
where T: BitStore, O: BitOrder,

§

fn default() -> &mut BitSlice<T, O>

Returns the “default value” for a type. Read more
§

impl<'de, O> Deserialize<'de> for &'de BitSlice<u8, O>
where O: BitOrder,

§

fn deserialize<D>( deserializer: D, ) -> Result<&'de BitSlice<u8, O>, <D as Deserializer<'de>>::Error>
where D: Deserializer<'de>,

Deserialize this value from the given Serde deserializer. Read more
§

impl<T, O> Display for BitSlice<T, O>
where T: BitStore, O: BitOrder,

§

fn fmt(&self, fmt: &mut Formatter<'_>) -> Result<(), Error>

Formats the value using the given formatter. Read more
§

impl<T, O> From<&BitSlice<T, O>> for BitBox<T, O>
where T: BitStore, O: BitOrder,

§

fn from(slice: &BitSlice<T, O>) -> BitBox<T, O>

Converts to this type from the input type.
§

impl<T, O> From<&BitSlice<T, O>> for BitVec<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn from(slice: &BitSlice<T, O>) -> BitVec<T, O>

Converts to this type from the input type.
§

impl<T, O> From<&mut BitSlice<T, O>> for BitVec<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn from(slice: &mut BitSlice<T, O>) -> BitVec<T, O>

Converts to this type from the input type.
§

impl<T, O> Hash for BitSlice<T, O>
where T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn hash<H>(&self, hasher: &mut H)
where H: Hasher,

Feeds this value into the given Hasher. Read more
§

impl<T, O> Index<Range<usize>> for BitSlice<T, O>
where O: BitOrder, T: BitStore,

§

type Output = BitSlice<T, O>

The returned type after indexing.
§

fn index( &self, index: Range<usize>, ) -> &<BitSlice<T, O> as Index<Range<usize>>>::Output

Performs the indexing (container[index]) operation. Read more
§

impl<T, O> Index<RangeFrom<usize>> for BitSlice<T, O>
where O: BitOrder, T: BitStore,

§

type Output = BitSlice<T, O>

The returned type after indexing.
§

fn index( &self, index: RangeFrom<usize>, ) -> &<BitSlice<T, O> as Index<RangeFrom<usize>>>::Output

Performs the indexing (container[index]) operation. Read more
§

impl<T, O> Index<RangeFull> for BitSlice<T, O>
where O: BitOrder, T: BitStore,

§

type Output = BitSlice<T, O>

The returned type after indexing.
§

fn index( &self, index: RangeFull, ) -> &<BitSlice<T, O> as Index<RangeFull>>::Output

Performs the indexing (container[index]) operation. Read more
§

impl<T, O> Index<RangeInclusive<usize>> for BitSlice<T, O>
where O: BitOrder, T: BitStore,

§

type Output = BitSlice<T, O>

The returned type after indexing.
§

fn index( &self, index: RangeInclusive<usize>, ) -> &<BitSlice<T, O> as Index<RangeInclusive<usize>>>::Output

Performs the indexing (container[index]) operation. Read more
§

impl<T, O> Index<RangeTo<usize>> for BitSlice<T, O>
where O: BitOrder, T: BitStore,

§

type Output = BitSlice<T, O>

The returned type after indexing.
§

fn index( &self, index: RangeTo<usize>, ) -> &<BitSlice<T, O> as Index<RangeTo<usize>>>::Output

Performs the indexing (container[index]) operation. Read more
§

impl<T, O> Index<RangeToInclusive<usize>> for BitSlice<T, O>
where O: BitOrder, T: BitStore,

§

type Output = BitSlice<T, O>

The returned type after indexing.
§

fn index( &self, index: RangeToInclusive<usize>, ) -> &<BitSlice<T, O> as Index<RangeToInclusive<usize>>>::Output

Performs the indexing (container[index]) operation. Read more
§

impl<T, O> Index<usize> for BitSlice<T, O>
where T: BitStore, O: BitOrder,

§

fn index(&self, index: usize) -> &<BitSlice<T, O> as Index<usize>>::Output

Looks up a single bit by its semantic index.

§Examples
use bitvec::prelude::*;

let bits = bits![u8, Msb0; 0, 1, 0];
assert!(!bits[0]); // -----^  |  |
assert!( bits[1]); // --------^  |
assert!(!bits[2]); // -----------^

If the index is greater than or equal to the length, indexing will panic.

The below test will panic when accessing index 1, as only index 0 is valid.

use bitvec::prelude::*;

let bits = bits![0,  ];
bits[1]; // --------^
§

type Output = bool

The returned type after indexing.
§

impl<T, O> IndexMut<Range<usize>> for BitSlice<T, O>
where O: BitOrder, T: BitStore,

§

fn index_mut( &mut self, index: Range<usize>, ) -> &mut <BitSlice<T, O> as Index<Range<usize>>>::Output

Performs the mutable indexing (container[index]) operation. Read more
§

impl<T, O> IndexMut<RangeFrom<usize>> for BitSlice<T, O>
where O: BitOrder, T: BitStore,

§

fn index_mut( &mut self, index: RangeFrom<usize>, ) -> &mut <BitSlice<T, O> as Index<RangeFrom<usize>>>::Output

Performs the mutable indexing (container[index]) operation. Read more
§

impl<T, O> IndexMut<RangeFull> for BitSlice<T, O>
where O: BitOrder, T: BitStore,

§

fn index_mut( &mut self, index: RangeFull, ) -> &mut <BitSlice<T, O> as Index<RangeFull>>::Output

Performs the mutable indexing (container[index]) operation. Read more
§

impl<T, O> IndexMut<RangeInclusive<usize>> for BitSlice<T, O>
where O: BitOrder, T: BitStore,

§

fn index_mut( &mut self, index: RangeInclusive<usize>, ) -> &mut <BitSlice<T, O> as Index<RangeInclusive<usize>>>::Output

Performs the mutable indexing (container[index]) operation. Read more
§

impl<T, O> IndexMut<RangeTo<usize>> for BitSlice<T, O>
where O: BitOrder, T: BitStore,

§

fn index_mut( &mut self, index: RangeTo<usize>, ) -> &mut <BitSlice<T, O> as Index<RangeTo<usize>>>::Output

Performs the mutable indexing (container[index]) operation. Read more
§

impl<T, O> IndexMut<RangeToInclusive<usize>> for BitSlice<T, O>
where O: BitOrder, T: BitStore,

§

fn index_mut( &mut self, index: RangeToInclusive<usize>, ) -> &mut <BitSlice<T, O> as Index<RangeToInclusive<usize>>>::Output

Performs the mutable indexing (container[index]) operation. Read more
§

impl<'a, T, O> IntoIterator for &'a BitSlice<T, O>
where T: 'a + BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

type IntoIter = Iter<'a, T, O>

Which kind of iterator are we turning this into?
§

type Item = <<&'a BitSlice<T, O> as IntoIterator>::IntoIter as Iterator>::Item

The type of the elements being iterated over.
§

fn into_iter(self) -> <&'a BitSlice<T, O> as IntoIterator>::IntoIter

Creates an iterator from a value. Read more
§

impl<'a, T, O> IntoIterator for &'a mut BitSlice<T, O>
where T: 'a + BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

type IntoIter = IterMut<'a, T, O>

Which kind of iterator are we turning this into?
§

type Item = <<&'a mut BitSlice<T, O> as IntoIterator>::IntoIter as Iterator>::Item

The type of the elements being iterated over.
§

fn into_iter(self) -> <&'a mut BitSlice<T, O> as IntoIterator>::IntoIter

Creates an iterator from a value. Read more
§

impl<T, O> LowerHex for BitSlice<T, O>
where T: BitStore, O: BitOrder,

§Bit-Slice Rendering

This implementation prints the contents of a &BitSlice in one of binary, octal, or hexadecimal. It is important to note that this does not render the raw underlying memory! It renders the semantically-ordered contents of the bit-slice as numerals. This distinction matters if you use type parameters that differ from those presumed by your debugger (which is usually <u8, Msb0>).

The output separates the T elements as individual list items, and renders each element as a base-2, -8, or -16 numeric string. When walking an element, the bits traversed by the bit-slice are considered to be stored in most-significant-bit-first order. This means that, in a given printed word, index [0] is the high bit of the left-most digit and index [n] is the low bit of the right-most digit.

In order to render according to expectations of the Arabic numeral system, an element being transcribed is chunked into digits from the least-significant end of its rendered form. This is most noticeable in octal, which will always have a smaller ceiling on the left-most digit in a printed word, while the right-most digit in that word is able to use the full 0 ..= 7 numeral range.

§Examples
use bitvec::prelude::*;

let data = [
  0b000000_10u8,
// digits print LTR
  0b10_001_101,
// significance is computed RTL
  0b01_000000,
];
let bits = &data.view_bits::<Msb0>()[6 .. 18];

assert_eq!(format!("{:b}", bits), "[10, 10001101, 01]");
assert_eq!(format!("{:o}", bits), "[2, 215, 1]");
assert_eq!(format!("{:X}", bits), "[2, 8D, 1]");

The {:#} format modifier causes the standard 0b, 0o, or 0x prefix to be applied to each printed word. The other format specifiers are not interpreted by this implementation, and apply to the entire rendered text, not to individual words.

§

fn fmt(&self, fmt: &mut Formatter<'_>) -> Result<(), Error>

Formats the value using the given formatter. Read more
§

impl<'a, T, O> Not for &'a mut BitSlice<T, O>
where T: BitStore, O: BitOrder,

Inverts each bit in the bit-slice.

Unlike the &, |, and ^ operators, this implementation is guaranteed to update each memory element only once, and is not required to traverse every live bit in the underlying region.
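
A minimal sketch of the in-place inversion:

use bitvec::prelude::*;

let bits = bits![mut u8, Msb0; 0, 1, 0, 0];

// `!` flips every bit in place and returns the same `&mut BitSlice` handle.
let flipped = !bits;
assert_eq!(flipped, bits![1, 0, 1, 1]);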

§

type Output = &'a mut BitSlice<T, O>

The resulting type after applying the ! operator.
§

fn not(self) -> <&'a mut BitSlice<T, O> as Not>::Output

Performs the unary ! operation. Read more
§

impl<T, O> Octal for BitSlice<T, O>
where T: BitStore, O: BitOrder,

§Bit-Slice Rendering

This implementation prints the contents of a &BitSlice in one of binary, octal, or hexadecimal. It is important to note that this does not render the raw underlying memory! It renders the semantically-ordered contents of the bit-slice as numerals. This distinction matters if you use type parameters that differ from those presumed by your debugger (which is usually <u8, Msb0>).

The output separates the T elements as individual list items, and renders each element as a base-2, -8, or -16 numeric string. When walking an element, the bits traversed by the bit-slice are considered to be stored in most-significant-bit-first order. This means that, in a given printed word, index [0] is the high bit of the left-most digit and index [n] is the low bit of the right-most digit.

In order to render according to expectations of the Arabic numeral system, an element being transcribed is chunked into digits from the least-significant end of its rendered form. This is most noticeable in octal, which will always have a smaller ceiling on the left-most digit in a printed word, while the right-most digit in that word is able to use the full 0 ..= 7 numeral range.

§Examples
use bitvec::prelude::*;

let data = [
  0b000000_10u8,
// digits print LTR
  0b10_001_101,
// significance is computed RTL
  0b01_000000,
];
let bits = &data.view_bits::<Msb0>()[6 .. 18];

assert_eq!(format!("{:b}", bits), "[10, 10001101, 01]");
assert_eq!(format!("{:o}", bits), "[2, 215, 1]");
assert_eq!(format!("{:X}", bits), "[2, 8D, 1]");

The {:#} format modifier causes the standard 0b, 0o, or 0x prefix to be applied to each printed word. The other format specifiers are not interpreted by this implementation, and apply to the entire rendered text, not to individual words.

§

fn fmt(&self, fmt: &mut Formatter<'_>) -> Result<(), Error>

Formats the value using the given formatter. Read more
§

impl<T, O> Ord for BitSlice<T, O>
where T: BitStore, O: BitOrder,

§

fn cmp(&self, rhs: &BitSlice<T, O>) -> Ordering

This method returns an Ordering between self and other. Read more
§

impl<T1, T2, O1, O2> PartialEq<&BitSlice<T2, O2>> for BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

§

fn eq(&self, rhs: &&BitSlice<T2, O2>) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
§

impl<T1, T2, O1, O2> PartialEq<&mut BitSlice<T2, O2>> for BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

§

fn eq(&self, rhs: &&mut BitSlice<T2, O2>) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
§

impl<O1, A, O2, T> PartialEq<BitArray<A, O2>> for BitSlice<T, O1>
where O1: BitOrder, O2: BitOrder, A: BitViewSized, T: BitStore,

Available on non-tarpaulin_include only.
§

fn eq(&self, other: &BitArray<A, O2>) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
§

impl<O1, O2, T1, T2> PartialEq<BitBox<T2, O2>> for &BitSlice<T1, O1>
where O1: BitOrder, O2: BitOrder, T1: BitStore, T2: BitStore,

Available on non-tarpaulin_include only.
§

fn eq(&self, other: &BitBox<T2, O2>) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
§

impl<O1, O2, T1, T2> PartialEq<BitBox<T2, O2>> for &mut BitSlice<T1, O1>
where O1: BitOrder, O2: BitOrder, T1: BitStore, T2: BitStore,

Available on non-tarpaulin_include only.
§

fn eq(&self, other: &BitBox<T2, O2>) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
§

impl<O1, O2, T1, T2> PartialEq<BitBox<T2, O2>> for BitSlice<T1, O1>
where O1: BitOrder, O2: BitOrder, T1: BitStore, T2: BitStore,

Available on non-tarpaulin_include only.
§

fn eq(&self, other: &BitBox<T2, O2>) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
§

impl<T1, T2, O1, O2> PartialEq<BitSlice<T2, O2>> for &BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

§

fn eq(&self, rhs: &BitSlice<T2, O2>) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
§

impl<T1, T2, O1, O2> PartialEq<BitSlice<T2, O2>> for &mut BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

§

fn eq(&self, rhs: &BitSlice<T2, O2>) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
§

impl<T1, T2, O1, O2> PartialEq<BitSlice<T2, O2>> for BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

Tests if two BitSlices are semantically — not representationally — equal.

It is valid to compare slices of different ordering or memory types.

The equality condition requires that they have the same length and that at each index, the two slices have the same bit value.
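
For example, a brief sketch comparing bit-slices with different storage and ordering parameters:

use bitvec::prelude::*;

let a = bits![u8, Msb0; 1, 0, 1];
let b = bits![u16, Lsb0; 1, 0, 1];

// Equality is judged bit-by-bit, so representation differences do not matter.
assert_eq!(a, b);
// A length mismatch always compares unequal.
assert_ne!(a, bits![1, 0]);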

§Original

§

fn eq(&self, rhs: &BitSlice<T2, O2>) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
§

impl<T1, T2, O1, O2> PartialEq<BitVec<T2, O2>> for &BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

Available on non-tarpaulin_include only.
§

fn eq(&self, other: &BitVec<T2, O2>) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
§

impl<T1, T2, O1, O2> PartialEq<BitVec<T2, O2>> for &mut BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

Available on non-tarpaulin_include only.
§

fn eq(&self, other: &BitVec<T2, O2>) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
§

impl<T1, T2, O1, O2> PartialEq<BitVec<T2, O2>> for BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

Available on non-tarpaulin_include only.
§

fn eq(&self, other: &BitVec<T2, O2>) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
§

impl<T1, T2, O1, O2> PartialOrd<&BitSlice<T2, O2>> for &mut BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

§

fn partial_cmp(&self, rhs: &&BitSlice<T2, O2>) -> Option<Ordering>

This method returns an ordering between self and other values if one exists. Read more
1.0.0 · Source§

fn lt(&self, other: &Rhs) -> bool

Tests less than (for self and other) and is used by the < operator. Read more
1.0.0 · Source§

fn le(&self, other: &Rhs) -> bool

Tests less than or equal to (for self and other) and is used by the <= operator. Read more
1.0.0 · Source§

fn gt(&self, other: &Rhs) -> bool

Tests greater than (for self and other) and is used by the > operator. Read more
1.0.0 · Source§

fn ge(&self, other: &Rhs) -> bool

Tests greater than or equal to (for self and other) and is used by the >= operator. Read more
§

impl<T1, T2, O1, O2> PartialOrd<&BitSlice<T2, O2>> for BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

§

fn partial_cmp(&self, rhs: &&BitSlice<T2, O2>) -> Option<Ordering>

This method returns an ordering between self and other values if one exists. Read more
1.0.0 · Source§

fn lt(&self, other: &Rhs) -> bool

Tests less than (for self and other) and is used by the < operator. Read more
1.0.0 · Source§

fn le(&self, other: &Rhs) -> bool

Tests less than or equal to (for self and other) and is used by the <= operator. Read more
1.0.0 · Source§

fn gt(&self, other: &Rhs) -> bool

Tests greater than (for self and other) and is used by the > operator. Read more
1.0.0 · Source§

fn ge(&self, other: &Rhs) -> bool

Tests greater than or equal to (for self and other) and is used by the >= operator. Read more
§

impl<T1, T2, O1, O2> PartialOrd<&mut BitSlice<T2, O2>> for &BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

§

fn partial_cmp(&self, rhs: &&mut BitSlice<T2, O2>) -> Option<Ordering>

This method returns an ordering between self and other values if one exists. Read more
1.0.0 · Source§

fn lt(&self, other: &Rhs) -> bool

Tests less than (for self and other) and is used by the < operator. Read more
1.0.0 · Source§

fn le(&self, other: &Rhs) -> bool

Tests less than or equal to (for self and other) and is used by the <= operator. Read more
1.0.0 · Source§

fn gt(&self, other: &Rhs) -> bool

Tests greater than (for self and other) and is used by the > operator. Read more
1.0.0 · Source§

fn ge(&self, other: &Rhs) -> bool

Tests greater than or equal to (for self and other) and is used by the >= operator. Read more
§

impl<T1, T2, O1, O2> PartialOrd<&mut BitSlice<T2, O2>> for BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

§

fn partial_cmp(&self, rhs: &&mut BitSlice<T2, O2>) -> Option<Ordering>

This method returns an ordering between self and other values if one exists. Read more
1.0.0 · Source§

fn lt(&self, other: &Rhs) -> bool

Tests less than (for self and other) and is used by the < operator. Read more
1.0.0 · Source§

fn le(&self, other: &Rhs) -> bool

Tests less than or equal to (for self and other) and is used by the <= operator. Read more
1.0.0 · Source§

fn gt(&self, other: &Rhs) -> bool

Tests greater than (for self and other) and is used by the > operator. Read more
1.0.0 · Source§

fn ge(&self, other: &Rhs) -> bool

Tests greater than or equal to (for self and other) and is used by the >= operator. Read more
§

impl<A, T, O> PartialOrd<BitArray<A, O>> for BitSlice<T, O>
where A: BitViewSized, T: BitStore, O: BitOrder,

Available on non-tarpaulin_include only.
§

fn partial_cmp(&self, other: &BitArray<A, O>) -> Option<Ordering>

This method returns an ordering between self and other values if one exists. Read more
1.0.0 · Source§

fn lt(&self, other: &Rhs) -> bool

Tests less than (for self and other) and is used by the < operator. Read more
1.0.0 · Source§

fn le(&self, other: &Rhs) -> bool

Tests less than or equal to (for self and other) and is used by the <= operator. Read more
1.0.0 · Source§

fn gt(&self, other: &Rhs) -> bool

Tests greater than (for self and other) and is used by the > operator. Read more
1.0.0 · Source§

fn ge(&self, other: &Rhs) -> bool

Tests greater than or equal to (for self and other) and is used by the >= operator. Read more
§

impl<'a, O1, O2, T1, T2> PartialOrd<BitBox<T2, O2>> for &'a BitSlice<T1, O1>
where O1: BitOrder, O2: BitOrder, T1: BitStore, T2: BitStore,

Available on non-tarpaulin_include only.
§

fn partial_cmp(&self, other: &BitBox<T2, O2>) -> Option<Ordering>

This method returns an ordering between self and other values if one exists. Read more
1.0.0 · Source§

fn lt(&self, other: &Rhs) -> bool

Tests less than (for self and other) and is used by the < operator. Read more
1.0.0 · Source§

fn le(&self, other: &Rhs) -> bool

Tests less than or equal to (for self and other) and is used by the <= operator. Read more
1.0.0 · Source§

fn gt(&self, other: &Rhs) -> bool

Tests greater than (for self and other) and is used by the > operator. Read more
1.0.0 · Source§

fn ge(&self, other: &Rhs) -> bool

Tests greater than or equal to (for self and other) and is used by the >= operator. Read more
§

impl<'a, O1, O2, T1, T2> PartialOrd<BitBox<T2, O2>> for &'a mut BitSlice<T1, O1>
where O1: BitOrder, O2: BitOrder, T1: BitStore, T2: BitStore,

Available on non-tarpaulin_include only.
§

fn partial_cmp(&self, other: &BitBox<T2, O2>) -> Option<Ordering>

This method returns an ordering between self and other values if one exists. Read more
1.0.0 · Source§

fn lt(&self, other: &Rhs) -> bool

Tests less than (for self and other) and is used by the < operator. Read more
1.0.0 · Source§

fn le(&self, other: &Rhs) -> bool

Tests less than or equal to (for self and other) and is used by the <= operator. Read more
1.0.0 · Source§

fn gt(&self, other: &Rhs) -> bool

Tests greater than (for self and other) and is used by the > operator. Read more
1.0.0 · Source§

fn ge(&self, other: &Rhs) -> bool

Tests greater than or equal to (for self and other) and is used by the >= operator. Read more
§

impl<O1, O2, T1, T2> PartialOrd<BitBox<T2, O2>> for BitSlice<T1, O1>
where O1: BitOrder, O2: BitOrder, T1: BitStore, T2: BitStore,

Available on non-tarpaulin_include only.
§

fn partial_cmp(&self, other: &BitBox<T2, O2>) -> Option<Ordering>

This method returns an ordering between self and other values if one exists. Read more
1.0.0 · Source§

fn lt(&self, other: &Rhs) -> bool

Tests less than (for self and other) and is used by the < operator. Read more
1.0.0 · Source§

fn le(&self, other: &Rhs) -> bool

Tests less than or equal to (for self and other) and is used by the <= operator. Read more
1.0.0 · Source§

fn gt(&self, other: &Rhs) -> bool

Tests greater than (for self and other) and is used by the > operator. Read more
1.0.0 · Source§

fn ge(&self, other: &Rhs) -> bool

Tests greater than or equal to (for self and other) and is used by the >= operator. Read more
§

impl<T1, T2, O1, O2> PartialOrd<BitSlice<T2, O2>> for &BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

§

fn partial_cmp(&self, rhs: &BitSlice<T2, O2>) -> Option<Ordering>

This method returns an ordering between self and other values if one exists. Read more
1.0.0 · Source§

fn lt(&self, other: &Rhs) -> bool

Tests less than (for self and other) and is used by the < operator. Read more
1.0.0 · Source§

fn le(&self, other: &Rhs) -> bool

Tests less than or equal to (for self and other) and is used by the <= operator. Read more
1.0.0 · Source§

fn gt(&self, other: &Rhs) -> bool

Tests greater than (for self and other) and is used by the > operator. Read more
1.0.0 · Source§

fn ge(&self, other: &Rhs) -> bool

Tests greater than or equal to (for self and other) and is used by the >= operator. Read more
§

impl<T1, T2, O1, O2> PartialOrd<BitSlice<T2, O2>> for &mut BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

§

fn partial_cmp(&self, rhs: &BitSlice<T2, O2>) -> Option<Ordering>

This method returns an ordering between self and other values if one exists. Read more
1.0.0 · Source§

fn lt(&self, other: &Rhs) -> bool

Tests less than (for self and other) and is used by the < operator. Read more
1.0.0 · Source§

fn le(&self, other: &Rhs) -> bool

Tests less than or equal to (for self and other) and is used by the <= operator. Read more
1.0.0 · Source§

fn gt(&self, other: &Rhs) -> bool

Tests greater than (for self and other) and is used by the > operator. Read more
1.0.0 · Source§

fn ge(&self, other: &Rhs) -> bool

Tests greater than or equal to (for self and other) and is used by the >= operator. Read more
§

impl<T1, T2, O1, O2> PartialOrd<BitSlice<T2, O2>> for BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

Compares two BitSlices by semantic — not representational — ordering.

The comparison proceeds by testing, at each index, whether one slice has a high bit where the other has a low bit. At the first index where the slices differ, the slice with the high bit is the greater. If the slices are equal up to the point where at least one terminates, they are compared by length.
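
For example, a brief sketch of these rules:

use bitvec::prelude::*;

// The first differing index decides: 0 < 1 at index 0.
assert!(bits![0, 1, 1] < bits![1, 0, 0]);
// Equal prefixes fall back to a comparison by length.
assert!(bits![1, 0] < bits![1, 0, 0]);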

§Original

§

fn partial_cmp(&self, rhs: &BitSlice<T2, O2>) -> Option<Ordering>

This method returns an ordering between self and other values if one exists. Read more
1.0.0 · Source§

fn lt(&self, other: &Rhs) -> bool

Tests less than (for self and other) and is used by the < operator. Read more
1.0.0 · Source§

fn le(&self, other: &Rhs) -> bool

Tests less than or equal to (for self and other) and is used by the <= operator. Read more
1.0.0 · Source§

fn gt(&self, other: &Rhs) -> bool

Tests greater than (for self and other) and is used by the > operator. Read more
1.0.0 · Source§

fn ge(&self, other: &Rhs) -> bool

Tests greater than or equal to (for self and other) and is used by the >= operator. Read more
§

impl<'a, T1, T2, O1, O2> PartialOrd<BitVec<T2, O2>> for &'a BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

Available on non-tarpaulin_include only.
§

fn partial_cmp(&self, other: &BitVec<T2, O2>) -> Option<Ordering>

This method returns an ordering between self and other values if one exists. Read more
1.0.0 · Source§

fn lt(&self, other: &Rhs) -> bool

Tests less than (for self and other) and is used by the < operator. Read more
1.0.0 · Source§

fn le(&self, other: &Rhs) -> bool

Tests less than or equal to (for self and other) and is used by the <= operator. Read more
1.0.0 · Source§

fn gt(&self, other: &Rhs) -> bool

Tests greater than (for self and other) and is used by the > operator. Read more
1.0.0 · Source§

fn ge(&self, other: &Rhs) -> bool

Tests greater than or equal to (for self and other) and is used by the >= operator. Read more
§

impl<'a, T1, T2, O1, O2> PartialOrd<BitVec<T2, O2>> for &'a mut BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

Available on non-tarpaulin_include only.
§

fn partial_cmp(&self, other: &BitVec<T2, O2>) -> Option<Ordering>

This method returns an ordering between self and other values if one exists. Read more
1.0.0 · Source§

fn lt(&self, other: &Rhs) -> bool

Tests less than (for self and other) and is used by the < operator. Read more
1.0.0 · Source§

fn le(&self, other: &Rhs) -> bool

Tests less than or equal to (for self and other) and is used by the <= operator. Read more
1.0.0 · Source§

fn gt(&self, other: &Rhs) -> bool

Tests greater than (for self and other) and is used by the > operator. Read more
1.0.0 · Source§

fn ge(&self, other: &Rhs) -> bool

Tests greater than or equal to (for self and other) and is used by the >= operator. Read more
§

impl<T1, T2, O1, O2> PartialOrd<BitVec<T2, O2>> for BitSlice<T1, O1>
where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder,

Available on non-tarpaulin_include only.
§

fn partial_cmp(&self, other: &BitVec<T2, O2>) -> Option<Ordering>

This method returns an ordering between self and other values if one exists. Read more
1.0.0 · Source§

fn lt(&self, other: &Rhs) -> bool

Tests less than (for self and other) and is used by the < operator. Read more
1.0.0 · Source§

fn le(&self, other: &Rhs) -> bool

Tests less than or equal to (for self and other) and is used by the <= operator. Read more
1.0.0 · Source§

fn gt(&self, other: &Rhs) -> bool

Tests greater than (for self and other) and is used by the > operator. Read more
1.0.0 · Source§

fn ge(&self, other: &Rhs) -> bool

Tests greater than or equal to (for self and other) and is used by the >= operator. Read more
§

impl<T, O> Pointer for BitSlice<T, O>
where T: BitStore, O: BitOrder,

§

fn fmt(&self, fmt: &mut Formatter<'_>) -> Result<(), Error>

Formats the value using the given formatter. Read more
§

impl<T, O> Read for &BitSlice<T, O>
where T: BitStore, O: BitOrder, BitSlice<T, O>: BitField,

§Reading From a Bit-Slice

The implementation loads bytes out of the referenced bit-slice until either the destination buffer is filled or the source has no more bytes to provide. When .read() returns, the provided bit-slice handle will have been updated to no longer include the leading segment copied out as bytes into buf.

Note that the return value of .read() is always the number of bytes of buf filled!

The implementation uses BitField::load_be to collect bytes. Note that unlike the standard library, it is implemented on bit-slices of any underlying element type. However, using a BitSlice<u8, _> is still likely to be fastest.

§Original

impl Read for [u8]
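
A minimal usage sketch, assuming the standard std::io::Read API:

use bitvec::prelude::*;
use std::io::Read;

let data = [0xA5u8, 0x3C];
let mut reader = data.view_bits::<Msb0>();
let mut buf = [0u8; 2];

// Each call peels whole bytes off the front of the bit-slice handle.
let n = reader.read(&mut buf).unwrap();
assert_eq!(n, 2);
assert_eq!(buf, [0xA5, 0x3C]);
assert!(reader.is_empty());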

§

fn read(&mut self, buf: &mut [u8]) -> Result<usize, Error>

Pull some bytes from this source into the specified buffer, returning how many bytes were read. Read more
1.36.0 · Source§

fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> Result<usize, Error>

Like read, except that it reads into a slice of buffers. Read more
Source§

fn is_read_vectored(&self) -> bool

🔬This is a nightly-only experimental API. (can_vector #69941)
Determines if this Reader has an efficient read_vectored implementation. Read more
1.0.0 · Source§

fn read_to_end(&mut self, buf: &mut Vec<u8>) -> Result<usize, Error>

Reads all bytes until EOF in this source, placing them into buf. Read more
1.0.0 · Source§

fn read_to_string(&mut self, buf: &mut String) -> Result<usize, Error>

Reads all bytes until EOF in this source, appending them to buf. Read more
1.6.0 · Source§

fn read_exact(&mut self, buf: &mut [u8]) -> Result<(), Error>

Reads the exact number of bytes required to fill buf. Read more
Source§

fn read_buf(&mut self, buf: BorrowedCursor<'_>) -> Result<(), Error>

🔬This is a nightly-only experimental API. (read_buf #78485)
Pull some bytes from this source into the specified buffer. Read more
Source§

fn read_buf_exact(&mut self, cursor: BorrowedCursor<'_>) -> Result<(), Error>

🔬This is a nightly-only experimental API. (read_buf #78485)
Reads the exact number of bytes required to fill cursor. Read more
1.0.0 · Source§

fn by_ref(&mut self) -> &mut Self
where Self: Sized,

Creates a “by reference” adaptor for this instance of Read. Read more
1.0.0 · Source§

fn bytes(self) -> Bytes<Self>
where Self: Sized,

Transforms this Read instance to an Iterator over its bytes. Read more
1.0.0 · Source§

fn chain<R>(self, next: R) -> Chain<Self, R>
where R: Read, Self: Sized,

Creates an adapter which will chain this stream with another. Read more
1.0.0 · Source§

fn take(self, limit: u64) -> Take<Self>
where Self: Sized,

Creates an adapter which will read at most limit bytes from it. Read more
§

impl<T, O> Serialize for BitSlice<T, O>
where T: BitStore, O: BitOrder, <T as BitStore>::Mem: Serialize,

§

fn serialize<S>( &self, serializer: S, ) -> Result<<S as Serializer>::Ok, <S as Serializer>::Error>
where S: Serializer,

Serialize this value into the given Serde serializer. Read more
§

impl<T, O> ToOwned for BitSlice<T, O>
where T: BitStore, O: BitOrder,

Available on crate feature alloc and non-tarpaulin_include only.
§

type Owned = BitVec<T, O>

The resulting type after obtaining ownership.
§

fn to_owned(&self) -> <BitSlice<T, O> as ToOwned>::Owned

Creates owned data from borrowed data, usually by cloning. Read more
1.63.0 · Source§

fn clone_into(&self, target: &mut Self::Owned)

Uses borrowed data to replace owned data, usually by cloning. Read more
§

impl<'a, T, O> TryFrom<&'a [T]> for &'a BitSlice<T, O>
where T: BitStore, O: BitOrder,

Calls BitSlice::try_from_slice, but returns the original Rust slice on error instead of the failure event.

This only fails if slice.len() exceeds BitSlice::MAX_ELTS.
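
A minimal sketch of the conversion, which succeeds for any ordinarily-sized slice:

use bitvec::prelude::*;

let raw = [0u8, 1, 2, 3];

// Only impossibly long slices are rejected, so this conversion succeeds.
let bits = <&BitSlice<u8, Msb0>>::try_from(&raw[..]).unwrap();
assert_eq!(bits.len(), 32);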

§

type Error = &'a [T]

The type returned in the event of a conversion error.
§

fn try_from( slice: &'a [T], ) -> Result<&'a BitSlice<T, O>, <&'a BitSlice<T, O> as TryFrom<&'a [T]>>::Error>

Performs the conversion.
§

impl<A, O> TryFrom<&BitSlice<<A as BitView>::Store, O>> for &BitArray<A, O>
where A: BitViewSized, O: BitOrder,

§

type Error = TryFromBitSliceError

The type returned in the event of a conversion error.
§

fn try_from( src: &BitSlice<<A as BitView>::Store, O>, ) -> Result<&BitArray<A, O>, <&BitArray<A, O> as TryFrom<&BitSlice<<A as BitView>::Store, O>>>::Error>

Performs the conversion.
§

impl<A, O> TryFrom<&BitSlice<<A as BitView>::Store, O>> for BitArray<A, O>
where A: BitViewSized, O: BitOrder,

§

type Error = TryFromBitSliceError

The type returned in the event of a conversion error.
§

fn try_from( src: &BitSlice<<A as BitView>::Store, O>, ) -> Result<BitArray<A, O>, <BitArray<A, O> as TryFrom<&BitSlice<<A as BitView>::Store, O>>>::Error>

Performs the conversion.
§

impl<'a, T, O> TryFrom<&'a mut [T]> for &'a mut BitSlice<T, O>
where T: BitStore, O: BitOrder,

Calls BitSlice::try_from_slice_mut, but returns the original Rust slice on error instead of the failure event.

This only fails if slice.len() exceeds BitSlice::MAX_ELTS.

§

type Error = &'a mut [T]

The type returned in the event of a conversion error.
§

fn try_from( slice: &'a mut [T], ) -> Result<&'a mut BitSlice<T, O>, <&'a mut BitSlice<T, O> as TryFrom<&'a mut [T]>>::Error>

Performs the conversion.
§

impl<A, O> TryFrom<&mut BitSlice<<A as BitView>::Store, O>> for &mut BitArray<A, O>
where A: BitViewSized, O: BitOrder,

§

type Error = TryFromBitSliceError

The type returned in the event of a conversion error.
§

fn try_from( src: &mut BitSlice<<A as BitView>::Store, O>, ) -> Result<&mut BitArray<A, O>, <&mut BitArray<A, O> as TryFrom<&mut BitSlice<<A as BitView>::Store, O>>>::Error>

Performs the conversion.
§

impl<T, O> UpperHex for BitSlice<T, O>
where T: BitStore, O: BitOrder,

§Bit-Slice Rendering

This implementation prints the contents of a &BitSlice in one of binary, octal, or hexadecimal. It is important to note that this does not render the raw underlying memory! It renders the semantically-ordered contents of the bit-slice as numerals. This distinction matters if you use type parameters that differ from those presumed by your debugger (which is usually <u8, Msb0>).

The output separates the T elements as individual list items, and renders each element as a base-2, -8, or -16 numeric string. When walking an element, the bits traversed by the bit-slice are considered to be stored in most-significant-bit-first order. This means that, in a given printed word, index [0] is the high bit of the left-most digit and index [n] is the low bit of the right-most digit.

In order to render according to expectations of the Arabic numeral system, an element being transcribed is chunked into digits from the least-significant end of its rendered form. This is most noticeable in octal, which will always have a smaller ceiling on the left-most digit in a printed word, while the right-most digit in that word is able to use the full 0 ..= 7 numeral range.

§Examples
use bitvec::prelude::*;

let data = [
  0b000000_10u8,
// digits print LTR
  0b10_001_101,
// significance is computed RTL
  0b01_000000,
];
let bits = &data.view_bits::<Msb0>()[6 .. 18];

assert_eq!(format!("{:b}", bits), "[10, 10001101, 01]");
assert_eq!(format!("{:o}", bits), "[2, 215, 1]");
assert_eq!(format!("{:X}", bits), "[2, 8D, 1]");

The {:#} format modifier causes the standard 0b, 0o, or 0x prefix to be applied to each printed word. The other format specifiers are not interpreted by this implementation, and apply to the entire rendered text, not to individual words.

§

fn fmt(&self, fmt: &mut Formatter<'_>) -> Result<(), Error>

Formats the value using the given formatter. Read more
§

impl<T, O> Write for &mut BitSlice<T, O>
where T: BitStore, O: BitOrder, BitSlice<T, O>: BitField,

§Writing Into a Bit-Slice

The implementation stores bytes into the referenced bit-slice until either the source buffer is exhausted or the destination has no more slots to fill. When .write() returns, the provided bit-slice handle will have been updated to no longer include the leading segment filled with bytes from buf.

Note that the return value of .write() is always the number of bytes of buf consumed!

The implementation uses BitField::store_be to fill bytes. Note that unlike the standard library, it is implemented on bit-slices of any underlying element type. However, using a BitSlice<u8, _> is still likely to be fastest.

§Original

impl Write for [u8]
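
A minimal usage sketch, mirroring the Read example above:

use bitvec::prelude::*;
use std::io::Write;

let mut raw = [0u8; 2];
let mut writer = raw.view_bits_mut::<Msb0>();

// Each call fills whole bytes at the front of the bit-slice handle.
let n = writer.write(&[0xA5, 0x3C]).unwrap();
assert_eq!(n, 2);
assert!(writer.is_empty());
assert_eq!(raw, [0xA5, 0x3C]);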

§

fn write(&mut self, buf: &[u8]) -> Result<usize, Error>

Writes a buffer into this writer, returning how many bytes were written. Read more
§

fn flush(&mut self) -> Result<(), Error>

Flushes this output stream, ensuring that all intermediately buffered contents reach their destination. Read more
1.36.0 · Source§

fn write_vectored(&mut self, bufs: &[IoSlice<'_>]) -> Result<usize, Error>

Like write, except that it writes from a slice of buffers. Read more
Source§

fn is_write_vectored(&self) -> bool

🔬This is a nightly-only experimental API. (can_vector #69941)
Determines if this Writer has an efficient write_vectored implementation. Read more
1.0.0 · Source§

fn write_all(&mut self, buf: &[u8]) -> Result<(), Error>

Attempts to write an entire buffer into this writer. Read more
Source§

fn write_all_vectored(&mut self, bufs: &mut [IoSlice<'_>]) -> Result<(), Error>

🔬This is a nightly-only experimental API. (write_all_vectored #70436)
Attempts to write multiple buffers into this writer. Read more
1.0.0 · Source§

fn write_fmt(&mut self, fmt: Arguments<'_>) -> Result<(), Error>

Writes a formatted string into this writer, returning any error encountered. Read more
1.0.0 · Source§

fn by_ref(&mut self) -> &mut Self
where Self: Sized,

Creates a “by reference” adapter for this instance of Write. Read more
§

impl<T, O> Eq for BitSlice<T, O>
where T: BitStore, O: BitOrder,

§

impl<T, O> Send for BitSlice<T, O>
where T: BitStore + Sync, O: BitOrder,

§Bit-Slice Thread Safety

This allows bit-slice references to be moved across thread boundaries only when the underlying T element can tolerate concurrency.

All BitSlice references, shared or exclusive, are only thread-safe if the T element type is Send, because any given bit-slice reference may only have partial control of a memory element that is also being shared by a bit-slice reference on another thread. As such, this is never implemented for Cell<U>, but always implemented for AtomicU and U for a given unsigned integer type U.

Atomic integers safely handle concurrent writes, and cells do not allow concurrency at all, so the only missing piece is &mut BitSlice<U: Unsigned, _>. This is handled by the aliasing system that the mutable splitters employ: a mutable reference to an unsynchronized bit-slice can only cross threads when no other handle can exist to the elements it governs. Splitting a mutable bit-slice causes the split halves to change over to either atomics or cells, so concurrency is either safe or impossible.
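
As a small sketch of the shared-reference case (assuming std::thread::scope, available since Rust 1.63):

use bitvec::prelude::*;

let raw = [0u8; 4];
let bits = raw.view_bits::<Msb0>();

// `u8` is `Sync`, so `&BitSlice<u8, Msb0>` may be shared across threads.
std::thread::scope(|s| {
  s.spawn(|| assert_eq!(bits.count_ones(), 0));
  s.spawn(|| assert_eq!(bits.count_zeros(), 32));
});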

§

impl<T, O> Sync for BitSlice<T, O>
where T: BitStore + Sync, O: BitOrder,

§Bit-Slice Thread Safety

This allows bit-slice references to be moved across thread boundaries only when the underlying T element can tolerate concurrency.

All BitSlice references, shared or exclusive, are only thread-safe if the T element type is Send, because any given bit-slice reference may only have partial control of a memory element that is also being shared by a bit-slice reference on another thread. As such, this is never implemented for Cell<U>, but always implemented for AtomicU and U for a given unsigned integer type U.

Atomic integers safely handle concurrent writes, and cells do not allow concurrency at all, so the only missing piece is &mut BitSlice<U: Unsigned, _>. This is handled by the aliasing system that the mutable splitters employ: a mutable reference to an unsynchronized bit-slice can only cross threads when no other handle can exist to the elements it governs. Splitting a mutable bit-slice causes the split halves to change over to either atomics or cells, so concurrency is either safe or impossible.

§

impl<T, O> Unpin for BitSlice<T, O>
where T: BitStore, O: BitOrder,

Auto Trait Implementations§

§

impl<T, O> Freeze for BitSlice<T, O>

§

impl<T, O> RefUnwindSafe for BitSlice<T, O>

§

impl<T = usize, O = Lsb0> !Sized for BitSlice<T, O>

§

impl<T, O> UnwindSafe for BitSlice<T, O>
where O: UnwindSafe, T: UnwindSafe,

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
§

impl<Q, K> Comparable<K> for Q
where Q: Ord + ?Sized, K: Borrow<Q> + ?Sized,

§

fn compare(&self, key: &K) -> Ordering

Compare self to key and return their ordering.
§

impl<Q, K> Equivalent<K> for Q
where Q: Eq + ?Sized, K: Borrow<Q> + ?Sized,

§

fn equivalent(&self, key: &K) -> bool

Compare self to key and return true if they are equal.
§

impl<Q, K> Equivalent<K> for Q
where Q: Eq + ?Sized, K: Borrow<Q> + ?Sized,

§

fn equivalent(&self, key: &K) -> bool

Checks if this value is equivalent to the given key. Read more
§

impl<Q, K> Equivalent<K> for Q
where Q: Eq + ?Sized, K: Borrow<Q> + ?Sized,

§

fn equivalent(&self, key: &K) -> bool

Checks if this value is equivalent to the given key. Read more
§

impl<T> Pipe for T
where T: ?Sized,

§

fn pipe<R>(self, func: impl FnOnce(Self) -> R) -> R
where Self: Sized,

Pipes by value. This is generally the method you want to use. Read more
§

fn pipe_ref<'a, R>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R
where R: 'a,

Borrows self and passes that borrow into the pipe function. Read more
§

fn pipe_ref_mut<'a, R>(&'a mut self, func: impl FnOnce(&'a mut Self) -> R) -> R
where R: 'a,

Mutably borrows self and passes that borrow into the pipe function. Read more
§

fn pipe_borrow<'a, B, R>(&'a self, func: impl FnOnce(&'a B) -> R) -> R
where Self: Borrow<B>, B: 'a + ?Sized, R: 'a,

Borrows self, then passes self.borrow() into the pipe function. Read more
§

fn pipe_borrow_mut<'a, B, R>( &'a mut self, func: impl FnOnce(&'a mut B) -> R, ) -> R
where Self: BorrowMut<B>, B: 'a + ?Sized, R: 'a,

Mutably borrows self, then passes self.borrow_mut() into the pipe function. Read more
§

fn pipe_as_ref<'a, U, R>(&'a self, func: impl FnOnce(&'a U) -> R) -> R
where Self: AsRef<U>, U: 'a + ?Sized, R: 'a,

Borrows self, then passes self.as_ref() into the pipe function.
§

fn pipe_as_mut<'a, U, R>(&'a mut self, func: impl FnOnce(&'a mut U) -> R) -> R
where Self: AsMut<U>, U: 'a + ?Sized, R: 'a,

Mutably borrows self, then passes self.as_mut() into the pipe function.
§

fn pipe_deref<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R
where Self: Deref<Target = T>, T: 'a + ?Sized, R: 'a,

Borrows self, then passes self.deref() into the pipe function.
§

fn pipe_deref_mut<'a, T, R>( &'a mut self, func: impl FnOnce(&'a mut T) -> R, ) -> R
where Self: DerefMut<Target = T> + Deref, T: 'a + ?Sized, R: 'a,

Mutably borrows self, then passes self.deref_mut() into the pipe function.
Source§

impl<T> ToString for T
where T: Display + ?Sized,

Source§

default fn to_string(&self) -> String

Converts the given value to a String. Read more

Layout§

Note: Most layout information is completely unstable and may even differ between compilations. The only exception is types with certain repr(...) attributes. Please see the Rust Reference's “Type Layout” chapter for details on type layout guarantees.

Size: (unsized)