Real Time I/O (RTIO)
RTIO provides a framework for doing asynchronous operation chains with event driven I/O. This section covers the RTIO API, queues, executor, iodev, and common usage patterns with peripheral devices.
RTIO takes a lot of inspiration from Linux’s io_uring in its operations and API, as that API maps well to hardware transfer queues and descriptions such as DMA transfer lists.
Problem
An application wishing to do complex DMA or interrupt driven operations today in Zephyr requires direct knowledge of the hardware and how it works. There is no understanding in the DMA API of other Zephyr devices and how they relate.
This means doing complex audio, video, or sensor streaming requires direct hardware knowledge or leaky abstractions over DMA controllers. Neither is ideal.
To enable asynchronous operations, especially with DMA, a description of what to do rather than direct operations through C and callbacks is needed. Enabling DMA features such as channels with priority and sequences of transfers requires more than a simple list of descriptions.
Using DMA and/or interrupt driven I/O shouldn’t dictate whether or not the call is blocking.
Inspiration, introducing io_uring
It’s better not to reinvent the wheel (or ring in this case), and io_uring, an API from the Linux kernel, provides a winning model. In io_uring there are two lock-free ring buffers acting as queues shared between the kernel and a userland application: one queue of submission entries, which may be chained and flushed to create concurrent sequential requests, and a second queue of completion queue events. Only a single syscall, the io_uring_submit call, is actually required to execute many operations. This call may block the caller when a number of operations to wait on is given.
This model maps well to DMA and interrupt driven transfers. A request to do a sequence of operations in an asynchronous way directly relates to how hardware typically works: interrupt driven state machines, potentially involving multiple peripheral IPs like bus and DMA controllers.
Submission Queue
The submission queue (sq) is the description of the operations to perform in concurrent chains.
For example, imagine a typical SPI transfer where you wish to write a register address and then read from it. The sequence of operations might be:
Chip Select
Clock Enable
Write register address into SPI transmit register
Read from the SPI receive register into a buffer
Disable clock
Disable Chip Select
If anything in this chain of operations fails, give up. Some of these operations can be embodied in a device abstraction that understands that a read or write implicitly means setting up the clock and chip select. The transactional nature of the request also needs to be captured in some manner. Of the operations above, perhaps the read could be done using DMA, as it’s large enough to make sense. That requires an understanding of how to set up the device’s particular DMA to do so.
The above sequence of operations is embodied in RTIO as a chain of submission queue entries (sqe). Chaining is done by setting a bit flag in an sqe to signify that the next sqe must wait on the current one.
Because the chip select and clocking are common to a particular SPI controller and device on the bus, they are embodied in what RTIO calls an iodev.
Multiple operations against the same iodev are done in the order provided, as soon as possible. If two operation chains touch the same device at various points, it’s possible one chain will have to wait for another to complete.
Completion Queue
In order to know when an sqe has completed, there is a completion queue (cq) with completion queue events (cqe). Once an sqe completes, a cqe is pushed into the cq. The ordering of cqe may not match the ordering of sqe; a chain of sqe will, however, ensure ordering and failure cascading.
Other schemes are possible, but a completion queue is a well-trodden idea used by io_uring and other similar operating system APIs.
Executor
The RTIO executor is a low overhead concurrent I/O task scheduler. It ensures certain request flags provide the expected behavior. It takes a list of submissions, working through them in order. Various flags change how submissions are worked through: flags can form in-order chains of submissions, transactional sets of submissions, or multi-shot (continuously producing) requests.
IO Device
Turning submission queue entries (sqe) into completion queue events (cqe) is the job of objects implementing the iodev (IO device) API. This API accepts requests in the form of the iodev submit API call. It is the io device’s job to work through its internal queue of submissions and convert them into completions. In effect, every io device can be viewed as an independent, event-driven, actor-like object that accepts a never-ending queue of I/O-like requests. How the iodev does this work is up to its author; perhaps the entire queue of operations can be converted to a set of DMA transfer descriptors, meaning the hardware does almost all of the real work.
Cancellation
Canceling an already queued operation is possible but not guaranteed. If the
SQE has not yet started, it’s likely that a call to rtio_sqe_cancel()
will remove the SQE and never run it. If, however, the SQE already started
running, the cancel request will be ignored.
Memory pools
In some cases requests to read may not know how much data will be produced. Alternatively, a reader might be handling data from multiple io devices where the frequency of the data is unpredictable. In these cases it may be wasteful to bind memory to in-flight read requests. Instead, with memory pools, the memory to read into is left to the iodev to allocate from a memory pool associated with the RTIO context that the read was associated with. To create such an RTIO context the RTIO_DEFINE_WITH_MEMPOOL macro can be used. It allows creating an RTIO context with a dedicated pool of “memory blocks” which can be consumed by the iodev. Below is a snippet setting up an RTIO context with a memory pool. The memory pool has 128 blocks, each block has a size of 16 bytes, and the data is 4-byte aligned.
#include <zephyr/rtio/rtio.h>
#define SQ_SIZE 4
#define CQ_SIZE 4
#define MEM_BLK_COUNT 128
#define MEM_BLK_SIZE 16
#define MEM_BLK_ALIGN 4
RTIO_DEFINE_WITH_MEMPOOL(rtio_context,
                         SQ_SIZE, CQ_SIZE, MEM_BLK_COUNT, MEM_BLK_SIZE, MEM_BLK_ALIGN);
When a read is needed, the caller simply needs to replace the call to rtio_sqe_prep_read() (which takes a pointer to a buffer and a length) with a call to rtio_sqe_prep_read_with_pool(). The iodev requires only a small change which works with both pre-allocated data buffers as well as the mempool. When the read is ready, instead of getting the buffers directly from the rtio_iodev_sqe, the iodev should get the buffer and count by calling rtio_sqe_rx_buf() like so:
uint8_t *buf;
uint32_t buf_len;
int rc = rtio_sqe_rx_buf(iodev_sqe, MIN_BUF_LEN, DESIRED_BUF_LEN, &buf, &buf_len);

if (rc != 0) {
    LOG_ERR("Failed to get buffer of at least %u bytes", MIN_BUF_LEN);
    return;
}
Finally, the consumer will be able to access the allocated buffer via rtio_cqe_get_mempool_buffer().
uint8_t *buf;
uint32_t buf_len;
int rc = rtio_cqe_get_mempool_buffer(&rtio_context, &cqe, &buf, &buf_len);

if (rc != 0) {
    LOG_ERR("Failed to get mempool buffer");
    return rc;
}

/* Release the cqe events (note that the buffer is not released yet) */
rtio_cqe_release_all(&rtio_context);

/* Do something with the memory */

/* Release the mempool buffer */
rtio_release_buffer(&rtio_context, buf, buf_len);
When to Use
RTIO is useful in cases where concurrent or batch-like I/O flows are needed.
From the driver/hardware perspective the API enables batching of I/O requests, potentially in an optimal way. Many requests to the same SPI peripheral, for example, might be translated entirely to hardware command queues or DMA transfer descriptors, meaning the hardware can potentially do more of the work.
There is a small cost to each RTIO context and iodev. This cost could be weighed against using a thread for each concurrent I/O operation or custom queues and threads per peripheral. RTIO is much lower cost than that.
API Reference
- group rtio
RTIO.
- Since
3.2
- Version
0.1.0
Defines
-
RTIO_IODEV_I2C_STOP
Equivalent to the I2C_MSG_STOP flag.
-
RTIO_IODEV_I2C_RESTART
Equivalent to the I2C_MSG_RESTART flag.
-
RTIO_IODEV_I2C_10_BITS
Equivalent to the I2C_MSG_ADDR_10_BITS flag.
-
RTIO_OP_NOP
An operation that does nothing and will complete immediately.
-
RTIO_OP_RX
An operation that receives (reads).
-
RTIO_OP_TX
An operation that transmits (writes).
-
RTIO_OP_TINY_TX
An operation that transmits tiny writes by copying the data to write.
-
RTIO_OP_CALLBACK
An operation that calls a given function (callback).
-
RTIO_OP_TXRX
An operation that transceives (reads and writes simultaneously).
-
RTIO_OP_I2C_RECOVER
An operation to recover I2C buses.
-
RTIO_OP_I2C_CONFIGURE
An operation to configure I2C buses.
-
RTIO_IODEV_DEFINE(name, iodev_api, iodev_data)
Statically define and initialize an RTIO IODev.
- Parameters:
name – Name of the iodev
iodev_api – Pointer to struct rtio_iodev_api
iodev_data – Data pointer
-
RTIO_BMEM
Allocate to bss if available.
If CONFIG_USERSPACE is selected, allocate to the rtio_partition bss. Maps to: K_APP_BMEM(rtio_partition) static
If CONFIG_USERSPACE is disabled, allocate as plain static: static
-
RTIO_DMEM
Allocate as initialized memory if available.
If CONFIG_USERSPACE is selected, allocate to the rtio_partition init. Maps to: K_APP_DMEM(rtio_partition) static
If CONFIG_USERSPACE is disabled, allocate as plain static: static
-
RTIO_DEFINE(name, sq_sz, cq_sz)
Statically define and initialize an RTIO context.
- Parameters:
name – Name of the RTIO
sq_sz – Size of the submission queue entry pool
cq_sz – Size of the completion queue entry pool
-
RTIO_DEFINE_WITH_MEMPOOL(name, sq_sz, cq_sz, num_blks, blk_size, balign)
Statically define and initialize an RTIO context.
- Parameters:
name – Name of the RTIO
sq_sz – Size of the submission queue, must be power of 2
cq_sz – Size of the completion queue, must be power of 2
num_blks – Number of blocks in the memory pool
blk_size – The number of bytes in each block
balign – The block alignment
Typedefs
Functions
-
static inline size_t rtio_mempool_block_size(const struct rtio *r)
Get the mempool block size of the RTIO context.
- Parameters:
r – [in] The RTIO context
- Returns:
The size of each block in the context’s mempool
- Returns:
0 if the context doesn’t have a mempool
-
static inline void rtio_sqe_prep_nop(struct rtio_sqe *sqe, const struct rtio_iodev *iodev, void *userdata)
Prepare a nop (no op) submission.
-
static inline void rtio_sqe_prep_read(struct rtio_sqe *sqe, const struct rtio_iodev *iodev, int8_t prio, uint8_t *buf, uint32_t len, void *userdata)
Prepare a read op submission.
-
static inline void rtio_sqe_prep_read_with_pool(struct rtio_sqe *sqe, const struct rtio_iodev *iodev, int8_t prio, void *userdata)
Prepare a read op submission with context’s mempool.
-
static inline void rtio_sqe_prep_read_multishot(struct rtio_sqe *sqe, const struct rtio_iodev *iodev, int8_t prio, void *userdata)
-
static inline void rtio_sqe_prep_write(struct rtio_sqe *sqe, const struct rtio_iodev *iodev, int8_t prio, uint8_t *buf, uint32_t len, void *userdata)
Prepare a write op submission.
-
static inline void rtio_sqe_prep_tiny_write(struct rtio_sqe *sqe, const struct rtio_iodev *iodev, int8_t prio, const uint8_t *tiny_write_data, uint8_t tiny_write_len, void *userdata)
Prepare a tiny write op submission.
Unlike the normal write operation, where the source buffer must outlive the call, the tiny write data in this case is copied into the sqe. It must be tiny to fit within the specified size of an rtio_sqe.
This is useful in many scenarios with RTL logic where the register to subsequently read from must first be written.
-
static inline void rtio_sqe_prep_callback(struct rtio_sqe *sqe, rtio_callback_t callback, void *arg0, void *userdata)
Prepare a callback op submission.
A somewhat special operation in that it may only be done in kernel mode.
Used where general purpose logic is required in a queue of io operations to do transforms or logic.
-
static inline void rtio_sqe_prep_transceive(struct rtio_sqe *sqe, const struct rtio_iodev *iodev, int8_t prio, uint8_t *tx_buf, uint8_t *rx_buf, uint32_t buf_len, void *userdata)
Prepare a transceive op submission.
-
static inline struct rtio_iodev_sqe *rtio_sqe_pool_alloc(struct rtio_sqe_pool *pool)
-
static inline void rtio_sqe_pool_free(struct rtio_sqe_pool *pool, struct rtio_iodev_sqe *iodev_sqe)
-
static inline struct rtio_cqe *rtio_cqe_pool_alloc(struct rtio_cqe_pool *pool)
-
static inline void rtio_cqe_pool_free(struct rtio_cqe_pool *pool, struct rtio_cqe *cqe)
-
static inline int rtio_block_pool_alloc(struct rtio *r, size_t min_sz, size_t max_sz, uint8_t **buf, uint32_t *buf_len)
-
static inline uint32_t rtio_sqe_acquirable(struct rtio *r)
Count of acquirable submission queue events.
- Parameters:
r – RTIO context
- Returns:
Count of acquirable submission queue events
-
static inline struct rtio_iodev_sqe *rtio_txn_next(const struct rtio_iodev_sqe *iodev_sqe)
Get the next sqe in the transaction.
- Parameters:
iodev_sqe – Submission queue entry
- Return values:
NULL – if current sqe is last in transaction
struct – rtio_sqe * if available
-
static inline struct rtio_iodev_sqe *rtio_chain_next(const struct rtio_iodev_sqe *iodev_sqe)
Get the next sqe in the chain.
- Parameters:
iodev_sqe – Submission queue entry
- Return values:
NULL – if current sqe is last in chain
struct – rtio_sqe * if available
-
static inline struct rtio_iodev_sqe *rtio_iodev_sqe_next(const struct rtio_iodev_sqe *iodev_sqe)
Get the next sqe in the chain or transaction.
- Parameters:
iodev_sqe – Submission queue entry
- Return values:
NULL – if current sqe is last in chain
struct – rtio_iodev_sqe * if available
-
static inline struct rtio_sqe *rtio_sqe_acquire(struct rtio *r)
Acquire a single submission queue event if available.
- Parameters:
r – RTIO context
- Return values:
sqe – A valid submission queue event acquired from the submission queue
NULL – No submission queue event available
-
static inline void rtio_sqe_drop_all(struct rtio *r)
Drop all previously acquired sqe.
- Parameters:
r – RTIO context
-
static inline struct rtio_cqe *rtio_cqe_acquire(struct rtio *r)
Acquire a completion queue event if available.
-
static inline void rtio_cqe_produce(struct rtio *r, struct rtio_cqe *cqe)
Produce a completion queue event if available.
-
static inline struct rtio_cqe *rtio_cqe_consume(struct rtio *r)
Consume a single completion queue event if available.
If a completion queue event is returned, rtio_cqe_release() must be called at some point to release the cqe spot for the cqe producer.
- Parameters:
r – RTIO context
- Return values:
cqe – A valid completion queue event consumed from the completion queue
NULL – No completion queue event available
-
static inline struct rtio_cqe *rtio_cqe_consume_block(struct rtio *r)
Wait for and consume a single completion queue event.
If a completion queue event is returned, rtio_cqe_release() must be called at some point to release the cqe spot for the cqe producer.
- Parameters:
r – RTIO context
- Return values:
cqe – A valid completion queue event consumed from the completion queue
-
static inline void rtio_cqe_release(struct rtio *r, struct rtio_cqe *cqe)
Release consumed completion queue event.
- Parameters:
r – RTIO context
cqe – Completion queue entry
-
static inline uint32_t rtio_cqe_compute_flags(struct rtio_iodev_sqe *iodev_sqe)
Compute the CQE flags from the rtio_iodev_sqe entry.
- Parameters:
iodev_sqe – The SQE entry in question.
- Returns:
The value that should be set for the CQE’s flags field.
-
int rtio_cqe_get_mempool_buffer(const struct rtio *r, struct rtio_cqe *cqe, uint8_t **buff, uint32_t *buff_len)
Retrieve the mempool buffer that was allocated for the CQE.
If the RTIO context contains a memory pool, and the SQE was created by calling rtio_sqe_prep_read_with_pool(), this function can be used to retrieve the memory associated with the read. Once processing is done, it should be released by calling rtio_release_buffer().
- Parameters:
r – [in] RTIO context
cqe – [in] The CQE handling the event.
buff – [out] Pointer to the mempool buffer
buff_len – [out] Length of the allocated buffer
- Returns:
0 on success
- Returns:
-EINVAL if the buffer wasn’t allocated for this cqe
- Returns:
-ENOTSUP if memory blocks are disabled
-
void rtio_executor_ok(struct rtio_iodev_sqe *iodev_sqe, int result)
-
void rtio_executor_err(struct rtio_iodev_sqe *iodev_sqe, int result)
-
static inline void rtio_iodev_sqe_ok(struct rtio_iodev_sqe *iodev_sqe, int result)
Inform the executor of a submission completion with success.
This may start the next asynchronous request if one is available.
- Parameters:
iodev_sqe – IODev Submission that has succeeded
result – Result of the request
-
static inline void rtio_iodev_sqe_err(struct rtio_iodev_sqe *iodev_sqe, int result)
Inform the executor of a submission completion with error.
This SHALL fail the remaining submissions in the chain.
- Parameters:
iodev_sqe – Submission that has failed
result – Result of the request
-
static inline void rtio_iodev_cancel_all(struct rtio_iodev *iodev)
Cancel all requests that are pending for the iodev.
- Parameters:
iodev – IODev to cancel all requests for
-
static inline void rtio_cqe_submit(struct rtio *r, int result, void *userdata, uint32_t flags)
Submit a completion queue event with a given result and userdata.
Called by the executor to produce a completion queue event; no inherent locking is performed and this is not safe to do from multiple callers.
- Parameters:
r – RTIO context
result – Integer result code (could be -errno)
userdata – Userdata to pass along to completion
flags – Flags to use for the CQE, see RTIO_CQE_FLAG_*
-
static inline int rtio_sqe_rx_buf(const struct rtio_iodev_sqe *iodev_sqe, uint32_t min_buf_len, uint32_t max_buf_len, uint8_t **buf, uint32_t *buf_len)
Get the buffer associated with the RX submission.
- Parameters:
iodev_sqe – [in] The submission to probe
min_buf_len – [in] The minimum number of bytes needed for the operation
max_buf_len – [in] The maximum number of bytes needed for the operation
buf – [out] Where to store the pointer to the buffer
buf_len – [out] Where to store the size of the buffer
- Returns:
0 if buf and buf_len were successfully filled
- Returns:
-ENOMEM Not enough memory for min_buf_len
-
void rtio_release_buffer(struct rtio *r, void *buff, uint32_t buff_len)
Release memory that was allocated by the RTIO’s memory pool.
If the RTIO context was created by a call to RTIO_DEFINE_WITH_MEMPOOL(), then the cqe data might contain a buffer that’s owned by the RTIO context. In those cases (if the read request was configured via rtio_sqe_prep_read_with_pool()) the buffer must be returned to the pool.
Call this function when processing is complete. This function will validate that the memory actually belongs to the RTIO context and will ignore invalid arguments.
- Parameters:
r – RTIO context
buff – Pointer to the buffer to be released.
buff_len – Number of bytes to free (will be rounded up to nearest memory block).
-
static inline void rtio_access_grant(struct rtio *r, struct k_thread *t)
Grant access to an RTIO context to a user thread.
-
int rtio_sqe_cancel(struct rtio_sqe *sqe)
Attempt to cancel an SQE.
If possible (not currently executing), cancel an SQE and generate a failure with -ECANCELED result.
- Parameters:
sqe – [in] The SQE to cancel
- Returns:
0 if the SQE was flagged for cancellation
- Returns:
<0 on error
-
int rtio_sqe_copy_in_get_handles(struct rtio *r, const struct rtio_sqe *sqes, struct rtio_sqe **handle, size_t sqe_count)
Copy an array of SQEs into the queue and get resulting handles back.
Copies one or more SQEs into the RTIO context and optionally returns their generated SQE handles. Handles can be used to cancel events via the rtio_sqe_cancel() call.
- Parameters:
r – [in] RTIO context
sqes – [in] Pointer to an array of SQEs
handle – [out] Optional pointer to rtio_sqe pointer to store the handle of the first generated SQE. Use NULL to ignore.
sqe_count – [in] Count of sqes in array
- Return values:
0 – success
-ENOMEM – not enough room in the queue
-
static inline int rtio_sqe_copy_in(struct rtio *r, const struct rtio_sqe *sqes, size_t sqe_count)
Copy an array of SQEs into the queue.
Useful if a batch of submissions is stored in ROM or RTIO is used from user mode where a copy must be made.
Partial copying is not done as chained SQEs need to be submitted as a whole set.
- Parameters:
r – RTIO context
sqes – Pointer to an array of SQEs
sqe_count – Count of sqes in array
- Return values:
0 – success
-ENOMEM – not enough room in the queue
-
int rtio_cqe_copy_out(struct rtio *r, struct rtio_cqe *cqes, size_t cqe_count, k_timeout_t timeout)
Copy an array of CQEs from the queue.
Copies completion queue events from the RTIO context’s completion queue, waiting for the given time period to gather the number of completions requested.
- Parameters:
r – RTIO context
cqes – Pointer to an array of CQEs
cqe_count – Count of cqes in array
timeout – Timeout to wait for each completion event. Total wait time is potentially timeout*cqe_count at maximum.
- Return values:
copy_count – Count of copied CQEs (0 to cqe_count)
-
int rtio_submit(struct rtio *r, uint32_t wait_count)
Submit I/O requests to the underlying executor.
Submits the queue of submission queue events to the executor. The executor will do the work of managing tasks representing each submission chain, freeing submission queue events when done, and producing completion queue events as submissions are completed.
- Parameters:
r – RTIO context
wait_count – Number of submissions to wait for completion of.
- Return values:
0 – On success
Variables
-
struct k_mem_partition rtio_partition
The memory partition associated with all RTIO context information.
-
struct rtio_sqe
- #include <rtio.h>
A submission queue event.
Public Members
-
uint8_t op
Op code.
-
uint8_t prio
Op priority.
-
uint16_t flags
Op Flags.
-
uint16_t iodev_flags
Op iodev flags.
-
const struct rtio_iodev *iodev
Device to operate on.
-
void *userdata
User provided data which is returned upon operation completion.
Could be a pointer or integer.
If unique identification of completions is desired this should be unique as well.
-
uint32_t buf_len
Length of buffer.
-
uint8_t *buf
Buffer to use.
-
uint8_t tiny_buf_len
Length of tiny buffer.
-
uint8_t tiny_buf[7]
Tiny buffer.
-
void *arg0
Last argument given to callback.
-
uint32_t i2c_config
OP_I2C_CONFIGURE.
-
struct rtio_cqe
- #include <rtio.h>
A completion queue event.
-
struct rtio_sqe_pool
- #include <rtio.h>
-
struct rtio_cqe_pool
- #include <rtio.h>
-
struct rtio
- #include <rtio.h>
An RTIO context containing what can be viewed as a pair of queues.
A queue for submissions (available and in queue to be produced) as well as a queue of completions (available and ready to be consumed).
The rtio executor along with any objects implementing the rtio_iodev interface are the consumers of submissions and producers of completions.
No work is started until rtio_submit() is called.
-
struct rtio_iodev_sqe
- #include <rtio.h>
IO device submission queue entry.
May be cast safely to and from a rtio_sqe as they occupy the same memory provided by the pool.
-
struct rtio_iodev_api
- #include <rtio.h>
API that an RTIO IO device should implement.
Public Members
-
void (*submit)(struct rtio_iodev_sqe *iodev_sqe)
Submit to the iodev an entry to work on.
This call should be short in duration and most likely either enqueue or kick off an entry with the hardware.
- Param iodev_sqe:
Submission queue entry
-
struct rtio_iodev
- #include <rtio.h>
An IO device with a function table for submitting requests.
MPSC Lock-free Queue API
- group rtio_mpsc
RTIO Multiple Producer Single Consumer (MPSC) Queue API.
Defines
-
mpsc_ptr_get(ptr)
-
mpsc_ptr_set(ptr, val)
-
mpsc_ptr_set_get(ptr, val)
-
RTIO_MPSC_INIT(symbol)
Static initializer for a mpsc queue.
Since the queue is
- Parameters:
symbol – name of the queue
Typedefs
-
typedef atomic_ptr_t mpsc_ptr_t
Functions
-
static inline void rtio_mpsc_init(struct rtio_mpsc *q)
Initialize queue.
- Parameters:
q – Queue to initialize or reset
-
ALWAYS_INLINE static void rtio_mpsc_push(struct rtio_mpsc *q, struct rtio_mpsc_node *n)
Push a node.
- Parameters:
q – Queue to push the node to
n – Node to push into the queue
-
static inline struct rtio_mpsc_node *rtio_mpsc_pop(struct rtio_mpsc *q)
Pop a node off of the list.
- Return values:
NULL – When no node is available
node – When node is available
-
struct rtio_mpsc_node
- #include <rtio_mpsc.h>
Queue member.
-
struct rtio_mpsc
- #include <rtio_mpsc.h>
MPSC Queue.
SPSC Lock-free Queue API
- group rtio_spsc
RTIO Single Producer Single Consumer (SPSC) Queue API.
Defines
-
RTIO_SPSC_INITIALIZER(sz, buf)
Statically initialize an rtio_spsc.
- Parameters:
sz – Size of the spsc, must be power of 2 (ex: 2, 4, 8)
buf – Buffer pointer
-
RTIO_SPSC_DECLARE(name, type)
Declare an anonymous struct type for an rtio_spsc.
- Parameters:
name – Name of the spsc symbol to be provided
type – Type stored in the spsc
-
RTIO_SPSC_DEFINE(name, type, sz)
Define an rtio_spsc with a fixed size.
- Parameters:
name – Name of the spsc symbol to be provided
type – Type stored in the spsc
sz – Size of the spsc, must be power of 2 (ex: 2, 4, 8)
-
rtio_spsc_size(spsc)
Size of the SPSC queue.
- Parameters:
spsc – SPSC reference
-
rtio_spsc_reset(spsc)
Initialize/reset a spsc such that it’s empty.
Note that this is not safe to do while being used in a producer/consumer situation with multiple calling contexts (isrs/threads).
- Parameters:
spsc – SPSC to initialize/reset
-
rtio_spsc_acquire(spsc)
Acquire an element to produce from the SPSC.
- Parameters:
spsc – SPSC to acquire an element from for producing
- Returns:
A pointer to the acquired element or null if the spsc is full
-
rtio_spsc_produce(spsc)
Produce one previously acquired element to the SPSC.
This makes one element available to the consumer immediately
- Parameters:
spsc – SPSC to produce the previously acquired element or do nothing
-
rtio_spsc_produce_all(spsc)
Produce all previously acquired elements to the SPSC.
This makes all previously acquired elements available to the consumer immediately.
- Parameters:
spsc – SPSC to produce all previously acquired elements or do nothing
-
rtio_spsc_drop_all(spsc)
Drop all previously acquired elements.
This makes all previously acquired elements available to be acquired again.
- Parameters:
spsc – SPSC to drop all previously acquired elements or do nothing
-
rtio_spsc_consume(spsc)
Consume an element from the spsc.
- Parameters:
spsc – Spsc to consume from
- Returns:
Pointer to element or null if no consumable elements left
-
rtio_spsc_release(spsc)
Release a consumed element.
- Parameters:
spsc – SPSC to release consumed element or do nothing
-
rtio_spsc_release_all(spsc)
Release all consumed elements.
- Parameters:
spsc – SPSC to release consumed elements or do nothing
-
rtio_spsc_acquirable(spsc)
Count of acquirable in spsc.
- Parameters:
spsc – SPSC to get item count for
-
rtio_spsc_consumable(spsc)
Count of consumables in spsc.
- Parameters:
spsc – SPSC to get item count for
-
rtio_spsc_peek(spsc)
Peek at the first available item in queue.
- Parameters:
spsc – Spsc to peek into
- Returns:
Pointer to element or null if no consumable elements left
-
rtio_spsc_next(spsc, item)
Peek at the next item in the queue from a given one.
- Parameters:
spsc – SPSC to peek at
item – Pointer to an item in the queue
- Returns:
Pointer to element or null if none left
-
rtio_spsc_prev(spsc, item)
Get the previous item in the queue from a given one.
- Parameters:
spsc – SPSC to peek at
item – Pointer to an item in the queue
- Returns:
Pointer to element or null if none left
-
struct rtio_spsc
Common SPSC attributes.
Warning
Not to be manipulated without the macros!