Event Manager Proxy

The Event Manager Proxy is a library that allows you to exchange Event Manager events between cores. It connects two separate instances of the Event Manager on different cores by passing registered events through the IPC service.

See the Event Manager Proxy sample for an example of how to use this library.

Configuration

To use the Event Manager Proxy, enable the CONFIG_EVENT_MANAGER_PROXY Kconfig option. This option depends on the CONFIG_IPC_SERVICE Kconfig option. Make sure that the IPC service is configured together with the backend you use.

When Event Manager Proxy is enabled, the required hooks in Application Event Manager are also enabled.

Additional configuration

You can also set the following Kconfig options when working with Event Manager Proxy:

  • CONFIG_EVENT_MANAGER_PROXY_CH_COUNT - This Kconfig option sets the number of IPC instances to use. The number corresponds to the number of cores with which events are exchanged. For example, with two cores there is one exchange taking place, so you need one IPC instance.

  • CONFIG_EVENT_MANAGER_PROXY_BOND_TIMEOUT_MS - This Kconfig option sets the timeout value for endpoint bonding, in milliseconds.

Implementing the proxy

When compiling code that uses the Event Manager Proxy, make sure that the event definitions shared between the cores are compiled for both cores. The declarations must be accessible when compiling the code for each core.

The application that wishes to use Event Manager Proxy requires a special initialization process. To implement an Event Manager Proxy, you must complete the following steps:

  1. Initialize Event Manager Proxy together with the Application Event Manager. The code should look as follows:

    /* Initialize Event Manager and Event Manager Proxy */
    ret = event_manager_init();
    /* Error handling */
    
  2. Add all remote IPC instances. The code should look as follows:

    ret = event_manager_proxy_add_remote(ipc1_instance);
    /* Error handling */
    ret = event_manager_proxy_add_remote(ipc2_instance);
    /* Error handling */
    
  3. Use the event_manager_proxy_subscribe() function, passing the required arguments, to subscribe to each selected event from the remote core. The auxiliary macro EVENT_MANAGER_PROXY_SUBSCRIBE prepares all the argument identifiers from the event definition. The code should look as follows:

    #include <event1_definition_file.h>
    #include <event2_definition_file.h>
    
    ret = EVENT_MANAGER_PROXY_SUBSCRIBE(ipc1_instance, event1);
    /* Error handling */
    ret = EVENT_MANAGER_PROXY_SUBSCRIBE(ipc2_instance, event2);
    /* Error handling */
    
  4. When all the events are subscribed, switch the Event Manager Proxy into the active state.

    ret = event_manager_proxy_start();
    /* Error handling */
    

    From this point, no more configuration messages are exchanged between the cores. Only subscribed events are transmitted.

  5. Remember that event_manager_proxy_start() must be called on both cores for events to be transmitted between them. If you want to be sure that the link between the cores is active before continuing, use the event_manager_proxy_wait_for_remotes() function. This function blocks until all registered instances report their readiness.

A call to event_manager_proxy_wait_for_remotes() is not required. The function is useful if you want to send an event that must be transmitted to all the registered remotes.

After the link is initialized, the subscribed events from the remote core appear in the local event queue. The events are then processed like any other local messages.

Implementation details

The proxy uses some of the Event Manager hooks to connect with the manager.

Initialization hook usage

The Application Event Manager provides an initialization hook for any module that relies on the Application Event Manager initialization before the first event is processed. The hook function should be declared in the int hook(void) format. If the hook function returns a non-zero value, the initialization process is interrupted and a related error is returned.

To register the initialization hook, use the macro APP_EVENT_MANAGER_HOOK_POSTINIT_REGISTER. For details, refer to API documentation.

The Event Manager Proxy uses the hook to append itself to the initialization procedure.

Tracing hook usage

The Application Event Manager uses a flexible mechanism to implement hooks when an event is submitted, before it is processed, and after it is processed. The tracing hooks were originally designed to implement event tracing, but you can use them for other purposes as well. The registered hook function should be declared in the void hook(const struct app_event_header *aeh) format.

Dedicated macros are implemented to register event tracing hooks. For details, refer to the API documentation.

The Event Manager Proxy uses only the post-process hook: after an event is processed on the local core, the hook sends it to every remote core that is registered as a listener.

Subscribing to remote events

A core that wishes to listen to events from the remote core sends a SUBSCRIBE command to that core during the initialization process. The command is sent using the event_manager_proxy_subscribe() function, which takes the following arguments:

  • ipc - This argument is an IPC instance that identifies the communication channel between the cores.

  • local_event_id - This argument is the local event ID that the remote core attaches to the event when it is post-processed there and transmitted back. It is also used to get the event name, which is matched against the event names on the remote core.

During command processing, the remote core searches for an event type with the given name and registers the received event ID in an array of events. This array directly mirrors the array of event types, so finding the remote event ID for the currently processed event has O(1) complexity. The most time-consuming search takes place during initialization, where events are looked up by name with O(N) complexity.

Sending the event to the remote core

After the event is processed locally, the event post-process hook in the Event Manager Proxy is executed. The proxy gets the event index and checks the matching position in the remote array. If a listener is registered for this event on any of the added remote IPC instances, the event is copied as-is, its event ID is replaced by the ID requested by the remote, and it is transmitted in that form. This way, the remote can copy the event as-is and use it as its own local event.

Passing the event from the remote core

Once both the remote and the local core have started the Event Manager Proxy by calling the event_manager_proxy_start() function, every piece of incoming data is treated as a single event. A new event is allocated with the event_manager_alloc() function and submitted to the event queue with the _event_submit() function. From that moment, the event is treated like any other locally generated event.

Note

If any event shared between the cores carries a memory pointer, the pointed-to memory must be accessible from the target core for that core to use the event.

Limitations

An event passed through the Event Manager Proxy is treated and processed in the same way as a locally generated event. The core that sources an event must not also subscribe to the same event on another core. If it does, once it receives such an event generated remotely, it automatically resends the event to the cores that subscribed to it. With two cores subscribed to the same event, the event, once generated, is sent between the cores indefinitely. The recommended approach is to define separate event types for each core, even if they look similar.

API documentation

Header file: include/event_manager_proxy.h
Source files: subsys/event_manager_proxy/
group event_manager_proxy

Event Manager Proxy.

Defines

EVENT_MANAGER_PROXY_SUBSCRIBE(instance, ename)

Subscribe for the remote event.

Register the listener for the event from the remote core.

Parameters:
  • instance – The instance used for IPC service to transfer data between cores.

  • ename – Name of the event. The event name has to be the same on remote and local cores.

Returns:

See event_manager_proxy_subscribe.

Functions

int event_manager_proxy_add_remote(const struct device *instance)

Add remote core communication channel.

This function registers the endpoint used for communication with another core.

Parameters:
  • instance – The instance used for IPC service to transfer data between cores.

Return values:
  • -EALREADY – Given remote instance was added already.

  • -ENOMEM – No space for a new endpoint. See CONFIG_EVENT_MANAGER_PROXY_CH_COUNT.

  • -EIO – Comes from IPC service, see ipc_service_open_instance or ipc_service_register_endpoint.

  • -EINVAL – Comes from IPC service, see ipc_service_open_instance or ipc_service_register_endpoint.

  • -EBUSY – Comes from IPC service, see ipc_service_register_endpoint.

  • 0 – On success.

  • other – errno codes depending on the IPC service backend implementation.

int event_manager_proxy_subscribe(const struct device *instance, const struct event_type *local_event_id)

Subscribe for the remote event.

This function registers the local event proxy with the remote event proxy to listen to the selected event.

Note

This function may wait for the IPC endpoint to bond. To make sure that bonding can complete, write the code so that all remotes are added first with event_manager_proxy_add_remote, and only then start adding listeners. Otherwise, if the other core adds more than one remote, there is a risk that this core waits for one endpoint to bond while the remote core waits for a different endpoint, and the requested endpoint is never configured.

Parameters:
  • instance – Remote IPC instance.

  • local_event_id – The local ID of the event to be received when the matching event is processed on the remote core.

Return values:
  • 0 – On success.

  • -EACCES – Function called after event_manager_proxy_start.

  • -EPIPE – The remote core did not bond within the timeout period. No communication.

  • -ETIME – Timeout while waiting for the response from the other core.

  • other – errno code.

int event_manager_proxy_start(void)

Start events transfer.

This function sends the start command to all added remote cores. This command finalizes the initialization process; after it is called, the functions event_manager_proxy_add_remote and event_manager_proxy_subscribe can no longer be used.

Return values:
  • 0 – On success.

  • -EPIPE – The remote core did not bond within the timeout period. No communication.

  • other – errno code.

int event_manager_proxy_wait_for_remotes(k_timeout_t timeout)

Wait for all the remote cores to report their readiness.

This function blocks the current thread until every registered remote finishes its initialization.

Note

Call this function only after event_manager_proxy_start.

Note

This function may be called multiple times.

Parameters:
  • timeout – Waiting period for all the remote cores to finish their initialization.

Return values:
  • 0 – On success.

  • -ETIME – Timeout reached.