forked from ROMEO/nexosim
First release candidate for v0.1.0
README.md

# Asynchronix

A high-performance asynchronous computation framework for system simulation.

Asynchronix is a developer-friendly, highly optimized discrete-event simulation
framework written in Rust. It is meant to scale from small, simple simulations
to very large simulation benches with complex time-driven state machines.
[](https://crates.io/crates/asynchronix)
[](https://docs.rs/asynchronix)
[](https://github.com/asynchronics/asynchronix#license)

## Overview
Asynchronix is a simulator that leverages asynchronous programming to
transparently and efficiently auto-parallelize simulations by means of a custom
multi-threaded executor.
It promotes a component-oriented architecture that is familiar to system
engineers and closely resembles [flow-based programming][FBP]: a model is
essentially an isolated entity with a fixed set of typed inputs and outputs,
communicating with other models through message passing via connections defined
during bench assembly.
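
For illustration, a model might look like the following sketch (a hypothetical
`Adder` with two inputs and one output, shown only to convey the shape of a
model; the runnable example further below covers bench assembly and
simulation):

```rust
use asynchronix::model::{Model, Output};

// A model with two typed inputs (`input_lhs`, `input_rhs`) and one typed
// output (`sum`).
#[derive(Default)]
pub struct Adder {
    lhs: u64,
    pub sum: Output<u64>,
}

impl Adder {
    // First input: stores the left-hand operand.
    pub fn input_lhs(&mut self, value: u64) {
        self.lhs = value;
    }

    // Second input: adds the right-hand operand and emits the result.
    pub async fn input_rhs(&mut self, value: u64) {
        self.sum.send(self.lhs + value).await;
    }
}

impl Model for Adder {}
```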
Although the main impetus for its development was the need for simulators able
to handle large cyberphysical systems, Asynchronix is a general-purpose
discrete-event simulator expected to be suitable for a wide range of simulation
activities. It draws from experience on spacecraft real-time simulators but
differs from existing tools in the space industry in a number of respects,
including:
1) *performance*: by taking advantage of Rust's excellent support for
   multithreading and asynchronous programming, simulation models can run
   efficiently in parallel with all required synchronization being transparently
   handled by the simulator,
2) *developer-friendliness*: an ergonomic API and Rust's support for algebraic
   types make it ideal for the "cyber" part in cyberphysical, i.e. for modelling
   digital devices with even very complex state machines,
3) *open-source*: last but not least, Asynchronix is distributed under the very
   permissive MIT and Apache 2 licenses, with the explicit intent to foster an
   ecosystem where models can be easily exchanged without reliance on
   proprietary APIs.
[FBP]: https://en.wikipedia.org/wiki/Flow-based_programming


## Documentation

The [API] documentation is relatively exhaustive and includes a practical
overview which should provide all necessary information to get started.

More fleshed-out examples can also be found in the dedicated
[directory](examples).

[API]: https://docs.rs/asynchronix


## Usage

Add this to your `Cargo.toml`:

```toml
[dependencies]
asynchronix = "0.1.0"
```


## Example

```rust
// A system made of 2 identical models.
// Each model is a 2× multiplier with an output delayed by 1s.
//
//              ┌──────────────┐      ┌──────────────┐
//              │              │      │              │
// Input ●─────▶│ multiplier 1 ├─────▶│ multiplier 2 ├─────▶ Output
//              │              │      │              │
//              └──────────────┘      └──────────────┘
use asynchronix::model::{Model, Output};
use asynchronix::simulation::{Mailbox, SimInit};
use asynchronix::time::{MonotonicTime, Scheduler};
use std::time::Duration;

// A model that doubles its input and forwards it with a 1s delay.
#[derive(Default)]
pub struct DelayedMultiplier {
    pub output: Output<f64>,
}
impl DelayedMultiplier {
    pub fn input(&mut self, value: f64, scheduler: &Scheduler<Self>) {
        scheduler
            .schedule_in(Duration::from_secs(1), Self::send, 2.0 * value)
            .unwrap();
    }
    async fn send(&mut self, value: f64) {
        self.output.send(value).await;
    }
}
impl Model for DelayedMultiplier {}

// Instantiate models and their mailboxes.
let mut multiplier1 = DelayedMultiplier::default();
let mut multiplier2 = DelayedMultiplier::default();
let multiplier1_mbox = Mailbox::new();
let multiplier2_mbox = Mailbox::new();

// Connect the output of `multiplier1` to the input of `multiplier2`.
multiplier1
    .output
    .connect(DelayedMultiplier::input, &multiplier2_mbox);

// Keep handles to the main input and output.
let mut output_slot = multiplier2.output.connect_slot().0;
let input_address = multiplier1_mbox.address();

// Instantiate the simulator
let t0 = MonotonicTime::EPOCH; // arbitrary start time
let mut simu = SimInit::new()
    .add_model(multiplier1, multiplier1_mbox)
    .add_model(multiplier2, multiplier2_mbox)
    .init(t0);

// Send a value to the first multiplier.
simu.send_event(DelayedMultiplier::input, 3.5, &input_address);

// Advance time to the next event.
simu.step();
assert_eq!(simu.time(), t0 + Duration::from_secs(1));
assert_eq!(output_slot.take(), None);

// Advance time to the next event.
simu.step();
assert_eq!(simu.time(), t0 + Duration::from_secs(2));
assert_eq!(output_slot.take(), Some(14.0));
```


## Implementation notes

Under the hood, Asynchronix is based on an asynchronous implementation of the
actor model, where each simulation model is an actor. The messages actually
exchanged between models are `async` closures which capture the event's or
request's value and take the model as a `&mut self` argument. The mailbox
associated with a model, to which these closures are forwarded, is the receiver
of an async, bounded MPSC channel.
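
To give a rough intuition of this design, here is a deliberately simplified,
hypothetical sketch that uses synchronous closures and a plain queue in place
of the `async` closures and bounded MPSC channel described above:

```rust
use std::collections::VecDeque;

// A toy model standing in for a simulation model.
struct Counter {
    count: u64,
}

impl Counter {
    fn increment(&mut self, by: u64) {
        self.count += by;
    }
}

// A "message" is a boxed closure that captures the input value and is applied
// to the model by mutable reference when the mailbox is processed.
type Message<M> = Box<dyn FnOnce(&mut M) + Send>;

fn main() {
    // The "mailbox": an ordered queue of messages.
    let mut mailbox: VecDeque<Message<Counter>> = VecDeque::new();

    // Sending an event captures its value inside a closure...
    mailbox.push_back(Box::new(|model| model.increment(3)));
    mailbox.push_back(Box::new(|model| model.increment(4)));

    // ...and the model later processes messages in their order of arrival.
    let mut counter = Counter { count: 0 };
    while let Some(msg) = mailbox.pop_front() {
        msg(&mut counter);
    }
    assert_eq!(counter.count, 7);
}
```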

Computations proceed at discrete times. When executed, models can request the
scheduler to send an event (or rather, a closure capturing such an event) at a
certain simulation time. Whenever computations for the current time complete,
the scheduler selects the nearest future time at which one or several events are
scheduled (*next event increment*), thus triggering another set of computations.
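
The scheme can be illustrated with a minimal, hypothetical event queue (this is
not Asynchronix's actual scheduler, just a sketch of the *next event increment*
idea):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;
use std::time::Duration;

// A time-ordered queue of scheduled events, identified here by plain IDs.
struct EventQueue {
    // `Reverse` turns the max-heap into a min-heap: the earliest time pops first.
    events: BinaryHeap<Reverse<(Duration, u64)>>,
    next_id: u64,
}

impl EventQueue {
    fn new() -> Self {
        Self { events: BinaryHeap::new(), next_id: 0 }
    }

    fn schedule(&mut self, time: Duration) {
        self.events.push(Reverse((time, self.next_id)));
        self.next_id += 1;
    }

    // The "next event increment": jump to the earliest scheduled time and
    // collect every event scheduled for exactly that time.
    fn pop_next(&mut self) -> Option<(Duration, Vec<u64>)> {
        let Reverse((time, id)) = self.events.pop()?;
        let mut ids = vec![id];
        while let Some(&Reverse((t, _))) = self.events.peek() {
            if t != time {
                break;
            }
            let Reverse((_, next)) = self.events.pop().unwrap();
            ids.push(next);
        }
        Some((time, ids))
    }
}

fn main() {
    let mut queue = EventQueue::new();
    queue.schedule(Duration::from_secs(2));
    queue.schedule(Duration::from_secs(1));
    queue.schedule(Duration::from_secs(1));

    // Simulation time advances directly to t = 1s, where two events fire.
    let (time, ids) = queue.pop_next().unwrap();
    assert_eq!(time, Duration::from_secs(1));
    assert_eq!(ids.len(), 2);
}
```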

This computational process makes it difficult to use general-purpose
asynchronous runtimes such as [Tokio][tokio], because the end of a set of
computations is technically a deadlock: the computation completes when all
models have nothing left to do and are blocked on an empty mailbox. Also,
instead of managing a conventional reactor, the runtime manages a priority queue
containing the posted events. For these reasons, Asynchronix relies on a fully
custom runtime.

Another crucial aspect of async compute is message-passing efficiency:
oftentimes the processing of an input is a simple action, making inter-thread
message-passing the bottleneck. This in turn calls for a very efficient channel
implementation, heavily optimized for the case of starved receivers, since
models spend most of their time waiting for an input to become available.
Even though the runtime was largely influenced by Tokio, it features additional
optimizations that make it faster than any other multi-threaded Rust executor on
the typically message-passing-heavy workloads seen in discrete-event simulation
(see [benchmark]). Asynchronix also improves over the state of the art with a
very fast custom MPSC channel, whose performance has been demonstrated through
[Tachyonix][tachyonix], a general-purpose offshoot of this channel.
[tokio]: https://github.com/tokio-rs/tokio
[tachyonix]: https://github.com/asynchronics/tachyonix