Instead of producing a SimInit object, a bench is now expected to return
a fully constructed simulation with its scheduler.
This means that the client does not necessarily need to provide the
starting time for the simulation. This start time may be hardcoded in
the bench, or may be taken as a parameter for the bench configuration.
This change makes it possible for benches to do more, for instance to
pre-schedule some events, or to do less, for instance by hardcoding the
simulation start time rather than accepting an arbitrary one.
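As an illustration, a bench under the new contract could look roughly like
the sketch below. The crate paths, the model, the configuration handling
and the exact `init` signature are assumptions made for the example; only
the overall shape (the bench builds everything itself and returns the
simulation together with its scheduler rather than a `SimInit`) reflects
the change described here.

```rust
use asynchronix::model::Model;
use asynchronix::simulation::{Mailbox, Scheduler, SimInit, Simulation};
use asynchronix::time::MonotonicTime;

#[derive(Default)]
struct MyModel {}
impl Model for MyModel {}

/// Hypothetical bench that hardcodes the simulation start time.
fn my_bench() -> (Simulation, Scheduler) {
    let model = MyModel::default();
    let mailbox = Mailbox::new();

    // The start time is decided by the bench, not by the client; it could
    // also be taken as a bench configuration parameter instead.
    let t0 = MonotonicTime::EPOCH;

    // Events could be pre-scheduled here before handing the simulation back.
    SimInit::new()
        .add_model(model, mailbox, "my_model")
        .init(t0) // assumed here to hand back the scheduler as well
}
```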
The API style is now more uniform: both are passed by mutable reference
and only expose accessors. Additionally, the methods that were
previously accessed through the scheduler field are now implemented
directly on `Context`.
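For instance, a model input that reschedules itself would now call the
scheduling method directly on the context. The model, the input name and
the exact signatures below are assumptions made for the sketch.

```rust
use std::time::Duration;

use asynchronix::model::{Context, Model};

struct Heartbeat {}
impl Model for Heartbeat {}

impl Heartbeat {
    async fn tick(&mut self, _arg: (), cx: &mut Context<Self>) {
        // Previously this went through a scheduler field, e.g.
        // `cx.scheduler.schedule_event(...)`; the method is now assumed
        // to be available on the context itself.
        cx.schedule_event(Duration::from_secs(1), Self::tick, ())
            .expect("the deadline should not lie in the past");
    }
}
```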
Previously, the scheduler key used the target model as a subkey to order
messages that target the same model.
Now this subkey is the origin model rather than the target, or 0 in the
case of the global scheduler. This doesn't change anything in practice
for the local scheduler, since the origin and target models were the
same, but for the global scheduler it provides additional guarantees.
For instance, if the global scheduler is used to schedule an event
targeting model A and then an event targeting model B where the latter
triggers a message to A, it is now guaranteed that the first message
reaches A before the second.
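To make the ordering concrete, here is a small self-contained sketch of
the idea; the composite key is a simplified stand-in, not the scheduler's
actual internal representation.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Events are ordered by deadline, then by origin (0 for the global
// scheduler, the origin model's id otherwise), then by insertion order.
type Key = (u64 /* deadline */, u64 /* origin */, u64 /* insertion seq */);

fn main() {
    let mut queue: BinaryHeap<Reverse<(Key, &str)>> = BinaryHeap::new();

    // Two events scheduled at the same deadline through the global
    // scheduler (origin 0): the one scheduled first is always popped
    // first, so a message it sends to model A cannot be overtaken by a
    // message triggered by the second event.
    queue.push(Reverse(((100, 0, 0), "event targeting model A")));
    queue.push(Reverse(((100, 0, 1), "event targeting model B")));

    while let Some(Reverse((_, what))) = queue.pop() {
        println!("{what}");
    }
}
```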
Now that `step_by` returns an error anyway (it was infallible before),
there is no longer any incentive to keep it as a separate method.
The `step_until` method now accepts an `impl Deadline`, which covers
both cases (`Duration` and `MonotonicTime`).
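As a self-contained illustration of how a single method can cover both
forms, here is a simplified stand-in for such a trait; the types and
method names are assumptions and not the library's actual definitions.

```rust
use std::time::Duration;

/// Simplified stand-in for a monotonic timestamp (nanoseconds since epoch).
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
struct MonotonicTime(u64);

/// Anything that can be resolved into an absolute deadline.
trait Deadline {
    fn into_time(self, now: MonotonicTime) -> MonotonicTime;
}

impl Deadline for MonotonicTime {
    fn into_time(self, _now: MonotonicTime) -> MonotonicTime {
        self // already absolute
    }
}

impl Deadline for Duration {
    fn into_time(self, now: MonotonicTime) -> MonotonicTime {
        MonotonicTime(now.0 + self.as_nanos() as u64) // relative to now
    }
}

/// A `step_until`-like method can then accept either form.
fn step_until(now: MonotonicTime, deadline: impl Deadline) -> MonotonicTime {
    deadline.into_time(now)
}

fn main() {
    let now = MonotonicTime(1_000);
    assert_eq!(step_until(now, Duration::from_nanos(500)), MonotonicTime(1_500));
    assert_eq!(step_until(now, MonotonicTime(2_000)), MonotonicTime(2_000));
}
```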
The build context is now passed as a mutable reference due to the need
to mutate data when adding a model.
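A minimal stand-alone sketch of the rationale, with made-up names rather
than the actual API: registering a model appends to state owned by the
context, hence the mutable borrow.

```rust
/// Simplified stand-in for a build context that owns mutable state.
struct BuildContext {
    registered: Vec<String>,
}

impl BuildContext {
    fn add_model(&mut self, name: &str) {
        // Adding a model mutates the context, so it must be passed as
        // `&mut BuildContext` to the code that builds (sub)models.
        self.registered.push(name.to_owned());
    }
}

fn build(cx: &mut BuildContext) {
    cx.add_model("submodel");
}

fn main() {
    let mut cx = BuildContext { registered: Vec::new() };
    build(&mut cx);
    assert_eq!(cx.registered.len(), 1);
}
```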
Contains small unrelated cleanups and documentation improvements too.
The external_input example has been adapted as well and (at least
temporarily) simplified/modified to remove the dependencies on
`atomic_wait` and `mio`.
TODO: return the list of models involved in a deadlock.
Note that many execution errors are not implemented at all at the
moment and will need separate PRs, namely:
- Terminated
- ModelError
- Panic
This makes it possible to concurrently control and monitor the
simulation when using gRPC.
Accordingly, the gRPC server now runs on 2 threads so it can serve
control and monitoring requests concurrently.
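As an illustration only, assuming the server runs on a Tokio runtime
(e.g. through tonic), a two-worker-thread setup could look roughly like
this; the actual implementation may arrange its threads differently.

```rust
fn main() {
    // Two worker threads so that a long-running control request does not
    // block monitoring requests (and vice versa).
    let runtime = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(2)
        .enable_all()
        .build()
        .expect("failed to build the runtime");

    runtime.block_on(async {
        // Serve the control and monitoring gRPC endpoints here.
    });
}
```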
From Rust 1.78, `Waker::will_wake` tests equality by comparing the VTable
pointers rather than the content of the VTable.
Unfortunately, this exposes some instability in the code generation
which sometimes causes several VTables to be instantiated in memory for
the same generic parameters. This can in turn defeat `Waker::will_wake`
if e.g. `Waker::clone` and `Waker::wake_by_*` end up with different
pointers.
The problem is hopefully addressed by preventing inlining of the VTable
generation function. A test has been added to try to detect regressions,
though it may not be 100% reliable.
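As a rough sketch of the kind of mitigation described, using a hand-rolled
`RawWaker` vtable with made-up function names (this is not the project's
actual waker code):

```rust
use std::task::{RawWaker, RawWakerVTable};

// Stub raw-waker callbacks; a real executor would manage task state here.
unsafe fn clone_raw<T>(data: *const ()) -> RawWaker {
    RawWaker::new(data, vtable::<T>())
}
unsafe fn wake_raw<T>(_data: *const ()) {}
unsafe fn wake_by_ref_raw<T>(_data: *const ()) {}
unsafe fn drop_raw<T>(_data: *const ()) {}

/// Single source of the vtable for a given `T`.
///
/// `#[inline(never)]` is the mitigation: it discourages the compiler from
/// duplicating the vtable across codegen units, which could otherwise make
/// `Waker::will_wake` compare two distinct (but equivalent) vtable pointers
/// and report `false`.
#[inline(never)]
fn vtable<T>() -> &'static RawWakerVTable {
    &RawWakerVTable::new(
        clone_raw::<T>,
        wake_raw::<T>,
        wake_by_ref_raw::<T>,
        drop_raw::<T>,
    )
}
```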