diff --git a/images/satrs-example-dataflow/satrs-example-dataflow.graphml b/images/satrs-example-dataflow/satrs-example-dataflow.graphml
index 1fcee9f..3384f39 100644
--- a/images/satrs-example-dataflow/satrs-example-dataflow.graphml
+++ b/images/satrs-example-dataflow/satrs-example-dataflow.graphml
@@ -601,7 +601,7 @@ Components
-
+
Shared
@@ -671,7 +671,7 @@ Diagram
-
+
@@ -690,7 +690,7 @@ Diagram
-
+
@@ -698,12 +698,12 @@ Diagram
-
-
-
+
+
+
-
+
@@ -711,12 +711,12 @@ Diagram
-
+
-
+
-
+
@@ -803,7 +803,7 @@ Interface
-
+
@@ -851,7 +851,7 @@ Interface
-
+
@@ -860,7 +860,7 @@ Interface
-
+
@@ -959,6 +959,31 @@ Messages
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/satrs-book/book.toml b/satrs-book/book.toml
index 0fcdc62..7d56896 100644
--- a/satrs-book/book.toml
+++ b/satrs-book/book.toml
@@ -5,4 +5,5 @@ multilingual = false
src = "src"
title = "The sat-rs book"
+[output.html]
[output.linkcheck]
diff --git a/satrs-book/src/constrained-systems.md b/satrs-book/src/constrained-systems.md
index 4dbdb7a..3580df7 100644
--- a/satrs-book/src/constrained-systems.md
+++ b/satrs-book/src/constrained-systems.md
@@ -18,7 +18,7 @@ running out of memory (something even Rust can not protect from) or heap fragmen
A huge candidate for heap allocations is the TMTC and IPC handling. TCs, TMs and IPC data are all
candidates where the data size might vary greatly. The regular solution for host systems
might be to send around this data as a `Vec` until it is dropped. `sat-rs` provides
-another solution to avoid run-time allocations by offering and recommendng pre-allocated static
+another solution to avoid run-time allocations by offering pre-allocated static
pools.
These pools are split into subpools where each subpool can have different page sizes.
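The subpool idea can be sketched in plain Rust. This is only an illustration of the concept, not the actual sat-rs pool API: `StaticPool`, `Subpool` and the `(num_pages, page_size)` configuration tuples are invented for this sketch.

```rust
// Illustrative sketch only: a tiny pre-allocated pool with fixed-size subpools.
// All pages are allocated once at start-up; storing data never allocates.

struct Subpool {
    page_size: usize,
    pages: Vec<Vec<u8>>, // pre-allocated once, reused afterwards
    free: Vec<usize>,    // indices of currently free pages
}

struct StaticPool {
    subpools: Vec<Subpool>,
}

impl StaticPool {
    /// Pre-allocate all pages up front; no heap growth at run time.
    fn new(cfg: &[(usize, usize)]) -> Self {
        let subpools = cfg
            .iter()
            .map(|&(num_pages, page_size)| Subpool {
                page_size,
                pages: vec![vec![0u8; page_size]; num_pages],
                free: (0..num_pages).collect(),
            })
            .collect();
        Self { subpools }
    }

    /// Store data in the smallest subpool whose page size fits it.
    /// Returns a (subpool index, page index) address on success.
    fn add(&mut self, data: &[u8]) -> Option<(usize, usize)> {
        let (sp_idx, sp) = self
            .subpools
            .iter_mut()
            .enumerate()
            .filter(|(_, sp)| sp.page_size >= data.len())
            .min_by_key(|(_, sp)| sp.page_size)?;
        let page_idx = sp.free.pop()?;
        sp.pages[page_idx][..data.len()].copy_from_slice(data);
        Some((sp_idx, page_idx))
    }
}

fn main() {
    // Three subpools: 4 pages of 32 bytes, 2 pages of 64, 1 page of 256.
    let mut pool = StaticPool::new(&[(4, 32), (2, 64), (1, 256)]);
    let addr = pool.add(&[0xAB; 48]).expect("fits into the 64-byte subpool");
    assert_eq!(addr.0, 1); // second subpool selected
    println!("stored in subpool {}, page {}", addr.0, addr.1);
}
```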
diff --git a/satrs-book/src/example.md b/satrs-book/src/example.md
index 3d5f057..9152a18 100644
--- a/satrs-book/src/example.md
+++ b/satrs-book/src/example.md
@@ -14,7 +14,9 @@ a brief high-level view of the components used inside the example application:
![satrs-example component structure](images/satrs-example/satrs-example-structure.png)
-Some additional explanation is provided for the various components:
+The dotted lines denote optional components. In this case, the static pool components
+are optional because the heap can also be used as a simpler mechanism to store TMTC packets.
+Some additional explanation is provided for the various components.
### TCP/IP server components
@@ -40,12 +42,12 @@ The most important components of the TMTC infrastructure include the following c
handlers like the UDP and TCP server.
You can read the [Communications chapter](./communication.md) for more
-background information on the chose TMTC infrastructure approach.
+background information on the chosen TMTC infrastructure approach.
### PUS Service Components
A PUS service stack is provided which exposes some functionality conformant with the ECSS PUS
-service. This currently includes the following services:
+services. This currently includes the following services:
- Service 1 for telecommand verification. The verification handling is handled locally: Each
component which generates verification telemetry in some shape or form receives a
@@ -57,7 +59,7 @@ service. This currently includes the following services:
- Service 11 for scheduling telecommands to be released at a specific time. This component
uses the [PUS scheduler class](https://docs.rs/satrs-core/0.1.0-alpha.1/satrs_core/pus/scheduler/alloc_mod/struct.PusScheduler.html)
which performs the core logic of scheduling telecommands. All telecommands released by the
- scheduler are sent to the central TC source via a message.
+ scheduler are sent to the central TC source using a message.
- Service 17 for test purposes like pings.
### Event Management Component
@@ -68,7 +70,7 @@ is provided to handle the event IPC and FDIR mechanism. The event message are co
telemetry by the
[PUS event dispatcher](https://docs.rs/satrs-core/0.1.0-alpha.1/satrs_core/pus/event_man/alloc_mod/struct.PusEventDispatcher.html).
-You can read the [events](#events) chapter for more in-depth information about event management.
+You can read the [events](./events.md) chapter for more in-depth information about event management.
### Sample Application Components
@@ -83,7 +85,8 @@ The interaction of the various components is provided in the following diagram:
![satrs-example dataflow diagram](images/satrs-example/satrs-example-dataflow.png)
-An explanation for important component groups will be given
+It should be noted that an arrow coming out of a component group refers to multiple components
+in that group. An explanation of the most important component groups follows.
#### TMTC component group
@@ -94,17 +97,56 @@ In the future, this might be extended with the
[CCSDS File Delivery Protocol](https://public.ccsds.org/Pubs/727x0b5.pdf).
A client can connect to the UDP or TCP server to send these PUS packets to the on-board software.
-These servers then forwards the telecommads to a centralized TC source component using a PUS TC
-message.
+These servers then forward the telecommands to a centralized TC source component using a dedicated
+message abstraction.
-This TC source component then demultiplexes the message and forwards it to the relevant component.
-Right now, it forwards all PUS requests to the respective PUS service handlers, which run in a
-separate thread. In the future, additional forwarding to components like a CFDP handler might be
-added as well.
+This TC source component then demultiplexes the message and forwards it to the relevant components.
+Right now, it forwards all PUS requests to the respective PUS service handlers using the PUS
+receiver component. The individual PUS services are running in a separate thread. In the future,
+additional forwarding to components like a CFDP handler might be added as well. It should be noted
+that PUS11 commands might contain other PUS commands which should be scheduled in the future.
+These wrapped commands are forwarded to the PUS11 handler. When the scheduler releases those
+commands, it forwards the released commands to the TC source again. This allows the scheduler
+and the TC source to run in separate threads and keeps them cleanly separated.
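The routing described above can be sketched with standard library channels. `TcMessage` and the service-number routing table are stand-ins for the sat-rs message types, chosen only to show the demultiplexing pattern between threads:

```rust
// Sketch (not the actual sat-rs types): a TC source thread that demultiplexes
// incoming telecommands to per-service handlers via std mpsc channels.
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

/// Stand-in for a PUS TC: just the service number plus raw payload.
#[derive(Debug)]
struct TcMessage {
    service: u8,
    payload: Vec<u8>,
}

fn main() {
    let (tc_tx, tc_rx) = mpsc::channel::<TcMessage>();

    // One channel per PUS service handler (e.g. 11 = scheduling, 17 = test).
    let mut handlers: HashMap<u8, mpsc::Sender<TcMessage>> = HashMap::new();
    let (svc17_tx, svc17_rx) = mpsc::channel();
    handlers.insert(17, svc17_tx);

    // The TC source demultiplexes purely on the service number.
    let demux = thread::spawn(move || {
        while let Ok(tc) = tc_rx.recv() {
            if let Some(handler) = handlers.get(&tc.service) {
                handler.send(tc).unwrap();
            }
        }
    });

    // A server (or the scheduler releasing a stored TC) feeds the TC source.
    tc_tx.send(TcMessage { service: 17, payload: vec![0x01] }).unwrap();
    drop(tc_tx); // close the channel so the demux thread terminates

    let ping = svc17_rx.recv().unwrap();
    assert_eq!(ping.service, 17);
    demux.join().unwrap();
    println!("service 17 handler received {:?}", ping.payload);
}
```

Because the scheduler sends released commands back through the same `tc_tx` style handle, it stays decoupled from the demultiplexing logic.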
All telemetry generated by the on-board software is sent to a centralized TM funnel. This component
also performs a demultiplexing step to forward all telemetry to the relevant TM recipients.
In the example case, this is the last UDP client, or a connected TCP client. In the future,
-a forwarding to a persistent telemetry store and a simulated communication component might be
+forwarding to a persistent telemetry store and a simulated communication component might be
added here as well. The centralized TM funnel also takes care of some packet processing steps which
-need to be applied for each ECSS PUS packet.
+need to be applied for each ECSS PUS packet, for example APID specific CCSDS sequence count incrementation and
+PUS specific message counter incrementation.
+
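The per-packet bookkeeping performed by the funnel can be illustrated with a per-APID CCSDS sequence counter, which is 14 bits wide and therefore wraps at 16384. `SeqCountProvider` is an invented name for this sketch; the real funnel does more than this:

```rust
// Sketch of the TM funnel's bookkeeping: a per-APID CCSDS sequence counter.
use std::collections::HashMap;

#[derive(Default)]
struct SeqCountProvider {
    counters: HashMap<u16, u16>,
}

impl SeqCountProvider {
    /// Return the next sequence count for the given APID, wrapping at 2^14.
    fn next(&mut self, apid: u16) -> u16 {
        let ctr = self.counters.entry(apid).or_insert(0);
        let current = *ctr;
        *ctr = (*ctr + 1) % (1 << 14);
        current
    }
}

fn main() {
    let mut provider = SeqCountProvider::default();
    assert_eq!(provider.next(0x02), 0);
    assert_eq!(provider.next(0x02), 1);
    assert_eq!(provider.next(0x65), 0); // independent counter per APID
    println!("per-APID sequence counting works");
}
```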
+#### Application Group
+
+The application components generally do not receive raw PUS packets directly, even though
+this is certainly possible. Instead, they receive internalized messages from the PUS service
+handlers. For example, instead of receiving a PUS 8 Action Telecommand directly, an application
+component will receive a special `ActionRequest` message type reduced to the essential
+information required to execute the request. These special requests are denoted by the blue arrow
+in the diagram.
+
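Such a reduced message type might look like the following sketch. The `ActionRequest` shape and the app data layout (four ID bytes followed by parameters) are assumptions made for illustration, not the sat-rs definitions:

```rust
// Sketch: instead of a raw PUS 8 packet, an application component receives a
// reduced request type carrying only what it needs to execute the action.
use std::convert::TryInto;

#[derive(Debug, PartialEq)]
enum ActionRequest {
    /// Action identified by a u32 ID plus optional raw parameter data.
    UnsignedId { action_id: u32, params: Vec<u8> },
}

/// A PUS service handler would convert parsed TC application data into such a
/// request. The layout used here is an assumption for this sketch.
fn tc_to_action_request(app_data: &[u8]) -> Option<ActionRequest> {
    let id_bytes: [u8; 4] = app_data.get(..4)?.try_into().ok()?;
    Some(ActionRequest::UnsignedId {
        action_id: u32::from_be_bytes(id_bytes),
        params: app_data[4..].to_vec(),
    })
}

fn main() {
    let app_data = [0x00, 0x00, 0x00, 0x10, 0xCA, 0xFE];
    let req = tc_to_action_request(&app_data).unwrap();
    assert_eq!(
        req,
        ActionRequest::UnsignedId { action_id: 16, params: vec![0xCA, 0xFE] }
    );
    println!("{:?}", req);
}
```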
+It should be noted that the arrow pointing towards the event manager points in both directions.
+This is because the application components might be interested in events generated by other
+components as well. This mechanism is oftentimes used to implement the FDIR functionality on system
+and component level.
+
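The bidirectional flow can be pictured as a publish/subscribe step inside the event manager: components raise events, and other components (for example an FDIR handler) register interest in them. The types below are illustrative, not the sat-rs event API:

```rust
// Sketch of the bidirectional event flow between application components and
// the event manager, using std channels for subscription delivery.
use std::collections::HashMap;
use std::sync::mpsc;

type EventId = u32;

#[derive(Default)]
struct EventManager {
    subscribers: HashMap<EventId, Vec<mpsc::Sender<EventId>>>,
}

impl EventManager {
    fn subscribe(&mut self, event: EventId, tx: mpsc::Sender<EventId>) {
        self.subscribers.entry(event).or_default().push(tx);
    }

    /// Route a published event to every component subscribed to it.
    fn publish(&self, event: EventId) {
        if let Some(subs) = self.subscribers.get(&event) {
            for tx in subs {
                let _ = tx.send(event);
            }
        }
    }
}

fn main() {
    const TEMP_LIMIT_EXCEEDED: EventId = 42;
    let mut mgr = EventManager::default();

    // An FDIR component subscribes to an event raised elsewhere.
    let (fdir_tx, fdir_rx) = mpsc::channel();
    mgr.subscribe(TEMP_LIMIT_EXCEEDED, fdir_tx);

    mgr.publish(TEMP_LIMIT_EXCEEDED);
    assert_eq!(fdir_rx.recv().unwrap(), TEMP_LIMIT_EXCEEDED);
    println!("FDIR component received event {}", TEMP_LIMIT_EXCEEDED);
}
```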
+#### Shared components and functional interfaces
+
+It should be noted that sometimes, a functional interface is used instead of a message. This
+is used for the generation of verification telemetry. The verification reporter is a clonable
+component which generates and sends PUS1 verification telemetry directly to the TM funnel. This
+introduces a loose coupling to the PUS standard but was considered the easiest solution for
+a project which utilizes PUS as the main communication protocol. In the future, a generic
+verification abstraction might be introduced to completely decouple the application layer from
+PUS.
+
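The cloning idea behind the functional interface can be sketched as follows. The `VerificationReporter` here is a simplified stand-in (the real reporter lives in satrs-core and builds actual PUS 1 packets); only the clone-per-component pattern is the point:

```rust
// Sketch: a cloneable verification reporter that pushes PUS 1 telemetry
// straight into the TM funnel's channel via a functional interface.
use std::sync::mpsc;

#[derive(Clone)]
struct VerificationReporter {
    tm_funnel: mpsc::Sender<String>, // stand-in for a real TM packet type
}

impl VerificationReporter {
    fn acceptance_success(&self, tc_name: &str) {
        let _ = self.tm_funnel.send(format!("PUS1 acceptance ok: {}", tc_name));
    }
}

fn main() {
    let (tm_tx, tm_rx) = mpsc::channel();
    let reporter = VerificationReporter { tm_funnel: tm_tx };

    // Every component gets its own clone; all clones feed the same funnel.
    let clone_for_service17 = reporter.clone();
    clone_for_service17.acceptance_success("ping");

    assert_eq!(tm_rx.recv().unwrap(), "PUS1 acceptance ok: ping");
    println!("verification TM reached the funnel");
}
```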
+The same concept is applied if the backing store for TMTC packets is a set of shared pools. Every
+component which needs to read telecommands inside such a shared pool or generate new telemetry
+into it will receive a clonable shared handle to that pool.
+
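A shared handle of this kind can be modeled with `Arc<Mutex<...>>` from the standard library. The `SharedPool` alias below is a simplified stand-in for a real packet store; cloning the handle clones only the `Arc`, so all threads operate on the same backing memory:

```rust
// Sketch: a clonable shared pool handle used by producer and consumer threads.
use std::sync::{Arc, Mutex};
use std::thread;

type SharedPool = Arc<Mutex<Vec<Vec<u8>>>>; // simplified stand-in for a store

fn main() {
    let pool: SharedPool = Arc::new(Mutex::new(Vec::new()));

    // A TC receiver thread writes a packet into the shared pool.
    let writer_handle = pool.clone();
    let writer = thread::spawn(move || {
        writer_handle.lock().unwrap().push(vec![0x17, 0x01]);
    });
    writer.join().unwrap();

    // A service handler reads it back through its own handle.
    let packets = pool.lock().unwrap();
    assert_eq!(packets.len(), 1);
    assert_eq!(packets[0], vec![0x17, 0x01]);
    println!("both handles see the same pool contents");
}
```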
+The same concept could be extended to power or thermal handling. For example, a shared power helper
+component might be used to retrieve power state information and send power switch commands through
+a functional interface. The actual implementation of the functional interface might still use
+shared memory and/or messages, but the functional interface makes using and testing the interaction
+with these components easier.
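One way to realize such a power helper is a trait-based functional interface, which also makes testing against a mock straightforward. All names in this sketch (`PowerSwitcher`, `MockSwitcher`, the camera switch ID) are invented for illustration:

```rust
// Sketch of the suggested power helper: application code depends only on a
// trait, so it can be exercised against an in-memory test double.
trait PowerSwitcher {
    fn switch_on(&mut self, switch_id: u8);
    fn is_on(&self, switch_id: u8) -> bool;
}

/// Test double: records switch states in memory instead of commanding hardware.
#[derive(Default)]
struct MockSwitcher {
    on: std::collections::HashSet<u8>,
}

impl PowerSwitcher for MockSwitcher {
    fn switch_on(&mut self, switch_id: u8) {
        self.on.insert(switch_id);
    }
    fn is_on(&self, switch_id: u8) -> bool {
        self.on.contains(&switch_id)
    }
}

/// Application logic only sees the trait, not the implementation, which could
/// internally use shared memory and/or messages.
fn power_up_camera(switcher: &mut impl PowerSwitcher) {
    const CAMERA_SWITCH: u8 = 3;
    if !switcher.is_on(CAMERA_SWITCH) {
        switcher.switch_on(CAMERA_SWITCH);
    }
}

fn main() {
    let mut mock = MockSwitcher::default();
    power_up_camera(&mut mock);
    assert!(mock.is_on(3));
    println!("camera switch commanded via the functional interface");
}
```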
diff --git a/satrs-book/src/images/satrs-example/satrs-example-dataflow.png b/satrs-book/src/images/satrs-example/satrs-example-dataflow.png
index 0ec7ac1..4ca3b5d 100644
Binary files a/satrs-book/src/images/satrs-example/satrs-example-dataflow.png and b/satrs-book/src/images/satrs-example/satrs-example-dataflow.png differ