Merge pull request 're-worked TMTC modules' (#155) from rework-tmtc-modules into main

Reviewed-on: #155
Robin Müller 2024-04-16 11:10:52 +02:00
commit 786671bbd7
53 changed files with 2030 additions and 2939 deletions

View File

@@ -24,11 +24,6 @@ A lot of the architecture and general design considerations are based on the
 through the 2 missions [FLP](https://www.irs.uni-stuttgart.de/en/research/satellitetechnology-and-instruments/smallsatelliteprogram/flying-laptop/)
 and [EIVE](https://www.irs.uni-stuttgart.de/en/research/satellitetechnology-and-instruments/smallsatelliteprogram/EIVE/).
-This framework is in the early stages of development. Important features are missing. New releases
-with breaking changes are released regularly, with all changes documented inside respective
-changelog files. You should only use this framework if your are willing to work in this
-environment.
 # Overview
 This project currently contains the following crates:

View File

@@ -17,7 +17,7 @@ it is still centered around small packets. `sat-rs` provides support for these E
 standards and also attempts to fill the gap to the internet protocol by providing the following
 components.
-1. [UDP TMTC Server](https://docs.rs/satrs/latest/satrs/hal/host/udp_server/index.html).
+1. [UDP TMTC Server](https://docs.rs/satrs/latest/satrs/hal/std/udp_server/index.html).
    UDP is already packet based which makes it an excellent fit for exchanging space packets.
 2. [TCP TMTC Server Components](https://docs.rs/satrs/latest/satrs/hal/std/tcp_server/index.html).
    TCP is a stream based protocol, so the library provides building blocks to parse telemetry
@@ -39,8 +39,12 @@ task might be to store all arriving telemetry persistently. This is especially i
 space systems which do not have permanent contact like low-earth-orbit (LEO) satellites.
 The most important task of a TC source is to deliver the telecommands to the correct recipients.
-For modern component oriented software using message passing, this usually includes staged
-demultiplexing components to determine where a command needs to be sent.
+For component oriented software using message passing, this usually includes staged demultiplexing
+components to determine where a command needs to be sent.
+Using a generic concept of a TC source and a TM sink as part of the software design increases the
+flexibility of the TMTC infrastructure: newly added TM generators and TC receivers only have to
+forward their generated or received packets to those handler objects.
 # Low-level protocols and the bridge to the communication subsystem
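The TC source / TM sink paragraph added in the hunk above is the core design idea of this rework. As a minimal sketch of that idea (hypothetical trait names chosen purely for illustration; the concrete satrs abstractions are `PacketSenderRaw` and `PacketSource`, visible in the interface diffs further down):

```rust
/// Hypothetical TC source: anything that can accept a raw telecommand.
pub trait TcSource {
    fn pass_tc(&self, tc_raw: &[u8]);
}

/// Hypothetical TM sink: anything that can accept a generated telemetry packet.
pub trait TmSink {
    fn consume_tm(&self, tm_raw: &[u8]);
}

/// Trivial sink that only counts packets, standing in for a real TM funnel.
#[derive(Default)]
pub struct CountingSink(std::cell::Cell<usize>);

impl TmSink for CountingSink {
    fn consume_tm(&self, _tm_raw: &[u8]) {
        self.0.set(self.0.get() + 1);
    }
}

fn main() {
    // A new TM generator only needs a TmSink handle, not the routing details.
    let sink = CountingSink::default();
    sink.consume_tm(&[0x08, 0x00, 0xc0, 0x00, 0x00, 0x00]);
    assert_eq!(sink.0.get(), 1);
}
```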

View File

@@ -1,11 +1,11 @@
 # Modes
-Modes are an extremely useful concept for complex system in general. They also allow simplified
-system reasoning for both system operators and OBSW developers. They model the behaviour of a
-component and also provide observability of a system. A few examples of how to model
-different components of a space system with modes will be given.
+Modes are an extremely useful concept to model complex systems. They allow simplified
+system reasoning for both system operators and OBSW developers. They also provide a way to alter
+the behaviour of a component and also provide observability of a system. A few examples of how to
+model the mode of different components within a space system will be given.
-## Modelling a pyhsical devices with modes
+## Physical device component with modes
 The following simple mode scheme with the three modes
@@ -13,7 +13,8 @@ The following simple mode scheme with the following three mode
 - `ON`
 - `NORMAL`
-can be applied to a large number of simpler devices of a remote system, for example sensors.
+can be applied to a large number of simpler device controllers of a remote system, for example
+sensors.
 1. `OFF` means that a device is physically switched off, and the corresponding software component
    does not poll the device regularly.
@@ -31,7 +32,7 @@ for the majority of devices:
 2. `NORMAL` or `ON` to `OFF`: Any important shutdown configuration or handling must be performed
    before powering off the device.
-## Modelling a controller with modes
+## Controller components with modes
 Controller components are not modelling physical devices, but a mode scheme is still the best
 way to model most of these components.

View File

@@ -11,7 +11,7 @@ use std::sync::{Arc, Mutex};
 use satrs::mode::{
     ModeAndSubmode, ModeError, ModeProvider, ModeReply, ModeRequest, ModeRequestHandler,
 };
-use satrs::pus::{EcssTmSenderCore, PusTmVariant};
+use satrs::pus::{EcssTmSender, PusTmVariant};
 use satrs::request::{GenericMessage, MessageMetadata, UniqueApidTargetId};
 use satrs_example::config::components::PUS_MODE_SERVICE;
@@ -64,7 +64,7 @@ pub struct MpscModeLeafInterface {
 /// Example MGM device handler strongly based on the LIS3MDL MEMS device.
 #[derive(new)]
 #[allow(clippy::too_many_arguments)]
-pub struct MgmHandlerLis3Mdl<ComInterface: SpiInterface, TmSender: EcssTmSenderCore> {
+pub struct MgmHandlerLis3Mdl<ComInterface: SpiInterface, TmSender: EcssTmSender> {
     id: UniqueApidTargetId,
     dev_str: &'static str,
     mode_interface: MpscModeLeafInterface,
@@ -85,9 +85,7 @@ pub struct MgmHandlerLis3Mdl<ComInterface: SpiInterface, TmSender: EcssTmSenderC
     stamp_helper: TimeStampHelper,
 }
-impl<ComInterface: SpiInterface, TmSender: EcssTmSenderCore>
-    MgmHandlerLis3Mdl<ComInterface, TmSender>
-{
+impl<ComInterface: SpiInterface, TmSender: EcssTmSender> MgmHandlerLis3Mdl<ComInterface, TmSender> {
     pub fn periodic_operation(&mut self) {
         self.stamp_helper.update_from_now();
         // Handle requests.
@@ -203,7 +201,7 @@ impl<ComInterface: SpiInterface, TmSender: EcssTmSenderCore>
     }
 }
-impl<ComInterface: SpiInterface, TmSender: EcssTmSenderCore> ModeProvider
+impl<ComInterface: SpiInterface, TmSender: EcssTmSender> ModeProvider
     for MgmHandlerLis3Mdl<ComInterface, TmSender>
 {
     fn mode_and_submode(&self) -> ModeAndSubmode {
@@ -211,7 +209,7 @@ impl<ComInterface: SpiInterface, TmSender: EcssTmSenderCore> ModeProvider
     }
 }
-impl<ComInterface: SpiInterface, TmSender: EcssTmSenderCore> ModeRequestHandler
+impl<ComInterface: SpiInterface, TmSender: EcssTmSender> ModeRequestHandler
     for MgmHandlerLis3Mdl<ComInterface, TmSender>
 {
     type Error = ModeError;

View File

@@ -132,6 +132,7 @@ pub mod components {
         GenericPus = 2,
         Acs = 3,
         Cfdp = 4,
+        Tmtc = 5,
     }
     // Component IDs for components with the PUS APID.
@@ -150,6 +151,12 @@ pub mod components {
         Mgm0 = 0,
     }
+    #[derive(Copy, Clone, PartialEq, Eq)]
+    pub enum TmtcId {
+        UdpServer = 0,
+        TcpServer = 1,
+    }
+
     pub const PUS_ACTION_SERVICE: UniqueApidTargetId =
         UniqueApidTargetId::new(Apid::GenericPus as u16, PusId::PusAction as u32);
     pub const PUS_EVENT_MANAGEMENT: UniqueApidTargetId =
@@ -166,6 +173,10 @@ pub mod components {
         UniqueApidTargetId::new(Apid::Sched as u16, 0);
     pub const MGM_HANDLER_0: UniqueApidTargetId =
         UniqueApidTargetId::new(Apid::Acs as u16, AcsId::Mgm0 as u32);
+    pub const UDP_SERVER: UniqueApidTargetId =
+        UniqueApidTargetId::new(Apid::Tmtc as u16, TmtcId::UdpServer as u32);
+    pub const TCP_SERVER: UniqueApidTargetId =
+        UniqueApidTargetId::new(Apid::Tmtc as u16, TmtcId::TcpServer as u32);
 }
 pub mod pool {
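The new `Tmtc` APID and `TmtcId` variants give both servers dedicated component IDs, consumed in the `main.rs` diff below via `UDP_SERVER.id()` and `TCP_SERVER.id()`. How `UniqueApidTargetId` flattens the pair into one number is not shown in this diff; the packing below is an assumption purely for illustration:

```rust
// Hypothetical packing: APID in the upper 32 bits, unique ID in the lower.
fn raw_component_id(apid: u16, unique_id: u32) -> u64 {
    ((apid as u64) << 32) | unique_id as u64
}

fn main() {
    // Apid::Tmtc = 5, TmtcId::TcpServer = 1, as introduced in this hunk.
    let tcp_server_id = raw_component_id(5, 1);
    assert_eq!(tcp_server_id, (5u64 << 32) | 1);
    println!("{tcp_server_id:#018x}");
}
```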

View File

@@ -5,7 +5,7 @@ use satrs::event_man::{EventMessageU32, EventRoutingError};
 use satrs::params::WritableToBeBytes;
 use satrs::pus::event::EventTmHookProvider;
 use satrs::pus::verification::VerificationReporter;
-use satrs::pus::EcssTmSenderCore;
+use satrs::pus::EcssTmSender;
 use satrs::request::UniqueApidTargetId;
 use satrs::{
     event_man::{
@@ -38,7 +38,7 @@ impl EventTmHookProvider for EventApidSetter {
 /// The PUS event handler subscribes for all events and converts them into ECSS PUS 5 event
 /// packets. It also handles the verification completion of PUS event service requests.
-pub struct PusEventHandler<TmSender: EcssTmSenderCore> {
+pub struct PusEventHandler<TmSender: EcssTmSender> {
     event_request_rx: mpsc::Receiver<EventRequestWithToken>,
     pus_event_dispatcher: DefaultPusEventU32Dispatcher<()>,
     pus_event_man_rx: mpsc::Receiver<EventMessageU32>,
@@ -49,7 +49,7 @@ pub struct PusEventHandler<TmSender: EcssTmSenderCore> {
     event_apid_setter: EventApidSetter,
 }
-impl<TmSender: EcssTmSenderCore> PusEventHandler<TmSender> {
+impl<TmSender: EcssTmSender> PusEventHandler<TmSender> {
     pub fn new(
         tm_sender: TmSender,
         verif_handler: VerificationReporter,
@@ -177,12 +177,12 @@ impl EventManagerWrapper {
     }
 }
-pub struct EventHandler<TmSender: EcssTmSenderCore> {
+pub struct EventHandler<TmSender: EcssTmSender> {
     pub event_man_wrapper: EventManagerWrapper,
     pub pus_event_handler: PusEventHandler<TmSender>,
 }
-impl<TmSender: EcssTmSenderCore> EventHandler<TmSender> {
+impl<TmSender: EcssTmSender> EventHandler<TmSender> {
     pub fn new(
         tm_sender: TmSender,
         event_request_rx: mpsc::Receiver<EventRequestWithToken>,

View File

@@ -1,21 +1,41 @@
 use std::{
     collections::{HashSet, VecDeque},
+    fmt::Debug,
+    marker::PhantomData,
     sync::{Arc, Mutex},
 };
 use log::{info, warn};
 use satrs::{
+    encoding::ccsds::{SpValidity, SpacePacketValidator},
     hal::std::tcp_server::{HandledConnectionHandler, ServerConfig, TcpSpacepacketsServer},
-    pus::ReceivesEcssPusTc,
-    spacepackets::PacketId,
-    tmtc::{CcsdsDistributor, CcsdsError, ReceivesCcsdsTc, TmPacketSourceCore},
+    spacepackets::{CcsdsPacket, PacketId},
+    tmtc::{PacketSenderRaw, PacketSource},
 };
-use crate::tmtc::ccsds::CcsdsReceiver;
 #[derive(Default)]
 pub struct ConnectionFinishedHandler {}
+
+pub struct SimplePacketValidator {
+    pub valid_ids: HashSet<PacketId>,
+}
+
+impl SpacePacketValidator for SimplePacketValidator {
+    fn validate(
+        &self,
+        sp_header: &satrs::spacepackets::SpHeader,
+        _raw_buf: &[u8],
+    ) -> satrs::encoding::ccsds::SpValidity {
+        if self.valid_ids.contains(&sp_header.packet_id()) {
+            return SpValidity::Valid;
+        }
+        log::warn!("ignoring space packet with header {:?}", sp_header);
+        // We could perform a CRC check... but let's keep this simple and assume
+        // that TCP ensures data integrity.
+        SpValidity::Skip
+    }
+}
 impl HandledConnectionHandler for ConnectionFinishedHandler {
     fn handled_connection(&mut self, info: satrs::hal::std::tcp_server::HandledConnectionInfo) {
         info!(
@@ -53,7 +73,7 @@ impl SyncTcpTmSource {
     }
 }
-impl TmPacketSourceCore for SyncTcpTmSource {
+impl PacketSource for SyncTcpTmSource {
     type Error = ();
     fn retrieve_packet(&mut self, buffer: &mut [u8]) -> Result<usize, Self::Error> {
@@ -81,56 +101,45 @@ impl TmPacketSourceCore for SyncTcpTmSource {
     }
 }
-pub type TcpServerType<TcSource, MpscErrorType> = TcpSpacepacketsServer<
+pub type TcpServer<ReceivesTc, SendError> = TcpSpacepacketsServer<
     SyncTcpTmSource,
-    CcsdsDistributor<CcsdsReceiver<TcSource, MpscErrorType>, MpscErrorType>,
-    HashSet<PacketId>,
+    ReceivesTc,
+    SimplePacketValidator,
     ConnectionFinishedHandler,
     (),
-    CcsdsError<MpscErrorType>,
+    SendError,
 >;
-pub struct TcpTask<
-    TcSource: ReceivesCcsdsTc<Error = MpscErrorType>
-        + ReceivesEcssPusTc<Error = MpscErrorType>
-        + Clone
-        + Send
-        + 'static,
-    MpscErrorType: 'static,
-> {
-    server: TcpServerType<TcSource, MpscErrorType>,
-}
+pub struct TcpTask<TcSender: PacketSenderRaw<Error = SendError>, SendError: Debug + 'static>(
+    pub TcpServer<TcSender, SendError>,
+    PhantomData<SendError>,
+);
-impl<
-        TcSource: ReceivesCcsdsTc<Error = MpscErrorType>
-            + ReceivesEcssPusTc<Error = MpscErrorType>
-            + Clone
-            + Send
-            + 'static,
-        MpscErrorType: 'static + core::fmt::Debug,
-    > TcpTask<TcSource, MpscErrorType>
+impl<TcSender: PacketSenderRaw<Error = SendError>, SendError: Debug + 'static>
+    TcpTask<TcSender, SendError>
 {
     pub fn new(
         cfg: ServerConfig,
         tm_source: SyncTcpTmSource,
-        tc_receiver: CcsdsDistributor<CcsdsReceiver<TcSource, MpscErrorType>, MpscErrorType>,
-        packet_id_lookup: HashSet<PacketId>,
+        tc_sender: TcSender,
+        valid_ids: HashSet<PacketId>,
     ) -> Result<Self, std::io::Error> {
-        Ok(Self {
-            server: TcpSpacepacketsServer::new(
+        Ok(Self(
+            TcpSpacepacketsServer::new(
                 cfg,
                 tm_source,
-                tc_receiver,
-                packet_id_lookup,
+                tc_sender,
+                SimplePacketValidator { valid_ids },
                 ConnectionFinishedHandler::default(),
                 None,
             )?,
-        })
+            PhantomData,
+        ))
     }
     pub fn periodic_operation(&mut self) {
         loop {
-            let result = self.server.handle_next_connection(None);
+            let result = self.0.handle_all_connections(None);
             match result {
                 Ok(_conn_result) => (),
                 Err(e) => {
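The `SimplePacketValidator` introduced above accepts only packets whose `PacketId` is in a known set and skips everything else. A standalone sketch of that accept-or-skip decision, with hand-rolled stand-ins so it runs without satrs (`PacketId` here is just a `u16` wrapper, not the spacepackets type):

```rust
use std::collections::HashSet;

// Stand-ins for spacepackets::PacketId and encoding::ccsds::SpValidity.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct PacketId(u16);

#[derive(Debug, PartialEq)]
enum SpValidity {
    Valid,
    Skip,
}

fn validate(valid_ids: &HashSet<PacketId>, incoming: PacketId) -> SpValidity {
    if valid_ids.contains(&incoming) {
        SpValidity::Valid
    } else {
        // Mirrors the validator above: unknown IDs are skipped, not errors.
        SpValidity::Skip
    }
}

fn main() {
    let valid_ids: HashSet<_> = [PacketId(0x1800)].into_iter().collect();
    assert_eq!(validate(&valid_ids, PacketId(0x1800)), SpValidity::Valid);
    assert_eq!(validate(&valid_ids, PacketId(0x1234)), SpValidity::Skip);
}
```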

View File

@@ -1,20 +1,22 @@
+use core::fmt::Debug;
 use std::net::{SocketAddr, UdpSocket};
 use std::sync::mpsc;
 use log::{info, warn};
-use satrs::pus::{PusTmAsVec, PusTmInPool};
+use satrs::tmtc::{PacketAsVec, PacketInPool, PacketSenderRaw};
 use satrs::{
     hal::std::udp_server::{ReceiveResult, UdpTcServer},
     pool::{PoolProviderWithGuards, SharedStaticMemoryPool},
-    tmtc::CcsdsError,
 };
+use crate::pus::HandlingStatus;
 pub trait UdpTmHandler {
     fn send_tm_to_udp_client(&mut self, socket: &UdpSocket, recv_addr: &SocketAddr);
 }
 pub struct StaticUdpTmHandler {
-    pub tm_rx: mpsc::Receiver<PusTmInPool>,
+    pub tm_rx: mpsc::Receiver<PacketInPool>,
     pub tm_store: SharedStaticMemoryPool,
 }
@@ -43,7 +45,7 @@ impl UdpTmHandler for StaticUdpTmHandler {
 }
 pub struct DynamicUdpTmHandler {
-    pub tm_rx: mpsc::Receiver<PusTmAsVec>,
+    pub tm_rx: mpsc::Receiver<PacketAsVec>,
 }
 impl UdpTmHandler for DynamicUdpTmHandler {
@@ -64,42 +66,48 @@ impl UdpTmHandler for DynamicUdpTmHandler {
     }
 }
-pub struct UdpTmtcServer<TmHandler: UdpTmHandler, SendError> {
-    pub udp_tc_server: UdpTcServer<CcsdsError<SendError>>,
+pub struct UdpTmtcServer<
+    TcSender: PacketSenderRaw<Error = SendError>,
+    TmHandler: UdpTmHandler,
+    SendError,
+> {
+    pub udp_tc_server: UdpTcServer<TcSender, SendError>,
     pub tm_handler: TmHandler,
 }
-impl<TmHandler: UdpTmHandler, SendError: core::fmt::Debug + 'static>
-    UdpTmtcServer<TmHandler, SendError>
+impl<
+        TcSender: PacketSenderRaw<Error = SendError>,
+        TmHandler: UdpTmHandler,
+        SendError: Debug + 'static,
+    > UdpTmtcServer<TcSender, TmHandler, SendError>
 {
     pub fn periodic_operation(&mut self) {
-        while self.poll_tc_server() {}
+        loop {
+            if self.poll_tc_server() == HandlingStatus::Empty {
+                break;
+            }
+        }
         if let Some(recv_addr) = self.udp_tc_server.last_sender() {
             self.tm_handler
                 .send_tm_to_udp_client(&self.udp_tc_server.socket, &recv_addr);
         }
     }
-    fn poll_tc_server(&mut self) -> bool {
+    fn poll_tc_server(&mut self) -> HandlingStatus {
         match self.udp_tc_server.try_recv_tc() {
-            Ok(_) => true,
-            Err(e) => match e {
-                ReceiveResult::ReceiverError(e) => match e {
-                    CcsdsError::ByteConversionError(e) => {
-                        warn!("packet error: {e:?}");
-                        true
-                    }
-                    CcsdsError::CustomError(e) => {
-                        warn!("mpsc custom error {e:?}");
-                        true
-                    }
-                },
-                ReceiveResult::IoError(e) => {
-                    warn!("IO error {e}");
-                    false
-                }
-                ReceiveResult::NothingReceived => false,
-            },
+            Ok(_) => HandlingStatus::HandledOne,
+            Err(e) => {
+                match e {
+                    ReceiveResult::NothingReceived => (),
+                    ReceiveResult::Io(e) => {
+                        warn!("IO error {e}");
+                    }
+                    ReceiveResult::Send(send_error) => {
+                        warn!("send error {send_error:?}");
+                    }
+                }
+                HandlingStatus::Empty
+            }
         }
     }
 }
@@ -107,6 +115,7 @@ impl<TmHandler: UdpTmHandler, SendError: core::fmt::Debug + 'static>
 #[cfg(test)]
 mod tests {
     use std::{
+        cell::RefCell,
         collections::VecDeque,
         net::IpAddr,
         sync::{Arc, Mutex},
@@ -117,21 +126,26 @@ mod tests {
         ecss::{tc::PusTcCreator, WritablePusPacket},
         SpHeader,
         },
-        tmtc::ReceivesTcCore,
+        tmtc::PacketSenderRaw,
+        ComponentId,
     };
     use satrs_example::config::{components, OBSW_SERVER_ADDR};
     use super::*;
+    const UDP_SERVER_ID: ComponentId = 0x05;
+
-    #[derive(Default, Debug, Clone)]
-    pub struct TestReceiver {
-        tc_vec: Arc<Mutex<VecDeque<Vec<u8>>>>,
+    #[derive(Default, Debug)]
+    pub struct TestSender {
+        tc_vec: RefCell<VecDeque<PacketAsVec>>,
     }
-    impl ReceivesTcCore for TestReceiver {
-        type Error = CcsdsError<()>;
-        fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error> {
-            self.tc_vec.lock().unwrap().push_back(tc_raw.to_vec());
+    impl PacketSenderRaw for TestSender {
+        type Error = ();
+        fn send_packet(&self, sender_id: ComponentId, tc_raw: &[u8]) -> Result<(), Self::Error> {
+            let mut mut_queue = self.tc_vec.borrow_mut();
+            mut_queue.push_back(PacketAsVec::new(sender_id, tc_raw.to_vec()));
             Ok(())
         }
     }
@@ -150,9 +164,10 @@ mod tests {
     #[test]
     fn test_basic() {
         let sock_addr = SocketAddr::new(IpAddr::V4(OBSW_SERVER_ADDR), 0);
-        let test_receiver = TestReceiver::default();
-        let tc_queue = test_receiver.tc_vec.clone();
-        let udp_tc_server = UdpTcServer::new(sock_addr, 2048, Box::new(test_receiver)).unwrap();
+        let test_receiver = TestSender::default();
+        // let tc_queue = test_receiver.tc_vec.clone();
+        let udp_tc_server =
+            UdpTcServer::new(UDP_SERVER_ID, sock_addr, 2048, test_receiver).unwrap();
         let tm_handler = TestTmHandler::default();
         let tm_handler_calls = tm_handler.addrs_to_send_to.clone();
         let mut udp_dyn_server = UdpTmtcServer {
@@ -160,16 +175,18 @@ mod tests {
             tm_handler,
         };
         udp_dyn_server.periodic_operation();
-        assert!(tc_queue.lock().unwrap().is_empty());
+        let queue = udp_dyn_server.udp_tc_server.tc_sender.tc_vec.borrow();
+        assert!(queue.is_empty());
         assert!(tm_handler_calls.lock().unwrap().is_empty());
     }
     #[test]
     fn test_transactions() {
         let sock_addr = SocketAddr::new(IpAddr::V4(OBSW_SERVER_ADDR), 0);
-        let test_receiver = TestReceiver::default();
-        let tc_queue = test_receiver.tc_vec.clone();
-        let udp_tc_server = UdpTcServer::new(sock_addr, 2048, Box::new(test_receiver)).unwrap();
+        let test_receiver = TestSender::default();
+        // let tc_queue = test_receiver.tc_vec.clone();
+        let udp_tc_server =
+            UdpTcServer::new(UDP_SERVER_ID, sock_addr, 2048, test_receiver).unwrap();
         let server_addr = udp_tc_server.socket.local_addr().unwrap();
         let tm_handler = TestTmHandler::default();
         let tm_handler_calls = tm_handler.addrs_to_send_to.clone();
@@ -187,10 +204,11 @@ mod tests {
         client.send(&ping_tc).unwrap();
         udp_dyn_server.periodic_operation();
         {
-            let mut tc_queue = tc_queue.lock().unwrap();
-            assert!(!tc_queue.is_empty());
-            let received_tc = tc_queue.pop_front().unwrap();
-            assert_eq!(received_tc, ping_tc);
+            let mut queue = udp_dyn_server.udp_tc_server.tc_sender.tc_vec.borrow_mut();
+            assert!(!queue.is_empty());
+            let packet_with_sender = queue.pop_front().unwrap();
+            assert_eq!(packet_with_sender.packet, ping_tc);
+            assert_eq!(packet_with_sender.sender_id, UDP_SERVER_ID);
         }
         {
@@ -201,7 +219,9 @@ mod tests {
             assert_eq!(received_addr, client_addr);
         }
         udp_dyn_server.periodic_operation();
-        assert!(tc_queue.lock().unwrap().is_empty());
+        let queue = udp_dyn_server.udp_tc_server.tc_sender.tc_vec.borrow();
+        assert!(queue.is_empty());
+        drop(queue);
        // Still tries to send to the same client.
         {
             let mut tm_handler_calls = tm_handler_calls.lock().unwrap();
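The reworked `periodic_operation` above drains the server until `poll_tc_server` reports `HandlingStatus::Empty`. That enum is imported from the example's `crate::pus` module and its definition is not part of this diff; a plausible minimal form of it plus the drain pattern, as an assumption for illustration:

```rust
// Assumed two-variant status enum driving the drain loop above.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum HandlingStatus {
    HandledOne,
    Empty,
}

// Drain pattern: keep polling until a poll reports that nothing was handled.
fn drain(mut poll: impl FnMut() -> HandlingStatus) -> u32 {
    let mut handled = 0;
    loop {
        if poll() == HandlingStatus::Empty {
            break;
        }
        handled += 1;
    }
    handled
}

fn main() {
    let mut remaining = 3;
    let handled = drain(|| {
        if remaining > 0 {
            remaining -= 1;
            HandlingStatus::HandledOne
        } else {
            HandlingStatus::Empty
        }
    });
    assert_eq!(handled, 3);
}
```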

View File

@@ -10,19 +10,19 @@ mod tmtc;
 use crate::events::EventHandler;
 use crate::interface::udp::DynamicUdpTmHandler;
 use crate::pus::stack::PusStack;
-use crate::tmtc::tm_funnel::{TmFunnelDynamic, TmFunnelStatic};
+use crate::tmtc::tc_source::{TcSourceTaskDynamic, TcSourceTaskStatic};
+use crate::tmtc::tm_sink::{TmFunnelDynamic, TmFunnelStatic};
 use log::info;
 use pus::test::create_test_service_dynamic;
 use satrs::hal::std::tcp_server::ServerConfig;
 use satrs::hal::std::udp_server::UdpTcServer;
 use satrs::request::GenericMessage;
-use satrs::tmtc::tm_helper::SharedTmPool;
+use satrs::tmtc::{PacketSenderWithSharedPool, SharedPacketPool};
 use satrs_example::config::pool::{create_sched_tc_pool, create_static_pools};
 use satrs_example::config::tasks::{
     FREQ_MS_AOCS, FREQ_MS_EVENT_HANDLING, FREQ_MS_PUS_STACK, FREQ_MS_UDP_TMTC,
 };
 use satrs_example::config::{OBSW_SERVER_ADDR, PACKET_ID_VALIDATOR, SERVER_PORT};
-use tmtc::PusTcSourceProviderDynamic;
 use crate::acs::mgm::{MgmHandlerLis3Mdl, MpscModeLeafInterface, SpiDummyInterface};
 use crate::interface::tcp::{SyncTcpTmSource, TcpTask};
@@ -34,18 +34,12 @@ use crate::pus::hk::{create_hk_service_dynamic, create_hk_service_static};
 use crate::pus::mode::{create_mode_service_dynamic, create_mode_service_static};
 use crate::pus::scheduler::{create_scheduler_service_dynamic, create_scheduler_service_static};
 use crate::pus::test::create_test_service_static;
-use crate::pus::{PusReceiver, PusTcMpscRouter};
+use crate::pus::{PusTcDistributor, PusTcMpscRouter};
 use crate::requests::{CompositeRequest, GenericRequestRouter};
-use crate::tmtc::ccsds::CcsdsReceiver;
-use crate::tmtc::{
-    PusTcSourceProviderSharedPool, SharedTcPool, TcSourceTaskDynamic, TcSourceTaskStatic,
-};
 use satrs::mode::ModeRequest;
 use satrs::pus::event_man::EventRequestWithToken;
-use satrs::pus::TmInSharedPoolSender;
 use satrs::spacepackets::{time::cds::CdsTime, time::TimeWriter};
-use satrs::tmtc::CcsdsDistributor;
-use satrs_example::config::components::MGM_HANDLER_0;
+use satrs_example::config::components::{MGM_HANDLER_0, TCP_SERVER, UDP_SERVER};
 use std::net::{IpAddr, SocketAddr};
 use std::sync::mpsc;
 use std::sync::{Arc, RwLock};
@@ -55,16 +49,16 @@ use std::time::Duration;
 #[allow(dead_code)]
 fn static_tmtc_pool_main() {
     let (tm_pool, tc_pool) = create_static_pools();
-    let shared_tm_pool = SharedTmPool::new(tm_pool);
-    let shared_tc_pool = SharedTcPool {
-        pool: Arc::new(RwLock::new(tc_pool)),
-    };
+    let shared_tm_pool = Arc::new(RwLock::new(tm_pool));
+    let shared_tc_pool = Arc::new(RwLock::new(tc_pool));
+    let shared_tm_pool_wrapper = SharedPacketPool::new(&shared_tm_pool);
+    let shared_tc_pool_wrapper = SharedPacketPool::new(&shared_tc_pool);
     let (tc_source_tx, tc_source_rx) = mpsc::sync_channel(50);
     let (tm_funnel_tx, tm_funnel_rx) = mpsc::sync_channel(50);
     let (tm_server_tx, tm_server_rx) = mpsc::sync_channel(50);
     let tm_funnel_tx_sender =
-        TmInSharedPoolSender::new(shared_tm_pool.clone(), tm_funnel_tx.clone());
+        PacketSenderWithSharedPool::new(tm_funnel_tx.clone(), shared_tm_pool_wrapper.clone());
     let (mgm_handler_composite_tx, mgm_handler_composite_rx) =
         mpsc::channel::<GenericMessage<CompositeRequest>>();
@@ -81,10 +75,7 @@ fn static_tmtc_pool_main() {
     // This helper structure is used by all telecommand providers which need to send telecommands
     // to the TC source.
-    let tc_source = PusTcSourceProviderSharedPool {
-        shared_pool: shared_tc_pool.clone(),
-        tc_source: tc_source_tx,
-    };
+    let tc_source = PacketSenderWithSharedPool::new(tc_source_tx, shared_tc_pool_wrapper.clone());
     // Create event handling components
     // These sender handles are used to send event requests, for example to enable or disable
@@ -116,7 +107,7 @@ fn static_tmtc_pool_main() {
     };
     let pus_test_service = create_test_service_static(
         tm_funnel_tx_sender.clone(),
-        shared_tc_pool.pool.clone(),
+        shared_tc_pool.clone(),
         event_handler.clone_event_sender(),
         pus_test_rx,
     );
@@ -128,27 +119,27 @@ fn static_tmtc_pool_main() {
     );
     let pus_event_service = create_event_service_static(
         tm_funnel_tx_sender.clone(),
-        shared_tc_pool.pool.clone(),
+        shared_tc_pool.clone(),
         pus_event_rx,
         event_request_tx,
     );
     let pus_action_service = create_action_service_static(
         tm_funnel_tx_sender.clone(),
-        shared_tc_pool.pool.clone(),
+        shared_tc_pool.clone(),
         pus_action_rx,
         request_map.clone(),
         pus_action_reply_rx,
     );
     let pus_hk_service = create_hk_service_static(
         tm_funnel_tx_sender.clone(),
-        shared_tc_pool.pool.clone(),
+        shared_tc_pool.clone(),
         pus_hk_rx,
         request_map.clone(),
         pus_hk_reply_rx,
     );
     let pus_mode_service = create_mode_service_static(
         tm_funnel_tx_sender.clone(),
-        shared_tc_pool.pool.clone(),
+        shared_tc_pool.clone(),
         pus_mode_rx,
         request_map,
         pus_mode_reply_rx,
@@ -162,38 +153,41 @@ fn static_tmtc_pool_main() {
         pus_mode_service,
     );
-    let ccsds_receiver = CcsdsReceiver { tc_source };
     let mut tmtc_task = TcSourceTaskStatic::new(
-        shared_tc_pool.clone(),
+        shared_tc_pool_wrapper.clone(),
         tc_source_rx,
-        PusReceiver::new(tm_funnel_tx_sender, pus_router),
+        PusTcDistributor::new(tm_funnel_tx_sender, pus_router),
     );
     let sock_addr = SocketAddr::new(IpAddr::V4(OBSW_SERVER_ADDR), SERVER_PORT);
-    let udp_ccsds_distributor = CcsdsDistributor::new(ccsds_receiver.clone());
-    let udp_tc_server = UdpTcServer::new(sock_addr, 2048, Box::new(udp_ccsds_distributor))
+    let udp_tc_server = UdpTcServer::new(UDP_SERVER.id(), sock_addr, 2048, tc_source.clone())
         .expect("creating UDP TMTC server failed");
     let mut udp_tmtc_server = UdpTmtcServer {
         udp_tc_server,
         tm_handler: StaticUdpTmHandler {
             tm_rx: tm_server_rx,
-            tm_store: shared_tm_pool.clone_backing_pool(),
+            tm_store: shared_tm_pool.clone(),
         },
     };
-    let tcp_ccsds_distributor = CcsdsDistributor::new(ccsds_receiver);
-    let tcp_server_cfg = ServerConfig::new(sock_addr, Duration::from_millis(400), 4096, 8192);
+    let tcp_server_cfg = ServerConfig::new(
+        TCP_SERVER.id(),
+        sock_addr,
+        Duration::from_millis(400),
+        4096,
+        8192,
+    );
     let sync_tm_tcp_source = SyncTcpTmSource::new(200);
     let mut tcp_server = TcpTask::new(
         tcp_server_cfg,
         sync_tm_tcp_source.clone(),
-        tcp_ccsds_distributor,
+        tc_source.clone(),
         PACKET_ID_VALIDATOR.clone(),
     )
     .expect("tcp server creation failed");
     let mut tm_funnel = TmFunnelStatic::new(
-        shared_tm_pool,
+        shared_tm_pool_wrapper,
         sync_tm_tcp_source,
         tm_funnel_rx,
         tm_server_tx,
@@ -317,8 +311,6 @@ fn dyn_tmtc_pool_main() {
         .mode_router_map
         .insert(MGM_HANDLER_0.raw(), mgm_handler_mode_tx);
-    let tc_source = PusTcSourceProviderDynamic(tc_source_tx);
     // Create event handling components
     // These sender handles are used to send event requests, for example to enable or disable
     // certain events.
@@ -354,7 +346,7 @@ fn dyn_tmtc_pool_main() {
     );
     let pus_scheduler_service = create_scheduler_service_dynamic(
         tm_funnel_tx.clone(),
-        tc_source.0.clone(),
+        tc_source_tx.clone(),
         pus_sched_rx,
         create_sched_tc_pool(),
     );
@@ -388,16 +380,13 @@ fn dyn_tmtc_pool_main() {
         pus_mode_service,
     );
-    let ccsds_receiver = CcsdsReceiver { tc_source };
     let mut tmtc_task = TcSourceTaskDynamic::new(
         tc_source_rx,
-        PusReceiver::new(tm_funnel_tx.clone(), pus_router),
+        PusTcDistributor::new(tm_funnel_tx.clone(), pus_router),
     );
     let sock_addr = SocketAddr::new(IpAddr::V4(OBSW_SERVER_ADDR), SERVER_PORT);
-    let udp_ccsds_distributor = CcsdsDistributor::new(ccsds_receiver.clone());
-    let udp_tc_server = UdpTcServer::new(sock_addr, 2048, Box::new(udp_ccsds_distributor))
+    let udp_tc_server = UdpTcServer::new(UDP_SERVER.id(), sock_addr, 2048, tc_source_tx.clone())
        .expect("creating UDP TMTC server failed");
     let mut udp_tmtc_server = UdpTmtcServer {
         udp_tc_server,
@@ -406,13 +395,18 @@ fn dyn_tmtc_pool_main() {
         },
     };
-    let tcp_ccsds_distributor = CcsdsDistributor::new(ccsds_receiver);
-    let tcp_server_cfg = ServerConfig::new(sock_addr, Duration::from_millis(400), 4096, 8192);
+    let tcp_server_cfg = ServerConfig::new(
+        TCP_SERVER.id(),
+        sock_addr,
+        Duration::from_millis(400),
+        4096,
+        8192,
+    );
     let sync_tm_tcp_source = SyncTcpTmSource::new(200);
     let mut tcp_server = TcpTask::new(
         tcp_server_cfg,
         sync_tm_tcp_source.clone(),
-        tcp_ccsds_distributor,
+        tc_source_tx.clone(),
         PACKET_ID_VALIDATOR.clone(),
     )
     .expect("tcp server creation failed");

View File

@@ -11,13 +11,14 @@ use satrs::pus::verification::{
 };
 use satrs::pus::{
     ActiveRequestProvider, EcssTcAndToken, EcssTcInMemConverter, EcssTcInSharedStoreConverter,
-    EcssTcInVecConverter, EcssTmSenderCore, EcssTmtcError, GenericConversionError, MpscTcReceiver,
-    MpscTmAsVecSender, MpscTmInSharedPoolSenderBounded, PusPacketHandlerResult, PusReplyHandler,
-    PusServiceHelper, PusTcToRequestConverter, PusTmAsVec, PusTmInPool, TmInSharedPoolSender,
+    EcssTcInVecConverter, EcssTmSender, EcssTmtcError, GenericConversionError, MpscTcReceiver,
+    MpscTmAsVecSender, PusPacketHandlerResult, PusReplyHandler, PusServiceHelper,
+    PusTcToRequestConverter,
 };
 use satrs::request::{GenericMessage, UniqueApidTargetId};
 use satrs::spacepackets::ecss::tc::PusTcReader;
 use satrs::spacepackets::ecss::{EcssEnumU16, PusPacket};
+use satrs::tmtc::{PacketAsVec, PacketSenderWithSharedPool};
 use satrs_example::config::components::PUS_ACTION_SERVICE;
 use satrs_example::config::tmtc_err;
 use std::sync::mpsc;
@@ -48,7 +49,7 @@ impl PusReplyHandler<ActivePusActionRequestStd, ActionReplyPus> for ActionReplyH
     fn handle_unrequested_reply(
         &mut self,
         reply: &GenericMessage<ActionReplyPus>,
-        _tm_sender: &impl EcssTmSenderCore,
+        _tm_sender: &impl EcssTmSender,
     ) -> Result<(), Self::Error> {
         warn!("received unexpected reply for service 8: {reply:?}");
         Ok(())
@@ -58,7 +59,7 @@ impl PusReplyHandler<ActivePusActionRequestStd, ActionReplyPus> for ActionReplyH
         &mut self,
         reply: &GenericMessage<ActionReplyPus>,
         active_request: &ActivePusActionRequestStd,
-        tm_sender: &(impl EcssTmSenderCore + ?Sized),
+        tm_sender: &(impl EcssTmSender + ?Sized),
         verification_handler: &impl VerificationReportingProvider,
         time_stamp: &[u8],
     ) -> Result<bool, Self::Error> {
@@ -121,7 +122,7 @@ impl PusReplyHandler<ActivePusActionRequestStd, ActionReplyPus> for ActionReplyH
     fn handle_request_timeout(
         &mut self,
         active_request: &ActivePusActionRequestStd,
-        tm_sender: &impl EcssTmSenderCore,
+        tm_sender: &impl EcssTmSender,
         verification_handler: &impl VerificationReportingProvider,
         time_stamp: &[u8],
     ) -> Result<(), Self::Error> {
@@ -145,7 +146,7 @@ impl PusTcToRequestConverter<ActivePusActionRequestStd, ActionRequest> for Actio
         &mut self,
         token: VerificationToken<TcStateAccepted>,
         tc: &PusTcReader,
-        tm_sender: &(impl EcssTmSenderCore + ?Sized),
+        tm_sender: &(impl EcssTmSender + ?Sized),
         verif_reporter: &impl VerificationReportingProvider,
         time_stamp: &[u8],
     ) -> Result<(ActivePusActionRequestStd, ActionRequest), Self::Error> {
@@ -195,12 +196,12 @@ impl PusTcToRequestConverter<ActivePusActionRequestStd, ActionRequest> for Actio
 }
 pub fn create_action_service_static(
-    tm_sender: TmInSharedPoolSender<mpsc::SyncSender<PusTmInPool>>,
+    tm_sender: PacketSenderWithSharedPool,
     tc_pool: SharedStaticMemoryPool,
     pus_action_rx: mpsc::Receiver<EcssTcAndToken>,
     action_router: GenericRequestRouter,
     reply_receiver: mpsc::Receiver<GenericMessage<ActionReplyPus>>,
-) -> ActionServiceWrapper<MpscTmInSharedPoolSenderBounded, EcssTcInSharedStoreConverter> {
+) -> ActionServiceWrapper<PacketSenderWithSharedPool, EcssTcInSharedStoreConverter> {
     let action_request_handler = PusTargetedRequestService::new(
         PusServiceHelper::new(
             PUS_ACTION_SERVICE.id(),
@@ -223,7 +224,7 @@ pub fn create_action_service_static(
 }
 pub fn create_action_service_dynamic(
-    tm_funnel_tx: mpsc::Sender<PusTmAsVec>,
+    tm_funnel_tx: mpsc::Sender<PacketAsVec>,
     pus_action_rx: mpsc::Receiver<EcssTcAndToken>,
     action_router: GenericRequestRouter,
     reply_receiver: mpsc::Receiver<GenericMessage<ActionReplyPus>>,
@@ -247,8 +248,7 @@ pub fn create_action_service_dynamic(
     }
 }
-pub struct ActionServiceWrapper<TmSender: EcssTmSenderCore, TcInMemConverter: EcssTcInMemConverter>
-{
+pub struct ActionServiceWrapper<TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter> {
     pub(crate) service: PusTargetedRequestService<
         MpscTcReceiver,
         TmSender,
@@ -263,7 +263,7 @@ pub struct ActionServiceWrapper<TmSender: EcssTmSenderCore, TcInMemConverter: Ec
     >,
 }
-impl<TmSender: EcssTmSenderCore, TcInMemConverter: EcssTcInMemConverter> TargetedPusService
+impl<TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter> TargetedPusService
     for ActionServiceWrapper<TmSender, TcInMemConverter>
 {
     /// Returns [true] if the packet handling is finished.
@@ -465,7 +465,10 @@ mod tests {
             .verif_reporter()
             .check_next_is_acceptance_success(id, accepted_token.request_id());
         self.pus_packet_tx
-            .send(EcssTcAndToken::new(tc.to_vec().unwrap(), accepted_token))
+            .send(EcssTcAndToken::new(
+                PacketAsVec::new(self.service.service_helper.id(), tc.to_vec().unwrap()),
+                accepted_token,
+            ))
             .unwrap();
     }
 }

View File

@@ -8,19 +8,19 @@ use satrs::pus::event_srv::PusEventServiceHandler;
 use satrs::pus::verification::VerificationReporter;
 use satrs::pus::{
     EcssTcAndToken, EcssTcInMemConverter, EcssTcInSharedStoreConverter, EcssTcInVecConverter,
-    EcssTmSenderCore, MpscTcReceiver, MpscTmAsVecSender, MpscTmInSharedPoolSenderBounded,
-    PusPacketHandlerResult, PusServiceHelper, PusTmAsVec, PusTmInPool, TmInSharedPoolSender,
+    EcssTmSender, MpscTcReceiver, MpscTmAsVecSender, PusPacketHandlerResult, PusServiceHelper,
 };
+use satrs::tmtc::{PacketAsVec, PacketSenderWithSharedPool};
 use satrs_example::config::components::PUS_EVENT_MANAGEMENT;
 use super::HandlingStatus;
 pub fn create_event_service_static(
-    tm_sender: TmInSharedPoolSender<mpsc::SyncSender<PusTmInPool>>,
+    tm_sender: PacketSenderWithSharedPool,
     tc_pool: SharedStaticMemoryPool,
     pus_event_rx: mpsc::Receiver<EcssTcAndToken>,
     event_request_tx: mpsc::Sender<EventRequestWithToken>,
-) -> EventServiceWrapper<MpscTmInSharedPoolSenderBounded, EcssTcInSharedStoreConverter> {
+) -> EventServiceWrapper<PacketSenderWithSharedPool, EcssTcInSharedStoreConverter> {
     let pus_5_handler = PusEventServiceHandler::new(
         PusServiceHelper::new(
             PUS_EVENT_MANAGEMENT.id(),
@@ -37,7 +37,7 @@ pub fn create_event_service_static(
 }
 pub fn create_event_service_dynamic(
-    tm_funnel_tx: mpsc::Sender<PusTmAsVec>,
+    tm_funnel_tx: mpsc::Sender<PacketAsVec>,
     pus_event_rx: mpsc::Receiver<EcssTcAndToken>,
     event_request_tx: mpsc::Sender<EventRequestWithToken>,
 ) -> EventServiceWrapper<MpscTmAsVecSender, EcssTcInVecConverter> {
@@ -56,12 +56,12 @@ pub fn create_event_service_dynamic(
     }
 }
-pub struct EventServiceWrapper<TmSender: EcssTmSenderCore, TcInMemConverter: EcssTcInMemConverter> {
+pub struct EventServiceWrapper<TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter> {
     pub handler:
         PusEventServiceHandler<MpscTcReceiver, TmSender, TcInMemConverter, VerificationReporter>,
 }
-impl<TmSender: EcssTmSenderCore, TcInMemConverter: EcssTcInMemConverter>
+impl<TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter>
     EventServiceWrapper<TmSender, TcInMemConverter>
 {
     pub fn poll_and_handle_next_tc(&mut self, time_stamp: &[u8]) -> HandlingStatus {

View File

@@ -8,14 +8,14 @@ use satrs::pus::verification::{
 };
 use satrs::pus::{
     ActivePusRequestStd, ActiveRequestProvider, DefaultActiveRequestMap, EcssTcAndToken,
-    EcssTcInMemConverter, EcssTcInSharedStoreConverter, EcssTcInVecConverter, EcssTmSenderCore,
+    EcssTcInMemConverter, EcssTcInSharedStoreConverter, EcssTcInVecConverter, EcssTmSender,
     EcssTmtcError, GenericConversionError, MpscTcReceiver, MpscTmAsVecSender,
-    MpscTmInSharedPoolSenderBounded, PusPacketHandlerResult, PusReplyHandler, PusServiceHelper,
-    PusTcToRequestConverter, PusTmAsVec, PusTmInPool, TmInSharedPoolSender,
+    PusPacketHandlerResult, PusReplyHandler, PusServiceHelper, PusTcToRequestConverter,
 };
 use satrs::request::{GenericMessage, UniqueApidTargetId};
 use satrs::spacepackets::ecss::tc::PusTcReader;
 use satrs::spacepackets::ecss::{hk, PusPacket};
+use satrs::tmtc::{PacketAsVec, PacketSenderWithSharedPool};
 use satrs_example::config::components::PUS_HK_SERVICE;
 use satrs_example::config::{hk_err, tmtc_err};
 use std::sync::mpsc;
@@ -46,7 +46,7 @@ impl PusReplyHandler<ActivePusRequestStd, HkReply> for HkReplyHandler {
     fn handle_unrequested_reply(
         &mut self,
         reply: &GenericMessage<HkReply>,
-        _tm_sender: &impl EcssTmSenderCore,
+        _tm_sender: &impl EcssTmSender,
     ) -> Result<(), Self::Error> {
         log::warn!("received unexpected reply for service 3: {reply:?}");
         Ok(())
@@ -56,7 +56,7 @@ impl PusReplyHandler<ActivePusRequestStd, HkReply> for HkReplyHandler {
         &mut self,
         reply: &GenericMessage<HkReply>,
         active_request: &ActivePusRequestStd,
-        tm_sender: &impl EcssTmSenderCore,
+        tm_sender: &impl EcssTmSender,
         verification_handler: &impl VerificationReportingProvider,
         time_stamp: &[u8],
     ) -> Result<bool, Self::Error> {
@@ -77,7 +77,7 @@ impl PusReplyHandler<ActivePusRequestStd, HkReply> for HkReplyHandler {
     fn handle_request_timeout(
         &mut self,
         active_request: &ActivePusRequestStd,
-        tm_sender: &impl EcssTmSenderCore,
+        tm_sender: &impl EcssTmSender,
         verification_handler: &impl VerificationReportingProvider,
         time_stamp: &[u8],
     ) -> Result<(), Self::Error> {
@@ -111,7 +111,7 @@ impl PusTcToRequestConverter<ActivePusRequestStd, HkRequest> for HkRequestConver
         &mut self,
         token: VerificationToken<TcStateAccepted>,
         tc: &PusTcReader,
-        tm_sender: &(impl EcssTmSenderCore + ?Sized),
+        tm_sender: &(impl EcssTmSender + ?Sized),
         verif_reporter: &impl VerificationReportingProvider,
         time_stamp: &[u8],
     ) -> Result<(ActivePusRequestStd, HkRequest), Self::Error> {
@@ -232,12 +232,12 @@ impl PusTcToRequestConverter<ActivePusRequestStd, HkRequest> for HkRequestConver
 }
 pub fn create_hk_service_static(
-    tm_sender: TmInSharedPoolSender<mpsc::SyncSender<PusTmInPool>>,
+    tm_sender: PacketSenderWithSharedPool,
     tc_pool: SharedStaticMemoryPool,
     pus_hk_rx: mpsc::Receiver<EcssTcAndToken>,
     request_router: GenericRequestRouter,
     reply_receiver: mpsc::Receiver<GenericMessage<HkReply>>,
-) -> HkServiceWrapper<MpscTmInSharedPoolSenderBounded, EcssTcInSharedStoreConverter> {
+) -> HkServiceWrapper<PacketSenderWithSharedPool, EcssTcInSharedStoreConverter> {
     let pus_3_handler = PusTargetedRequestService::new(
         PusServiceHelper::new(
             PUS_HK_SERVICE.id(),
@@ -258,7 +258,7 @@ pub fn create_hk_service_static(
 }
 pub fn create_hk_service_dynamic(
-    tm_funnel_tx: mpsc::Sender<PusTmAsVec>,
+    tm_funnel_tx: mpsc::Sender<PacketAsVec>,
     pus_hk_rx: mpsc::Receiver<EcssTcAndToken>,
     request_router: GenericRequestRouter,
     reply_receiver: mpsc::Receiver<GenericMessage<HkReply>>,
@@ -282,7 +282,7 @@ pub fn create_hk_service_dynamic(
     }
 }
-pub struct HkServiceWrapper<TmSender: EcssTmSenderCore, TcInMemConverter: EcssTcInMemConverter> {
+pub struct HkServiceWrapper<TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter> {
     pub(crate) service: PusTargetedRequestService<
         MpscTcReceiver,
         TmSender,
@@ -297,7 +297,7 @@ pub struct HkServiceWrapper<TmSender: EcssTmSenderCore, TcInMemConverter: EcssTc
     >,
 }
-impl<TmSender: EcssTmSenderCore, TcInMemConverter: EcssTcInMemConverter>
+impl<TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter>
     HkServiceWrapper<TmSender, TcInMemConverter>
 {
     pub fn poll_and_handle_next_tc(&mut self, time_stamp: &[u8]) -> HandlingStatus {

View File

@@ -1,20 +1,21 @@
 use crate::requests::GenericRequestRouter;
-use crate::tmtc::MpscStoreAndSendError;
 use log::warn;
+use satrs::pool::PoolAddr;
 use satrs::pus::verification::{
     self, FailParams, TcStateAccepted, TcStateStarted, VerificationReporter,
     VerificationReporterCfg, VerificationReportingProvider, VerificationToken,
 };
 use satrs::pus::{
     ActiveRequestMapProvider, ActiveRequestProvider, EcssTcAndToken, EcssTcInMemConverter,
-    EcssTcReceiverCore, EcssTmSenderCore, EcssTmtcError, GenericConversionError,
-    GenericRoutingError, PusPacketHandlerResult, PusPacketHandlingError, PusReplyHandler,
-    PusRequestRouter, PusServiceHelper, PusTcToRequestConverter, TcInMemory,
+    EcssTcReceiver, EcssTmSender, EcssTmtcError, GenericConversionError, GenericRoutingError,
+    PusPacketHandlerResult, PusPacketHandlingError, PusReplyHandler, PusRequestRouter,
+    PusServiceHelper, PusTcToRequestConverter, TcInMemory,
 };
-use satrs::queue::GenericReceiveError;
+use satrs::queue::{GenericReceiveError, GenericSendError};
 use satrs::request::{Apid, GenericMessage, MessageMetadata};
 use satrs::spacepackets::ecss::tc::PusTcReader;
-use satrs::spacepackets::ecss::PusServiceId;
+use satrs::spacepackets::ecss::{PusPacket, PusServiceId};
+use satrs::tmtc::{PacketAsVec, PacketInPool};
 use satrs::ComponentId;
 use satrs_example::config::components::PUS_ROUTING_SERVICE;
 use satrs_example::config::{tmtc_err, CustomPusServiceId};
@@ -53,7 +54,7 @@ pub struct PusTcMpscRouter {
     pub mode_tc_sender: Sender<EcssTcAndToken>,
 }
-pub struct PusReceiver<TmSender: EcssTmSenderCore> {
+pub struct PusTcDistributor<TmSender: EcssTmSender> {
     pub id: ComponentId,
     pub tm_sender: TmSender,
     pub verif_reporter: VerificationReporter,
@@ -61,7 +62,7 @@ pub struct PusReceiver<TmSender: EcssTmSenderCore> {
     stamp_helper: TimeStampHelper,
 }
-impl<TmSender: EcssTmSenderCore> PusReceiver<TmSender> {
+impl<TmSender: EcssTmSender> PusTcDistributor<TmSender> {
     pub fn new(tm_sender: TmSender, pus_router: PusTcMpscRouter) -> Self {
         Self {
             id: PUS_ROUTING_SERVICE.raw(),
@@ -75,19 +76,54 @@ impl<TmSender: EcssTmSenderCore> PusReceiver<TmSender> {
         }
     }
-    pub fn handle_tc_packet(
+    pub fn handle_tc_packet_vec(
         &mut self,
-        tc_in_memory: TcInMemory,
-        service: u8,
-        pus_tc: &PusTcReader,
-    ) -> Result<PusPacketHandlerResult, MpscStoreAndSendError> {
-        let init_token = self.verif_reporter.add_tc(pus_tc);
+        packet_as_vec: PacketAsVec,
+    ) -> Result<PusPacketHandlerResult, GenericSendError> {
+        self.handle_tc_generic(packet_as_vec.sender_id, None, &packet_as_vec.packet)
+    }
+
+    pub fn handle_tc_packet_in_store(
+        &mut self,
+        packet_in_pool: PacketInPool,
+        pus_tc_copy: &[u8],
+    ) -> Result<PusPacketHandlerResult, GenericSendError> {
+        self.handle_tc_generic(
+            packet_in_pool.sender_id,
+            Some(packet_in_pool.store_addr),
+            pus_tc_copy,
+        )
+    }
+
+    pub fn handle_tc_generic(
+        &mut self,
+        sender_id: ComponentId,
+        addr_opt: Option<PoolAddr>,
+        raw_tc: &[u8],
+    ) -> Result<PusPacketHandlerResult, GenericSendError> {
+        let pus_tc_result = PusTcReader::new(raw_tc);
+        if pus_tc_result.is_err() {
+            log::warn!(
+                "error creating PUS TC from raw data received from {}: {}",
+                sender_id,
+                pus_tc_result.unwrap_err()
+            );
+            log::warn!("raw data: {:x?}", raw_tc);
+            return Ok(PusPacketHandlerResult::RequestHandled);
+        }
+        let pus_tc = pus_tc_result.unwrap().0;
+        let init_token = self.verif_reporter.add_tc(&pus_tc);
         self.stamp_helper.update_from_now();
         let accepted_token = self
             .verif_reporter
             .acceptance_success(&self.tm_sender, init_token, self.stamp_helper.stamp())
             .expect("Acceptance success failure");
-        let service = PusServiceId::try_from(service);
+        let service = PusServiceId::try_from(pus_tc.service());
+        let tc_in_memory: TcInMemory = if let Some(store_addr) = addr_opt {
+            PacketInPool::new(sender_id, store_addr).into()
+        } else {
+            PacketAsVec::new(sender_id, Vec::from(raw_tc)).into()
+        };
         match service {
             Ok(standard_service) => match standard_service {
                 PusServiceId::Test => self.pus_router.test_tc_sender.send(EcssTcAndToken {
@@ -128,12 +164,14 @@ impl<TmSender: EcssTmSenderCore> PusReceiver<TmSender> {
             Err(e) => {
                 if let Ok(custom_service) = CustomPusServiceId::try_from(e.number) {
                     match custom_service {
-                        CustomPusServiceId::Mode => {
-                            self.pus_router.mode_tc_sender.send(EcssTcAndToken {
+                        CustomPusServiceId::Mode => self
+                            .pus_router
+                            .mode_tc_sender
+                            .send(EcssTcAndToken {
                                 tc_in_memory,
                                 token: Some(accepted_token.into()),
-                            })?
-                        }
+                            })
+                            .map_err(|_| GenericSendError::RxDisconnected)?,
                         CustomPusServiceId::Health => {}
                     }
                 } else {
@@ -179,12 +217,13 @@ pub trait TargetedPusService {
     ///
     /// The handler exposes the following API:
     ///
-    /// 1. [Self::handle_one_tc] which tries to poll and handle one TC packet, covering steps 1-5.
-    /// 2. [Self::check_one_reply] which tries to poll and handle one reply, covering step 6.
+    /// 1. [Self::poll_and_handle_next_tc] which tries to poll and handle one TC packet, covering
+    ///    steps 1-5.
+    /// 2. [Self::poll_and_check_next_reply] which tries to poll and handle one reply, covering step 6.
     /// 3. [Self::check_for_request_timeouts] which checks for request timeouts, covering step 7.
 pub struct PusTargetedRequestService<
-    TcReceiver: EcssTcReceiverCore,
-    TmSender: EcssTmSenderCore,
+    TcReceiver: EcssTcReceiver,
+    TmSender: EcssTmSender,
     TcInMemConverter: EcssTcInMemConverter,
     VerificationReporter: VerificationReportingProvider,
     RequestConverter: PusTcToRequestConverter<ActiveRequestInfo, RequestType, Error = GenericConversionError>,
@@ -205,8 +244,8 @@ pub struct PusTargetedRequestService<
 }
 impl<
-    TcReceiver: EcssTcReceiverCore,
-    TmSender: EcssTmSenderCore,
+    TcReceiver: EcssTcReceiver,
+    TmSender: EcssTmSender,
     TcInMemConverter: EcssTcInMemConverter,
     VerificationReporter: VerificationReportingProvider,
     RequestConverter: PusTcToRequestConverter<ActiveRequestInfo, RequestType, Error = GenericConversionError>,
@@ -435,7 +474,7 @@ where
     /// Generic timeout handling: Handle the verification failure with a dedicated return code
     /// and also log the error.
     pub fn generic_pus_request_timeout_handler(
-        sender: &(impl EcssTmSenderCore + ?Sized),
+        sender: &(impl EcssTmSender + ?Sized),
         active_request: &(impl ActiveRequestProvider + Debug),
         verification_handler: &impl VerificationReportingProvider,
time_stamp: &[u8], time_stamp: &[u8],
@ -459,7 +498,7 @@ pub(crate) mod tests {
use std::time::Duration; use std::time::Duration;
use satrs::pus::test_util::TEST_COMPONENT_ID_0; use satrs::pus::test_util::TEST_COMPONENT_ID_0;
use satrs::pus::{MpscTmAsVecSender, PusTmAsVec, PusTmVariant}; use satrs::pus::{MpscTmAsVecSender, PusTmVariant};
use satrs::request::RequestId; use satrs::request::RequestId;
use satrs::{ use satrs::{
pus::{ pus::{
@ -489,7 +528,7 @@ pub(crate) mod tests {
pub id: ComponentId, pub id: ComponentId,
pub verif_reporter: TestVerificationReporter, pub verif_reporter: TestVerificationReporter,
pub reply_handler: ReplyHandler, pub reply_handler: ReplyHandler,
pub tm_receiver: mpsc::Receiver<PusTmAsVec>, pub tm_receiver: mpsc::Receiver<PacketAsVec>,
pub default_timeout: Duration, pub default_timeout: Duration,
tm_sender: MpscTmAsVecSender, tm_sender: MpscTmAsVecSender,
phantom: std::marker::PhantomData<(ActiveRequestInfo, Reply)>, phantom: std::marker::PhantomData<(ActiveRequestInfo, Reply)>,
@ -589,7 +628,7 @@ pub(crate) mod tests {
/// Dummy sender component which does nothing on the [Self::send_tm] call. /// Dummy sender component which does nothing on the [Self::send_tm] call.
/// ///
/// Useful for unit tests. /// Useful for unit tests.
impl EcssTmSenderCore for DummySender { impl EcssTmSender for DummySender {
fn send_tm(&self, _source_id: ComponentId, _tm: PusTmVariant) -> Result<(), EcssTmtcError> { fn send_tm(&self, _source_id: ComponentId, _tm: PusTmVariant) -> Result<(), EcssTmtcError> {
Ok(()) Ok(())
} }
@ -696,7 +735,7 @@ pub(crate) mod tests {
ReplyType, ReplyType,
>, >,
pub request_id: Option<RequestId>, pub request_id: Option<RequestId>,
pub tm_funnel_rx: mpsc::Receiver<PusTmAsVec>, pub tm_funnel_rx: mpsc::Receiver<PacketAsVec>,
pub pus_packet_tx: mpsc::Sender<EcssTcAndToken>, pub pus_packet_tx: mpsc::Sender<EcssTcAndToken>,
pub reply_tx: mpsc::Sender<GenericMessage<ReplyType>>, pub reply_tx: mpsc::Sender<GenericMessage<ReplyType>>,
pub request_rx: mpsc::Receiver<GenericMessage<CompositeRequest>>, pub request_rx: mpsc::Receiver<GenericMessage<CompositeRequest>>,
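The reworked distributor can be fed from both memory backends. A minimal usage sketch (not part of the commit; the `tm_sender` and `pus_router` construction from `main.rs` is assumed, and the sender ID constant is hypothetical):

```rust
use satrs::pus::EcssTmSender;
use satrs::tmtc::PacketAsVec;
use satrs::ComponentId;

// Hypothetical ID of the receiving server component, for illustration only.
const UDP_SERVER_ID: ComponentId = 0x42;

// Route one heap-backed telecommand through the distributor.
fn route_raw_tc<TmSender: EcssTmSender>(
    distributor: &mut PusTcDistributor<TmSender>,
    raw_tc: &[u8],
) {
    let packet = PacketAsVec::new(UDP_SERVER_ID, raw_tc.to_vec());
    distributor
        .handle_tc_packet_vec(packet)
        .expect("routing TC failed");
}
```

Pool-backed packets would go through `handle_tc_packet_in_store` with the pool address and a copy of the raw bytes instead.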


@@ -1,5 +1,6 @@
 use derive_new::new;
 use log::{error, warn};
+use satrs::tmtc::{PacketAsVec, PacketSenderWithSharedPool};
 use std::sync::mpsc;
 use std::time::Duration;
@@ -8,8 +9,8 @@ use satrs::pool::SharedStaticMemoryPool;
 use satrs::pus::verification::VerificationReporter;
 use satrs::pus::{
     DefaultActiveRequestMap, EcssTcAndToken, EcssTcInMemConverter, EcssTcInSharedStoreConverter,
-    EcssTcInVecConverter, MpscTcReceiver, MpscTmAsVecSender, MpscTmInSharedPoolSenderBounded,
-    PusPacketHandlerResult, PusServiceHelper, PusTmAsVec, PusTmInPool, TmInSharedPoolSender,
+    EcssTcInVecConverter, MpscTcReceiver, MpscTmAsVecSender, PusPacketHandlerResult,
+    PusServiceHelper,
 };
 use satrs::request::GenericMessage;
 use satrs::{
@@ -20,7 +21,7 @@ use satrs::{
         self, FailParams, TcStateAccepted, TcStateStarted, VerificationReportingProvider,
         VerificationToken,
     },
-        ActivePusRequestStd, ActiveRequestProvider, EcssTmSenderCore, EcssTmtcError,
+        ActivePusRequestStd, ActiveRequestProvider, EcssTmSender, EcssTmtcError,
         GenericConversionError, PusReplyHandler, PusTcToRequestConverter, PusTmVariant,
     },
     request::UniqueApidTargetId,
@@ -53,7 +54,7 @@ impl PusReplyHandler<ActivePusRequestStd, ModeReply> for ModeReplyHandler {
     fn handle_unrequested_reply(
         &mut self,
         reply: &GenericMessage<ModeReply>,
-        _tm_sender: &impl EcssTmSenderCore,
+        _tm_sender: &impl EcssTmSender,
     ) -> Result<(), Self::Error> {
         log::warn!("received unexpected reply for mode service 5: {reply:?}");
         Ok(())
@@ -63,7 +64,7 @@ impl PusReplyHandler<ActivePusRequestStd, ModeReply> for ModeReplyHandler {
         &mut self,
         reply: &GenericMessage<ModeReply>,
         active_request: &ActivePusRequestStd,
-        tm_sender: &impl EcssTmSenderCore,
+        tm_sender: &impl EcssTmSender,
         verification_handler: &impl VerificationReportingProvider,
         time_stamp: &[u8],
     ) -> Result<bool, Self::Error> {
@@ -117,7 +118,7 @@ impl PusReplyHandler<ActivePusRequestStd, ModeReply> for ModeReplyHandler {
     fn handle_request_timeout(
         &mut self,
         active_request: &ActivePusRequestStd,
-        tm_sender: &impl EcssTmSenderCore,
+        tm_sender: &impl EcssTmSender,
         verification_handler: &impl VerificationReportingProvider,
         time_stamp: &[u8],
     ) -> Result<(), Self::Error> {
@@ -142,7 +143,7 @@ impl PusTcToRequestConverter<ActivePusRequestStd, ModeRequest> for ModeRequestCo
         &mut self,
         token: VerificationToken<TcStateAccepted>,
         tc: &PusTcReader,
-        tm_sender: &(impl EcssTmSenderCore + ?Sized),
+        tm_sender: &(impl EcssTmSender + ?Sized),
         verif_reporter: &impl VerificationReportingProvider,
         time_stamp: &[u8],
     ) -> Result<(ActivePusRequestStd, ModeRequest), Self::Error> {
@@ -203,12 +204,12 @@ impl PusTcToRequestConverter<ActivePusRequestStd, ModeRequest> for ModeRequestCo
 }
 pub fn create_mode_service_static(
-    tm_sender: TmInSharedPoolSender<mpsc::SyncSender<PusTmInPool>>,
+    tm_sender: PacketSenderWithSharedPool,
     tc_pool: SharedStaticMemoryPool,
     pus_action_rx: mpsc::Receiver<EcssTcAndToken>,
     mode_router: GenericRequestRouter,
     reply_receiver: mpsc::Receiver<GenericMessage<ModeReply>>,
-) -> ModeServiceWrapper<MpscTmInSharedPoolSenderBounded, EcssTcInSharedStoreConverter> {
+) -> ModeServiceWrapper<PacketSenderWithSharedPool, EcssTcInSharedStoreConverter> {
     let mode_request_handler = PusTargetedRequestService::new(
         PusServiceHelper::new(
             PUS_MODE_SERVICE.id(),
@@ -229,7 +230,7 @@ pub fn create_mode_service_static(
 }
 pub fn create_mode_service_dynamic(
-    tm_funnel_tx: mpsc::Sender<PusTmAsVec>,
+    tm_funnel_tx: mpsc::Sender<PacketAsVec>,
     pus_action_rx: mpsc::Receiver<EcssTcAndToken>,
     mode_router: GenericRequestRouter,
     reply_receiver: mpsc::Receiver<GenericMessage<ModeReply>>,
@@ -253,7 +254,7 @@ pub fn create_mode_service_dynamic(
     }
 }
-pub struct ModeServiceWrapper<TmSender: EcssTmSenderCore, TcInMemConverter: EcssTcInMemConverter> {
+pub struct ModeServiceWrapper<TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter> {
     pub(crate) service: PusTargetedRequestService<
         MpscTcReceiver,
         TmSender,
@@ -268,7 +269,7 @@ pub struct ModeServiceWrapper<TmSender: EcssTmSenderCore, TcInMemConverter: Ecss
     >,
 }
-impl<TmSender: EcssTmSenderCore, TcInMemConverter: EcssTcInMemConverter> TargetedPusService
+impl<TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter> TargetedPusService
     for ModeServiceWrapper<TmSender, TcInMemConverter>
 {
     /// Returns [true] if the packet handling is finished.
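A sketch of wiring the heap-backed mode service (assumptions: `MpscTmAsVecSender` is the plain `mpsc::Sender<PacketAsVec>` alias used above, and the `GenericRequestRouter` value is built elsewhere by the application):

```rust
use std::sync::mpsc;

// Sketch: create the channel endpoints and the heap-backed mode service. In a
// real setup the unused receiving and sending ends are kept by the TM sink,
// the PUS distributor and the mode reply senders.
fn wire_mode_service(
    mode_router: GenericRequestRouter,
) -> ModeServiceWrapper<MpscTmAsVecSender, EcssTcInVecConverter> {
    let (tm_funnel_tx, _tm_funnel_rx) = mpsc::channel();
    let (_mode_tc_tx, mode_tc_rx) = mpsc::channel();
    let (_reply_tx, reply_rx) = mpsc::channel();
    create_mode_service_dynamic(tm_funnel_tx, mode_tc_rx, mode_router, reply_rx)
}
```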


@@ -9,53 +9,62 @@ use satrs::pus::scheduler_srv::PusSchedServiceHandler;
 use satrs::pus::verification::VerificationReporter;
 use satrs::pus::{
     EcssTcAndToken, EcssTcInMemConverter, EcssTcInSharedStoreConverter, EcssTcInVecConverter,
-    EcssTmSenderCore, MpscTcReceiver, MpscTmAsVecSender, MpscTmInSharedPoolSenderBounded,
-    PusPacketHandlerResult, PusServiceHelper, PusTmAsVec, PusTmInPool, TmInSharedPoolSender,
+    EcssTmSender, MpscTcReceiver, MpscTmAsVecSender, PusPacketHandlerResult, PusServiceHelper,
 };
+use satrs::tmtc::{PacketAsVec, PacketInPool, PacketSenderWithSharedPool};
+use satrs::ComponentId;
 use satrs_example::config::components::PUS_SCHED_SERVICE;
-use crate::tmtc::PusTcSourceProviderSharedPool;
 use super::HandlingStatus;
 pub trait TcReleaser {
-    fn release(&mut self, enabled: bool, info: &TcInfo, tc: &[u8]) -> bool;
+    fn release(&mut self, sender_id: ComponentId, enabled: bool, info: &TcInfo, tc: &[u8]) -> bool;
 }
-impl TcReleaser for PusTcSourceProviderSharedPool {
-    fn release(&mut self, enabled: bool, _info: &TcInfo, tc: &[u8]) -> bool {
+impl TcReleaser for PacketSenderWithSharedPool {
+    fn release(
+        &mut self,
+        sender_id: ComponentId,
+        enabled: bool,
+        _info: &TcInfo,
+        tc: &[u8],
+    ) -> bool {
         if enabled {
+            let shared_pool = self.shared_pool.get_mut();
             // Transfer TC from scheduler TC pool to shared TC pool.
-            let released_tc_addr = self
-                .shared_pool
-                .pool
+            let released_tc_addr = shared_pool
+                .0
                 .write()
                 .expect("locking pool failed")
                 .add(tc)
                 .expect("adding TC to shared pool failed");
-            self.tc_source
-                .send(released_tc_addr)
+            self.sender
+                .send(PacketInPool::new(sender_id, released_tc_addr))
                 .expect("sending TC to TC source failed");
         }
         true
     }
 }
-impl TcReleaser for mpsc::Sender<Vec<u8>> {
-    fn release(&mut self, enabled: bool, _info: &TcInfo, tc: &[u8]) -> bool {
+impl TcReleaser for mpsc::Sender<PacketAsVec> {
+    fn release(
+        &mut self,
+        sender_id: ComponentId,
+        enabled: bool,
+        _info: &TcInfo,
+        tc: &[u8],
+    ) -> bool {
         if enabled {
             // Send released TC to centralized TC source.
-            self.send(tc.to_vec())
+            self.send(PacketAsVec::new(sender_id, tc.to_vec()))
                 .expect("sending TC to TC source failed");
         }
         true
     }
 }
-pub struct SchedulingServiceWrapper<
-    TmSender: EcssTmSenderCore,
-    TcInMemConverter: EcssTcInMemConverter,
-> {
+pub struct SchedulingServiceWrapper<TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter>
+{
     pub pus_11_handler: PusSchedServiceHandler<
         MpscTcReceiver,
         TmSender,
@@ -68,12 +77,13 @@ pub struct SchedulingServiceWrapper<
     pub tc_releaser: Box<dyn TcReleaser + Send>,
 }
-impl<TmSender: EcssTmSenderCore, TcInMemConverter: EcssTcInMemConverter>
+impl<TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter>
     SchedulingServiceWrapper<TmSender, TcInMemConverter>
 {
     pub fn release_tcs(&mut self) {
+        let id = self.pus_11_handler.service_helper.id();
         let releaser = |enabled: bool, info: &TcInfo, tc: &[u8]| -> bool {
-            self.tc_releaser.release(enabled, info, tc)
+            self.tc_releaser.release(id, enabled, info, tc)
         };
         self.pus_11_handler
@@ -121,11 +131,11 @@ impl<TmSender: EcssTmSenderCore, TcInMemConverter: EcssTcInMemConverter>
 }
 pub fn create_scheduler_service_static(
-    tm_sender: TmInSharedPoolSender<mpsc::SyncSender<PusTmInPool>>,
-    tc_releaser: PusTcSourceProviderSharedPool,
+    tm_sender: PacketSenderWithSharedPool,
+    tc_releaser: PacketSenderWithSharedPool,
     pus_sched_rx: mpsc::Receiver<EcssTcAndToken>,
     sched_tc_pool: StaticMemoryPool,
-) -> SchedulingServiceWrapper<MpscTmInSharedPoolSenderBounded, EcssTcInSharedStoreConverter> {
+) -> SchedulingServiceWrapper<PacketSenderWithSharedPool, EcssTcInSharedStoreConverter> {
     let scheduler = PusScheduler::new_with_current_init_time(Duration::from_secs(5))
         .expect("Creating PUS Scheduler failed");
     let pus_11_handler = PusSchedServiceHandler::new(
@@ -134,7 +144,7 @@ pub fn create_scheduler_service_static(
         pus_sched_rx,
         tm_sender,
         create_verification_reporter(PUS_SCHED_SERVICE.id(), PUS_SCHED_SERVICE.apid),
-        EcssTcInSharedStoreConverter::new(tc_releaser.clone_backing_pool(), 2048),
+        EcssTcInSharedStoreConverter::new(tc_releaser.shared_packet_store().0.clone(), 2048),
         ),
         scheduler,
     );
@@ -147,8 +157,8 @@ pub fn create_scheduler_service_static(
 }
 pub fn create_scheduler_service_dynamic(
-    tm_funnel_tx: mpsc::Sender<PusTmAsVec>,
-    tc_source_sender: mpsc::Sender<Vec<u8>>,
+    tm_funnel_tx: mpsc::Sender<PacketAsVec>,
+    tc_source_sender: mpsc::Sender<PacketAsVec>,
     pus_sched_rx: mpsc::Receiver<EcssTcAndToken>,
     sched_tc_pool: StaticMemoryPool,
 ) -> SchedulingServiceWrapper<MpscTmAsVecSender, EcssTcInVecConverter> {
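Because the wrapper stores the releaser as `Box<dyn TcReleaser + Send>`, further backends can be slotted in next to the pool and mpsc variants. A sketch of a discarding releaser for tests (the `TcInfo` import from the scheduler module is assumed to be in scope, as above):

```rust
use satrs::ComponentId;

// Sketch: discards released TCs instead of forwarding them; handy for tests.
pub struct DiscardingTcReleaser;

impl TcReleaser for DiscardingTcReleaser {
    fn release(
        &mut self,
        sender_id: ComponentId,
        enabled: bool,
        _info: &TcInfo,
        tc: &[u8],
    ) -> bool {
        if enabled {
            log::debug!("sender {}: discarding released TC with {} bytes", sender_id, tc.len());
        }
        // Returning true lets the scheduler delete the TC from its own pool.
        true
    }
}
```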


@@ -1,7 +1,7 @@
 use crate::pus::mode::ModeServiceWrapper;
 use derive_new::new;
 use satrs::{
-    pus::{EcssTcInMemConverter, EcssTmSenderCore},
+    pus::{EcssTcInMemConverter, EcssTmSender},
     spacepackets::time::{cds, TimeWriter},
 };
@@ -12,7 +12,7 @@ use super::{
 };
 #[derive(new)]
-pub struct PusStack<TmSender: EcssTmSenderCore, TcInMemConverter: EcssTcInMemConverter> {
+pub struct PusStack<TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter> {
     test_srv: TestCustomServiceWrapper<TmSender, TcInMemConverter>,
     hk_srv_wrapper: HkServiceWrapper<TmSender, TcInMemConverter>,
     event_srv: EventServiceWrapper<TmSender, TcInMemConverter>,
@@ -21,7 +21,7 @@ pub struct PusStack<TmSender: EcssTmSenderCore, TcInMemConverter: EcssTcInMemCon
     mode_srv: ModeServiceWrapper<TmSender, TcInMemConverter>,
 }
-impl<TmSender: EcssTmSenderCore, TcInMemConverter: EcssTcInMemConverter>
+impl<TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter>
     PusStack<TmSender, TcInMemConverter>
 {
     pub fn periodic_operation(&mut self) {


@@ -6,14 +6,14 @@ use satrs::pus::test::PusService17TestHandler;
 use satrs::pus::verification::{FailParams, VerificationReporter, VerificationReportingProvider};
 use satrs::pus::EcssTcInSharedStoreConverter;
 use satrs::pus::{
-    EcssTcAndToken, EcssTcInMemConverter, EcssTcInVecConverter, EcssTmSenderCore, MpscTcReceiver,
-    MpscTmAsVecSender, MpscTmInSharedPoolSenderBounded, PusPacketHandlerResult, PusServiceHelper,
-    PusTmAsVec, PusTmInPool, TmInSharedPoolSender,
+    EcssTcAndToken, EcssTcInMemConverter, EcssTcInVecConverter, EcssTmSender, MpscTcReceiver,
+    MpscTmAsVecSender, PusPacketHandlerResult, PusServiceHelper,
 };
 use satrs::spacepackets::ecss::tc::PusTcReader;
 use satrs::spacepackets::ecss::PusPacket;
 use satrs::spacepackets::time::cds::CdsTime;
 use satrs::spacepackets::time::TimeWriter;
+use satrs::tmtc::{PacketAsVec, PacketSenderWithSharedPool};
 use satrs_example::config::components::PUS_TEST_SERVICE;
 use satrs_example::config::{tmtc_err, TEST_EVENT};
 use std::sync::mpsc;
@@ -21,11 +21,11 @@ use std::sync::mpsc;
 use super::HandlingStatus;
 pub fn create_test_service_static(
-    tm_sender: TmInSharedPoolSender<mpsc::SyncSender<PusTmInPool>>,
+    tm_sender: PacketSenderWithSharedPool,
     tc_pool: SharedStaticMemoryPool,
     event_sender: mpsc::Sender<EventMessageU32>,
     pus_test_rx: mpsc::Receiver<EcssTcAndToken>,
-) -> TestCustomServiceWrapper<MpscTmInSharedPoolSenderBounded, EcssTcInSharedStoreConverter> {
+) -> TestCustomServiceWrapper<PacketSenderWithSharedPool, EcssTcInSharedStoreConverter> {
     let pus17_handler = PusService17TestHandler::new(PusServiceHelper::new(
         PUS_TEST_SERVICE.id(),
         pus_test_rx,
@@ -40,7 +40,7 @@ pub fn create_test_service_static(
 }
 pub fn create_test_service_dynamic(
-    tm_funnel_tx: mpsc::Sender<PusTmAsVec>,
+    tm_funnel_tx: mpsc::Sender<PacketAsVec>,
     event_sender: mpsc::Sender<EventMessageU32>,
     pus_test_rx: mpsc::Receiver<EcssTcAndToken>,
 ) -> TestCustomServiceWrapper<MpscTmAsVecSender, EcssTcInVecConverter> {
@@ -57,16 +57,14 @@ pub fn create_test_service_dynamic(
     }
 }
-pub struct TestCustomServiceWrapper<
-    TmSender: EcssTmSenderCore,
-    TcInMemConverter: EcssTcInMemConverter,
-> {
+pub struct TestCustomServiceWrapper<TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter>
+{
     pub handler:
         PusService17TestHandler<MpscTcReceiver, TmSender, TcInMemConverter, VerificationReporter>,
     pub test_srv_event_sender: mpsc::Sender<EventMessageU32>,
 }
-impl<TmSender: EcssTmSenderCore, TcInMemConverter: EcssTcInMemConverter>
+impl<TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter>
     TestCustomServiceWrapper<TmSender, TcInMemConverter>
 {
     pub fn poll_and_handle_next_packet(&mut self, time_stamp: &[u8]) -> HandlingStatus {
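The creation helpers only need the mpsc endpoints. A wiring sketch for the heap-backed variant (in a real setup the receiving ends are kept by the TM sink, the event manager and the PUS distributor; here they are dropped for brevity):

```rust
use std::sync::mpsc;

// Sketch: wire the heap-backed PUS 17 test service from scratch.
fn wire_test_service() -> TestCustomServiceWrapper<MpscTmAsVecSender, EcssTcInVecConverter> {
    let (tm_funnel_tx, _tm_funnel_rx) = mpsc::channel();
    let (event_tx, _event_rx) = mpsc::channel();
    let (_pus_test_tx, pus_test_rx) = mpsc::channel();
    create_test_service_dynamic(tm_funnel_tx, event_tx, pus_test_rx)
}
```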


@@ -8,7 +8,7 @@ use satrs::mode::ModeRequest;
 use satrs::pus::verification::{
     FailParams, TcStateAccepted, VerificationReportingProvider, VerificationToken,
 };
-use satrs::pus::{ActiveRequestProvider, EcssTmSenderCore, GenericRoutingError, PusRequestRouter};
+use satrs::pus::{ActiveRequestProvider, EcssTmSender, GenericRoutingError, PusRequestRouter};
 use satrs::queue::GenericSendError;
 use satrs::request::{GenericMessage, MessageMetadata, UniqueApidTargetId};
 use satrs::spacepackets::ecss::tc::PusTcReader;
@@ -47,7 +47,7 @@ impl GenericRequestRouter {
         active_request: &impl ActiveRequestProvider,
         tc: &PusTcReader,
         error: GenericRoutingError,
-        tm_sender: &(impl EcssTmSenderCore + ?Sized),
+        tm_sender: &(impl EcssTmSender + ?Sized),
         verif_reporter: &impl VerificationReportingProvider,
         time_stamp: &[u8],
     ) {


@@ -1,53 +0,0 @@
use satrs::pus::ReceivesEcssPusTc;
use satrs::spacepackets::{CcsdsPacket, SpHeader};
use satrs::tmtc::{CcsdsPacketHandler, ReceivesCcsdsTc};
use satrs::ValidatorU16Id;
use satrs_example::config::components::Apid;
use satrs_example::config::APID_VALIDATOR;
#[derive(Clone)]
pub struct CcsdsReceiver<
TcSource: ReceivesCcsdsTc<Error = E> + ReceivesEcssPusTc<Error = E> + Clone,
E,
> {
pub tc_source: TcSource,
}
impl<
TcSource: ReceivesCcsdsTc<Error = E> + ReceivesEcssPusTc<Error = E> + Clone + 'static,
E: 'static,
> ValidatorU16Id for CcsdsReceiver<TcSource, E>
{
fn validate(&self, apid: u16) -> bool {
APID_VALIDATOR.contains(&apid)
}
}
impl<
TcSource: ReceivesCcsdsTc<Error = E> + ReceivesEcssPusTc<Error = E> + Clone + 'static,
E: 'static,
> CcsdsPacketHandler for CcsdsReceiver<TcSource, E>
{
type Error = E;
fn handle_packet_with_valid_apid(
&mut self,
sp_header: &SpHeader,
tc_raw: &[u8],
) -> Result<(), Self::Error> {
if sp_header.apid() == Apid::Cfdp as u16 {
} else {
return self.tc_source.pass_ccsds(sp_header, tc_raw);
}
Ok(())
}
fn handle_packet_with_unknown_apid(
&mut self,
sp_header: &SpHeader,
_tc_raw: &[u8],
) -> Result<(), Self::Error> {
log::warn!("unknown APID 0x{:x?} detected", sp_header.apid());
Ok(())
}
}


@@ -1,215 +1,2 @@
+pub mod tc_source;
+pub mod tm_sink;
use log::warn;
use satrs::pus::{
    EcssTcAndToken, MpscTmAsVecSender, MpscTmInSharedPoolSenderBounded, ReceivesEcssPusTc,
};
use satrs::spacepackets::SpHeader;
use std::sync::mpsc::{self, Receiver, SendError, Sender, SyncSender, TryRecvError};
use thiserror::Error;
use crate::pus::PusReceiver;
use satrs::pool::{PoolProvider, SharedStaticMemoryPool, StoreAddr, StoreError};
use satrs::spacepackets::ecss::tc::PusTcReader;
use satrs::spacepackets::ecss::PusPacket;
use satrs::tmtc::ReceivesCcsdsTc;
pub mod ccsds;
pub mod tm_funnel;
#[derive(Debug, Clone, PartialEq, Eq, Error)]
pub enum MpscStoreAndSendError {
#[error("Store error: {0}")]
Store(#[from] StoreError),
#[error("TC send error: {0}")]
TcSend(#[from] SendError<EcssTcAndToken>),
#[error("TMTC send error: {0}")]
TmTcSend(#[from] SendError<StoreAddr>),
}
#[derive(Clone)]
pub struct SharedTcPool {
pub pool: SharedStaticMemoryPool,
}
impl SharedTcPool {
pub fn add_pus_tc(&mut self, pus_tc: &PusTcReader) -> Result<StoreAddr, StoreError> {
let mut pg = self.pool.write().expect("error locking TC store");
let addr = pg.free_element(pus_tc.len_packed(), |buf| {
buf[0..pus_tc.len_packed()].copy_from_slice(pus_tc.raw_data());
})?;
Ok(addr)
}
}
#[derive(Clone)]
pub struct PusTcSourceProviderSharedPool {
pub tc_source: SyncSender<StoreAddr>,
pub shared_pool: SharedTcPool,
}
impl PusTcSourceProviderSharedPool {
#[allow(dead_code)]
pub fn clone_backing_pool(&self) -> SharedStaticMemoryPool {
self.shared_pool.pool.clone()
}
}
impl ReceivesEcssPusTc for PusTcSourceProviderSharedPool {
type Error = MpscStoreAndSendError;
fn pass_pus_tc(&mut self, _: &SpHeader, pus_tc: &PusTcReader) -> Result<(), Self::Error> {
let addr = self.shared_pool.add_pus_tc(pus_tc)?;
self.tc_source.send(addr)?;
Ok(())
}
}
impl ReceivesCcsdsTc for PusTcSourceProviderSharedPool {
type Error = MpscStoreAndSendError;
fn pass_ccsds(&mut self, _: &SpHeader, tc_raw: &[u8]) -> Result<(), Self::Error> {
let mut pool = self.shared_pool.pool.write().expect("locking pool failed");
let addr = pool.add(tc_raw)?;
drop(pool);
self.tc_source.send(addr)?;
Ok(())
}
}
// Newtype, can not implement necessary traits on MPSC sender directly because of orphan rules.
#[derive(Clone)]
pub struct PusTcSourceProviderDynamic(pub Sender<Vec<u8>>);
impl ReceivesEcssPusTc for PusTcSourceProviderDynamic {
type Error = SendError<Vec<u8>>;
fn pass_pus_tc(&mut self, _: &SpHeader, pus_tc: &PusTcReader) -> Result<(), Self::Error> {
self.0.send(pus_tc.raw_data().to_vec())?;
Ok(())
}
}
impl ReceivesCcsdsTc for PusTcSourceProviderDynamic {
type Error = mpsc::SendError<Vec<u8>>;
fn pass_ccsds(&mut self, _: &SpHeader, tc_raw: &[u8]) -> Result<(), Self::Error> {
self.0.send(tc_raw.to_vec())?;
Ok(())
}
}
// TC source components where static pools are the backing memory of the received telecommands.
pub struct TcSourceTaskStatic {
shared_tc_pool: SharedTcPool,
tc_receiver: Receiver<StoreAddr>,
tc_buf: [u8; 4096],
pus_receiver: PusReceiver<MpscTmInSharedPoolSenderBounded>,
}
impl TcSourceTaskStatic {
pub fn new(
shared_tc_pool: SharedTcPool,
tc_receiver: Receiver<StoreAddr>,
pus_receiver: PusReceiver<MpscTmInSharedPoolSenderBounded>,
) -> Self {
Self {
shared_tc_pool,
tc_receiver,
tc_buf: [0; 4096],
pus_receiver,
}
}
pub fn periodic_operation(&mut self) {
self.poll_tc();
}
pub fn poll_tc(&mut self) -> bool {
match self.tc_receiver.try_recv() {
Ok(addr) => {
let pool = self
.shared_tc_pool
.pool
.read()
.expect("locking tc pool failed");
pool.read(&addr, &mut self.tc_buf)
.expect("reading pool failed");
drop(pool);
match PusTcReader::new(&self.tc_buf) {
Ok((pus_tc, _)) => {
self.pus_receiver
.handle_tc_packet(
satrs::pus::TcInMemory::StoreAddr(addr),
pus_tc.service(),
&pus_tc,
)
.ok();
true
}
Err(e) => {
warn!("error creating PUS TC from raw data: {e}");
warn!("raw data: {:x?}", self.tc_buf);
true
}
}
}
Err(e) => match e {
TryRecvError::Empty => false,
TryRecvError::Disconnected => {
warn!("tmtc thread: sender disconnected");
false
}
},
}
}
}
// TC source components where the heap is the backing memory of the received telecommands.
pub struct TcSourceTaskDynamic {
pub tc_receiver: Receiver<Vec<u8>>,
pus_receiver: PusReceiver<MpscTmAsVecSender>,
}
impl TcSourceTaskDynamic {
pub fn new(
tc_receiver: Receiver<Vec<u8>>,
pus_receiver: PusReceiver<MpscTmAsVecSender>,
) -> Self {
Self {
tc_receiver,
pus_receiver,
}
}
pub fn periodic_operation(&mut self) {
self.poll_tc();
}
pub fn poll_tc(&mut self) -> bool {
match self.tc_receiver.try_recv() {
Ok(tc) => match PusTcReader::new(&tc) {
Ok((pus_tc, _)) => {
self.pus_receiver
.handle_tc_packet(
satrs::pus::TcInMemory::Vec(tc.clone()),
pus_tc.service(),
&pus_tc,
)
.ok();
true
}
Err(e) => {
warn!("error creating PUS TC from raw data: {e}");
warn!("raw data: {:x?}", tc);
true
}
},
Err(e) => match e {
TryRecvError::Empty => false,
TryRecvError::Disconnected => {
warn!("tmtc thread: sender disconnected");
false
}
},
}
}
}


@@ -0,0 +1,106 @@
use satrs::{
pool::PoolProvider,
tmtc::{PacketAsVec, PacketInPool, PacketSenderWithSharedPool, SharedPacketPool},
};
use std::sync::mpsc::{self, TryRecvError};
use satrs::pus::MpscTmAsVecSender;
use crate::pus::{HandlingStatus, PusTcDistributor};
// TC source components where static pools are the backing memory of the received telecommands.
pub struct TcSourceTaskStatic {
shared_tc_pool: SharedPacketPool,
tc_receiver: mpsc::Receiver<PacketInPool>,
tc_buf: [u8; 4096],
pus_distributor: PusTcDistributor<PacketSenderWithSharedPool>,
}
impl TcSourceTaskStatic {
pub fn new(
shared_tc_pool: SharedPacketPool,
tc_receiver: mpsc::Receiver<PacketInPool>,
pus_receiver: PusTcDistributor<PacketSenderWithSharedPool>,
) -> Self {
Self {
shared_tc_pool,
tc_receiver,
tc_buf: [0; 4096],
pus_distributor: pus_receiver,
}
}
pub fn periodic_operation(&mut self) {
self.poll_tc();
}
pub fn poll_tc(&mut self) -> HandlingStatus {
// Right now, we only expect ECSS PUS packets.
// If packets like CFDP are expected, we might have to check the APID first.
match self.tc_receiver.try_recv() {
Ok(packet_in_pool) => {
let pool = self
.shared_tc_pool
.0
.read()
.expect("locking tc pool failed");
pool.read(&packet_in_pool.store_addr, &mut self.tc_buf)
.expect("reading pool failed");
drop(pool);
self.pus_distributor
.handle_tc_packet_in_store(packet_in_pool, &self.tc_buf)
.ok();
HandlingStatus::HandledOne
}
Err(e) => match e {
TryRecvError::Empty => HandlingStatus::Empty,
TryRecvError::Disconnected => {
log::warn!("tmtc thread: sender disconnected");
HandlingStatus::Empty
}
},
}
}
}
// TC source components where the heap is the backing memory of the received telecommands.
pub struct TcSourceTaskDynamic {
pub tc_receiver: mpsc::Receiver<PacketAsVec>,
pus_distributor: PusTcDistributor<MpscTmAsVecSender>,
}
impl TcSourceTaskDynamic {
pub fn new(
tc_receiver: mpsc::Receiver<PacketAsVec>,
pus_receiver: PusTcDistributor<MpscTmAsVecSender>,
) -> Self {
Self {
tc_receiver,
pus_distributor: pus_receiver,
}
}
pub fn periodic_operation(&mut self) {
self.poll_tc();
}
pub fn poll_tc(&mut self) -> HandlingStatus {
// Right now, we only expect ECSS PUS packets.
// If packets like CFDP are expected, we might have to check the APID first.
match self.tc_receiver.try_recv() {
Ok(packet_as_vec) => {
self.pus_distributor
.handle_tc_packet_vec(packet_as_vec)
.ok();
HandlingStatus::HandledOne
}
Err(e) => match e {
TryRecvError::Empty => HandlingStatus::Empty,
TryRecvError::Disconnected => {
log::warn!("tmtc thread: sender disconnected");
HandlingStatus::Empty
}
},
}
}
}
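The `HandlingStatus` return value of `poll_tc` allows draining the queue without busy-waiting on an empty channel. A minimal sketch:

```rust
// Poll until the TC queue is drained or the sender side disconnected.
fn drain_tcs(task: &mut TcSourceTaskDynamic) {
    loop {
        if let HandlingStatus::Empty = task.poll_tc() {
            break;
        }
    }
}
```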


@@ -4,7 +4,7 @@ use std::{
 };
 use log::info;
-use satrs::pus::{PusTmAsVec, PusTmInPool};
+use satrs::tmtc::{PacketAsVec, PacketInPool, SharedPacketPool};
 use satrs::{
     pool::PoolProvider,
     seq_count::{CcsdsSimpleSeqCountProvider, SequenceCountProviderCore},
@@ -13,7 +13,6 @@ use satrs::{
         time::cds::MIN_CDS_FIELD_LEN,
         CcsdsPacket,
     },
-    tmtc::tm_helper::SharedTmPool,
 };
 use crate::interface::tcp::SyncTcpTmSource;
@@ -77,17 +76,17 @@ impl TmFunnelCommon {
 pub struct TmFunnelStatic {
     common: TmFunnelCommon,
-    shared_tm_store: SharedTmPool,
-    tm_funnel_rx: mpsc::Receiver<PusTmInPool>,
-    tm_server_tx: mpsc::SyncSender<PusTmInPool>,
+    shared_tm_store: SharedPacketPool,
+    tm_funnel_rx: mpsc::Receiver<PacketInPool>,
+    tm_server_tx: mpsc::SyncSender<PacketInPool>,
 }
 impl TmFunnelStatic {
     pub fn new(
-        shared_tm_store: SharedTmPool,
+        shared_tm_store: SharedPacketPool,
         sync_tm_tcp_source: SyncTcpTmSource,
-        tm_funnel_rx: mpsc::Receiver<PusTmInPool>,
-        tm_server_tx: mpsc::SyncSender<PusTmInPool>,
+        tm_funnel_rx: mpsc::Receiver<PacketInPool>,
+        tm_server_tx: mpsc::SyncSender<PacketInPool>,
     ) -> Self {
         Self {
             common: TmFunnelCommon::new(sync_tm_tcp_source),
@@ -101,7 +100,7 @@ impl TmFunnelStatic {
         if let Ok(pus_tm_in_pool) = self.tm_funnel_rx.recv() {
             // Read the TM, set sequence counter and message counter, and finally update
             // the CRC.
-            let shared_pool = self.shared_tm_store.clone_backing_pool();
+            let shared_pool = self.shared_tm_store.0.clone();
             let mut pool_guard = shared_pool.write().expect("Locking TM pool failed");
             let mut tm_copy = Vec::new();
             pool_guard
@@ -124,15 +123,15 @@ impl TmFunnelStatic {
 pub struct TmFunnelDynamic {
     common: TmFunnelCommon,
-    tm_funnel_rx: mpsc::Receiver<PusTmAsVec>,
-    tm_server_tx: mpsc::Sender<PusTmAsVec>,
+    tm_funnel_rx: mpsc::Receiver<PacketAsVec>,
+    tm_server_tx: mpsc::Sender<PacketAsVec>,
 }
 impl TmFunnelDynamic {
     pub fn new(
         sync_tm_tcp_source: SyncTcpTmSource,
-        tm_funnel_rx: mpsc::Receiver<PusTmAsVec>,
-        tm_server_tx: mpsc::Sender<PusTmAsVec>,
+        tm_funnel_rx: mpsc::Receiver<PacketAsVec>,
+        tm_server_tx: mpsc::Sender<PacketAsVec>,
     ) -> Self {
         Self {
             common: TmFunnelCommon::new(sync_tm_tcp_source),


@@ -28,6 +28,12 @@ and this project adheres to [Semantic Versioning](http://semver.org/).
 ## Changed
+- Renamed `ReceivesTcCore` to `PacketSenderRaw` to better show its primary purpose. It now contains
+  a `send_packet` method which is not mutable anymore.
+- Renamed `TmPacketSourceCore` to `TmPacketSource`.
+- Renamed `EcssTmSenderCore` to `EcssTmSender`.
+- Renamed `StoreAddr` to `PoolAddr`.
+- Renamed `StoreError` to `PoolError`.
 - TCP server generics order. The error generics come last now.
 - `encoding::ccsds::PacketIdValidator` renamed to `ValidatorU16Id`, which lives in the crate root.
   It can be used for both CCSDS packet ID and CCSDS APID validation.
@@ -76,6 +82,9 @@ and this project adheres to [Semantic Versioning](http://semver.org/).
 ## Removed
 - Remove `objects` module.
+- Removed CCSDS and PUS distributor modules. Their worth is questionable in an architecture
+  where routing traits are sufficient and the core logic to demultiplex and distribute packets
+  is simple enough to be application code.

 # [v0.2.0-rc.0] 2024-02-21


@@ -1,4 +1,4 @@
-use crate::{params::Params, pool::StoreAddr};
+use crate::{params::Params, pool::PoolAddr};
 #[cfg(feature = "alloc")]
 pub use alloc_mod::*;
@@ -21,7 +21,7 @@ impl ActionRequest {
 #[derive(Clone, Eq, PartialEq, Debug)]
 pub enum ActionRequestVariant {
     NoData,
-    StoreData(StoreAddr),
+    StoreData(PoolAddr),
     #[cfg(feature = "alloc")]
     VecData(alloc::vec::Vec<u8>),
 }


@@ -1,23 +1,52 @@
-use crate::{tmtc::ReceivesTcCore, ValidatorU16Id};
+use spacepackets::{CcsdsPacket, SpHeader};
+use crate::{tmtc::PacketSenderRaw, ComponentId};
+
+#[derive(Debug, Copy, Clone, PartialEq, Eq)]
+pub enum SpValidity {
+    Valid,
+    /// The space packet can be assumed to have a valid format, but the packet should
+    /// be skipped.
+    Skip,
+    /// The space packet or space packet header has an invalid format, for example a CRC check
+    /// failed. In that case, the parser loses the packet synchronization and needs to search for
+    /// the start of a new space packet header again. The space packet header
+    /// [spacepackets::PacketId] can be used as a synchronization marker to detect the start
+    /// of a possible valid packet again.
+    Invalid,
+}
+
+/// Simple trait to allow user code to check the validity of a space packet.
+pub trait SpacePacketValidator {
+    fn validate(&self, sp_header: &SpHeader, raw_buf: &[u8]) -> SpValidity;
+}
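A validator implementation is typically only a few lines. An illustrative sketch (not part of the commit) which accepts a single APID and skips all other well-formed packets:

```rust
use spacepackets::{CcsdsPacket, SpHeader};

// Accept packets from one known APID and skip all other well-formed packets.
pub struct SingleApidValidator {
    pub apid: u16,
}

impl SpacePacketValidator for SingleApidValidator {
    fn validate(&self, sp_header: &SpHeader, _raw_buf: &[u8]) -> SpValidity {
        if sp_header.apid() == self.apid {
            SpValidity::Valid
        } else {
            SpValidity::Skip
        }
    }
}
```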
 /// This function parses a given buffer for tightly packed CCSDS space packets. It uses the
-/// [PacketId] field of the CCSDS packets to detect the start of a CCSDS space packet and then
-/// uses the length field of the packet to extract CCSDS packets.
+/// [spacepackets::SpHeader] of the CCSDS packets and a user provided [SpacePacketValidator]
+/// to check whether a received space packet is relevant for processing.
 ///
 /// This function is also able to deal with broken tail packets at the end as long as the parser
 /// can read the full 7 bytes which constitute a space packet header plus one byte minimal size.
 /// If broken tail packets are detected, they are moved to the front of the buffer, and the write
 /// index for future write operations will be written to the `next_write_idx` argument.
 ///
-/// The parser will write all packets which were decoded successfully to the given `tc_receiver`
-/// and return the number of packets found. If the [ReceivesTcCore::pass_tc] calls fails, the
-/// error will be returned.
+/// The parser will behave differently based on the [SpValidity] returned from the user provided
+/// [SpacePacketValidator]:
+///
+/// 1. [SpValidity::Valid]: The parser will forward all packets to the given `packet_sender` and
+///    return the number of packets found. If the [PacketSenderRaw::send_packet] call fails, the
+///    error will be returned.
+/// 2. [SpValidity::Invalid]: The parser assumes that the synchronization is lost and tries to
+///    find the start of a new space packet header by scanning all the following bytes.
+/// 3. [SpValidity::Skip]: The parser skips the packet using the packet length determined from the
+///    space packet header.
-pub fn parse_buffer_for_ccsds_space_packets<E>(
+pub fn parse_buffer_for_ccsds_space_packets<SendError>(
     buf: &mut [u8],
-    packet_id_validator: &(impl ValidatorU16Id + ?Sized),
-    tc_receiver: &mut (impl ReceivesTcCore<Error = E> + ?Sized),
+    packet_validator: &(impl SpacePacketValidator + ?Sized),
+    sender_id: ComponentId,
+    packet_sender: &(impl PacketSenderRaw<Error = SendError> + ?Sized),
     next_write_idx: &mut usize,
-) -> Result<u32, E> {
+) -> Result<u32, SendError> {
     *next_write_idx = 0;
     let mut packets_found = 0;
     let mut current_idx = 0;
@@ -26,13 +55,14 @@ pub fn parse_buffer_for_ccsds_space_packets<E>(
         if current_idx + 7 >= buf.len() {
             break;
         }
-        let packet_id = u16::from_be_bytes(buf[current_idx..current_idx + 2].try_into().unwrap());
-        if packet_id_validator.validate(packet_id) {
-            let length_field =
-                u16::from_be_bytes(buf[current_idx + 4..current_idx + 6].try_into().unwrap());
-            let packet_size = length_field + 7;
-            if (current_idx + packet_size as usize) <= buf_len {
-                tc_receiver.pass_tc(&buf[current_idx..current_idx + packet_size as usize])?;
-                packets_found += 1;
-            } else {
+        let sp_header = SpHeader::from_be_bytes(&buf[current_idx..]).unwrap().0;
+        // let packet_id = u16::from_be_bytes(buf[current_idx..current_idx + 2].try_into().unwrap());
+        match packet_validator.validate(&sp_header, &buf[current_idx..]) {
+            SpValidity::Valid => {
+                let packet_size = sp_header.total_len();
+                if (current_idx + packet_size) <= buf_len {
+                    packet_sender
+                        .send_packet(sender_id, &buf[current_idx..current_idx + packet_size])?;
+                    packets_found += 1;
+                } else {
                     // Move packet to start of buffer if applicable.
@@ -41,11 +71,18 @@ pub fn parse_buffer_for_ccsds_space_packets<E>(
                     *next_write_idx = buf.len() - current_idx;
                 }
-            current_idx += packet_size as usize;
-            continue;
-        }
-        current_idx += 1;
+                current_idx += packet_size;
+                continue;
+            }
+            SpValidity::Skip => {
+                current_idx += sp_header.total_len();
+            }
+            // We might have lost sync. Try to find the start of a new space packet header.
+            SpValidity::Invalid => {
+                current_idx += 1;
+            }
+        }
     }
     Ok(packets_found)
 }
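A usage sketch (not part of the commit) showing how a server task could feed a stream read through the parser; the component ID constant is hypothetical, and any `PacketSenderRaw` implementation works as the sink:

```rust
use crate::tmtc::PacketSenderRaw;
use crate::ComponentId;

// Hypothetical component ID of the parsing server task.
const TCP_SERVER_ID: ComponentId = 0x06;

// Parse whatever a stream read produced. Leftover bytes of a broken tail
// packet are moved to the buffer start and reported via next_write_idx.
fn handle_stream_read<SendError>(
    read_buf: &mut [u8],
    validator: &impl SpacePacketValidator,
    packet_sender: &impl PacketSenderRaw<Error = SendError>,
) -> Result<u32, SendError> {
    let mut next_write_idx = 0;
    let num_packets = parse_buffer_for_ccsds_space_packets(
        read_buf,
        validator,
        TCP_SERVER_ID,
        packet_sender,
        &mut next_write_idx,
    )?;
    // The next read from the stream should append at next_write_idx.
    Ok(num_packets)
}
```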
@@ -53,18 +90,43 @@ pub fn parse_buffer_for_ccsds_space_packets<E>(
 mod tests {
     use spacepackets::{
         ecss::{tc::PusTcCreator, WritablePusPacket},
-        PacketId, SpHeader,
+        CcsdsPacket, PacketId, SpHeader,
     };
-    use crate::encoding::tests::TcCacher;
+    use crate::{encoding::tests::TcCacher, ComponentId};
-    use super::parse_buffer_for_ccsds_space_packets;
+    use super::{parse_buffer_for_ccsds_space_packets, SpValidity, SpacePacketValidator};
+
+    const PARSER_ID: ComponentId = 0x05;
     const TEST_APID_0: u16 = 0x02;
     const TEST_APID_1: u16 = 0x10;
     const TEST_PACKET_ID_0: PacketId = PacketId::new_for_tc(true, TEST_APID_0);
     const TEST_PACKET_ID_1: PacketId = PacketId::new_for_tc(true, TEST_APID_1);
+
+    #[derive(Default)]
+    struct SimpleVerificator {
+        pub enable_second_id: bool,
+    }
+
+    impl SimpleVerificator {
+        pub fn new_with_second_id() -> Self {
+            Self {
+                enable_second_id: true,
+            }
+        }
+    }
+
+    impl SpacePacketValidator for SimpleVerificator {
+        fn validate(&self, sp_header: &SpHeader, _raw_buf: &[u8]) -> super::SpValidity {
+            if sp_header.packet_id() == TEST_PACKET_ID_0
+                || (self.enable_second_id && sp_header.packet_id() == TEST_PACKET_ID_1)
+            {
+                return SpValidity::Valid;
+            }
+            SpValidity::Skip
+        }
+    }
+
     #[test]
     fn test_basic() {
         let sph = SpHeader::new_from_apid(TEST_APID_0);
@@ -73,23 +135,23 @@ mod tests {
         let packet_len = ping_tc
             .write_to_bytes(&mut buffer)
             .expect("writing packet failed");
-        let valid_packet_ids = [TEST_PACKET_ID_0];
-        let mut tc_cacher = TcCacher::default();
+        let tc_cacher = TcCacher::default();
         let mut next_write_idx = 0;
         let parse_result = parse_buffer_for_ccsds_space_packets(
             &mut buffer,
-            valid_packet_ids.as_slice(),
-            &mut tc_cacher,
+            &SimpleVerificator::default(),
+            PARSER_ID,
+            &tc_cacher,
             &mut next_write_idx,
         );
         assert!(parse_result.is_ok());
         let parsed_packets = parse_result.unwrap();
         assert_eq!(parsed_packets, 1);
-        assert_eq!(tc_cacher.tc_queue.len(), 1);
-        assert_eq!(
-            tc_cacher.tc_queue.pop_front().unwrap(),
-            buffer[..packet_len]
-        );
+        let mut queue = tc_cacher.tc_queue.borrow_mut();
+        assert_eq!(queue.len(), 1);
+        let packet_with_sender = queue.pop_front().unwrap();
+        assert_eq!(packet_with_sender.packet, buffer[..packet_len]);
+        assert_eq!(packet_with_sender.sender_id, PARSER_ID);
     }

     #[test]
@@ -104,25 +166,27 @@ mod tests {
         let packet_len_action = action_tc
             .write_to_bytes(&mut buffer[packet_len_ping..])
             .expect("writing packet failed");
-        let valid_packet_ids = [TEST_PACKET_ID_0];
-        let mut tc_cacher = TcCacher::default();
+        let tc_cacher = TcCacher::default();
         let mut next_write_idx = 0;
         let parse_result = parse_buffer_for_ccsds_space_packets(
             &mut buffer,
-            valid_packet_ids.as_slice(),
-            &mut tc_cacher,
+            &SimpleVerificator::default(),
+            PARSER_ID,
+            &tc_cacher,
             &mut next_write_idx,
         );
         assert!(parse_result.is_ok());
         let parsed_packets = parse_result.unwrap();
         assert_eq!(parsed_packets, 2);
-        assert_eq!(tc_cacher.tc_queue.len(), 2);
+        let mut queue = tc_cacher.tc_queue.borrow_mut();
+        assert_eq!(queue.len(), 2);
+        let packet_with_addr = queue.pop_front().unwrap();
+        assert_eq!(packet_with_addr.packet, buffer[..packet_len_ping]);
+        assert_eq!(packet_with_addr.sender_id, PARSER_ID);
+        let packet_with_addr = queue.pop_front().unwrap();
+        assert_eq!(packet_with_addr.sender_id, PARSER_ID);
         assert_eq!(
-            tc_cacher.tc_queue.pop_front().unwrap(),
-            buffer[..packet_len_ping]
-        );
-        assert_eq!(
-            tc_cacher.tc_queue.pop_front().unwrap(),
+            packet_with_addr.packet,
             buffer[packet_len_ping..packet_len_ping + packet_len_action]
         );
     }
@@ -140,25 +204,26 @@ mod tests {
         let packet_len_action = action_tc
             .write_to_bytes(&mut buffer[packet_len_ping..])
             .expect("writing packet failed");
-        let valid_packet_ids = [TEST_PACKET_ID_0, TEST_PACKET_ID_1];
-        let mut tc_cacher = TcCacher::default();
+        let tc_cacher = TcCacher::default();
         let mut next_write_idx = 0;
+        let verificator = SimpleVerificator::new_with_second_id();
         let parse_result = parse_buffer_for_ccsds_space_packets(
             &mut buffer,
-            valid_packet_ids.as_slice(),
-            &mut tc_cacher,
+            &verificator,
+            PARSER_ID,
+            &tc_cacher,
             &mut next_write_idx,
         );
         assert!(parse_result.is_ok());
         let parsed_packets = parse_result.unwrap();
         assert_eq!(parsed_packets, 2);
-        assert_eq!(tc_cacher.tc_queue.len(), 2);
+        let mut queue = tc_cacher.tc_queue.borrow_mut();
+        assert_eq!(queue.len(), 2);
+        let packet_with_addr = queue.pop_front().unwrap();
+        assert_eq!(packet_with_addr.packet, buffer[..packet_len_ping]);
+        let packet_with_addr = queue.pop_front().unwrap();
         assert_eq!(
-            tc_cacher.tc_queue.pop_front().unwrap(),
-            buffer[..packet_len_ping]
-        );
-        assert_eq!(
-            tc_cacher.tc_queue.pop_front().unwrap(),
+            packet_with_addr.packet,
             buffer[packet_len_ping..packet_len_ping + packet_len_action]
         );
     }
@@ -176,19 +241,22 @@ mod tests {
         let packet_len_action = action_tc
             .write_to_bytes(&mut buffer[packet_len_ping..])
             .expect("writing packet failed");
-        let valid_packet_ids = [TEST_PACKET_ID_0, TEST_PACKET_ID_1];
-        let mut tc_cacher = TcCacher::default();
+        let tc_cacher = TcCacher::default();
         let mut next_write_idx = 0;
+        let verificator = SimpleVerificator::new_with_second_id();
         let parse_result = parse_buffer_for_ccsds_space_packets(
             &mut buffer[..packet_len_ping + packet_len_action - 4],
-            valid_packet_ids.as_slice(),
-            &mut tc_cacher,
+            &verificator,
+            PARSER_ID,
+            &tc_cacher,
             &mut next_write_idx,
         );
         assert!(parse_result.is_ok());
         let parsed_packets = parse_result.unwrap();
         assert_eq!(parsed_packets, 1);
-        assert_eq!(tc_cacher.tc_queue.len(), 1);
+        let queue = tc_cacher.tc_queue.borrow();
+        assert_eq!(queue.len(), 1);
         // The broken packet was moved to the start, so the next write index should be after the
         // last segment missing 4 bytes.
         assert_eq!(next_write_idx, packet_len_action - 4);
@@ -202,19 +270,22 @@ mod tests {
         let packet_len_ping = ping_tc
             .write_to_bytes(&mut buffer)
             .expect("writing packet failed");
-        let valid_packet_ids = [TEST_PACKET_ID_0, TEST_PACKET_ID_1];
-        let mut tc_cacher = TcCacher::default();
+        let tc_cacher = TcCacher::default();
+        let verificator = SimpleVerificator::new_with_second_id();
         let mut next_write_idx = 0;
         let parse_result = parse_buffer_for_ccsds_space_packets(
             &mut buffer[..packet_len_ping - 4],
-            valid_packet_ids.as_slice(),
-            &mut tc_cacher,
+            &verificator,
+            PARSER_ID,
+            &tc_cacher,
             &mut next_write_idx,
         );
         assert_eq!(next_write_idx, 0);
         assert!(parse_result.is_ok());
         let parsed_packets = parse_result.unwrap();
         assert_eq!(parsed_packets, 0);
-        assert_eq!(tc_cacher.tc_queue.len(), 0);
+        let queue = tc_cacher.tc_queue.borrow();
+        assert_eq!(queue.len(), 0);
     }
 }


@@ -1,4 +1,4 @@
-use crate::tmtc::ReceivesTcCore;
+use crate::{tmtc::PacketSenderRaw, ComponentId};
 use cobs::{decode_in_place, encode, max_encoding_length};

 /// This function encodes the given packet with COBS and also wraps the encoded packet with
@@ -55,11 +55,12 @@ pub fn encode_packet_with_cobs(
 /// future write operations will be written to the `next_write_idx` argument.
 ///
 /// The parser will write all packets which were decoded successfully to the given `tc_receiver`.
-pub fn parse_buffer_for_cobs_encoded_packets<E>(
+pub fn parse_buffer_for_cobs_encoded_packets<SendError>(
     buf: &mut [u8],
-    tc_receiver: &mut dyn ReceivesTcCore<Error = E>,
+    sender_id: ComponentId,
+    packet_sender: &(impl PacketSenderRaw<Error = SendError> + ?Sized),
     next_write_idx: &mut usize,
-) -> Result<u32, E> {
+) -> Result<u32, SendError> {
     let mut start_index_packet = 0;
     let mut start_found = false;
     let mut last_byte = false;
@@ -78,8 +79,10 @@ pub fn parse_buffer_for_cobs_encoded_packets<E>(
         let decode_result = decode_in_place(&mut buf[start_index_packet..i]);
         if let Ok(packet_len) = decode_result {
             packets_found += 1;
-            tc_receiver
-                .pass_tc(&buf[start_index_packet..start_index_packet + packet_len])?;
+            packet_sender.send_packet(
+                sender_id,
+                &buf[start_index_packet..start_index_packet + packet_len],
+            )?;
         }
         start_found = false;
     } else {
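An analogous usage sketch (not part of the commit) for the COBS parser, e.g. for a serial link; the component ID constant is hypothetical:

```rust
use crate::tmtc::PacketSenderRaw;
use crate::ComponentId;

// Hypothetical component ID of a serial interface task.
const UART_SERVER_ID: ComponentId = 0x07;

// Decode all complete COBS frames in the receive buffer and forward them.
fn handle_uart_read<SendError>(
    recv_buf: &mut [u8],
    packet_sender: &impl PacketSenderRaw<Error = SendError>,
) -> Result<u32, SendError> {
    let mut next_write_idx = 0;
    parse_buffer_for_cobs_encoded_packets(
        recv_buf,
        UART_SERVER_ID,
        packet_sender,
        &mut next_write_idx,
    )
}
```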
@@ -100,32 +103,39 @@ pub fn parse_buffer_for_cobs_encoded_packets<E>(
 pub(crate) mod tests {
     use cobs::encode;
-    use crate::encoding::tests::{encode_simple_packet, TcCacher, INVERTED_PACKET, SIMPLE_PACKET};
+    use crate::{
+        encoding::tests::{encode_simple_packet, TcCacher, INVERTED_PACKET, SIMPLE_PACKET},
+        ComponentId,
+    };
     use super::parse_buffer_for_cobs_encoded_packets;
+
+    const PARSER_ID: ComponentId = 0x05;

     #[test]
     fn test_parsing_simple_packet() {
-        let mut test_sender = TcCacher::default();
+        let test_sender = TcCacher::default();
         let mut encoded_buf: [u8; 16] = [0; 16];
         let mut current_idx = 0;
         encode_simple_packet(&mut encoded_buf, &mut current_idx);
         let mut next_read_idx = 0;
         let packets = parse_buffer_for_cobs_encoded_packets(
             &mut encoded_buf[0..current_idx],
-            &mut test_sender,
+            PARSER_ID,
+            &test_sender,
             &mut next_read_idx,
         )
         .unwrap();
         assert_eq!(packets, 1);
-        assert_eq!(test_sender.tc_queue.len(), 1);
-        let packet = &test_sender.tc_queue[0];
-        assert_eq!(packet, &SIMPLE_PACKET);
+        let queue = test_sender.tc_queue.borrow();
+        assert_eq!(queue.len(), 1);
+        let packet = &queue[0];
+        assert_eq!(packet.packet, &SIMPLE_PACKET);
     }

     #[test]
     fn test_parsing_consecutive_packets() {
-        let mut test_sender = TcCacher::default();
+        let test_sender = TcCacher::default();
         let mut encoded_buf: [u8; 16] = [0; 16];
         let mut current_idx = 0;
         encode_simple_packet(&mut encoded_buf, &mut current_idx);
@@ -139,21 +149,23 @@ pub(crate) mod tests {
         let mut next_read_idx = 0;
         let packets = parse_buffer_for_cobs_encoded_packets(
             &mut encoded_buf[0..current_idx],
-            &mut test_sender,
+            PARSER_ID,
+            &test_sender,
             &mut next_read_idx,
         )
         .unwrap();
         assert_eq!(packets, 2);
-        assert_eq!(test_sender.tc_queue.len(), 2);
-        let packet0 = &test_sender.tc_queue[0];
-        assert_eq!(packet0, &SIMPLE_PACKET);
-        let packet1 = &test_sender.tc_queue[1];
-        assert_eq!(packet1, &INVERTED_PACKET);
+        let queue = test_sender.tc_queue.borrow();
+        assert_eq!(queue.len(), 2);
+        let packet0 = &queue[0];
+        assert_eq!(packet0.packet, &SIMPLE_PACKET);
+        let packet1 = &queue[1];
+        assert_eq!(packet1.packet, &INVERTED_PACKET);
     }

     #[test]
     fn test_split_tail_packet_only() {
-        let mut test_sender = TcCacher::default();
+        let test_sender = TcCacher::default();
         let mut encoded_buf: [u8; 16] = [0; 16];
         let mut current_idx = 0;
         encode_simple_packet(&mut encoded_buf, &mut current_idx);
@@ -161,17 +173,19 @@ pub(crate) mod tests {
         let packets = parse_buffer_for_cobs_encoded_packets(
             // Cut off the sentinel byte at the end.
             &mut encoded_buf[0..current_idx - 1],
-            &mut test_sender,
+            PARSER_ID,
+            &test_sender,
             &mut next_read_idx,
         )
         .unwrap();
         assert_eq!(packets, 0);
-        assert_eq!(test_sender.tc_queue.len(), 0);
+        let queue = test_sender.tc_queue.borrow();
+        assert_eq!(queue.len(), 0);
         assert_eq!(next_read_idx, 0);
     }

     fn generic_test_split_packet(cut_off: usize) {
-        let mut test_sender = TcCacher::default();
+        let test_sender = TcCacher::default();
         let mut encoded_buf: [u8; 16] = [0; 16];
         assert!(cut_off < INVERTED_PACKET.len() + 1);
         let mut current_idx = 0;
@@ -193,13 +207,15 @@ pub(crate) mod tests {
         let packets = parse_buffer_for_cobs_encoded_packets(
             // Cut off the sentinel byte at the end.
             &mut encoded_buf[0..current_idx - cut_off],
-            &mut test_sender,
+            PARSER_ID,
+            &test_sender,
             &mut next_write_idx,
         )
         .unwrap();
         assert_eq!(packets, 1);
-        assert_eq!(test_sender.tc_queue.len(), 1);
-        assert_eq!(&test_sender.tc_queue[0], &SIMPLE_PACKET);
+        let queue = test_sender.tc_queue.borrow();
+        assert_eq!(queue.len(), 1);
+        assert_eq!(&queue[0].packet, &SIMPLE_PACKET);
         assert_eq!(next_write_idx, next_expected_write_idx);
         assert_eq!(encoded_buf[..next_expected_write_idx], expected_at_start);
     }
@@ -221,7 +237,7 @@ pub(crate) mod tests {
     #[test]
     fn test_zero_at_end() {
-        let mut test_sender = TcCacher::default();
+        let test_sender = TcCacher::default();
         let mut encoded_buf: [u8; 16] = [0; 16];
         let mut next_write_idx = 0;
         let mut current_idx = 0;
@@ -233,31 +249,35 @@ pub(crate) mod tests {
         let packets = parse_buffer_for_cobs_encoded_packets(
             // Cut off the sentinel byte at the end.
             &mut encoded_buf[0..current_idx],
&mut test_sender, PARSER_ID,
&test_sender,
&mut next_write_idx, &mut next_write_idx,
) )
.unwrap(); .unwrap();
assert_eq!(packets, 1); assert_eq!(packets, 1);
assert_eq!(test_sender.tc_queue.len(), 1); let queue = test_sender.tc_queue.borrow_mut();
assert_eq!(&test_sender.tc_queue[0], &SIMPLE_PACKET); assert_eq!(queue.len(), 1);
assert_eq!(&queue[0].packet, &SIMPLE_PACKET);
assert_eq!(next_write_idx, 1); assert_eq!(next_write_idx, 1);
assert_eq!(encoded_buf[0], 0); assert_eq!(encoded_buf[0], 0);
} }
#[test] #[test]
fn test_all_zeroes() { fn test_all_zeroes() {
let mut test_sender = TcCacher::default(); let test_sender = TcCacher::default();
let mut all_zeroes: [u8; 5] = [0; 5]; let mut all_zeroes: [u8; 5] = [0; 5];
let mut next_write_idx = 0; let mut next_write_idx = 0;
let packets = parse_buffer_for_cobs_encoded_packets( let packets = parse_buffer_for_cobs_encoded_packets(
// The buffer only contains sentinel bytes. // The buffer only contains sentinel bytes.
&mut all_zeroes, &mut all_zeroes,
&mut test_sender, PARSER_ID,
&test_sender,
&mut next_write_idx, &mut next_write_idx,
) )
.unwrap(); .unwrap();
assert_eq!(packets, 0); assert_eq!(packets, 0);
assert!(test_sender.tc_queue.is_empty()); let queue = test_sender.tc_queue.borrow();
assert!(queue.is_empty());
assert_eq!(next_write_idx, 0); assert_eq!(next_write_idx, 0);
} }
} }


@ -6,9 +6,14 @@ pub use crate::encoding::cobs::{encode_packet_with_cobs, parse_buffer_for_cobs_e
#[cfg(test)] #[cfg(test)]
pub(crate) mod tests { pub(crate) mod tests {
use alloc::{collections::VecDeque, vec::Vec}; use core::cell::RefCell;
use crate::tmtc::ReceivesTcCore; use alloc::collections::VecDeque;
use crate::{
tmtc::{PacketAsVec, PacketSenderRaw},
ComponentId,
};
use super::cobs::encode_packet_with_cobs; use super::cobs::encode_packet_with_cobs;
@ -17,14 +22,15 @@ pub(crate) mod tests {
#[derive(Default)] #[derive(Default)]
pub(crate) struct TcCacher { pub(crate) struct TcCacher {
pub(crate) tc_queue: VecDeque<Vec<u8>>, pub(crate) tc_queue: RefCell<VecDeque<PacketAsVec>>,
} }
impl ReceivesTcCore for TcCacher { impl PacketSenderRaw for TcCacher {
type Error = (); type Error = ();
fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error> { fn send_packet(&self, sender_id: ComponentId, tc_raw: &[u8]) -> Result<(), Self::Error> {
self.tc_queue.push_back(tc_raw.to_vec()); let mut mut_queue = self.tc_queue.borrow_mut();
mut_queue.push_back(PacketAsVec::new(sender_id, tc_raw.to_vec()));
Ok(()) Ok(())
} }
} }
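Note that `send_packet` takes `&self` where the old `pass_tc` took `&mut self`, so stateful implementors need interior mutability, which is why `TcCacher` wraps its queue in a `RefCell`. A sketch of a custom sender along the same lines; the `PacketCacher` name is made up for illustration:

```rust
use core::cell::RefCell;
use std::collections::VecDeque;

use satrs::tmtc::{PacketAsVec, PacketSenderRaw};
use satrs::ComponentId;

/// Hypothetical sender which caches all packets it receives, similar to the
/// TcCacher test helper above.
#[derive(Default)]
struct PacketCacher {
    pub queue: RefCell<VecDeque<PacketAsVec>>,
}

impl PacketSenderRaw for PacketCacher {
    type Error = ();

    // send_packet only receives a shared reference, so shared state is
    // accessed through interior mutability (a Mutex would be used instead
    // if the sender had to be Sync).
    fn send_packet(&self, sender_id: ComponentId, tc_raw: &[u8]) -> Result<(), Self::Error> {
        self.queue
            .borrow_mut()
            .push_back(PacketAsVec::new(sender_id, tc_raw.to_vec()));
        Ok(())
    }
}
```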


@ -10,12 +10,13 @@ use std::net::SocketAddr;
use std::vec::Vec; use std::vec::Vec;
use crate::encoding::parse_buffer_for_cobs_encoded_packets; use crate::encoding::parse_buffer_for_cobs_encoded_packets;
use crate::tmtc::ReceivesTc; use crate::tmtc::PacketSenderRaw;
use crate::tmtc::TmPacketSource; use crate::tmtc::PacketSource;
use crate::hal::std::tcp_server::{ use crate::hal::std::tcp_server::{
ConnectionResult, ServerConfig, TcpTcParser, TcpTmSender, TcpTmtcError, TcpTmtcGenericServer, ConnectionResult, ServerConfig, TcpTcParser, TcpTmSender, TcpTmtcError, TcpTmtcGenericServer,
}; };
use crate::ComponentId;
use super::tcp_server::HandledConnectionHandler; use super::tcp_server::HandledConnectionHandler;
use super::tcp_server::HandledConnectionInfo; use super::tcp_server::HandledConnectionInfo;
@ -28,14 +29,16 @@ impl<TmError, TcError: 'static> TcpTcParser<TmError, TcError> for CobsTcParser {
fn handle_tc_parsing( fn handle_tc_parsing(
&mut self, &mut self,
tc_buffer: &mut [u8], tc_buffer: &mut [u8],
tc_receiver: &mut (impl ReceivesTc<Error = TcError> + ?Sized), sender_id: ComponentId,
tc_sender: &(impl PacketSenderRaw<Error = TcError> + ?Sized),
conn_result: &mut HandledConnectionInfo, conn_result: &mut HandledConnectionInfo,
current_write_idx: usize, current_write_idx: usize,
next_write_idx: &mut usize, next_write_idx: &mut usize,
) -> Result<(), TcpTmtcError<TmError, TcError>> { ) -> Result<(), TcpTmtcError<TmError, TcError>> {
conn_result.num_received_tcs += parse_buffer_for_cobs_encoded_packets( conn_result.num_received_tcs += parse_buffer_for_cobs_encoded_packets(
&mut tc_buffer[..current_write_idx], &mut tc_buffer[..current_write_idx],
tc_receiver.upcast_mut(), sender_id,
tc_sender,
next_write_idx, next_write_idx,
) )
.map_err(|e| TcpTmtcError::TcError(e))?; .map_err(|e| TcpTmtcError::TcError(e))?;
@ -62,7 +65,7 @@ impl<TmError, TcError> TcpTmSender<TmError, TcError> for CobsTmSender {
fn handle_tm_sending( fn handle_tm_sending(
&mut self, &mut self,
tm_buffer: &mut [u8], tm_buffer: &mut [u8],
tm_source: &mut (impl TmPacketSource<Error = TmError> + ?Sized), tm_source: &mut (impl PacketSource<Error = TmError> + ?Sized),
conn_result: &mut HandledConnectionInfo, conn_result: &mut HandledConnectionInfo,
stream: &mut TcpStream, stream: &mut TcpStream,
) -> Result<bool, TcpTmtcError<TmError, TcError>> { ) -> Result<bool, TcpTmtcError<TmError, TcError>> {
@ -101,7 +104,7 @@ impl<TmError, TcError> TcpTmSender<TmError, TcError> for CobsTmSender {
/// Telemetry will be encoded with the COBS protocol using [cobs::encode] in addition to being /// Telemetry will be encoded with the COBS protocol using [cobs::encode] in addition to being
/// wrapped with the sentinel value 0 as the packet delimiter before being sent back to /// wrapped with the sentinel value 0 as the packet delimiter before being sent back to
/// the client. Please note that the server will send as much data as it can retrieve from the /// the client. Please note that the server will send as much data as it can retrieve from the
/// [TmPacketSource] in its current implementation. /// [PacketSource] in its current implementation.
/// ///
/// Using a framing protocol like COBS imposes minimal restrictions on the type of TMTC data /// Using a framing protocol like COBS imposes minimal restrictions on the type of TMTC data
/// exchanged while also allowing packets with flexible size and a reliable way to reconstruct full /// exchanged while also allowing packets with flexible size and a reliable way to reconstruct full
@ -115,26 +118,26 @@ impl<TmError, TcError> TcpTmSender<TmError, TcError> for CobsTmSender {
/// The [TCP integration tests](https://egit.irs.uni-stuttgart.de/rust/sat-rs/src/branch/main/satrs/tests/tcp_servers.rs) /// The [TCP integration tests](https://egit.irs.uni-stuttgart.de/rust/sat-rs/src/branch/main/satrs/tests/tcp_servers.rs)
/// also serve as the example application for this module. /// also serve as the example application for this module.
pub struct TcpTmtcInCobsServer< pub struct TcpTmtcInCobsServer<
TmSource: TmPacketSource<Error = TmError>, TmSource: PacketSource<Error = TmError>,
TcReceiver: ReceivesTc<Error = TcError>, TcSender: PacketSenderRaw<Error = SendError>,
HandledConnection: HandledConnectionHandler, HandledConnection: HandledConnectionHandler,
TmError, TmError,
TcError: 'static, SendError: 'static,
> { > {
pub generic_server: TcpTmtcGenericServer< pub generic_server: TcpTmtcGenericServer<
TmSource, TmSource,
TcReceiver, TcSender,
CobsTmSender, CobsTmSender,
CobsTcParser, CobsTcParser,
HandledConnection, HandledConnection,
TmError, TmError,
TcError, SendError,
>, >,
} }
impl< impl<
TmSource: TmPacketSource<Error = TmError>, TmSource: PacketSource<Error = TmError>,
TcReceiver: ReceivesTc<Error = TcError>, TcReceiver: PacketSenderRaw<Error = TcError>,
HandledConnection: HandledConnectionHandler, HandledConnection: HandledConnectionHandler,
TmError: 'static, TmError: 'static,
TcError: 'static, TcError: 'static,
@ -178,8 +181,8 @@ impl<
/// useful if using the port number 0 for OS auto-assignment. /// useful if using the port number 0 for OS auto-assignment.
pub fn local_addr(&self) -> std::io::Result<SocketAddr>; pub fn local_addr(&self) -> std::io::Result<SocketAddr>;
/// Delegation to the [TcpTmtcGenericServer::handle_next_connection] call. /// Delegation to the [TcpTmtcGenericServer::handle_all_connections] call.
pub fn handle_next_connection( pub fn handle_all_connections(
&mut self, &mut self,
poll_duration: Option<Duration>, poll_duration: Option<Duration>,
) -> Result<ConnectionResult, TcpTmtcError<TmError, TcError>>; ) -> Result<ConnectionResult, TcpTmtcError<TmError, TcError>>;
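On the client side, a telecommand has to be COBS-encoded and wrapped in sentinel bytes before it is written to the TCP stream, mirroring what the integration tests below do. A sketch using the `cobs` crate this module already relies on; the helper name is illustrative:

```rust
use std::io::Write;
use std::net::{SocketAddr, TcpStream};

/// Encode a packet with COBS, wrap it in 0 sentinel bytes and send it to the
/// server. `dest_addr` would typically be retrieved via local_addr().
fn send_cobs_framed_tc(dest_addr: SocketAddr, packet: &[u8]) -> std::io::Result<()> {
    let mut encoded_buf = vec![0u8; cobs::max_encoding_length(packet.len()) + 2];
    let mut current_idx = 0;
    encoded_buf[current_idx] = 0; // leading sentinel byte
    current_idx += 1;
    current_idx += cobs::encode(packet, &mut encoded_buf[current_idx..]);
    encoded_buf[current_idx] = 0; // trailing sentinel byte
    current_idx += 1;
    let mut stream = TcpStream::connect(dest_addr)?;
    stream.write_all(&encoded_buf[..current_idx])?;
    Ok(())
}
```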
@ -196,22 +199,29 @@ mod tests {
use std::{ use std::{
io::{Read, Write}, io::{Read, Write},
net::{IpAddr, Ipv4Addr, SocketAddr, TcpStream}, net::{IpAddr, Ipv4Addr, SocketAddr, TcpStream},
panic, thread, panic,
sync::mpsc,
thread,
time::Instant, time::Instant,
}; };
use crate::{ use crate::{
encoding::tests::{INVERTED_PACKET, SIMPLE_PACKET}, encoding::tests::{INVERTED_PACKET, SIMPLE_PACKET},
hal::std::tcp_server::{ hal::std::tcp_server::{
tests::{ConnectionFinishedHandler, SyncTcCacher, SyncTmSource}, tests::{ConnectionFinishedHandler, SyncTmSource},
ConnectionResult, ServerConfig, ConnectionResult, ServerConfig,
}, },
queue::GenericSendError,
tmtc::PacketAsVec,
ComponentId,
}; };
use alloc::sync::Arc; use alloc::sync::Arc;
use cobs::encode; use cobs::encode;
use super::TcpTmtcInCobsServer; use super::TcpTmtcInCobsServer;
const TCP_SERVER_ID: ComponentId = 0x05;
fn encode_simple_packet(encoded_buf: &mut [u8], current_idx: &mut usize) { fn encode_simple_packet(encoded_buf: &mut [u8], current_idx: &mut usize) {
encode_packet(&SIMPLE_PACKET, encoded_buf, current_idx) encode_packet(&SIMPLE_PACKET, encoded_buf, current_idx)
} }
@ -230,14 +240,20 @@ mod tests {
fn generic_tmtc_server( fn generic_tmtc_server(
addr: &SocketAddr, addr: &SocketAddr,
tc_receiver: SyncTcCacher, tc_sender: mpsc::Sender<PacketAsVec>,
tm_source: SyncTmSource, tm_source: SyncTmSource,
stop_signal: Option<Arc<AtomicBool>>, stop_signal: Option<Arc<AtomicBool>>,
) -> TcpTmtcInCobsServer<SyncTmSource, SyncTcCacher, ConnectionFinishedHandler, (), ()> { ) -> TcpTmtcInCobsServer<
SyncTmSource,
mpsc::Sender<PacketAsVec>,
ConnectionFinishedHandler,
(),
GenericSendError,
> {
TcpTmtcInCobsServer::new( TcpTmtcInCobsServer::new(
ServerConfig::new(*addr, Duration::from_millis(2), 1024, 1024), ServerConfig::new(TCP_SERVER_ID, *addr, Duration::from_millis(2), 1024, 1024),
tm_source, tm_source,
tc_receiver, tc_sender,
ConnectionFinishedHandler::default(), ConnectionFinishedHandler::default(),
stop_signal, stop_signal,
) )
@ -247,10 +263,10 @@ mod tests {
#[test] #[test]
fn test_server_basic_no_tm() { fn test_server_basic_no_tm() {
let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0); let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0);
let tc_receiver = SyncTcCacher::default(); let (tc_sender, tc_receiver) = mpsc::channel();
let tm_source = SyncTmSource::default(); let tm_source = SyncTmSource::default();
let mut tcp_server = let mut tcp_server =
generic_tmtc_server(&auto_port_addr, tc_receiver.clone(), tm_source, None); generic_tmtc_server(&auto_port_addr, tc_sender.clone(), tm_source, None);
let dest_addr = tcp_server let dest_addr = tcp_server
.local_addr() .local_addr()
.expect("retrieving dest addr failed"); .expect("retrieving dest addr failed");
@ -258,7 +274,7 @@ mod tests {
let set_if_done = conn_handled.clone(); let set_if_done = conn_handled.clone();
// Call the connection handler in separate thread, does block. // Call the connection handler in separate thread, does block.
thread::spawn(move || { thread::spawn(move || {
let result = tcp_server.handle_next_connection(Some(Duration::from_millis(100))); let result = tcp_server.handle_all_connections(Some(Duration::from_millis(100)));
if result.is_err() { if result.is_err() {
panic!("handling connection failed: {:?}", result.unwrap_err()); panic!("handling connection failed: {:?}", result.unwrap_err());
} }
@ -293,28 +309,20 @@ mod tests {
panic!("connection was not handled properly"); panic!("connection was not handled properly");
} }
// Check that the packet was received and decoded successfully. // Check that the packet was received and decoded successfully.
let mut tc_queue = tc_receiver let packet_with_sender = tc_receiver.recv().expect("receiving TC failed");
.tc_queue assert_eq!(packet_with_sender.packet, &SIMPLE_PACKET);
.lock() assert!(matches!(tc_receiver.try_recv(), Err(mpsc::TryRecvError::Empty)));
.expect("locking tc queue failed");
assert_eq!(tc_queue.len(), 1);
assert_eq!(tc_queue.pop_front().unwrap(), &SIMPLE_PACKET);
drop(tc_queue);
} }
#[test] #[test]
fn test_server_basic_multi_tm_multi_tc() { fn test_server_basic_multi_tm_multi_tc() {
let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0); let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0);
let tc_receiver = SyncTcCacher::default(); let (tc_sender, tc_receiver) = mpsc::channel();
let mut tm_source = SyncTmSource::default(); let mut tm_source = SyncTmSource::default();
tm_source.add_tm(&INVERTED_PACKET); tm_source.add_tm(&INVERTED_PACKET);
tm_source.add_tm(&SIMPLE_PACKET); tm_source.add_tm(&SIMPLE_PACKET);
let mut tcp_server = generic_tmtc_server( let mut tcp_server =
&auto_port_addr, generic_tmtc_server(&auto_port_addr, tc_sender.clone(), tm_source.clone(), None);
tc_receiver.clone(),
tm_source.clone(),
None,
);
let dest_addr = tcp_server let dest_addr = tcp_server
.local_addr() .local_addr()
.expect("retrieving dest addr failed"); .expect("retrieving dest addr failed");
@ -322,7 +330,7 @@ mod tests {
let set_if_done = conn_handled.clone(); let set_if_done = conn_handled.clone();
// Call the connection handler in separate thread, does block. // Call the connection handler in separate thread, does block.
thread::spawn(move || { thread::spawn(move || {
let result = tcp_server.handle_next_connection(Some(Duration::from_millis(100))); let result = tcp_server.handle_all_connections(Some(Duration::from_millis(100)));
if result.is_err() { if result.is_err() {
panic!("handling connection failed: {:?}", result.unwrap_err()); panic!("handling connection failed: {:?}", result.unwrap_err());
} }
@ -409,27 +417,26 @@ mod tests {
panic!("connection was not handled properly"); panic!("connection was not handled properly");
} }
// Check that the packet was received and decoded successfully. // Check that the packet was received and decoded successfully.
let mut tc_queue = tc_receiver let packet_with_sender = tc_receiver.recv().expect("receiving TC failed");
.tc_queue let packet = &packet_with_sender.packet;
.lock() assert_eq!(packet, &SIMPLE_PACKET);
.expect("locking tc queue failed"); let packet_with_sender = tc_receiver.recv().expect("receiving TC failed");
assert_eq!(tc_queue.len(), 2); let packet = &packet_with_sender.packet;
assert_eq!(tc_queue.pop_front().unwrap(), &SIMPLE_PACKET); assert_eq!(packet, &INVERTED_PACKET);
assert_eq!(tc_queue.pop_front().unwrap(), &INVERTED_PACKET); assert!(matches!(tc_receiver.try_recv(), Err(mpsc::TryRecvError::Empty)));
drop(tc_queue);
} }
#[test] #[test]
fn test_server_accept_timeout() { fn test_server_accept_timeout() {
let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0); let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0);
let tc_receiver = SyncTcCacher::default(); let (tc_sender, _tc_receiver) = mpsc::channel();
let tm_source = SyncTmSource::default(); let tm_source = SyncTmSource::default();
let mut tcp_server = let mut tcp_server =
generic_tmtc_server(&auto_port_addr, tc_receiver.clone(), tm_source, None); generic_tmtc_server(&auto_port_addr, tc_sender.clone(), tm_source, None);
let start = Instant::now(); let start = Instant::now();
// Call the connection handler in separate thread, does block. // Call the connection handler in separate thread, does block.
let thread_jh = thread::spawn(move || loop { let thread_jh = thread::spawn(move || loop {
let result = tcp_server.handle_next_connection(Some(Duration::from_millis(20))); let result = tcp_server.handle_all_connections(Some(Duration::from_millis(20)));
if result.is_err() { if result.is_err() {
panic!("handling connection failed: {:?}", result.unwrap_err()); panic!("handling connection failed: {:?}", result.unwrap_err());
} }
@ -447,12 +454,12 @@ mod tests {
#[test] #[test]
fn test_server_stop_signal() { fn test_server_stop_signal() {
let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0); let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0);
let tc_receiver = SyncTcCacher::default(); let (tc_sender, _tc_receiver) = mpsc::channel();
let tm_source = SyncTmSource::default(); let tm_source = SyncTmSource::default();
let stop_signal = Arc::new(AtomicBool::new(false)); let stop_signal = Arc::new(AtomicBool::new(false));
let mut tcp_server = generic_tmtc_server( let mut tcp_server = generic_tmtc_server(
&auto_port_addr, &auto_port_addr,
tc_receiver.clone(), tc_sender.clone(),
tm_source, tm_source,
Some(stop_signal.clone()), Some(stop_signal.clone()),
); );
@ -463,7 +470,7 @@ mod tests {
let start = Instant::now(); let start = Instant::now();
// Call the connection handler in separate thread, does block. // Call the connection handler in separate thread, does block.
let thread_jh = thread::spawn(move || loop { let thread_jh = thread::spawn(move || loop {
let result = tcp_server.handle_next_connection(Some(Duration::from_millis(20))); let result = tcp_server.handle_all_connections(Some(Duration::from_millis(20)));
if result.is_err() { if result.is_err() {
panic!("handling connection failed: {:?}", result.unwrap_err()); panic!("handling connection failed: {:?}", result.unwrap_err());
} }


@ -13,14 +13,13 @@ use std::net::SocketAddr;
// use std::net::{SocketAddr, TcpStream}; // use std::net::{SocketAddr, TcpStream};
use std::thread; use std::thread;
use crate::tmtc::{ReceivesTc, TmPacketSource}; use crate::tmtc::{PacketSenderRaw, PacketSource};
use crate::ComponentId;
use thiserror::Error; use thiserror::Error;
// Re-export the TMTC in COBS server. // Re-export the TMTC in COBS server.
pub use crate::hal::std::tcp_cobs_server::{CobsTcParser, CobsTmSender, TcpTmtcInCobsServer}; pub use crate::hal::std::tcp_cobs_server::{CobsTcParser, CobsTmSender, TcpTmtcInCobsServer};
pub use crate::hal::std::tcp_spacepackets_server::{ pub use crate::hal::std::tcp_spacepackets_server::{SpacepacketsTmSender, TcpSpacepacketsServer};
SpacepacketsTcParser, SpacepacketsTmSender, TcpSpacepacketsServer,
};
/// Configuration struct for the generic TCP TMTC server /// Configuration struct for the generic TCP TMTC server
/// ///
@ -30,7 +29,7 @@ pub use crate::hal::std::tcp_spacepackets_server::{
/// * `inner_loop_delay` - If a client connects for a longer period, but no TC is received or /// * `inner_loop_delay` - If a client connects for a longer period, but no TC is received or
/// no TM needs to be sent, the TCP server will delay for the specified amount of time /// no TM needs to be sent, the TCP server will delay for the specified amount of time
/// to reduce CPU load. /// to reduce CPU load.
/// * `tm_buffer_size` - Size of the TM buffer used to read TM from the [TmPacketSource] and /// * `tm_buffer_size` - Size of the TM buffer used to read TM from the [PacketSource] and
/// encoding of that data. This buffer should be large enough to hold the maximum expected /// encoding of that data. This buffer should be large enough to hold the maximum expected
/// TM size read from the packet source. /// TM size read from the packet source.
/// * `tc_buffer_size` - Size of the TC buffer used to read encoded telecommands sent from /// * `tc_buffer_size` - Size of the TC buffer used to read encoded telecommands sent from
@ -46,6 +45,7 @@ pub use crate::hal::std::tcp_spacepackets_server::{
/// default. /// default.
#[derive(Debug, Copy, Clone)] #[derive(Debug, Copy, Clone)]
pub struct ServerConfig { pub struct ServerConfig {
pub id: ComponentId,
pub addr: SocketAddr, pub addr: SocketAddr,
pub inner_loop_delay: Duration, pub inner_loop_delay: Duration,
pub tm_buffer_size: usize, pub tm_buffer_size: usize,
@ -56,12 +56,14 @@ pub struct ServerConfig {
impl ServerConfig { impl ServerConfig {
pub fn new( pub fn new(
id: ComponentId,
addr: SocketAddr, addr: SocketAddr,
inner_loop_delay: Duration, inner_loop_delay: Duration,
tm_buffer_size: usize, tm_buffer_size: usize,
tc_buffer_size: usize, tc_buffer_size: usize,
) -> Self { ) -> Self {
Self { Self {
id,
addr, addr,
inner_loop_delay, inner_loop_delay,
tm_buffer_size, tm_buffer_size,
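With the new mandatory `id` field, constructing a server configuration looks as follows (a sketch using the values from the tests in this commit):

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use std::time::Duration;

use satrs::hal::std::tcp_server::ServerConfig;
use satrs::ComponentId;

const TCP_SERVER_ID: ComponentId = 0x05;

fn main() {
    let cfg = ServerConfig::new(
        TCP_SERVER_ID,
        // Port 0 lets the OS auto-assign a port; retrieve it via local_addr().
        SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0),
        Duration::from_millis(2), // inner loop delay
        1024,                     // TM buffer size
        1024,                     // TC buffer size
    );
    assert_eq!(cfg.id, TCP_SERVER_ID);
}
```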
@ -116,28 +118,29 @@ pub trait HandledConnectionHandler {
} }
/// Generic parser abstraction for an object which can parse for telecommands given a raw /// Generic parser abstraction for an object which can parse for telecommands given a raw
/// bytestream received from a TCP socket and send them to a generic [ReceivesTc] telecommand /// bytestream received from a TCP socket and send them using a generic [PacketSenderRaw]
/// receiver. This allows different encoding schemes for telecommands. /// implementation. This allows different encoding schemes for telecommands.
pub trait TcpTcParser<TmError, TcError> { pub trait TcpTcParser<TmError, SendError> {
fn handle_tc_parsing( fn handle_tc_parsing(
&mut self, &mut self,
tc_buffer: &mut [u8], tc_buffer: &mut [u8],
tc_receiver: &mut (impl ReceivesTc<Error = TcError> + ?Sized), sender_id: ComponentId,
tc_sender: &(impl PacketSenderRaw<Error = SendError> + ?Sized),
conn_result: &mut HandledConnectionInfo, conn_result: &mut HandledConnectionInfo,
current_write_idx: usize, current_write_idx: usize,
next_write_idx: &mut usize, next_write_idx: &mut usize,
) -> Result<(), TcpTmtcError<TmError, TcError>>; ) -> Result<(), TcpTmtcError<TmError, SendError>>;
} }
/// Generic sender abstraction for an object which can pull telemetry from a given TM source /// Generic sender abstraction for an object which can pull telemetry from a given TM source
/// using a [TmPacketSource] and then send them back to a client using a given [TcpStream]. /// using a [PacketSource] and then send them back to a client using a given [TcpStream].
/// The concrete implementation can also perform any encoding steps which are necessary before /// The concrete implementation can also perform any encoding steps which are necessary before
/// sending back the data to a client. /// sending back the data to a client.
pub trait TcpTmSender<TmError, TcError> { pub trait TcpTmSender<TmError, TcError> {
fn handle_tm_sending( fn handle_tm_sending(
&mut self, &mut self,
tm_buffer: &mut [u8], tm_buffer: &mut [u8],
tm_source: &mut (impl TmPacketSource<Error = TmError> + ?Sized), tm_source: &mut (impl PacketSource<Error = TmError> + ?Sized),
conn_result: &mut HandledConnectionInfo, conn_result: &mut HandledConnectionInfo,
stream: &mut TcpStream, stream: &mut TcpStream,
) -> Result<bool, TcpTmtcError<TmError, TcError>>; ) -> Result<bool, TcpTmtcError<TmError, TcError>>;
@ -150,9 +153,9 @@ pub trait TcpTmSender<TmError, TcError> {
/// through the following 4 core abstractions: /// through the following 4 core abstractions:
/// ///
/// 1. [TcpTcParser] to parse for telecommands from the raw bytestream received from a client. /// 1. [TcpTcParser] to parse for telecommands from the raw bytestream received from a client.
/// 2. Parsed telecommands will be sent to the [ReceivesTc] telecommand receiver. /// 2. Parsed telecommands will be sent using the [PacketSenderRaw] object.
/// 3. [TcpTmSender] to send telemetry pulled from a TM source back to the client. /// 3. [TcpTmSender] to send telemetry pulled from a TM source back to the client.
/// 4. [TmPacketSource] as a generic TM source used by the [TcpTmSender]. /// 4. [PacketSource] as a generic TM source used by the [TcpTmSender].
/// ///
/// It is possible to specify custom abstractions to build a dedicated TCP TMTC server without /// It is possible to specify custom abstractions to build a dedicated TCP TMTC server without
/// having to re-implement common logic. /// having to re-implement common logic.
@ -160,46 +163,48 @@ pub trait TcpTmSender<TmError, TcError> {
/// Currently, this framework offers the following concrete implementations: /// Currently, this framework offers the following concrete implementations:
/// ///
/// 1. [TcpTmtcInCobsServer] to exchange TMTC wrapped inside the COBS framing protocol. /// 1. [TcpTmtcInCobsServer] to exchange TMTC wrapped inside the COBS framing protocol.
/// 2. [TcpSpacepacketsServer] to exchange space packets via TCP.
pub struct TcpTmtcGenericServer< pub struct TcpTmtcGenericServer<
TmSource: TmPacketSource<Error = TmError>, TmSource: PacketSource<Error = TmError>,
TcReceiver: ReceivesTc<Error = TcError>, TcSender: PacketSenderRaw<Error = TcSendError>,
TmSender: TcpTmSender<TmError, TcError>, TmSender: TcpTmSender<TmError, TcSendError>,
TcParser: TcpTcParser<TmError, TcError>, TcParser: TcpTcParser<TmError, TcSendError>,
HandledConnection: HandledConnectionHandler, HandledConnection: HandledConnectionHandler,
TmError, TmError,
TcError, TcSendError,
> { > {
pub id: ComponentId,
pub finished_handler: HandledConnection, pub finished_handler: HandledConnection,
pub(crate) listener: TcpListener, pub(crate) listener: TcpListener,
pub(crate) inner_loop_delay: Duration, pub(crate) inner_loop_delay: Duration,
pub(crate) tm_source: TmSource, pub(crate) tm_source: TmSource,
pub(crate) tm_buffer: Vec<u8>, pub(crate) tm_buffer: Vec<u8>,
pub(crate) tc_receiver: TcReceiver, pub(crate) tc_sender: TcSender,
pub(crate) tc_buffer: Vec<u8>, pub(crate) tc_buffer: Vec<u8>,
poll: Poll, poll: Poll,
events: Events, events: Events,
tc_handler: TcParser, pub tc_handler: TcParser,
tm_handler: TmSender, pub tm_handler: TmSender,
stop_signal: Option<Arc<AtomicBool>>, stop_signal: Option<Arc<AtomicBool>>,
} }
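A custom TM source only has to implement `PacketSource`. The sketch below assumes, in line with the `SyncTmSource` test helper further down, that returning 0 written bytes signals that no telemetry is currently available; the `StaticTmSource` name is made up:

```rust
use satrs::tmtc::PacketSource;

/// Hypothetical TM source which hands out one static packet exactly once.
struct StaticTmSource {
    packet: Vec<u8>,
    sent: bool,
}

impl PacketSource for StaticTmSource {
    type Error = ();

    fn retrieve_packet(&mut self, buffer: &mut [u8]) -> Result<usize, Self::Error> {
        if self.sent || buffer.len() < self.packet.len() {
            // 0 bytes written: no telemetry available right now.
            return Ok(0);
        }
        buffer[..self.packet.len()].copy_from_slice(&self.packet);
        self.sent = true;
        Ok(self.packet.len())
    }
}
```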
impl< impl<
TmSource: TmPacketSource<Error = TmError>, TmSource: PacketSource<Error = TmError>,
TcReceiver: ReceivesTc<Error = TcError>, TcSender: PacketSenderRaw<Error = TcSendError>,
TmSender: TcpTmSender<TmError, TcError>, TmSender: TcpTmSender<TmError, TcSendError>,
TcParser: TcpTcParser<TmError, TcError>, TcParser: TcpTcParser<TmError, TcSendError>,
HandledConnection: HandledConnectionHandler, HandledConnection: HandledConnectionHandler,
TmError: 'static, TmError: 'static,
TcError: 'static, TcSendError: 'static,
> >
TcpTmtcGenericServer< TcpTmtcGenericServer<
TmSource, TmSource,
TcReceiver, TcSender,
TmSender, TmSender,
TcParser, TcParser,
HandledConnection, HandledConnection,
TmError, TmError,
TcError, TcSendError,
> >
{ {
/// Create a new generic TMTC server instance. /// Create a new generic TMTC server instance.
@ -212,15 +217,15 @@ impl<
/// * `tm_sender` - Sends back telemetry to the client using the specified TM source. /// * `tm_sender` - Sends back telemetry to the client using the specified TM source.
/// * `tm_source` - Generic TM source used by the server to pull telemetry packets which are /// * `tm_source` - Generic TM source used by the server to pull telemetry packets which are
/// then sent back to the client. /// then sent back to the client.
/// * `tc_receiver` - Any received telecommand which was decoded successfully will be forwarded /// * `tc_sender` - Any received telecommand which was decoded successfully will be forwarded
/// to this TC receiver. /// using this TC sender.
/// * `stop_signal` - Can be used to stop the server even if a connection is ongoing. /// * `stop_signal` - Can be used to stop the server even if a connection is ongoing.
pub fn new( pub fn new(
cfg: ServerConfig, cfg: ServerConfig,
tc_parser: TcParser, tc_parser: TcParser,
tm_sender: TmSender, tm_sender: TmSender,
tm_source: TmSource, tm_source: TmSource,
tc_receiver: TcReceiver, tc_sender: TcSender,
finished_handler: HandledConnection, finished_handler: HandledConnection,
stop_signal: Option<Arc<AtomicBool>>, stop_signal: Option<Arc<AtomicBool>>,
) -> Result<Self, std::io::Error> { ) -> Result<Self, std::io::Error> {
@ -248,6 +253,7 @@ impl<
.register(&mut mio_listener, Token(0), Interest::READABLE)?; .register(&mut mio_listener, Token(0), Interest::READABLE)?;
Ok(Self { Ok(Self {
id: cfg.id,
tc_handler: tc_parser, tc_handler: tc_parser,
tm_handler: tm_sender, tm_handler: tm_sender,
poll, poll,
@ -256,7 +262,7 @@ impl<
inner_loop_delay: cfg.inner_loop_delay, inner_loop_delay: cfg.inner_loop_delay,
tm_source, tm_source,
tm_buffer: vec![0; cfg.tm_buffer_size], tm_buffer: vec![0; cfg.tm_buffer_size],
tc_receiver, tc_sender,
tc_buffer: vec![0; cfg.tc_buffer_size], tc_buffer: vec![0; cfg.tc_buffer_size],
stop_signal, stop_signal,
finished_handler, finished_handler,
@ -287,10 +293,10 @@ impl<
/// The server will delay for a user-specified period if the client connects to the server /// The server will delay for a user-specified period if the client connects to the server
/// for prolonged periods and there is no traffic for the server. This is the case if the /// for prolonged periods and there is no traffic for the server. This is the case if the
/// client does not send any telecommands and no telemetry needs to be sent back to the client. /// client does not send any telecommands and no telemetry needs to be sent back to the client.
pub fn handle_next_connection( pub fn handle_all_connections(
&mut self, &mut self,
poll_timeout: Option<Duration>, poll_timeout: Option<Duration>,
) -> Result<ConnectionResult, TcpTmtcError<TmError, TcError>> { ) -> Result<ConnectionResult, TcpTmtcError<TmError, TcSendError>> {
let mut handled_connections = 0; let mut handled_connections = 0;
// Poll Mio for events. // Poll Mio for events.
self.poll.poll(&mut self.events, poll_timeout)?; self.poll.poll(&mut self.events, poll_timeout)?;
@ -329,7 +335,7 @@ impl<
&mut self, &mut self,
mut stream: TcpStream, mut stream: TcpStream,
addr: SocketAddr, addr: SocketAddr,
) -> Result<(), TcpTmtcError<TmError, TcError>> { ) -> Result<(), TcpTmtcError<TmError, TcSendError>> {
let mut current_write_idx; let mut current_write_idx;
let mut next_write_idx = 0; let mut next_write_idx = 0;
let mut connection_result = HandledConnectionInfo::new(addr); let mut connection_result = HandledConnectionInfo::new(addr);
@ -343,7 +349,8 @@ impl<
if current_write_idx > 0 { if current_write_idx > 0 {
self.tc_handler.handle_tc_parsing( self.tc_handler.handle_tc_parsing(
&mut self.tc_buffer, &mut self.tc_buffer,
&mut self.tc_receiver, self.id,
&self.tc_sender,
&mut connection_result, &mut connection_result,
current_write_idx, current_write_idx,
&mut next_write_idx, &mut next_write_idx,
@ -357,7 +364,8 @@ impl<
if current_write_idx == self.tc_buffer.capacity() { if current_write_idx == self.tc_buffer.capacity() {
self.tc_handler.handle_tc_parsing( self.tc_handler.handle_tc_parsing(
&mut self.tc_buffer, &mut self.tc_buffer,
&mut self.tc_receiver, self.id,
&self.tc_sender,
&mut connection_result, &mut connection_result,
current_write_idx, current_write_idx,
&mut next_write_idx, &mut next_write_idx,
@ -371,7 +379,8 @@ impl<
std::io::ErrorKind::WouldBlock | std::io::ErrorKind::TimedOut => { std::io::ErrorKind::WouldBlock | std::io::ErrorKind::TimedOut => {
self.tc_handler.handle_tc_parsing( self.tc_handler.handle_tc_parsing(
&mut self.tc_buffer, &mut self.tc_buffer,
&mut self.tc_receiver, self.id,
&self.tc_sender,
&mut connection_result, &mut connection_result,
current_write_idx, current_write_idx,
&mut next_write_idx, &mut next_write_idx,
@ -424,24 +433,10 @@ pub(crate) mod tests {
use alloc::{collections::VecDeque, sync::Arc, vec::Vec}; use alloc::{collections::VecDeque, sync::Arc, vec::Vec};
use crate::tmtc::{ReceivesTcCore, TmPacketSourceCore}; use crate::tmtc::PacketSource;
use super::*; use super::*;
#[derive(Default, Clone)]
pub(crate) struct SyncTcCacher {
pub(crate) tc_queue: Arc<Mutex<VecDeque<Vec<u8>>>>,
}
impl ReceivesTcCore for SyncTcCacher {
type Error = ();
fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error> {
let mut tc_queue = self.tc_queue.lock().expect("tc forwarder failed");
tc_queue.push_back(tc_raw.to_vec());
Ok(())
}
}
#[derive(Default, Clone)] #[derive(Default, Clone)]
pub(crate) struct SyncTmSource { pub(crate) struct SyncTmSource {
tm_queue: Arc<Mutex<VecDeque<Vec<u8>>>>, tm_queue: Arc<Mutex<VecDeque<Vec<u8>>>>,
@ -454,7 +449,7 @@ pub(crate) mod tests {
} }
} }
impl TmPacketSourceCore for SyncTmSource { impl PacketSource for SyncTmSource {
type Error = (); type Error = ();
fn retrieve_packet(&mut self, buffer: &mut [u8]) -> Result<usize, Self::Error> { fn retrieve_packet(&mut self, buffer: &mut [u8]) -> Result<usize, Self::Error> {


@ -5,9 +5,9 @@ use mio::net::{TcpListener, TcpStream};
use std::{io::Write, net::SocketAddr}; use std::{io::Write, net::SocketAddr};
use crate::{ use crate::{
encoding::parse_buffer_for_ccsds_space_packets, encoding::{ccsds::SpacePacketValidator, parse_buffer_for_ccsds_space_packets},
tmtc::{ReceivesTc, TmPacketSource}, tmtc::{PacketSenderRaw, PacketSource},
ValidatorU16Id, ComponentId,
}; };
use super::tcp_server::{ use super::tcp_server::{
@ -15,24 +15,12 @@ use super::tcp_server::{
TcpTmSender, TcpTmtcError, TcpTmtcGenericServer, TcpTmSender, TcpTmtcError, TcpTmtcGenericServer,
}; };
/// Concrete [TcpTcParser] implementation for the [TcpSpacepacketsServer]. impl<T: SpacePacketValidator, TmError, TcError: 'static> TcpTcParser<TmError, TcError> for T {
pub struct SpacepacketsTcParser<PacketIdChecker: ValidatorU16Id> {
packet_id_lookup: PacketIdChecker,
}
impl<PacketIdChecker: ValidatorU16Id> SpacepacketsTcParser<PacketIdChecker> {
pub fn new(packet_id_lookup: PacketIdChecker) -> Self {
Self { packet_id_lookup }
}
}
impl<PacketIdChecker: ValidatorU16Id, TmError, TcError: 'static> TcpTcParser<TmError, TcError>
for SpacepacketsTcParser<PacketIdChecker>
{
fn handle_tc_parsing( fn handle_tc_parsing(
&mut self, &mut self,
tc_buffer: &mut [u8], tc_buffer: &mut [u8],
tc_receiver: &mut (impl ReceivesTc<Error = TcError> + ?Sized), sender_id: ComponentId,
tc_sender: &(impl PacketSenderRaw<Error = TcError> + ?Sized),
conn_result: &mut HandledConnectionInfo, conn_result: &mut HandledConnectionInfo,
current_write_idx: usize, current_write_idx: usize,
next_write_idx: &mut usize, next_write_idx: &mut usize,
@ -40,8 +28,9 @@ impl<PacketIdChecker: ValidatorU16Id, TmError, TcError: 'static> TcpTcParser<TmE
// Reader vec full, need to parse for packets. // Reader vec full, need to parse for packets.
conn_result.num_received_tcs += parse_buffer_for_ccsds_space_packets( conn_result.num_received_tcs += parse_buffer_for_ccsds_space_packets(
&mut tc_buffer[..current_write_idx], &mut tc_buffer[..current_write_idx],
&self.packet_id_lookup, self,
tc_receiver.upcast_mut(), sender_id,
tc_sender,
next_write_idx, next_write_idx,
) )
.map_err(|e| TcpTmtcError::TcError(e))?; .map_err(|e| TcpTmtcError::TcError(e))?;
@ -57,7 +46,7 @@ impl<TmError, TcError> TcpTmSender<TmError, TcError> for SpacepacketsTmSender {
fn handle_tm_sending( fn handle_tm_sending(
&mut self, &mut self,
tm_buffer: &mut [u8], tm_buffer: &mut [u8],
tm_source: &mut (impl TmPacketSource<Error = TmError> + ?Sized), tm_source: &mut (impl PacketSource<Error = TmError> + ?Sized),
conn_result: &mut HandledConnectionInfo, conn_result: &mut HandledConnectionInfo,
stream: &mut TcpStream, stream: &mut TcpStream,
) -> Result<bool, TcpTmtcError<TmError, TcError>> { ) -> Result<bool, TcpTmtcError<TmError, TcError>> {
@ -85,48 +74,41 @@ impl<TmError, TcError> TcpTmSender<TmError, TcError> for SpacepacketsTmSender {
/// ///
/// This server only works if /// This server only works if
/// [CCSDS 133.0-B-2 space packets](https://public.ccsds.org/Pubs/133x0b2e1.pdf) are the only /// [CCSDS 133.0-B-2 space packets](https://public.ccsds.org/Pubs/133x0b2e1.pdf) are the only
/// packet type being exchanged. It uses the CCSDS [spacepackets::PacketId] as the packet delimiter /// packet type being exchanged. It uses the CCSDS space packet header [spacepackets::SpHeader] and
/// packet type being exchanged. It uses the CCSDS [spacepackets::PacketId] as the packet delimiter /// packet type being exchanged. It uses the CCSDS space packet header [spacepackets::SpHeader] and
/// and start marker when parsing for packets. The user specifies a set of expected /// a user specified [SpacePacketValidator] to determine the space packets relevant for further
/// [spacepackets::PacketId]s as part of the server configuration for that purpose. /// processing.
/// ///
/// ## Example /// ## Example
///
/// The [TCP server integration tests](https://egit.irs.uni-stuttgart.de/rust/sat-rs/src/branch/main/satrs/tests/tcp_servers.rs) /// The [TCP server integration tests](https://egit.irs.uni-stuttgart.de/rust/sat-rs/src/branch/main/satrs/tests/tcp_servers.rs)
/// also serve as the example application for this module. /// also serve as the example application for this module.
pub struct TcpSpacepacketsServer< pub struct TcpSpacepacketsServer<
TmSource: TmPacketSource<Error = TmError>, TmSource: PacketSource<Error = TmError>,
TcReceiver: ReceivesTc<Error = TcError>, TcSender: PacketSenderRaw<Error = SendError>,
PacketIdChecker: ValidatorU16Id, Validator: SpacePacketValidator,
HandledConnection: HandledConnectionHandler, HandledConnection: HandledConnectionHandler,
TmError, TmError,
TcError: 'static, SendError: 'static,
> { > {
pub generic_server: TcpTmtcGenericServer< pub generic_server: TcpTmtcGenericServer<
TmSource, TmSource,
TcReceiver, TcSender,
SpacepacketsTmSender, SpacepacketsTmSender,
SpacepacketsTcParser<PacketIdChecker>, Validator,
HandledConnection, HandledConnection,
TmError, TmError,
TcError, SendError,
>, >,
} }
impl< impl<
TmSource: TmPacketSource<Error = TmError>, TmSource: PacketSource<Error = TmError>,
TcReceiver: ReceivesTc<Error = TcError>, TcSender: PacketSenderRaw<Error = TcError>,
PacketIdChecker: ValidatorU16Id, Validator: SpacePacketValidator,
HandledConnection: HandledConnectionHandler, HandledConnection: HandledConnectionHandler,
TmError: 'static, TmError: 'static,
TcError: 'static, TcError: 'static,
> > TcpSpacepacketsServer<TmSource, TcSender, Validator, HandledConnection, TmError, TcError>
TcpSpacepacketsServer<
TmSource,
TcReceiver,
PacketIdChecker,
HandledConnection,
TmError,
TcError,
>
{ {
/// ///
/// ## Parameter /// ## Parameter
@ -134,26 +116,30 @@ impl<
/// * `cfg` - Configuration of the server. /// * `cfg` - Configuration of the server.
/// * `tm_source` - Generic TM source used by the server to pull telemetry packets which are /// * `tm_source` - Generic TM source used by the server to pull telemetry packets which are
/// then sent back to the client. /// then sent back to the client.
/// * `tc_receiver` - Any received telecommands which were decoded successfully will be /// * `tc_sender` - Any received telecommands which were decoded successfully will be
/// forwarded to this TC receiver. /// forwarded using this [PacketSenderRaw].
/// * `packet_id_lookup` - This lookup table contains the relevant packets IDs for packet /// * `validator` - Used to determine the space packets relevant for further processing and
/// parsing. This mechanism is used to have a start marker for finding CCSDS packets. /// to detect broken space packets.
/// * `handled_connection_hook` - Called to notify the user about a successfully handled
/// connection.
/// * `stop_signal` - Can be used to shut down the TCP server even for longer running
/// connections.
pub fn new( pub fn new(
cfg: ServerConfig, cfg: ServerConfig,
tm_source: TmSource, tm_source: TmSource,
tc_receiver: TcReceiver, tc_sender: TcSender,
packet_id_checker: PacketIdChecker, validator: Validator,
handled_connection: HandledConnection, handled_connection_hook: HandledConnection,
stop_signal: Option<Arc<AtomicBool>>, stop_signal: Option<Arc<AtomicBool>>,
) -> Result<Self, std::io::Error> { ) -> Result<Self, std::io::Error> {
Ok(Self { Ok(Self {
generic_server: TcpTmtcGenericServer::new( generic_server: TcpTmtcGenericServer::new(
cfg, cfg,
SpacepacketsTcParser::new(packet_id_checker), validator,
SpacepacketsTmSender::default(), SpacepacketsTmSender::default(),
tm_source, tm_source,
tc_receiver, tc_sender,
handled_connection, handled_connection_hook,
stop_signal, stop_signal,
)?, )?,
}) })
@ -167,8 +153,8 @@ impl<
/// useful if using the port number 0 for OS auto-assignment. /// useful if using the port number 0 for OS auto-assignment.
pub fn local_addr(&self) -> std::io::Result<SocketAddr>; pub fn local_addr(&self) -> std::io::Result<SocketAddr>;
/// Delegation to the [TcpTmtcGenericServer::handle_next_connection] call. /// Delegation to the [TcpTmtcGenericServer::handle_all_connections] call.
pub fn handle_next_connection( pub fn handle_all_connections(
&mut self, &mut self,
poll_timeout: Option<Duration> poll_timeout: Option<Duration>
) -> Result<ConnectionResult, TcpTmtcError<TmError, TcError>>; ) -> Result<ConnectionResult, TcpTmtcError<TmError, TcError>>;
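A validator only has to map a space packet header to an `SpValidity` verdict. The test code below uses a `PacketId` set for this; the following sketch filters on the APID instead, with a made-up `ApidValidator` name:

```rust
use satrs::encoding::ccsds::{SpValidity, SpacePacketValidator};
use spacepackets::{CcsdsPacket, SpHeader};

/// Hypothetical validator which accepts packets based on their APID.
struct ApidValidator {
    valid_apids: &'static [u16],
}

impl SpacePacketValidator for ApidValidator {
    fn validate(&self, sp_header: &SpHeader, _raw_buf: &[u8]) -> SpValidity {
        if self.valid_apids.contains(&sp_header.apid()) {
            return SpValidity::Valid;
        }
        // Skipped packets are ignored and the parser keeps searching for the
        // next packet candidate.
        SpValidity::Skip
    }
}
```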
@ -187,6 +173,7 @@ mod tests {
use std::{ use std::{
io::{Read, Write}, io::{Read, Write},
net::{IpAddr, Ipv4Addr, SocketAddr, TcpStream}, net::{IpAddr, Ipv4Addr, SocketAddr, TcpStream},
sync::mpsc,
thread, thread,
}; };
@ -194,40 +181,60 @@ mod tests {
use hashbrown::HashSet; use hashbrown::HashSet;
use spacepackets::{ use spacepackets::{
ecss::{tc::PusTcCreator, WritablePusPacket}, ecss::{tc::PusTcCreator, WritablePusPacket},
PacketId, SpHeader, CcsdsPacket, PacketId, SpHeader,
}; };
use crate::hal::std::tcp_server::{ use crate::{
tests::{ConnectionFinishedHandler, SyncTcCacher, SyncTmSource}, encoding::ccsds::{SpValidity, SpacePacketValidator},
hal::std::tcp_server::{
tests::{ConnectionFinishedHandler, SyncTmSource},
ConnectionResult, ServerConfig, ConnectionResult, ServerConfig,
},
queue::GenericSendError,
tmtc::PacketAsVec,
ComponentId,
}; };
use super::TcpSpacepacketsServer; use super::TcpSpacepacketsServer;
const TCP_SERVER_ID: ComponentId = 0x05;
const TEST_APID_0: u16 = 0x02; const TEST_APID_0: u16 = 0x02;
const TEST_PACKET_ID_0: PacketId = PacketId::new_for_tc(true, TEST_APID_0); const TEST_PACKET_ID_0: PacketId = PacketId::new_for_tc(true, TEST_APID_0);
const TEST_APID_1: u16 = 0x10; const TEST_APID_1: u16 = 0x10;
const TEST_PACKET_ID_1: PacketId = PacketId::new_for_tc(true, TEST_APID_1); const TEST_PACKET_ID_1: PacketId = PacketId::new_for_tc(true, TEST_APID_1);
#[derive(Default)]
pub struct SimpleValidator(pub HashSet<PacketId>);
impl SpacePacketValidator for SimpleValidator {
fn validate(&self, sp_header: &SpHeader, _raw_buf: &[u8]) -> SpValidity {
if self.0.contains(&sp_header.packet_id()) {
return SpValidity::Valid;
}
// Simple case: Assume that the interface always contains valid space packets.
SpValidity::Skip
}
}
fn generic_tmtc_server( fn generic_tmtc_server(
addr: &SocketAddr, addr: &SocketAddr,
tc_receiver: SyncTcCacher, tc_sender: mpsc::Sender<PacketAsVec>,
tm_source: SyncTmSource, tm_source: SyncTmSource,
packet_id_lookup: HashSet<PacketId>, validator: SimpleValidator,
stop_signal: Option<Arc<AtomicBool>>, stop_signal: Option<Arc<AtomicBool>>,
) -> TcpSpacepacketsServer< ) -> TcpSpacepacketsServer<
SyncTmSource, SyncTmSource,
SyncTcCacher, mpsc::Sender<PacketAsVec>,
HashSet<PacketId>, SimpleValidator,
ConnectionFinishedHandler, ConnectionFinishedHandler,
(), (),
(), GenericSendError,
> { > {
TcpSpacepacketsServer::new( TcpSpacepacketsServer::new(
ServerConfig::new(*addr, Duration::from_millis(2), 1024, 1024), ServerConfig::new(TCP_SERVER_ID, *addr, Duration::from_millis(2), 1024, 1024),
tm_source, tm_source,
tc_receiver, tc_sender,
packet_id_lookup, validator,
ConnectionFinishedHandler::default(), ConnectionFinishedHandler::default(),
stop_signal, stop_signal,
) )
@ -237,15 +244,15 @@ mod tests {
#[test] #[test]
fn test_basic_tc_only() { fn test_basic_tc_only() {
let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0); let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0);
let tc_receiver = SyncTcCacher::default(); let (tc_sender, tc_receiver) = mpsc::channel();
let tm_source = SyncTmSource::default(); let tm_source = SyncTmSource::default();
let mut packet_id_lookup = HashSet::new(); let mut validator = SimpleValidator::default();
packet_id_lookup.insert(TEST_PACKET_ID_0); validator.0.insert(TEST_PACKET_ID_0);
let mut tcp_server = generic_tmtc_server( let mut tcp_server = generic_tmtc_server(
&auto_port_addr, &auto_port_addr,
tc_receiver.clone(), tc_sender.clone(),
tm_source, tm_source,
packet_id_lookup, validator,
None, None,
); );
let dest_addr = tcp_server let dest_addr = tcp_server
@ -255,7 +262,7 @@ mod tests {
let set_if_done = conn_handled.clone(); let set_if_done = conn_handled.clone();
// Call the connection handler in separate thread, does block. // Call the connection handler in separate thread, does block.
thread::spawn(move || { thread::spawn(move || {
let result = tcp_server.handle_next_connection(Some(Duration::from_millis(100))); let result = tcp_server.handle_all_connections(Some(Duration::from_millis(100)));
if result.is_err() { if result.is_err() {
panic!("handling connection failed: {:?}", result.unwrap_err()); panic!("handling connection failed: {:?}", result.unwrap_err());
} }
@ -289,16 +296,15 @@ mod tests {
if !conn_handled.load(Ordering::Relaxed) { if !conn_handled.load(Ordering::Relaxed) {
panic!("connection was not handled properly"); panic!("connection was not handled properly");
} }
// Check that TC has arrived. let packet = tc_receiver.try_recv().expect("receiving TC failed");
let mut tc_queue = tc_receiver.tc_queue.lock().unwrap(); assert_eq!(packet.packet, tc_0);
assert_eq!(tc_queue.len(), 1); assert!(matches!(tc_receiver.try_recv(), Err(mpsc::TryRecvError::Empty)));
assert_eq!(tc_queue.pop_front().unwrap(), tc_0);
} }
#[test] #[test]
fn test_multi_tc_multi_tm() { fn test_multi_tc_multi_tm() {
let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0); let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0);
let tc_receiver = SyncTcCacher::default(); let (tc_sender, tc_receiver) = mpsc::channel();
let mut tm_source = SyncTmSource::default(); let mut tm_source = SyncTmSource::default();
// Add telemetry // Add telemetry
@ -315,14 +321,14 @@ mod tests {
tm_source.add_tm(&tm_1); tm_source.add_tm(&tm_1);
// Set up server // Set up server
let mut packet_id_lookup = HashSet::new(); let mut validator = SimpleValidator::default();
packet_id_lookup.insert(TEST_PACKET_ID_0); validator.0.insert(TEST_PACKET_ID_0);
packet_id_lookup.insert(TEST_PACKET_ID_1); validator.0.insert(TEST_PACKET_ID_1);
let mut tcp_server = generic_tmtc_server( let mut tcp_server = generic_tmtc_server(
&auto_port_addr, &auto_port_addr,
tc_receiver.clone(), tc_sender.clone(),
tm_source, tm_source,
packet_id_lookup, validator,
None, None,
); );
let dest_addr = tcp_server let dest_addr = tcp_server
@ -333,7 +339,7 @@ mod tests {
// Call the connection handler in separate thread, does block. // Call the connection handler in separate thread, does block.
thread::spawn(move || { thread::spawn(move || {
let result = tcp_server.handle_next_connection(Some(Duration::from_millis(100))); let result = tcp_server.handle_all_connections(Some(Duration::from_millis(100)));
if result.is_err() { if result.is_err() {
panic!("handling connection failed: {:?}", result.unwrap_err()); panic!("handling connection failed: {:?}", result.unwrap_err());
} }
@ -397,9 +403,10 @@ mod tests {
panic!("connection was not handled properly"); panic!("connection was not handled properly");
} }
// Check that TC has arrived. // Check that TC has arrived.
let mut tc_queue = tc_receiver.tc_queue.lock().unwrap(); let packet_0 = tc_receiver.try_recv().expect("receiving TC failed");
assert_eq!(tc_queue.len(), 2); assert_eq!(packet_0.packet, tc_0);
assert_eq!(tc_queue.pop_front().unwrap(), tc_0); let packet_1 = tc_receiver.try_recv().expect("receiving TC failed");
assert_eq!(tc_queue.pop_front().unwrap(), tc_1); assert_eq!(packet_1.packet, tc_1);
assert!(matches!(tc_receiver.try_recv(), Err(mpsc::TryRecvError::Empty)));
} }
} }


@ -1,7 +1,8 @@
//! Generic UDP TC server. //! Generic UDP TC server.
use crate::tmtc::{ReceivesTc, ReceivesTcCore}; use crate::tmtc::PacketSenderRaw;
use std::boxed::Box; use crate::ComponentId;
use std::io::{Error, ErrorKind}; use core::fmt::Debug;
use std::io::{self, ErrorKind};
use std::net::{SocketAddr, ToSocketAddrs, UdpSocket}; use std::net::{SocketAddr, ToSocketAddrs, UdpSocket};
use std::vec; use std::vec;
use std::vec::Vec; use std::vec::Vec;
@ -11,45 +12,46 @@ use std::vec::Vec;
/// ///
/// It caches all received telecommands into a vector. The maximum expected telecommand size should /// It caches all received telecommands into a vector. The maximum expected telecommand size should
/// be declared upfront. This avoids dynamic allocation during run-time. The user can specify a TC /// be declared upfront. This avoids dynamic allocation during run-time. The user can specify a TC
/// receiver in form of a special trait object which implements [ReceivesTc]. Please note that the /// sender in form of a special trait object which implements [PacketSenderRaw]. For example, this
/// receiver should copy out the received data if it the data is required past the /// can be used to send the telecommands to a centralized TC source component for further
/// [ReceivesTcCore::pass_tc] call. /// processing and routing.
/// ///
/// # Examples /// # Examples
/// ///
/// ``` /// ```
/// use std::net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket}; /// use std::net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket};
/// use std::sync::mpsc;
/// use spacepackets::ecss::WritablePusPacket; /// use spacepackets::ecss::WritablePusPacket;
/// use satrs::hal::std::udp_server::UdpTcServer; /// use satrs::hal::std::udp_server::UdpTcServer;
/// use satrs::tmtc::{ReceivesTc, ReceivesTcCore}; /// use satrs::ComponentId;
/// use satrs::tmtc::PacketSenderRaw;
/// use spacepackets::SpHeader; /// use spacepackets::SpHeader;
/// use spacepackets::ecss::tc::PusTcCreator; /// use spacepackets::ecss::tc::PusTcCreator;
/// ///
/// #[derive (Default)] /// const UDP_SERVER_ID: ComponentId = 0x05;
/// struct PingReceiver {}
/// impl ReceivesTcCore for PingReceiver {
/// type Error = ();
/// fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error> {
/// assert_eq!(tc_raw.len(), 13);
/// Ok(())
/// }
/// }
/// ///
/// let mut buf = [0; 32]; /// let (packet_sender, packet_receiver) = mpsc::channel();
/// let dest_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 7777); /// let dest_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 7777);
/// let ping_receiver = PingReceiver::default(); /// let mut udp_tc_server = UdpTcServer::new(UDP_SERVER_ID, dest_addr, 2048, packet_sender)
/// let mut udp_tc_server = UdpTcServer::new(dest_addr, 2048, Box::new(ping_receiver))
/// .expect("Creating UDP TMTC server failed"); /// .expect("Creating UDP TMTC server failed");
/// let sph = SpHeader::new_from_apid(0x02); /// let sph = SpHeader::new_from_apid(0x02);
/// let pus_tc = PusTcCreator::new_simple(sph, 17, 1, &[], true); /// let pus_tc = PusTcCreator::new_simple(sph, 17, 1, &[], true);
/// let len = pus_tc /// // Can not fail.
/// .write_to_bytes(&mut buf) /// let ping_tc_raw = pus_tc.to_vec().unwrap();
/// .expect("Error writing PUS TC packet"); ///
/// assert_eq!(len, 13); /// // Now create a UDP client and send the ping telecommand to the server.
/// let client = UdpSocket::bind("127.0.0.1:7778").expect("Connecting to UDP server failed"); /// let client = UdpSocket::bind("127.0.0.1:0").expect("creating UDP client failed");
/// client /// client
/// .send_to(&buf[0..len], dest_addr) /// .send_to(&ping_tc_raw, dest_addr)
/// .expect("Error sending PUS TC via UDP"); /// .expect("Error sending PUS TC via UDP");
/// let recv_result = udp_tc_server.try_recv_tc();
/// assert!(recv_result.is_ok());
/// // The packet is received by the UDP TC server and sent via the mpsc channel.
/// let sent_packet_with_sender = packet_receiver.try_recv().expect("expected telecommand");
/// assert_eq!(sent_packet_with_sender.packet, ping_tc_raw);
/// assert_eq!(sent_packet_with_sender.sender_id, UDP_SERVER_ID);
/// // No more packets received.
/// assert!(matches!(packet_receiver.try_recv(), Err(mpsc::TryRecvError::Empty)));
/// ``` /// ```
/// ///
/// The [satrs-example crate](https://egit.irs.uni-stuttgart.de/rust/fsrc-launchpad/src/branch/main/satrs-example) /// The [satrs-example crate](https://egit.irs.uni-stuttgart.de/rust/fsrc-launchpad/src/branch/main/satrs-example)
@ -57,65 +59,45 @@ use std::vec::Vec;
/// [example code](https://egit.irs.uni-stuttgart.de/rust/sat-rs/src/branch/main/satrs-example/src/tmtc.rs#L67) /// [example code](https://egit.irs.uni-stuttgart.de/rust/sat-rs/src/branch/main/satrs-example/src/tmtc.rs#L67)
/// on how to use this TC server. It uses the server to receive PUS telecommands on a specific port /// on how to use this TC server. It uses the server to receive PUS telecommands on a specific port
/// and then forwards them to a generic CCSDS packet receiver. /// and then forwards them to a generic CCSDS packet receiver.
pub struct UdpTcServer<E> { pub struct UdpTcServer<TcSender: PacketSenderRaw<Error = SendError>, SendError> {
pub id: ComponentId,
pub socket: UdpSocket, pub socket: UdpSocket,
recv_buf: Vec<u8>, recv_buf: Vec<u8>,
sender_addr: Option<SocketAddr>, sender_addr: Option<SocketAddr>,
tc_receiver: Box<dyn ReceivesTc<Error = E>>, pub tc_sender: TcSender,
} }
#[derive(Debug)] #[derive(Debug, thiserror::Error)]
pub enum ReceiveResult<E> { pub enum ReceiveResult<SendError: Debug + 'static> {
#[error("nothing was received")]
NothingReceived, NothingReceived,
IoError(Error), #[error(transparent)]
ReceiverError(E), Io(#[from] io::Error),
#[error(transparent)]
Send(SendError),
} }
impl<E> From<Error> for ReceiveResult<E> { impl<TcSender: PacketSenderRaw<Error = SendError>, SendError: Debug + 'static>
fn from(e: Error) -> Self { UdpTcServer<TcSender, SendError>
ReceiveResult::IoError(e) {
}
}
impl<E: PartialEq> PartialEq for ReceiveResult<E> {
fn eq(&self, other: &Self) -> bool {
use ReceiveResult::*;
match (self, other) {
(IoError(ref e), IoError(ref other_e)) => e.kind() == other_e.kind(),
(NothingReceived, NothingReceived) => true,
(ReceiverError(e), ReceiverError(other_e)) => e == other_e,
_ => false,
}
}
}
impl<E: Eq + PartialEq> Eq for ReceiveResult<E> {}
impl<E: 'static> ReceivesTcCore for UdpTcServer<E> {
type Error = E;
fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error> {
self.tc_receiver.pass_tc(tc_raw)
}
}
impl<E: 'static> UdpTcServer<E> {
pub fn new<A: ToSocketAddrs>( pub fn new<A: ToSocketAddrs>(
id: ComponentId,
addr: A, addr: A,
max_recv_size: usize, max_recv_size: usize,
tc_receiver: Box<dyn ReceivesTc<Error = E>>, tc_sender: TcSender,
) -> Result<Self, Error> { ) -> Result<Self, io::Error> {
let server = Self { let server = Self {
id,
socket: UdpSocket::bind(addr)?, socket: UdpSocket::bind(addr)?,
recv_buf: vec![0; max_recv_size], recv_buf: vec![0; max_recv_size],
sender_addr: None, sender_addr: None,
tc_receiver, tc_sender,
}; };
server.socket.set_nonblocking(true)?; server.socket.set_nonblocking(true)?;
Ok(server) Ok(server)
} }
pub fn try_recv_tc(&mut self) -> Result<(usize, SocketAddr), ReceiveResult<E>> { pub fn try_recv_tc(&mut self) -> Result<(usize, SocketAddr), ReceiveResult<SendError>> {
let res = match self.socket.recv_from(&mut self.recv_buf) { let res = match self.socket.recv_from(&mut self.recv_buf) {
Ok(res) => res, Ok(res) => res,
Err(e) => { Err(e) => {
@@ -128,9 +110,9 @@ impl<E: 'static> UdpTcServer<E> {
         };
         let (num_bytes, from) = res;
         self.sender_addr = Some(from);
-        self.tc_receiver
-            .pass_tc(&self.recv_buf[0..num_bytes])
-            .map_err(|e| ReceiveResult::ReceiverError(e))?;
+        self.tc_sender
+            .send_packet(self.id, &self.recv_buf[0..num_bytes])
+            .map_err(ReceiveResult::Send)?;
         Ok(res)
     }
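
The reworked server is non-blocking, so callers poll it and must treat `ReceiveResult::NothingReceived` as the idle case rather than an error. A minimal polling-loop sketch, not part of this changeset (the function name and the back-off period are illustrative):

```
use std::time::Duration;

use satrs::hal::std::udp_server::{ReceiveResult, UdpTcServer};
use satrs::tmtc::PacketSenderRaw;

fn poll_udp_server<TcSender, SendError>(server: &mut UdpTcServer<TcSender, SendError>)
where
    TcSender: PacketSenderRaw<Error = SendError>,
    SendError: core::fmt::Debug + 'static,
{
    loop {
        match server.try_recv_tc() {
            // A datagram was received and already forwarded through the TC sender.
            Ok((num_bytes, from)) => println!("received {num_bytes} bytes from {from}"),
            // Non-blocking socket and nothing was pending: back off briefly.
            Err(ReceiveResult::NothingReceived) => std::thread::sleep(Duration::from_millis(20)),
            // I/O or send errors terminate the loop in this sketch.
            Err(e) => {
                eprintln!("UDP TC server error: {e:?}");
                break;
            }
        }
    }
}
```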
@@ -142,29 +124,35 @@ impl<E: 'static> UdpTcServer<E> {
 #[cfg(test)]
 mod tests {
     use crate::hal::std::udp_server::{ReceiveResult, UdpTcServer};
-    use crate::tmtc::ReceivesTcCore;
+    use crate::queue::GenericSendError;
+    use crate::tmtc::PacketSenderRaw;
+    use crate::ComponentId;
+    use core::cell::RefCell;
     use spacepackets::ecss::tc::PusTcCreator;
     use spacepackets::ecss::WritablePusPacket;
     use spacepackets::SpHeader;
-    use std::boxed::Box;
     use std::collections::VecDeque;
     use std::net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket};
     use std::vec::Vec;

     fn is_send<T: Send>(_: &T) {}

+    const UDP_SERVER_ID: ComponentId = 0x05;
+
     #[derive(Default)]
     struct PingReceiver {
-        pub sent_cmds: VecDeque<Vec<u8>>,
+        pub sent_cmds: RefCell<VecDeque<Vec<u8>>>,
     }

-    impl ReceivesTcCore for PingReceiver {
-        type Error = ();
-        fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error> {
+    impl PacketSenderRaw for PingReceiver {
+        type Error = GenericSendError;
+        fn send_packet(&self, sender_id: ComponentId, tc_raw: &[u8]) -> Result<(), Self::Error> {
+            assert_eq!(sender_id, UDP_SERVER_ID);
             let mut sent_data = Vec::new();
             sent_data.extend_from_slice(tc_raw);
-            self.sent_cmds.push_back(sent_data);
+            let mut queue = self.sent_cmds.borrow_mut();
+            queue.push_back(sent_data);
             Ok(())
         }
     }
@@ -175,7 +163,7 @@ mod tests {
         let dest_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 7777);
         let ping_receiver = PingReceiver::default();
         is_send(&ping_receiver);
-        let mut udp_tc_server = UdpTcServer::new(dest_addr, 2048, Box::new(ping_receiver))
+        let mut udp_tc_server = UdpTcServer::new(UDP_SERVER_ID, dest_addr, 2048, ping_receiver)
             .expect("Creating UDP TMTC server failed");
         is_send(&udp_tc_server);
         let sph = SpHeader::new_from_apid(0x02);
@@ -195,9 +183,10 @@ mod tests {
             udp_tc_server.last_sender().expect("No sender set"),
             local_addr
         );
-        let ping_receiver: &mut PingReceiver = udp_tc_server.tc_receiver.downcast_mut().unwrap();
-        assert_eq!(ping_receiver.sent_cmds.len(), 1);
-        let sent_cmd = ping_receiver.sent_cmds.pop_front().unwrap();
+        let ping_receiver = &mut udp_tc_server.tc_sender;
+        let mut queue = ping_receiver.sent_cmds.borrow_mut();
+        assert_eq!(queue.len(), 1);
+        let sent_cmd = queue.pop_front().unwrap();
         assert_eq!(sent_cmd, buf[0..len]);
     }

@@ -205,11 +194,11 @@ mod tests {
     fn test_nothing_received() {
         let dest_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 7779);
         let ping_receiver = PingReceiver::default();
-        let mut udp_tc_server = UdpTcServer::new(dest_addr, 2048, Box::new(ping_receiver))
+        let mut udp_tc_server = UdpTcServer::new(UDP_SERVER_ID, dest_addr, 2048, ping_receiver)
             .expect("Creating UDP TMTC server failed");
         let res = udp_tc_server.try_recv_tc();
         assert!(res.is_err());
         let err = res.unwrap_err();
-        assert_eq!(err, ReceiveResult::NothingReceived);
+        assert!(matches!(err, ReceiveResult::NothingReceived));
     }
 }


@@ -72,6 +72,18 @@ impl ValidatorU16Id for hashbrown::HashSet<u16> {
     }
 }

+impl ValidatorU16Id for u16 {
+    fn validate(&self, id: u16) -> bool {
+        id == *self
+    }
+}
+
+impl ValidatorU16Id for &u16 {
+    fn validate(&self, id: u16) -> bool {
+        id == **self
+    }
+}
+
 impl ValidatorU16Id for [u16] {
     fn validate(&self, id: u16) -> bool {
         self.binary_search(&id).is_ok()
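
The new impls let a single APID, a reference to one, or an APID list serve as a `ValidatorU16Id`. An illustrative check, assuming the trait is in scope; note that the slice impl relies on `binary_search`, so the list must be sorted:

```
// Hypothetical validator usage for incoming packet APIDs.
let single_apid: u16 = 0x02;
assert!(single_apid.validate(0x02));
assert!(!single_apid.validate(0x03));

let apid_list: [u16; 3] = [0x02, 0x05, 0x06]; // must stay sorted for the slice impl
assert!(apid_list.as_slice().validate(0x05));
assert!(!apid_list.as_slice().validate(0x07));
```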


@@ -269,14 +269,8 @@ pub trait ModeReplySender {
 #[cfg(feature = "alloc")]
 pub mod alloc_mod {
-    use crate::{
-        mode::ModeRequest,
-        queue::GenericTargetedMessagingError,
-        request::{
-            MessageMetadata, MessageSender, MessageSenderAndReceiver, MessageSenderMap,
-            RequestAndReplySenderAndReceiver, RequestId,
-        },
-        ComponentId,
+    use crate::request::{
+        MessageSender, MessageSenderAndReceiver, MessageSenderMap, RequestAndReplySenderAndReceiver,
     };

     use super::*;
@@ -558,8 +552,6 @@ pub mod alloc_mod {
 pub mod std_mod {
     use std::sync::mpsc;

-    use crate::request::GenericMessage;
-
     use super::*;

     pub type ModeRequestHandlerMpsc = ModeRequestHandlerInterface<


@@ -43,7 +43,7 @@
 //! This includes the [ParamsHeapless] enumeration for contained values which do not require heap
 //! allocation, and the [Params] which enumerates [ParamsHeapless] and some additional types which
 //! require [alloc] support but allow for more flexibility.
-use crate::pool::StoreAddr;
+use crate::pool::PoolAddr;
 use core::fmt::Debug;
 use core::mem::size_of;
 use paste::paste;
@@ -588,15 +588,15 @@ from_conversions_for_raw!(
 #[non_exhaustive]
 pub enum Params {
     Heapless(ParamsHeapless),
-    Store(StoreAddr),
+    Store(PoolAddr),
     #[cfg(feature = "alloc")]
     Vec(Vec<u8>),
     #[cfg(feature = "alloc")]
     String(String),
 }

-impl From<StoreAddr> for Params {
-    fn from(x: StoreAddr) -> Self {
+impl From<PoolAddr> for Params {
+    fn from(x: PoolAddr) -> Self {
         Self::Store(x)
     }
 }
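
A short sketch grounded in the lines above: `PoolAddr` is a plain `u64` alias after the rename, and the `From<PoolAddr>` impl wraps a store address into `Params::Store`:

```
use satrs::params::Params;
use satrs::pool::PoolAddr;

let addr: PoolAddr = 0x0001_0002;
let param: Params = addr.into();
assert!(matches!(param, Params::Store(_)));
```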


@@ -82,7 +82,7 @@ use spacepackets::ByteConversionError;
 use std::error::Error;

 type NumBlocks = u16;
-pub type StoreAddr = u64;
+pub type PoolAddr = u64;

 /// Simple address type used for transactions with the local pool.
 #[derive(Debug, Copy, Clone, PartialEq, Eq)]
@@ -100,14 +100,14 @@ impl StaticPoolAddr {
     }
 }

-impl From<StaticPoolAddr> for StoreAddr {
+impl From<StaticPoolAddr> for PoolAddr {
     fn from(value: StaticPoolAddr) -> Self {
         ((value.pool_idx as u64) << 16) | value.packet_idx as u64
     }
 }

-impl From<StoreAddr> for StaticPoolAddr {
-    fn from(value: StoreAddr) -> Self {
+impl From<PoolAddr> for StaticPoolAddr {
+    fn from(value: PoolAddr) -> Self {
         Self {
             pool_idx: ((value >> 16) & 0xff) as u16,
             packet_idx: (value & 0xff) as u16,
@@ -150,59 +150,59 @@ impl Error for StoreIdError {}
 #[derive(Debug, Clone, PartialEq, Eq)]
 #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
-pub enum StoreError {
+pub enum PoolError {
     /// Requested data block is too large
     DataTooLarge(usize),
     /// The store is full. Contains the index of the full subpool
     StoreFull(u16),
     /// Store ID is invalid. This also includes partial errors where only the subpool is invalid
-    InvalidStoreId(StoreIdError, Option<StoreAddr>),
+    InvalidStoreId(StoreIdError, Option<PoolAddr>),
     /// Valid subpool and packet index, but no data is stored at the given address
-    DataDoesNotExist(StoreAddr),
+    DataDoesNotExist(PoolAddr),
     ByteConversionError(spacepackets::ByteConversionError),
     LockError,
     /// Internal or configuration errors
     InternalError(u32),
 }

-impl Display for StoreError {
+impl Display for PoolError {
     fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result {
         match self {
-            StoreError::DataTooLarge(size) => {
+            PoolError::DataTooLarge(size) => {
                 write!(f, "data to store with size {size} is too large")
             }
-            StoreError::StoreFull(u16) => {
+            PoolError::StoreFull(u16) => {
                 write!(f, "store is too full. index for full subpool: {u16}")
             }
-            StoreError::InvalidStoreId(id_e, addr) => {
+            PoolError::InvalidStoreId(id_e, addr) => {
                 write!(f, "invalid store ID: {id_e}, address: {addr:?}")
             }
-            StoreError::DataDoesNotExist(addr) => {
+            PoolError::DataDoesNotExist(addr) => {
                 write!(f, "no data exists at address {addr:?}")
             }
-            StoreError::InternalError(e) => {
+            PoolError::InternalError(e) => {
                 write!(f, "internal error: {e}")
             }
-            StoreError::ByteConversionError(e) => {
+            PoolError::ByteConversionError(e) => {
                 write!(f, "store error: {e}")
             }
-            StoreError::LockError => {
+            PoolError::LockError => {
                 write!(f, "lock error")
             }
         }
     }
 }

-impl From<ByteConversionError> for StoreError {
+impl From<ByteConversionError> for PoolError {
     fn from(value: ByteConversionError) -> Self {
         Self::ByteConversionError(value)
     }
 }

 #[cfg(feature = "std")]
-impl Error for StoreError {
+impl Error for PoolError {
     fn source(&self) -> Option<&(dyn Error + 'static)> {
-        if let StoreError::InvalidStoreId(e, _) = self {
+        if let PoolError::InvalidStoreId(e, _) = self {
             return Some(e);
         }
         None
@@ -217,44 +217,41 @@ impl Error for StoreError {
 /// pool structure being wrapped inside a lock.
 pub trait PoolProvider {
     /// Add new data to the pool. The provider should attempt to reserve a memory block with the
-    /// appropriate size and then copy the given data to the block. Yields a [StoreAddr] which can
+    /// appropriate size and then copy the given data to the block. Yields a [PoolAddr] which can
     /// be used to access the data stored in the pool
-    fn add(&mut self, data: &[u8]) -> Result<StoreAddr, StoreError>;
+    fn add(&mut self, data: &[u8]) -> Result<PoolAddr, PoolError>;

     /// The provider should attempt to reserve a free memory block with the appropriate size first.
     /// It then executes a user-provided closure and passes a mutable reference to that memory
     /// block to the closure. This allows the user to write data to the memory block.
-    /// The function should yield a [StoreAddr] which can be used to access the data stored in the
+    /// The function should yield a [PoolAddr] which can be used to access the data stored in the
     /// pool.
     fn free_element<W: FnMut(&mut [u8])>(
         &mut self,
         len: usize,
         writer: W,
-    ) -> Result<StoreAddr, StoreError>;
+    ) -> Result<PoolAddr, PoolError>;

-    /// Modify data added previously using a given [StoreAddr]. The provider should use the store
+    /// Modify data added previously using a given [PoolAddr]. The provider should use the store
     /// address to determine if a memory block exists for that address. If it does, it should
     /// call the user-provided closure and pass a mutable reference to the memory block
     /// to the closure. This allows the user to modify the memory block.
-    fn modify<U: FnMut(&mut [u8])>(
-        &mut self,
-        addr: &StoreAddr,
-        updater: U,
-    ) -> Result<(), StoreError>;
+    fn modify<U: FnMut(&mut [u8])>(&mut self, addr: &PoolAddr, updater: U)
+        -> Result<(), PoolError>;

     /// The provider should copy the data from the memory block to the user-provided buffer if
     /// it exists.
-    fn read(&self, addr: &StoreAddr, buf: &mut [u8]) -> Result<usize, StoreError>;
+    fn read(&self, addr: &PoolAddr, buf: &mut [u8]) -> Result<usize, PoolError>;

-    /// Delete data inside the pool given a [StoreAddr].
-    fn delete(&mut self, addr: StoreAddr) -> Result<(), StoreError>;
-    fn has_element_at(&self, addr: &StoreAddr) -> Result<bool, StoreError>;
+    /// Delete data inside the pool given a [PoolAddr].
+    fn delete(&mut self, addr: PoolAddr) -> Result<(), PoolError>;
+    fn has_element_at(&self, addr: &PoolAddr) -> Result<bool, PoolError>;

     /// Retrieve the length of the data at the given store address.
-    fn len_of_data(&self, addr: &StoreAddr) -> Result<usize, StoreError>;
+    fn len_of_data(&self, addr: &PoolAddr) -> Result<usize, PoolError>;

     #[cfg(feature = "alloc")]
-    fn read_as_vec(&self, addr: &StoreAddr) -> Result<alloc::vec::Vec<u8>, StoreError> {
+    fn read_as_vec(&self, addr: &PoolAddr) -> Result<alloc::vec::Vec<u8>, PoolError> {
         let mut vec = alloc::vec![0; self.len_of_data(addr)?];
         self.read(addr, &mut vec)?;
         Ok(vec)
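
A usage sketch for the renamed trait. The `StaticPoolConfig::new` signature (a subpool list of block count and block size, plus the spill flag) is assumed from the sat-rs examples and is not part of this hunk:

```
use satrs::pool::{PoolProvider, StaticMemoryPool, StaticPoolConfig};

// Two subpools: 8 blocks of 32 bytes and 4 blocks of 64 bytes, no spilling.
let mut pool = StaticMemoryPool::new(StaticPoolConfig::new(vec![(8, 32), (4, 64)], false));

// add() copies the data in and yields a PoolAddr for later transactions.
let addr = pool.add(&[1, 2, 3, 4]).expect("adding data failed");
assert_eq!(pool.len_of_data(&addr).expect("length query failed"), 4);

// read() copies the data back into a user-provided buffer.
let mut buf = [0u8; 4];
pool.read(&addr, &mut buf).expect("reading back failed");
assert_eq!(buf, [1, 2, 3, 4]);

// delete() consumes the address and frees the block again.
pool.delete(addr).expect("deletion failed");
assert!(!pool.has_element_at(&addr).expect("address check failed"));
```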
@@ -271,7 +268,7 @@ pub trait PoolProviderWithGuards: PoolProvider {
     /// This can prevent memory leaks. Users can read the data and release the guard
     /// if the data in the store is valid for further processing. If the data is faulty, no
     /// manual deletion is necessary when returning from a processing function prematurely.
-    fn read_with_guard(&mut self, addr: StoreAddr) -> PoolGuard<Self>;
+    fn read_with_guard(&mut self, addr: PoolAddr) -> PoolGuard<Self>;

     /// This function behaves like [PoolProvider::modify], but consumes the provided
     /// address and returns a RAII conformant guard object.
@@ -281,20 +278,20 @@ pub trait PoolProviderWithGuards: PoolProvider {
     /// This can prevent memory leaks. Users can read (and modify) the data and release the guard
     /// if the data in the store is valid for further processing. If the data is faulty, no
     /// manual deletion is necessary when returning from a processing function prematurely.
-    fn modify_with_guard(&mut self, addr: StoreAddr) -> PoolRwGuard<Self>;
+    fn modify_with_guard(&mut self, addr: PoolAddr) -> PoolRwGuard<Self>;
 }

 pub struct PoolGuard<'a, MemProvider: PoolProvider + ?Sized> {
     pool: &'a mut MemProvider,
-    pub addr: StoreAddr,
+    pub addr: PoolAddr,
     no_deletion: bool,
-    deletion_failed_error: Option<StoreError>,
+    deletion_failed_error: Option<PoolError>,
 }

 /// This helper object can be used to safely access pool data without worrying about memory
 /// leaks.
 impl<'a, MemProvider: PoolProvider> PoolGuard<'a, MemProvider> {
-    pub fn new(pool: &'a mut MemProvider, addr: StoreAddr) -> Self {
+    pub fn new(pool: &'a mut MemProvider, addr: PoolAddr) -> Self {
         Self {
             pool,
             addr,
@@ -303,12 +300,12 @@ impl<'a, MemProvider: PoolProvider> PoolGuard<'a, MemProvider> {
         }
     }

-    pub fn read(&self, buf: &mut [u8]) -> Result<usize, StoreError> {
+    pub fn read(&self, buf: &mut [u8]) -> Result<usize, PoolError> {
         self.pool.read(&self.addr, buf)
     }

     #[cfg(feature = "alloc")]
-    pub fn read_as_vec(&self) -> Result<alloc::vec::Vec<u8>, StoreError> {
+    pub fn read_as_vec(&self) -> Result<alloc::vec::Vec<u8>, PoolError> {
         self.pool.read_as_vec(&self.addr)
     }
@@ -334,19 +331,19 @@ pub struct PoolRwGuard<'a, MemProvider: PoolProvider + ?Sized> {
 }

 impl<'a, MemProvider: PoolProvider> PoolRwGuard<'a, MemProvider> {
-    pub fn new(pool: &'a mut MemProvider, addr: StoreAddr) -> Self {
+    pub fn new(pool: &'a mut MemProvider, addr: PoolAddr) -> Self {
         Self {
             guard: PoolGuard::new(pool, addr),
         }
     }

-    pub fn update<U: FnMut(&mut [u8])>(&mut self, updater: &mut U) -> Result<(), StoreError> {
+    pub fn update<U: FnMut(&mut [u8])>(&mut self, updater: &mut U) -> Result<(), PoolError> {
         self.guard.pool.modify(&self.guard.addr, updater)
     }

     delegate!(
         to self.guard {
-            pub fn read(&self, buf: &mut [u8]) -> Result<usize, StoreError>;
+            pub fn read(&self, buf: &mut [u8]) -> Result<usize, PoolError>;
             /// Releasing the pool guard will disable the automatic deletion of the data when the guard
             /// is dropped.
             pub fn release(&mut self);
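
Continuing the pool sketch above, an illustrative use of the RAII guard API: dropping the guard without calling `release()` deletes the entry from the pool automatically:

```
use satrs::pool::PoolProviderWithGuards;

let addr = pool.add(&[0xCA, 0xFE]).expect("adding data failed");
{
    // The guard deletes the entry on drop unless release() is called.
    let read_guard = pool.read_with_guard(addr);
    assert_eq!(read_guard.read_as_vec().expect("read failed"), vec![0xCA, 0xFE]);
}
assert!(!pool.has_element_at(&addr).expect("address check failed"));
```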
@@ -357,7 +354,7 @@ impl<'a, MemProvider: PoolProvider> PoolRwGuard<'a, MemProvider> {
 #[cfg(feature = "alloc")]
 mod alloc_mod {
     use super::{PoolGuard, PoolProvider, PoolProviderWithGuards, PoolRwGuard, StaticPoolAddr};
-    use crate::pool::{NumBlocks, StoreAddr, StoreError, StoreIdError};
+    use crate::pool::{NumBlocks, PoolAddr, PoolError, StoreIdError};
     use alloc::vec;
     use alloc::vec::Vec;
     use spacepackets::ByteConversionError;
@@ -422,7 +419,7 @@ mod alloc_mod {
     /// fitting subpool is full. This might be added in the future.
     ///
     /// Transactions with the [pool][StaticMemoryPool] are done using a generic
-    /// [address][StoreAddr] type. Adding any data to the pool will yield a store address.
+    /// [address][PoolAddr] type. Adding any data to the pool will yield a store address.
     /// Modification and read operations are done using a reference to a store address. Deletion
     /// will consume the store address.
     pub struct StaticMemoryPool {
@@ -452,41 +449,41 @@ mod alloc_mod {
             local_pool
         }

-        fn addr_check(&self, addr: &StaticPoolAddr) -> Result<usize, StoreError> {
+        fn addr_check(&self, addr: &StaticPoolAddr) -> Result<usize, PoolError> {
             self.validate_addr(addr)?;
             let pool_idx = addr.pool_idx as usize;
             let size_list = self.sizes_lists.get(pool_idx).unwrap();
             let curr_size = size_list[addr.packet_idx as usize];
             if curr_size == STORE_FREE {
-                return Err(StoreError::DataDoesNotExist(StoreAddr::from(*addr)));
+                return Err(PoolError::DataDoesNotExist(PoolAddr::from(*addr)));
             }
             Ok(curr_size)
         }

-        fn validate_addr(&self, addr: &StaticPoolAddr) -> Result<(), StoreError> {
+        fn validate_addr(&self, addr: &StaticPoolAddr) -> Result<(), PoolError> {
             let pool_idx = addr.pool_idx as usize;
             if pool_idx >= self.pool_cfg.cfg.len() {
-                return Err(StoreError::InvalidStoreId(
+                return Err(PoolError::InvalidStoreId(
                     StoreIdError::InvalidSubpool(addr.pool_idx),
-                    Some(StoreAddr::from(*addr)),
+                    Some(PoolAddr::from(*addr)),
                 ));
             }
             if addr.packet_idx >= self.pool_cfg.cfg[addr.pool_idx as usize].0 {
-                return Err(StoreError::InvalidStoreId(
+                return Err(PoolError::InvalidStoreId(
                     StoreIdError::InvalidPacketIdx(addr.packet_idx),
-                    Some(StoreAddr::from(*addr)),
+                    Some(PoolAddr::from(*addr)),
                 ));
             }
             Ok(())
         }

-        fn reserve(&mut self, data_len: usize) -> Result<StaticPoolAddr, StoreError> {
+        fn reserve(&mut self, data_len: usize) -> Result<StaticPoolAddr, PoolError> {
             let mut subpool_idx = self.find_subpool(data_len, 0)?;

             if self.pool_cfg.spill_to_higher_subpools {
-                while let Err(StoreError::StoreFull(_)) = self.find_empty(subpool_idx) {
+                while let Err(PoolError::StoreFull(_)) = self.find_empty(subpool_idx) {
                     if (subpool_idx + 1) as usize == self.sizes_lists.len() {
-                        return Err(StoreError::StoreFull(subpool_idx));
+                        return Err(PoolError::StoreFull(subpool_idx));
                     }
                     subpool_idx += 1;
                 }
@@ -500,7 +497,7 @@ mod alloc_mod {
             })
         }

-        fn find_subpool(&self, req_size: usize, start_at_subpool: u16) -> Result<u16, StoreError> {
+        fn find_subpool(&self, req_size: usize, start_at_subpool: u16) -> Result<u16, PoolError> {
             for (i, &(_, elem_size)) in self.pool_cfg.cfg.iter().enumerate() {
                 if i < start_at_subpool as usize {
                     continue;
@@ -509,21 +506,21 @@ mod alloc_mod {
                     return Ok(i as u16);
                 }
             }
-            Err(StoreError::DataTooLarge(req_size))
+            Err(PoolError::DataTooLarge(req_size))
         }

-        fn write(&mut self, addr: &StaticPoolAddr, data: &[u8]) -> Result<(), StoreError> {
-            let packet_pos = self.raw_pos(addr).ok_or(StoreError::InternalError(0))?;
+        fn write(&mut self, addr: &StaticPoolAddr, data: &[u8]) -> Result<(), PoolError> {
+            let packet_pos = self.raw_pos(addr).ok_or(PoolError::InternalError(0))?;
             let subpool = self
                 .pool
                 .get_mut(addr.pool_idx as usize)
-                .ok_or(StoreError::InternalError(1))?;
+                .ok_or(PoolError::InternalError(1))?;
             let pool_slice = &mut subpool[packet_pos..packet_pos + data.len()];
             pool_slice.copy_from_slice(data);
             Ok(())
         }

-        fn find_empty(&mut self, subpool: u16) -> Result<(u16, &mut usize), StoreError> {
+        fn find_empty(&mut self, subpool: u16) -> Result<(u16, &mut usize), PoolError> {
             if let Some(size_list) = self.sizes_lists.get_mut(subpool as usize) {
                 for (i, elem_size) in size_list.iter_mut().enumerate() {
                     if *elem_size == STORE_FREE {
@@ -531,12 +528,12 @@ mod alloc_mod {
                     }
                 }
             } else {
-                return Err(StoreError::InvalidStoreId(
+                return Err(PoolError::InvalidStoreId(
                     StoreIdError::InvalidSubpool(subpool),
                     None,
                 ));
             }
-            Err(StoreError::StoreFull(subpool))
+            Err(PoolError::StoreFull(subpool))
         }

         fn raw_pos(&self, addr: &StaticPoolAddr) -> Option<usize> {
@@ -546,10 +543,10 @@ mod alloc_mod {
         }

     impl PoolProvider for StaticMemoryPool {
-        fn add(&mut self, data: &[u8]) -> Result<StoreAddr, StoreError> {
+        fn add(&mut self, data: &[u8]) -> Result<PoolAddr, PoolError> {
             let data_len = data.len();
             if data_len > POOL_MAX_SIZE {
-                return Err(StoreError::DataTooLarge(data_len));
+                return Err(PoolError::DataTooLarge(data_len));
             }
             let addr = self.reserve(data_len)?;
             self.write(&addr, data)?;
@@ -560,9 +557,9 @@ mod alloc_mod {
             &mut self,
             len: usize,
             mut writer: W,
-        ) -> Result<StoreAddr, StoreError> {
+        ) -> Result<PoolAddr, PoolError> {
             if len > POOL_MAX_SIZE {
-                return Err(StoreError::DataTooLarge(len));
+                return Err(PoolError::DataTooLarge(len));
             }
             let addr = self.reserve(len)?;
             let raw_pos = self.raw_pos(&addr).unwrap();
@@ -574,9 +571,9 @@ mod alloc_mod {
         fn modify<U: FnMut(&mut [u8])>(
             &mut self,
-            addr: &StoreAddr,
+            addr: &PoolAddr,
             mut updater: U,
-        ) -> Result<(), StoreError> {
+        ) -> Result<(), PoolError> {
             let addr = StaticPoolAddr::from(*addr);
             let curr_size = self.addr_check(&addr)?;
             let raw_pos = self.raw_pos(&addr).unwrap();
@@ -586,7 +583,7 @@ mod alloc_mod {
             Ok(())
         }

-        fn read(&self, addr: &StoreAddr, buf: &mut [u8]) -> Result<usize, StoreError> {
+        fn read(&self, addr: &PoolAddr, buf: &mut [u8]) -> Result<usize, PoolError> {
             let addr = StaticPoolAddr::from(*addr);
             let curr_size = self.addr_check(&addr)?;
             if buf.len() < curr_size {
@@ -604,7 +601,7 @@ mod alloc_mod {
             Ok(curr_size)
         }

-        fn delete(&mut self, addr: StoreAddr) -> Result<(), StoreError> {
+        fn delete(&mut self, addr: PoolAddr) -> Result<(), PoolError> {
             let addr = StaticPoolAddr::from(addr);
             self.addr_check(&addr)?;
             let block_size = self.pool_cfg.cfg.get(addr.pool_idx as usize).unwrap().1;
@@ -617,7 +614,7 @@ mod alloc_mod {
             Ok(())
         }

-        fn has_element_at(&self, addr: &StoreAddr) -> Result<bool, StoreError> {
+        fn has_element_at(&self, addr: &PoolAddr) -> Result<bool, PoolError> {
             let addr = StaticPoolAddr::from(*addr);
             self.validate_addr(&addr)?;
             let pool_idx = addr.pool_idx as usize;
@@ -629,7 +626,7 @@ mod alloc_mod {
             Ok(true)
         }

-        fn len_of_data(&self, addr: &StoreAddr) -> Result<usize, StoreError> {
+        fn len_of_data(&self, addr: &PoolAddr) -> Result<usize, PoolError> {
             let addr = StaticPoolAddr::from(*addr);
             self.validate_addr(&addr)?;
             let pool_idx = addr.pool_idx as usize;
@@ -643,11 +640,11 @@ mod alloc_mod {
     }

     impl PoolProviderWithGuards for StaticMemoryPool {
-        fn modify_with_guard(&mut self, addr: StoreAddr) -> PoolRwGuard<Self> {
+        fn modify_with_guard(&mut self, addr: PoolAddr) -> PoolRwGuard<Self> {
             PoolRwGuard::new(self, addr)
         }

-        fn read_with_guard(&mut self, addr: StoreAddr) -> PoolGuard<Self> {
+        fn read_with_guard(&mut self, addr: PoolAddr) -> PoolGuard<Self> {
             PoolGuard::new(self, addr)
         }
     }
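
On the same hypothetical pool instance as above: `free_element` reserves a block and hands a mutable slice to a writer closure, while `modify` applies an updater closure to an existing block in place:

```
// Reserve a 4 byte block and fill it in place.
let addr = pool
    .free_element(4, |block| block.copy_from_slice(&[0xDE, 0xAD, 0xBE, 0xEF]))
    .expect("reserving free element failed");

// Patch the stored data without copying it out first.
pool.modify(&addr, |block| block[0] = 0x00).expect("modification failed");

let mut buf = [0u8; 4];
pool.read(&addr, &mut buf).expect("read failed");
assert_eq!(buf, [0x00, 0xAD, 0xBE, 0xEF]);
```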
@@ -656,8 +653,8 @@ mod alloc_mod {
 #[cfg(test)]
 mod tests {
     use crate::pool::{
-        PoolGuard, PoolProvider, PoolProviderWithGuards, PoolRwGuard, StaticMemoryPool,
-        StaticPoolAddr, StaticPoolConfig, StoreError, StoreIdError, POOL_MAX_SIZE,
+        PoolError, PoolGuard, PoolProvider, PoolProviderWithGuards, PoolRwGuard, StaticMemoryPool,
+        StaticPoolAddr, StaticPoolConfig, StoreIdError, POOL_MAX_SIZE,
     };
     use std::vec;
@@ -781,7 +778,7 @@ mod tests {
         let res = local_pool.free_element(8, |_| {});
         assert!(res.is_err());
         let err = res.unwrap_err();
-        assert_eq!(err, StoreError::StoreFull(1));
+        assert_eq!(err, PoolError::StoreFull(1));

         // Verify that the two deletions are successful
         assert!(local_pool.delete(addr0).is_ok());
@@ -803,7 +800,7 @@ mod tests {
         assert!(res.is_err());
         assert!(matches!(
             res.unwrap_err(),
-            StoreError::DataDoesNotExist { .. }
+            PoolError::DataDoesNotExist { .. }
         ));
     }
@@ -816,8 +813,8 @@ mod tests {
         let res = local_pool.add(&test_buf);
         assert!(res.is_err());
         let err = res.unwrap_err();
-        assert!(matches!(err, StoreError::StoreFull { .. }));
-        if let StoreError::StoreFull(subpool) = err {
+        assert!(matches!(err, PoolError::StoreFull { .. }));
+        if let PoolError::StoreFull(subpool) = err {
             assert_eq!(subpool, 2);
         }
     }
@@ -835,7 +832,7 @@ mod tests {
         let err = res.unwrap_err();
         assert!(matches!(
             err,
-            StoreError::InvalidStoreId(StoreIdError::InvalidSubpool(3), Some(_))
+            PoolError::InvalidStoreId(StoreIdError::InvalidSubpool(3), Some(_))
         ));
     }
@@ -852,7 +849,7 @@ mod tests {
         let err = res.unwrap_err();
         assert!(matches!(
             err,
-            StoreError::InvalidStoreId(StoreIdError::InvalidPacketIdx(1), Some(_))
+            PoolError::InvalidStoreId(StoreIdError::InvalidPacketIdx(1), Some(_))
         ));
     }
@@ -863,7 +860,7 @@ mod tests {
         let res = local_pool.add(&data_too_large);
         assert!(res.is_err());
         let err = res.unwrap_err();
-        assert_eq!(err, StoreError::DataTooLarge(20));
+        assert_eq!(err, PoolError::DataTooLarge(20));
     }

     #[test]
@@ -871,10 +868,7 @@ mod tests {
         let mut local_pool = basic_small_pool();
         let res = local_pool.free_element(POOL_MAX_SIZE + 1, |_| {});
         assert!(res.is_err());
-        assert_eq!(
-            res.unwrap_err(),
-            StoreError::DataTooLarge(POOL_MAX_SIZE + 1)
-        );
+        assert_eq!(res.unwrap_err(), PoolError::DataTooLarge(POOL_MAX_SIZE + 1));
     }

     #[test]
@@ -883,7 +877,7 @@ mod tests {
         // Try to request a slot which is too large
         let res = local_pool.free_element(20, |_| {});
         assert!(res.is_err());
-        assert_eq!(res.unwrap_err(), StoreError::DataTooLarge(20));
+        assert_eq!(res.unwrap_err(), PoolError::DataTooLarge(20));
     }

     #[test]
@@ -1003,7 +997,7 @@ mod tests {
         let should_fail = local_pool.free_element(8, |_| {});
         assert!(should_fail.is_err());
         if let Err(err) = should_fail {
-            assert_eq!(err, StoreError::StoreFull(1));
+            assert_eq!(err, PoolError::StoreFull(1));
         } else {
             panic!("unexpected store address");
         }
@@ -1034,7 +1028,7 @@ mod tests {
         let should_fail = local_pool.free_element(8, |_| {});
         assert!(should_fail.is_err());
         if let Err(err) = should_fail {
-            assert_eq!(err, StoreError::StoreFull(2));
+            assert_eq!(err, PoolError::StoreFull(2));
         } else {
             panic!("unexpected store address");
         }


@@ -195,610 +195,7 @@ pub mod std_mod {
         mpsc::SyncSender<GenericMessage<ActionRequest>>,
         mpsc::Receiver<GenericMessage<ActionReplyPus>>,
     >;

-    /*
-    pub type ModeRequestorAndHandlerMpsc = ModeInterface<
-        mpsc::Sender<GenericMessage<ModeRequest>>,
-        mpsc::Receiver<GenericMessage<ModeReply>>,
-        mpsc::Sender<GenericMessage<ModeReply>>,
-        mpsc::Receiver<GenericMessage<ModeRequest>>,
-    >;
-
-    pub type ModeRequestorAndHandlerMpscBounded = ModeInterface<
-        mpsc::SyncSender<GenericMessage<ModeRequest>>,
-        mpsc::Receiver<GenericMessage<ModeReply>>,
-        mpsc::SyncSender<GenericMessage<ModeReply>>,
-        mpsc::Receiver<GenericMessage<ModeRequest>>,
-    >;
-    */
 }

 #[cfg(test)]
-mod tests {
+mod tests {}
/*
use core::{cell::RefCell, time::Duration};
use std::{sync::mpsc, time::SystemTimeError};
use alloc::{collections::VecDeque, vec::Vec};
use delegate::delegate;
use spacepackets::{
ecss::{
tc::{PusTcCreator, PusTcReader},
tm::PusTmReader,
PusPacket,
},
time::{cds, TimeWriter},
CcsdsPacket,
};
use crate::{
action::ActionRequestVariant,
params::{self, ParamsRaw, WritableToBeBytes},
pus::{
tests::{
PusServiceHandlerWithVecCommon, PusTestHarness, SimplePusPacketHandler,
TestConverter, TestRouter, APP_DATA_TOO_SHORT,
},
verification::{
self,
tests::{SharedVerificationMap, TestVerificationReporter, VerificationStatus},
FailParams, TcStateAccepted, TcStateNone, TcStateStarted,
VerificationReportingProvider,
},
EcssTcInMemConverter, EcssTcInVecConverter, EcssTmtcError, GenericRoutingError,
MpscTcReceiver, PusPacketHandlerResult, PusPacketHandlingError, PusRequestRouter,
PusServiceHelper, PusTcToRequestConverter, TmAsVecSenderWithMpsc,
},
};
use super::*;
impl<Request> PusRequestRouter<Request> for TestRouter<Request> {
type Error = GenericRoutingError;
fn route(
&self,
target_id: TargetId,
request: Request,
_token: VerificationToken<TcStateAccepted>,
) -> Result<(), Self::Error> {
self.routing_requests
.borrow_mut()
.push_back((target_id, request));
self.check_for_injected_error()
}
fn handle_error(
&self,
target_id: TargetId,
token: VerificationToken<TcStateAccepted>,
tc: &PusTcReader,
error: Self::Error,
time_stamp: &[u8],
verif_reporter: &impl VerificationReportingProvider,
) {
self.routing_errors
.borrow_mut()
.push_back((target_id, error));
}
}
impl PusTcToRequestConverter<ActionRequest> for TestConverter<8> {
type Error = PusPacketHandlingError;
fn convert(
&mut self,
token: VerificationToken<TcStateAccepted>,
tc: &PusTcReader,
time_stamp: &[u8],
verif_reporter: &impl VerificationReportingProvider,
) -> Result<(TargetId, ActionRequest), Self::Error> {
self.conversion_request.push_back(tc.raw_data().to_vec());
self.check_service(tc)?;
let target_id = tc.apid();
if tc.user_data().len() < 4 {
verif_reporter
.start_failure(
token,
FailParams::new(
time_stamp,
&APP_DATA_TOO_SHORT,
(tc.user_data().len() as u32).to_be_bytes().as_ref(),
),
)
.expect("start success failure");
return Err(PusPacketHandlingError::NotEnoughAppData {
expected: 4,
found: tc.user_data().len(),
});
}
if tc.subservice() == 1 {
verif_reporter
.start_success(token, time_stamp)
.expect("start success failure");
return Ok((
target_id.into(),
ActionRequest {
action_id: u32::from_be_bytes(tc.user_data()[0..4].try_into().unwrap()),
variant: ActionRequestVariant::VecData(tc.user_data()[4..].to_vec()),
},
));
}
Err(PusPacketHandlingError::InvalidAppData(
"unexpected app data".into(),
))
}
}
pub struct PusDynRequestHandler<const SERVICE: u8, Request> {
srv_helper: PusServiceHelper<
MpscTcReceiver,
TmAsVecSenderWithMpsc,
EcssTcInVecConverter,
TestVerificationReporter,
>,
request_converter: TestConverter<SERVICE>,
request_router: TestRouter<Request>,
}
struct Pus8RequestTestbenchWithVec {
common: PusServiceHandlerWithVecCommon<TestVerificationReporter>,
handler: PusDynRequestHandler<8, ActionRequest>,
}
impl Pus8RequestTestbenchWithVec {
pub fn new() -> Self {
let (common, srv_helper) = PusServiceHandlerWithVecCommon::new_with_test_verif_sender();
Self {
common,
handler: PusDynRequestHandler {
srv_helper,
request_converter: TestConverter::default(),
request_router: TestRouter::default(),
},
}
}
delegate! {
to self.handler.request_converter {
pub fn check_next_conversion(&mut self, tc: &PusTcCreator);
}
}
delegate! {
to self.handler.request_router {
pub fn retrieve_next_request(&mut self) -> (TargetId, ActionRequest);
}
}
delegate! {
to self.handler.request_router {
pub fn retrieve_next_routing_error(&mut self) -> (TargetId, GenericRoutingError);
}
}
}
impl PusTestHarness for Pus8RequestTestbenchWithVec {
delegate! {
to self.common {
fn send_tc(&mut self, tc: &PusTcCreator) -> VerificationToken<TcStateAccepted>;
fn read_next_tm(&mut self) -> PusTmReader<'_>;
fn check_no_tm_available(&self) -> bool;
fn check_next_verification_tm(
&self,
subservice: u8,
expected_request_id: verification::RequestId,
);
}
}
}
impl SimplePusPacketHandler for Pus8RequestTestbenchWithVec {
fn handle_one_tc(&mut self) -> Result<PusPacketHandlerResult, PusPacketHandlingError> {
let possible_packet = self.handler.srv_helper.retrieve_and_accept_next_packet()?;
if possible_packet.is_none() {
return Ok(PusPacketHandlerResult::Empty);
}
let ecss_tc_and_token = possible_packet.unwrap();
let tc = self
.handler
.srv_helper
.tc_in_mem_converter
.convert_ecss_tc_in_memory_to_reader(&ecss_tc_and_token.tc_in_memory)?;
let time_stamp = cds::TimeProvider::from_now_with_u16_days()
.expect("timestamp generation failed")
.to_vec()
.unwrap();
let (target_id, action_request) = self.handler.request_converter.convert(
ecss_tc_and_token.token,
&tc,
&time_stamp,
&self.handler.srv_helper.common.verification_handler,
)?;
if let Err(e) = self.handler.request_router.route(
target_id,
action_request,
ecss_tc_and_token.token,
) {
self.handler.request_router.handle_error(
target_id,
ecss_tc_and_token.token,
&tc,
e.clone(),
&time_stamp,
&self.handler.srv_helper.common.verification_handler,
);
return Err(e.into());
}
Ok(PusPacketHandlerResult::RequestHandled)
}
}
const TIMEOUT_ERROR_CODE: ResultU16 = ResultU16::new(1, 2);
const COMPLETION_ERROR_CODE: ResultU16 = ResultU16::new(2, 0);
const COMPLETION_ERROR_CODE_STEP: ResultU16 = ResultU16::new(2, 1);
#[derive(Default)]
pub struct TestReplyHandlerHook {
pub unexpected_replies: VecDeque<GenericActionReplyPus>,
pub timeouts: RefCell<VecDeque<ActivePusActionRequest>>,
}
impl ReplyHandlerHook<ActivePusActionRequest, ActionReplyPusWithActionId> for TestReplyHandlerHook {
fn handle_unexpected_reply(&mut self, reply: &GenericActionReplyPus) {
self.unexpected_replies.push_back(reply.clone());
}
fn timeout_callback(&self, active_request: &ActivePusActionRequest) {
self.timeouts.borrow_mut().push_back(active_request.clone());
}
fn timeout_error_code(&self) -> ResultU16 {
TIMEOUT_ERROR_CODE
}
}
pub struct Pus8ReplyTestbench {
verif_reporter: TestVerificationReporter,
#[allow(dead_code)]
ecss_tm_receiver: mpsc::Receiver<Vec<u8>>,
handler: PusService8ReplyHandler<
TestVerificationReporter,
DefaultActiveActionRequestMap,
TestReplyHandlerHook,
mpsc::Sender<Vec<u8>>,
>,
}
impl Pus8ReplyTestbench {
pub fn new(normal_ctor: bool) -> Self {
let reply_handler_hook = TestReplyHandlerHook::default();
let shared_verif_map = SharedVerificationMap::default();
let test_verif_reporter = TestVerificationReporter::new(shared_verif_map.clone());
let (ecss_tm_sender, ecss_tm_receiver) = mpsc::channel();
let reply_handler = if normal_ctor {
PusService8ReplyHandler::new_from_now_with_default_map(
test_verif_reporter.clone(),
128,
reply_handler_hook,
ecss_tm_sender,
)
.expect("creating reply handler failed")
} else {
PusService8ReplyHandler::new_from_now(
test_verif_reporter.clone(),
DefaultActiveActionRequestMap::default(),
128,
reply_handler_hook,
ecss_tm_sender,
)
.expect("creating reply handler failed")
};
Self {
verif_reporter: test_verif_reporter,
ecss_tm_receiver,
handler: reply_handler,
}
}
pub fn init_handling_for_request(
&mut self,
request_id: RequestId,
_action_id: ActionId,
) -> VerificationToken<TcStateStarted> {
assert!(!self.handler.request_active(request_id));
// let action_req = ActionRequest::new(action_id, ActionRequestVariant::NoData);
let token = self.add_tc_with_req_id(request_id.into());
let token = self
.verif_reporter
.acceptance_success(token, &[])
.expect("acceptance success failure");
let token = self
.verif_reporter
.start_success(token, &[])
.expect("start success failure");
let verif_info = self
.verif_reporter
.verification_info(&verification::RequestId::from(request_id))
.expect("no verification info found");
assert!(verif_info.started.expect("request was not started"));
assert!(verif_info.accepted.expect("request was not accepted"));
token
}
pub fn next_unrequested_reply(&self) -> Option<GenericActionReplyPus> {
self.handler.user_hook.unexpected_replies.front().cloned()
}
pub fn assert_request_completion_success(&self, step: Option<u16>, request_id: RequestId) {
let verif_info = self
.verif_reporter
.verification_info(&verification::RequestId::from(request_id))
.expect("no verification info found");
self.assert_request_completion_common(request_id, &verif_info, step, true);
}
pub fn assert_request_completion_failure(
&self,
step: Option<u16>,
request_id: RequestId,
fail_enum: ResultU16,
fail_data: &[u8],
) {
let verif_info = self
.verif_reporter
.verification_info(&verification::RequestId::from(request_id))
.expect("no verification info found");
self.assert_request_completion_common(request_id, &verif_info, step, false);
assert_eq!(verif_info.fail_enum.unwrap(), fail_enum.raw() as u64);
assert_eq!(verif_info.failure_data.unwrap(), fail_data);
}
pub fn assert_request_completion_common(
&self,
request_id: RequestId,
verif_info: &VerificationStatus,
step: Option<u16>,
completion_success: bool,
) {
if let Some(step) = step {
assert!(verif_info.step_status.is_some());
assert!(verif_info.step_status.unwrap());
assert_eq!(step, verif_info.step);
}
assert_eq!(
verif_info.completed.expect("request is not completed"),
completion_success
);
assert!(!self.handler.request_active(request_id));
}
pub fn assert_request_step_failure(&self, step: u16, request_id: RequestId) {
let verif_info = self
.verif_reporter
.verification_info(&verification::RequestId::from(request_id))
.expect("no verification info found");
assert!(verif_info.step_status.is_some());
assert!(!verif_info.step_status.unwrap());
assert_eq!(step, verif_info.step);
}
pub fn add_routed_request(
&mut self,
request_id: verification::RequestId,
target_id: TargetId,
action_id: ActionId,
token: VerificationToken<TcStateStarted>,
timeout: Duration,
) {
if self.handler.request_active(request_id.into()) {
panic!("request already present");
}
self.handler
.add_routed_action_request(request_id, target_id, action_id, token, timeout);
if !self.handler.request_active(request_id.into()) {
panic!("request should be active now");
}
}
delegate! {
to self.handler {
pub fn request_active(&self, request_id: RequestId) -> bool;
pub fn handle_action_reply(
&mut self,
action_reply_with_ids: GenericMessage<ActionReplyPusWithActionId>,
time_stamp: &[u8]
) -> Result<(), EcssTmtcError>;
pub fn update_time_from_now(&mut self) -> Result<(), SystemTimeError>;
pub fn check_for_timeouts(&mut self, time_stamp: &[u8]) -> Result<(), EcssTmtcError>;
}
to self.verif_reporter {
fn add_tc_with_req_id(&mut self, req_id: verification::RequestId) -> VerificationToken<TcStateNone>;
}
}
}
#[test]
fn test_reply_handler_completion_success() {
let mut reply_testbench = Pus8ReplyTestbench::new(true);
let sender_id = 0x06;
let request_id = 0x02;
let target_id = 0x05;
let action_id = 0x03;
let token = reply_testbench.init_handling_for_request(request_id, action_id);
reply_testbench.add_routed_request(
request_id.into(),
target_id,
action_id,
token,
Duration::from_millis(1),
);
assert!(reply_testbench.request_active(request_id));
let action_reply = GenericMessage::new(
request_id,
sender_id,
ActionReplyPusWithActionId {
action_id,
variant: ActionReplyPus::Completed,
},
);
reply_testbench
.handle_action_reply(action_reply, &[])
.expect("reply handling failure");
reply_testbench.assert_request_completion_success(None, request_id);
}
#[test]
fn test_reply_handler_step_success() {
let mut reply_testbench = Pus8ReplyTestbench::new(false);
let request_id = 0x02;
let target_id = 0x05;
let action_id = 0x03;
let token = reply_testbench.init_handling_for_request(request_id, action_id);
reply_testbench.add_routed_request(
request_id.into(),
target_id,
action_id,
token,
Duration::from_millis(1),
);
let action_reply = GenericActionReplyPus::new_action_reply(
request_id,
action_id,
action_id,
ActionReplyPus::StepSuccess { step: 1 },
);
reply_testbench
.handle_action_reply(action_reply, &[])
.expect("reply handling failure");
let action_reply = GenericActionReplyPus::new_action_reply(
request_id,
action_id,
action_id,
ActionReplyPus::Completed,
);
reply_testbench
.handle_action_reply(action_reply, &[])
.expect("reply handling failure");
reply_testbench.assert_request_completion_success(Some(1), request_id);
}
#[test]
fn test_reply_handler_completion_failure() {
let mut reply_testbench = Pus8ReplyTestbench::new(true);
let sender_id = 0x01;
let request_id = 0x02;
let target_id = 0x05;
let action_id = 0x03;
let token = reply_testbench.init_handling_for_request(request_id, action_id);
reply_testbench.add_routed_request(
request_id.into(),
target_id,
action_id,
token,
Duration::from_millis(1),
);
let params_raw = ParamsRaw::U32(params::U32(5));
let action_reply = GenericActionReplyPus::new_action_reply(
request_id,
sender_id,
action_id,
ActionReplyPus::CompletionFailed {
error_code: COMPLETION_ERROR_CODE,
params: params_raw.into(),
},
);
reply_testbench
.handle_action_reply(action_reply, &[])
.expect("reply handling failure");
reply_testbench.assert_request_completion_failure(
None,
request_id,
COMPLETION_ERROR_CODE,
&params_raw.to_vec().unwrap(),
);
}
#[test]
fn test_reply_handler_step_failure() {
let mut reply_testbench = Pus8ReplyTestbench::new(false);
let sender_id = 0x01;
let request_id = 0x02;
let target_id = 0x05;
let action_id = 0x03;
let token = reply_testbench.init_handling_for_request(request_id, action_id);
reply_testbench.add_routed_request(
request_id.into(),
target_id,
action_id,
token,
Duration::from_millis(1),
);
let action_reply = GenericActionReplyPus::new_action_reply(
request_id,
sender_id,
action_id,
ActionReplyPus::StepFailed {
error_code: COMPLETION_ERROR_CODE_STEP,
step: 2,
params: ParamsRaw::U32(crate::params::U32(5)).into(),
},
);
reply_testbench
.handle_action_reply(action_reply, &[])
.expect("reply handling failure");
reply_testbench.assert_request_step_failure(2, request_id);
}
#[test]
fn test_reply_handler_timeout_handling() {
let mut reply_testbench = Pus8ReplyTestbench::new(true);
let request_id = 0x02;
let target_id = 0x06;
let action_id = 0x03;
let token = reply_testbench.init_handling_for_request(request_id, action_id);
reply_testbench.add_routed_request(
request_id.into(),
target_id,
action_id,
token,
Duration::from_millis(1),
);
let timeout_param = Duration::from_millis(1).as_millis() as u64;
let timeout_param_raw = timeout_param.to_be_bytes();
std::thread::sleep(Duration::from_millis(2));
reply_testbench
.update_time_from_now()
.expect("time update failure");
reply_testbench.check_for_timeouts(&[]).unwrap();
reply_testbench.assert_request_completion_failure(
None,
request_id,
TIMEOUT_ERROR_CODE,
&timeout_param_raw,
);
}
#[test]
fn test_unrequested_reply() {
let mut reply_testbench = Pus8ReplyTestbench::new(true);
let sender_id = 0x01;
let request_id = 0x02;
let action_id = 0x03;
let action_reply = GenericActionReplyPus::new_action_reply(
request_id,
sender_id,
action_id,
ActionReplyPus::Completed,
);
reply_testbench
.handle_action_reply(action_reply, &[])
.expect("reply handling failure");
let reply = reply_testbench.next_unrequested_reply();
assert!(reply.is_some());
let reply = reply.unwrap();
assert_eq!(reply.message.action_id, action_id);
assert_eq!(reply.request_id, request_id);
assert_eq!(reply.message.variant, ActionReplyPus::Completed);
}
*/
}


@@ -132,7 +132,7 @@ impl EventReportCreator {
 #[cfg(feature = "alloc")]
 mod alloc_mod {
     use super::*;
-    use crate::pus::{EcssTmSenderCore, EcssTmtcError};
+    use crate::pus::{EcssTmSender, EcssTmtcError};
     use crate::ComponentId;
     use alloc::vec;
     use alloc::vec::Vec;
@@ -194,7 +194,7 @@ mod alloc_mod {
         pub fn event_info(
             &self,
-            sender: &(impl EcssTmSenderCore + ?Sized),
+            sender: &(impl EcssTmSender + ?Sized),
             time_stamp: &[u8],
             event_id: impl EcssEnumeration,
             params: Option<&[u8]>,
@@ -211,7 +211,7 @@ mod alloc_mod {
         pub fn event_low_severity(
             &self,
-            sender: &(impl EcssTmSenderCore + ?Sized),
+            sender: &(impl EcssTmSender + ?Sized),
             time_stamp: &[u8],
             event_id: impl EcssEnumeration,
             params: Option<&[u8]>,
@@ -228,7 +228,7 @@ mod alloc_mod {
         pub fn event_medium_severity(
             &self,
-            sender: &(impl EcssTmSenderCore + ?Sized),
+            sender: &(impl EcssTmSender + ?Sized),
             time_stamp: &[u8],
             event_id: impl EcssEnumeration,
             params: Option<&[u8]>,
@@ -245,7 +245,7 @@ mod alloc_mod {
         pub fn event_high_severity(
             &self,
-            sender: &(impl EcssTmSenderCore + ?Sized),
+            sender: &(impl EcssTmSender + ?Sized),
             time_stamp: &[u8],
             event_id: impl EcssEnumeration,
             params: Option<&[u8]>,
@@ -268,7 +268,7 @@ mod tests {
     use crate::events::{EventU32, Severity};
     use crate::pus::test_util::TEST_COMPONENT_ID_0;
     use crate::pus::tests::CommonTmInfo;
-    use crate::pus::{ChannelWithId, EcssTmSenderCore, EcssTmtcError, PusTmVariant};
+    use crate::pus::{ChannelWithId, EcssTmSender, EcssTmtcError, PusTmVariant};
     use crate::ComponentId;
     use spacepackets::ecss::PusError;
     use spacepackets::ByteConversionError;
@@ -301,7 +301,7 @@
         }
     }

-    impl EcssTmSenderCore for TestSender {
+    impl EcssTmSender for TestSender {
         fn send_tm(&self, sender_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> {
             match tm {
                 PusTmVariant::InStore(_) => {


@@ -10,7 +10,7 @@ use hashbrown::HashSet;
 pub use crate::pus::event::EventReporter;
 use crate::pus::verification::TcStateToken;
 #[cfg(feature = "alloc")]
-use crate::pus::EcssTmSenderCore;
+use crate::pus::EcssTmSender;
 use crate::pus::EcssTmtcError;
 #[cfg(feature = "alloc")]
 #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
@@ -178,7 +178,7 @@ pub mod alloc_mod {
         pub fn generate_pus_event_tm_generic(
             &self,
-            sender: &(impl EcssTmSenderCore + ?Sized),
+            sender: &(impl EcssTmSender + ?Sized),
             time_stamp: &[u8],
             event: Event,
             params: Option<&[u8]>,
@@ -240,7 +240,7 @@ pub mod alloc_mod {
         pub fn generate_pus_event_tm<Severity: HasSeverity>(
             &self,
-            sender: &(impl EcssTmSenderCore + ?Sized),
+            sender: &(impl EcssTmSender + ?Sized),
             time_stamp: &[u8],
             event: EventU32TypedSev<Severity>,
             aux_data: Option<&[u8]>,
@@ -257,9 +257,8 @@ pub mod alloc_mod {
 #[cfg(test)]
 mod tests {
     use super::*;
-    use crate::events::SeverityInfo;
-    use crate::pus::PusTmAsVec;
     use crate::request::UniqueApidTargetId;
+    use crate::{events::SeverityInfo, tmtc::PacketAsVec};
     use std::sync::mpsc::{self, TryRecvError};

     const INFO_EVENT: EventU32TypedSev<SeverityInfo> =
@@ -284,7 +283,7 @@ mod tests {
     #[test]
     fn test_basic() {
         let event_man = create_basic_man_1();
-        let (event_tx, event_rx) = mpsc::channel::<PusTmAsVec>();
+        let (event_tx, event_rx) = mpsc::channel::<PacketAsVec>();
         let event_sent = event_man
             .generate_pus_event_tm(&event_tx, &EMPTY_STAMP, INFO_EVENT, None)
             .expect("Sending info event failed");
@@ -297,7 +296,7 @@ mod tests {
     #[test]
     fn test_disable_event() {
         let mut event_man = create_basic_man_2();
-        let (event_tx, event_rx) = mpsc::channel::<PusTmAsVec>();
+        let (event_tx, event_rx) = mpsc::channel::<PacketAsVec>();
         // let mut sender = TmAsVecSenderWithMpsc::new(0, "test", event_tx);
         let res = event_man.disable_tm_for_event(&LOW_SEV_EVENT);
         assert!(res.is_ok());
@@ -320,7 +319,7 @@ mod tests {
     #[test]
     fn test_reenable_event() {
         let mut event_man = create_basic_man_1();
-        let (event_tx, event_rx) = mpsc::channel::<PusTmAsVec>();
+        let (event_tx, event_rx) = mpsc::channel::<PacketAsVec>();
         let mut res = event_man.disable_tm_for_event_with_sev(&INFO_EVENT);
         assert!(res.is_ok());
         assert!(res.unwrap());


@ -9,13 +9,13 @@ use std::sync::mpsc::Sender;
use super::verification::VerificationReportingProvider; use super::verification::VerificationReportingProvider;
use super::{ use super::{
EcssTcInMemConverter, EcssTcReceiverCore, EcssTmSenderCore, GenericConversionError, EcssTcInMemConverter, EcssTcReceiver, EcssTmSender, GenericConversionError,
GenericRoutingError, PusServiceHelper, GenericRoutingError, PusServiceHelper,
}; };
pub struct PusEventServiceHandler< pub struct PusEventServiceHandler<
TcReceiver: EcssTcReceiverCore, TcReceiver: EcssTcReceiver,
TmSender: EcssTmSenderCore, TmSender: EcssTmSender,
TcInMemConverter: EcssTcInMemConverter, TcInMemConverter: EcssTcInMemConverter,
VerificationReporter: VerificationReportingProvider, VerificationReporter: VerificationReportingProvider,
> { > {
@ -25,8 +25,8 @@ pub struct PusEventServiceHandler<
} }
impl< impl<
TcReceiver: EcssTcReceiverCore, TcReceiver: EcssTcReceiver,
TmSender: EcssTmSenderCore, TmSender: EcssTmSender,
TcInMemConverter: EcssTcInMemConverter, TcInMemConverter: EcssTcInMemConverter,
VerificationReporter: VerificationReportingProvider, VerificationReporter: VerificationReportingProvider,
> PusEventServiceHandler<TcReceiver, TmSender, TcInMemConverter, VerificationReporter> > PusEventServiceHandler<TcReceiver, TmSender, TcInMemConverter, VerificationReporter>
@ -167,7 +167,8 @@ mod tests {
use crate::pus::verification::{ use crate::pus::verification::{
RequestId, VerificationReporter, VerificationReportingProvider, RequestId, VerificationReporter, VerificationReportingProvider,
}; };
use crate::pus::{GenericConversionError, MpscTcReceiver, MpscTmInSharedPoolSenderBounded}; use crate::pus::{GenericConversionError, MpscTcReceiver};
use crate::tmtc::PacketSenderWithSharedPool;
use crate::{ use crate::{
events::EventU32, events::EventU32,
pus::{ pus::{
@ -186,7 +187,7 @@ mod tests {
common: PusServiceHandlerWithSharedStoreCommon, common: PusServiceHandlerWithSharedStoreCommon,
handler: PusEventServiceHandler< handler: PusEventServiceHandler<
MpscTcReceiver, MpscTcReceiver,
MpscTmInSharedPoolSenderBounded, PacketSenderWithSharedPool,
EcssTcInSharedStoreConverter, EcssTcInSharedStoreConverter,
VerificationReporter, VerificationReporter,
>, >,
@ -212,9 +213,13 @@ mod tests {
.expect("acceptance success failure") .expect("acceptance success failure")
} }
fn send_tc(&self, token: &VerificationToken<TcStateAccepted>, tc: &PusTcCreator) {
self.common
.send_tc(self.handler.service_helper.id(), token, tc);
}
delegate! { delegate! {
to self.common { to self.common {
fn send_tc(&self, token: &VerificationToken<TcStateAccepted>, tc: &PusTcCreator);
fn read_next_tm(&mut self) -> PusTmReader<'_>; fn read_next_tm(&mut self) -> PusTmReader<'_>;
fn check_no_tm_available(&self) -> bool; fn check_no_tm_available(&self) -> bool;
fn check_next_verification_tm(&self, subservice: u8, expected_request_id: RequestId); fn check_next_verification_tm(&self, subservice: u8, expected_request_id: RequestId);


@ -2,10 +2,13 @@
//! //!
//! This module contains structures to make working with the PUS C standard easier. //! This module contains structures to make working with the PUS C standard easier.
//! The satrs-example application contains various usage examples of these components. //! The satrs-example application contains various usage examples of these components.
use crate::pool::{StoreAddr, StoreError}; use crate::pool::{PoolAddr, PoolError};
use crate::pus::verification::{TcStateAccepted, TcStateToken, VerificationToken}; use crate::pus::verification::{TcStateAccepted, TcStateToken, VerificationToken};
use crate::queue::{GenericReceiveError, GenericSendError}; use crate::queue::{GenericReceiveError, GenericSendError};
use crate::request::{GenericMessage, MessageMetadata, RequestId}; use crate::request::{GenericMessage, MessageMetadata, RequestId};
#[cfg(feature = "alloc")]
use crate::tmtc::PacketAsVec;
use crate::tmtc::PacketInPool;
use crate::ComponentId; use crate::ComponentId;
use core::fmt::{Display, Formatter}; use core::fmt::{Display, Formatter};
use core::time::Duration; use core::time::Duration;
@ -44,12 +47,12 @@ use self::verification::VerificationReportingProvider;
#[derive(Debug, PartialEq, Eq, Clone)] #[derive(Debug, PartialEq, Eq, Clone)]
pub enum PusTmVariant<'time, 'src_data> { pub enum PusTmVariant<'time, 'src_data> {
InStore(StoreAddr), InStore(PoolAddr),
Direct(PusTmCreator<'time, 'src_data>), Direct(PusTmCreator<'time, 'src_data>),
} }
impl From<StoreAddr> for PusTmVariant<'_, '_> { impl From<PoolAddr> for PusTmVariant<'_, '_> {
fn from(value: StoreAddr) -> Self { fn from(value: PoolAddr) -> Self {
Self::InStore(value) Self::InStore(value)
} }
} }
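For orientation, a minimal sketch (not part of this changeset) of the conversions this hunk touches, using the renamed `PoolAddr` type; import paths follow the `use` statements visible in this diff:

```rust
use satrs::pool::PoolAddr;
use satrs::pus::PusTmVariant;
use spacepackets::ecss::tm::PusTmCreator;

// In-store TM only carries the pool address of the serialized packet.
fn variant_from_pool(pool_addr: PoolAddr) -> PusTmVariant<'static, 'static> {
    pool_addr.into() // PusTmVariant::InStore
}

// Direct TM wraps the full creator object.
fn variant_from_creator<'time, 'src>(tm: PusTmCreator<'time, 'src>) -> PusTmVariant<'time, 'src> {
    tm.into() // PusTmVariant::Direct
}
```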
@ -62,10 +65,10 @@ impl<'time, 'src_data> From<PusTmCreator<'time, 'src_data>> for PusTmVariant<'ti
#[derive(Debug, Clone, PartialEq, Eq)] #[derive(Debug, Clone, PartialEq, Eq)]
pub enum EcssTmtcError { pub enum EcssTmtcError {
Store(StoreError), Store(PoolError),
ByteConversion(ByteConversionError), ByteConversion(ByteConversionError),
Pus(PusError), Pus(PusError),
CantSendAddr(StoreAddr), CantSendAddr(PoolAddr),
CantSendDirectTm, CantSendDirectTm,
Send(GenericSendError), Send(GenericSendError),
Receive(GenericReceiveError), Receive(GenericReceiveError),
@ -99,8 +102,8 @@ impl Display for EcssTmtcError {
} }
} }
impl From<StoreError> for EcssTmtcError { impl From<PoolError> for EcssTmtcError {
fn from(value: StoreError) -> Self { fn from(value: PoolError) -> Self {
Self::Store(value) Self::Store(value)
} }
} }
@ -153,15 +156,15 @@ pub trait ChannelWithId: Send {
/// Generic trait for a user supplied sender object. /// Generic trait for a user supplied sender object.
/// ///
/// This sender object is responsible for sending PUS telemetry to a TM sink. /// This sender object is responsible for sending PUS telemetry to a TM sink.
pub trait EcssTmSenderCore: Send { pub trait EcssTmSender: Send {
fn send_tm(&self, source_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError>; fn send_tm(&self, sender_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError>;
} }
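A minimal sketch of a user-supplied TM sink under the renamed trait; `LoggingTmSender` is a hypothetical type used purely for illustration and is not part of this changeset:

```rust
use satrs::pus::{EcssTmSender, EcssTmtcError, PusTmVariant};
use satrs::ComponentId;

struct LoggingTmSender;

impl EcssTmSender for LoggingTmSender {
    fn send_tm(&self, sender_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> {
        // A real sink would forward the packet to a TM store or downlink
        // handler; this sketch only logs what it received.
        match tm {
            PusTmVariant::InStore(addr) => println!("TM from {}: pool address {:?}", sender_id, addr),
            PusTmVariant::Direct(_) => println!("TM from {}: direct packet", sender_id),
        }
        Ok(())
    }
}
```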
/// Generic trait for a user supplied sender object. /// Generic trait for a user supplied sender object.
/// ///
/// This sender object is responsible for sending PUS telecommands to a TC recipient. Each /// This sender object is responsible for sending PUS telecommands to a TC recipient. Each
/// telecommand can optionally have a token which contains its verification state. /// telecommand can optionally have a token which contains its verification state.
pub trait EcssTcSenderCore { pub trait EcssTcSender {
fn send_tc(&self, tc: PusTcCreator, token: Option<TcStateToken>) -> Result<(), EcssTmtcError>; fn send_tc(&self, tc: PusTcCreator, token: Option<TcStateToken>) -> Result<(), EcssTmtcError>;
} }
@ -169,32 +172,32 @@ pub trait EcssTcSenderCore {
#[derive(Default)] #[derive(Default)]
pub struct EcssTmDummySender {} pub struct EcssTmDummySender {}
impl EcssTmSenderCore for EcssTmDummySender { impl EcssTmSender for EcssTmDummySender {
fn send_tm(&self, _source_id: ComponentId, _tm: PusTmVariant) -> Result<(), EcssTmtcError> { fn send_tm(&self, _source_id: ComponentId, _tm: PusTmVariant) -> Result<(), EcssTmtcError> {
Ok(()) Ok(())
} }
} }
/// A PUS telecommand packet can be stored in memory using different methods. Right now, /// A PUS telecommand packet can be stored in memory and sent using different methods. Right now,
/// storage inside a pool structure like [crate::pool::StaticMemoryPool], and storage inside a /// storage inside a pool structure like [crate::pool::StaticMemoryPool], and storage inside a
/// `Vec<u8>` are supported. /// `Vec<u8>` are supported.
#[non_exhaustive] #[non_exhaustive]
#[derive(Debug, Clone, PartialEq, Eq)] #[derive(Debug, Clone, PartialEq, Eq)]
pub enum TcInMemory { pub enum TcInMemory {
StoreAddr(StoreAddr), Pool(PacketInPool),
#[cfg(feature = "alloc")] #[cfg(feature = "alloc")]
Vec(alloc::vec::Vec<u8>), Vec(PacketAsVec),
} }
impl From<StoreAddr> for TcInMemory { impl From<PacketInPool> for TcInMemory {
fn from(value: StoreAddr) -> Self { fn from(value: PacketInPool) -> Self {
Self::StoreAddr(value) Self::Pool(value)
} }
} }
#[cfg(feature = "alloc")] #[cfg(feature = "alloc")]
impl From<alloc::vec::Vec<u8>> for TcInMemory { impl From<PacketAsVec> for TcInMemory {
fn from(value: alloc::vec::Vec<u8>) -> Self { fn from(value: PacketAsVec) -> Self {
Self::Vec(value) Self::Vec(value)
} }
} }
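A short sketch of the new conversions (not part of this changeset): both wrapper types now carry the sending component's ID into `TcInMemory`:

```rust
use satrs::pool::PoolAddr;
use satrs::pus::TcInMemory;
use satrs::tmtc::{PacketAsVec, PacketInPool};
use satrs::ComponentId;

// Wraps a pool-stored and a heap-stored TC, preserving the sender ID in both.
fn wrap_tc(sender_id: ComponentId, store_addr: PoolAddr, tc_bytes: Vec<u8>) -> (TcInMemory, TcInMemory) {
    let from_pool: TcInMemory = PacketInPool::new(sender_id, store_addr).into();
    let from_vec: TcInMemory = PacketAsVec::new(sender_id, tc_bytes).into();
    (from_pool, from_vec)
}
```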
@ -262,25 +265,26 @@ impl From<PusError> for TryRecvTmtcError {
} }
} }
impl From<StoreError> for TryRecvTmtcError { impl From<PoolError> for TryRecvTmtcError {
fn from(value: StoreError) -> Self { fn from(value: PoolError) -> Self {
Self::Tmtc(value.into()) Self::Tmtc(value.into())
} }
} }
/// Generic trait for a user supplied receiver object. /// Generic trait for a user supplied receiver object.
pub trait EcssTcReceiverCore { pub trait EcssTcReceiver {
fn recv_tc(&self) -> Result<EcssTcAndToken, TryRecvTmtcError>; fn recv_tc(&self) -> Result<EcssTcAndToken, TryRecvTmtcError>;
} }
/// Generic trait for objects which can receive ECSS PUS telecommands. This trait is /// Generic trait for objects which can send ECSS PUS telecommands.
/// implemented by the [crate::tmtc::pus_distrib::PusDistributor] objects to allow passing PUS TC pub trait PacketSenderPusTc: Send {
/// packets into it. It is generally assumed that the telecommand is stored in some pool structure,
/// and the store address is passed as well. This allows efficient zero-copy forwarding of
/// telecommands.
pub trait ReceivesEcssPusTc {
type Error; type Error;
fn pass_pus_tc(&mut self, header: &SpHeader, pus_tc: &PusTcReader) -> Result<(), Self::Error>; fn send_pus_tc(
&self,
sender_id: ComponentId,
header: &SpHeader,
pus_tc: &PusTcReader,
) -> Result<(), Self::Error>;
} }
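A hedged sketch (not part of this changeset) of a `PacketSenderPusTc` implementor which forwards the raw telecommand to an mpsc queue; the forwarder type and the error mapping are assumptions for illustration:

```rust
use std::sync::mpsc;

use satrs::pus::PacketSenderPusTc;
use satrs::queue::GenericSendError;
use satrs::tmtc::PacketAsVec;
use satrs::ComponentId;
use spacepackets::ecss::tc::PusTcReader;
use spacepackets::SpHeader;

struct MpscPusTcForwarder(mpsc::Sender<PacketAsVec>);

impl PacketSenderPusTc for MpscPusTcForwarder {
    type Error = GenericSendError;

    fn send_pus_tc(
        &self,
        sender_id: ComponentId,
        _header: &SpHeader,
        pus_tc: &PusTcReader,
    ) -> Result<(), Self::Error> {
        // Copy the raw TC bytes and hand them to the receiving component.
        self.0
            .send(PacketAsVec::new(sender_id, pus_tc.raw_data().to_vec()))
            .map_err(|_| GenericSendError::RxDisconnected)
    }
}
```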
pub trait ActiveRequestMapProvider<V>: Sized { pub trait ActiveRequestMapProvider<V>: Sized {
@ -326,7 +330,7 @@ pub trait PusReplyHandler<ActiveRequestInfo: ActiveRequestProvider, ReplyType> {
&mut self, &mut self,
reply: &GenericMessage<ReplyType>, reply: &GenericMessage<ReplyType>,
active_request: &ActiveRequestInfo, active_request: &ActiveRequestInfo,
tm_sender: &impl EcssTmSenderCore, tm_sender: &impl EcssTmSender,
verification_handler: &impl VerificationReportingProvider, verification_handler: &impl VerificationReportingProvider,
time_stamp: &[u8], time_stamp: &[u8],
) -> Result<bool, Self::Error>; ) -> Result<bool, Self::Error>;
@ -334,14 +338,14 @@ pub trait PusReplyHandler<ActiveRequestInfo: ActiveRequestProvider, ReplyType> {
fn handle_unrequested_reply( fn handle_unrequested_reply(
&mut self, &mut self,
reply: &GenericMessage<ReplyType>, reply: &GenericMessage<ReplyType>,
tm_sender: &impl EcssTmSenderCore, tm_sender: &impl EcssTmSender,
) -> Result<(), Self::Error>; ) -> Result<(), Self::Error>;
/// Handle the timeout of an active request. /// Handle the timeout of an active request.
fn handle_request_timeout( fn handle_request_timeout(
&mut self, &mut self,
active_request: &ActiveRequestInfo, active_request: &ActiveRequestInfo,
tm_sender: &impl EcssTmSenderCore, tm_sender: &impl EcssTmSender,
verification_handler: &impl VerificationReportingProvider, verification_handler: &impl VerificationReportingProvider,
time_stamp: &[u8], time_stamp: &[u8],
) -> Result<(), Self::Error>; ) -> Result<(), Self::Error>;
@ -353,9 +357,7 @@ pub mod alloc_mod {
use super::*; use super::*;
use crate::pus::verification::VerificationReportingProvider; /// Extension trait for [EcssTmSender].
/// Extension trait for [EcssTmSenderCore].
/// ///
/// It provides additional functionality, for example by implementing the [Downcast] trait /// It provides additional functionality, for example by implementing the [Downcast] trait
/// and the [DynClone] trait. /// and the [DynClone] trait.
@ -367,36 +369,36 @@ pub mod alloc_mod {
/// [Clone]. /// [Clone].
#[cfg(feature = "alloc")] #[cfg(feature = "alloc")]
#[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))] #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
pub trait EcssTmSender: EcssTmSenderCore + Downcast + DynClone { pub trait EcssTmSenderExt: EcssTmSender + Downcast + DynClone {
// Remove this once trait upcasting coercion has been implemented. // Remove this once trait upcasting coercion has been implemented.
// Tracking issue: https://github.com/rust-lang/rust/issues/65991 // Tracking issue: https://github.com/rust-lang/rust/issues/65991
fn upcast(&self) -> &dyn EcssTmSenderCore; fn upcast(&self) -> &dyn EcssTmSender;
// Remove this once trait upcasting coercion has been implemented. // Remove this once trait upcasting coercion has been implemented.
// Tracking issue: https://github.com/rust-lang/rust/issues/65991 // Tracking issue: https://github.com/rust-lang/rust/issues/65991
fn upcast_mut(&mut self) -> &mut dyn EcssTmSenderCore; fn upcast_mut(&mut self) -> &mut dyn EcssTmSender;
} }
/// Blanket implementation for all types which implement [EcssTmSenderCore] and are clonable. /// Blanket implementation for all types which implement [EcssTmSender] and are clonable.
impl<T> EcssTmSender for T impl<T> EcssTmSenderExt for T
where where
T: EcssTmSenderCore + Clone + 'static, T: EcssTmSender + Clone + 'static,
{ {
// Remove this once trait upcasting coercion has been implemented. // Remove this once trait upcasting coercion has been implemented.
// Tracking issue: https://github.com/rust-lang/rust/issues/65991 // Tracking issue: https://github.com/rust-lang/rust/issues/65991
fn upcast(&self) -> &dyn EcssTmSenderCore { fn upcast(&self) -> &dyn EcssTmSender {
self self
} }
// Remove this once trait upcasting coercion has been implemented. // Remove this once trait upcasting coercion has been implemented.
// Tracking issue: https://github.com/rust-lang/rust/issues/65991 // Tracking issue: https://github.com/rust-lang/rust/issues/65991
fn upcast_mut(&mut self) -> &mut dyn EcssTmSenderCore { fn upcast_mut(&mut self) -> &mut dyn EcssTmSender {
self self
} }
} }
dyn_clone::clone_trait_object!(EcssTmSender); dyn_clone::clone_trait_object!(EcssTmSenderExt);
impl_downcast!(EcssTmSender); impl_downcast!(EcssTmSenderExt);
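A small sketch of why the `upcast` methods exist (import paths assumed): until Rust's trait upcasting coercion is stabilized, the explicit method bridges from the extension trait back to the base trait:

```rust
use satrs::pus::{EcssTmSender, EcssTmSenderExt};

fn takes_base(sender: &dyn EcssTmSender) {
    let _ = sender;
}

// Passing `sender` directly would need trait upcasting coercion, which is not
// stable yet, so the explicit upcast method is used instead.
fn takes_ext(sender: &dyn EcssTmSenderExt) {
    takes_base(sender.upcast());
}
```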
/// Extension trait for [EcssTcSenderCore]. /// Extension trait for [EcssTcSender].
/// ///
/// It provides additional functionality, for example by implementing the [Downcast] trait /// It provides additional functionality, for example by implementing the [Downcast] trait
/// and the [DynClone] trait. /// and the [DynClone] trait.
@ -408,15 +410,15 @@ pub mod alloc_mod {
/// [Clone]. /// [Clone].
#[cfg(feature = "alloc")] #[cfg(feature = "alloc")]
#[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))] #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
pub trait EcssTcSender: EcssTcSenderCore + Downcast + DynClone {} pub trait EcssTcSenderExt: EcssTcSender + Downcast + DynClone {}
/// Blanket implementation for all types which implement [EcssTcSenderCore] and are clonable. /// Blanket implementation for all types which implement [EcssTcSender] and are clonable.
impl<T> EcssTcSender for T where T: EcssTcSenderCore + Clone + 'static {} impl<T> EcssTcSenderExt for T where T: EcssTcSender + Clone + 'static {}
dyn_clone::clone_trait_object!(EcssTcSender); dyn_clone::clone_trait_object!(EcssTcSenderExt);
impl_downcast!(EcssTcSender); impl_downcast!(EcssTcSenderExt);
/// Extension trait for [EcssTcReceiverCore]. /// Extension trait for [EcssTcReceiver].
/// ///
/// It provides additional functionality, for example by implementing the [Downcast] trait /// It provides additional functionality, for example by implementing the [Downcast] trait
/// and the [DynClone] trait. /// and the [DynClone] trait.
@ -428,12 +430,12 @@ pub mod alloc_mod {
/// [Clone]. /// [Clone].
#[cfg(feature = "alloc")] #[cfg(feature = "alloc")]
#[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))] #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
pub trait EcssTcReceiver: EcssTcReceiverCore + Downcast {} pub trait EcssTcReceiverExt: EcssTcReceiver + Downcast {}
/// Blanket implementation for all types which implement [EcssTcReceiverCore] and are clonable. /// Blanket implementation for all types which implement [EcssTcReceiver] and are clonable.
impl<T> EcssTcReceiver for T where T: EcssTcReceiverCore + 'static {} impl<T> EcssTcReceiverExt for T where T: EcssTcReceiver + 'static {}
impl_downcast!(EcssTcReceiver); impl_downcast!(EcssTcReceiverExt);
/// This trait is an abstraction for the conversion of a PUS telecommand into a generic request /// This trait is an abstraction for the conversion of a PUS telecommand into a generic request
/// type. /// type.
@ -457,7 +459,7 @@ pub mod alloc_mod {
&mut self, &mut self,
token: VerificationToken<TcStateAccepted>, token: VerificationToken<TcStateAccepted>,
tc: &PusTcReader, tc: &PusTcReader,
tm_sender: &(impl EcssTmSenderCore + ?Sized), tm_sender: &(impl EcssTmSender + ?Sized),
verif_reporter: &impl VerificationReportingProvider, verif_reporter: &impl VerificationReportingProvider,
time_stamp: &[u8], time_stamp: &[u8],
) -> Result<(ActiveRequestInfo, Request), Self::Error>; ) -> Result<(ActiveRequestInfo, Request), Self::Error>;
@ -654,19 +656,18 @@ pub mod alloc_mod {
#[cfg_attr(doc_cfg, doc(cfg(feature = "std")))] #[cfg_attr(doc_cfg, doc(cfg(feature = "std")))]
pub mod std_mod { pub mod std_mod {
use crate::pool::{ use crate::pool::{
PoolProvider, PoolProviderWithGuards, SharedStaticMemoryPool, StoreAddr, StoreError, PoolAddr, PoolError, PoolProvider, PoolProviderWithGuards, SharedStaticMemoryPool,
}; };
use crate::pus::verification::{TcStateAccepted, VerificationToken}; use crate::pus::verification::{TcStateAccepted, VerificationToken};
use crate::pus::{ use crate::pus::{
EcssTcAndToken, EcssTcReceiverCore, EcssTmSenderCore, EcssTmtcError, GenericReceiveError, EcssTcAndToken, EcssTcReceiver, EcssTmSender, EcssTmtcError, GenericReceiveError,
GenericSendError, PusTmVariant, TryRecvTmtcError, GenericSendError, PusTmVariant, TryRecvTmtcError,
}; };
use crate::tmtc::tm_helper::SharedTmPool; use crate::tmtc::{PacketAsVec, PacketSenderWithSharedPool};
use crate::ComponentId; use crate::ComponentId;
use alloc::vec::Vec; use alloc::vec::Vec;
use core::time::Duration; use core::time::Duration;
use spacepackets::ecss::tc::PusTcReader; use spacepackets::ecss::tc::PusTcReader;
use spacepackets::ecss::tm::PusTmCreator;
use spacepackets::ecss::WritablePusPacket; use spacepackets::ecss::WritablePusPacket;
use spacepackets::time::StdTimestampError; use spacepackets::time::StdTimestampError;
use spacepackets::ByteConversionError; use spacepackets::ByteConversionError;
@ -680,25 +681,20 @@ pub mod std_mod {
use super::verification::{TcStateToken, VerificationReportingProvider}; use super::verification::{TcStateToken, VerificationReportingProvider};
use super::{AcceptedEcssTcAndToken, ActiveRequestProvider, TcInMemory}; use super::{AcceptedEcssTcAndToken, ActiveRequestProvider, TcInMemory};
use crate::tmtc::PacketInPool;
#[derive(Debug)] impl From<mpsc::SendError<PoolAddr>> for EcssTmtcError {
pub struct PusTmInPool { fn from(_: mpsc::SendError<PoolAddr>) -> Self {
pub source_id: ComponentId,
pub store_addr: StoreAddr,
}
impl From<mpsc::SendError<StoreAddr>> for EcssTmtcError {
fn from(_: mpsc::SendError<StoreAddr>) -> Self {
Self::Send(GenericSendError::RxDisconnected) Self::Send(GenericSendError::RxDisconnected)
} }
} }
impl EcssTmSenderCore for mpsc::Sender<PusTmInPool> { impl EcssTmSender for mpsc::Sender<PacketInPool> {
fn send_tm(&self, source_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> { fn send_tm(&self, source_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> {
match tm { match tm {
PusTmVariant::InStore(store_addr) => self PusTmVariant::InStore(store_addr) => self
.send(PusTmInPool { .send(PacketInPool {
source_id, sender_id: source_id,
store_addr, store_addr,
}) })
.map_err(|_| GenericSendError::RxDisconnected)?, .map_err(|_| GenericSendError::RxDisconnected)?,
@ -708,12 +704,12 @@ pub mod std_mod {
} }
} }
impl EcssTmSenderCore for mpsc::SyncSender<PusTmInPool> { impl EcssTmSender for mpsc::SyncSender<PacketInPool> {
fn send_tm(&self, source_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> { fn send_tm(&self, source_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> {
match tm { match tm {
PusTmVariant::InStore(store_addr) => self PusTmVariant::InStore(store_addr) => self
.try_send(PusTmInPool { .try_send(PacketInPool {
source_id, sender_id: source_id,
store_addr, store_addr,
}) })
.map_err(|e| EcssTmtcError::Send(e.into()))?, .map_err(|e| EcssTmtcError::Send(e.into()))?,
@ -723,21 +719,15 @@ pub mod std_mod {
} }
} }
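A sketch of the resulting ergonomics (hypothetical `SENDER_ID` and `store_addr`): because `EcssTmSender` is implemented for `mpsc::Sender<PacketInPool>`, a plain channel doubles as a pool-backed TM sink:

```rust
use std::sync::mpsc;

use satrs::pus::{EcssTmSender, PusTmVariant};
use satrs::tmtc::PacketInPool;

fn demo(store_addr: satrs::pool::PoolAddr) {
    const SENDER_ID: satrs::ComponentId = 42;
    let (tm_tx, tm_rx) = mpsc::channel::<PacketInPool>();
    // The channel sender itself satisfies the TM sink trait.
    tm_tx
        .send_tm(SENDER_ID, PusTmVariant::InStore(store_addr))
        .expect("sending TM address failed");
    assert_eq!(tm_rx.recv().unwrap().sender_id, SENDER_ID);
}
```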
#[derive(Debug)] pub type MpscTmAsVecSender = mpsc::Sender<PacketAsVec>;
pub struct PusTmAsVec {
pub source_id: ComponentId,
pub packet: Vec<u8>,
}
pub type MpscTmAsVecSender = mpsc::Sender<PusTmAsVec>; impl EcssTmSender for MpscTmAsVecSender {
impl EcssTmSenderCore for MpscTmAsVecSender {
fn send_tm(&self, source_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> { fn send_tm(&self, source_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> {
match tm { match tm {
PusTmVariant::InStore(addr) => return Err(EcssTmtcError::CantSendAddr(addr)), PusTmVariant::InStore(addr) => return Err(EcssTmtcError::CantSendAddr(addr)),
PusTmVariant::Direct(tm) => self PusTmVariant::Direct(tm) => self
.send(PusTmAsVec { .send(PacketAsVec {
source_id, sender_id: source_id,
packet: tm.to_vec()?, packet: tm.to_vec()?,
}) })
.map_err(|e| EcssTmtcError::Send(e.into()))?, .map_err(|e| EcssTmtcError::Send(e.into()))?,
@ -746,15 +736,15 @@ pub mod std_mod {
} }
} }
pub type MpscTmAsVecSenderBounded = mpsc::SyncSender<PusTmAsVec>; pub type MpscTmAsVecSenderBounded = mpsc::SyncSender<PacketAsVec>;
impl EcssTmSenderCore for MpscTmAsVecSenderBounded { impl EcssTmSender for MpscTmAsVecSenderBounded {
fn send_tm(&self, source_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> { fn send_tm(&self, source_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> {
match tm { match tm {
PusTmVariant::InStore(addr) => return Err(EcssTmtcError::CantSendAddr(addr)), PusTmVariant::InStore(addr) => return Err(EcssTmtcError::CantSendAddr(addr)),
PusTmVariant::Direct(tm) => self PusTmVariant::Direct(tm) => self
.send(PusTmAsVec { .send(PacketAsVec {
source_id, sender_id: source_id,
packet: tm.to_vec()?, packet: tm.to_vec()?,
}) })
.map_err(|e| EcssTmtcError::Send(e.into()))?, .map_err(|e| EcssTmtcError::Send(e.into()))?,
@ -763,47 +753,9 @@ pub mod std_mod {
} }
} }
#[derive(Clone)]
pub struct TmInSharedPoolSender<Sender: EcssTmSenderCore> {
shared_tm_store: SharedTmPool,
sender: Sender,
}
impl<Sender: EcssTmSenderCore> TmInSharedPoolSender<Sender> {
pub fn send_direct_tm(
&self,
source_id: ComponentId,
tm: PusTmCreator,
) -> Result<(), EcssTmtcError> {
let addr = self.shared_tm_store.add_pus_tm(&tm)?;
self.sender.send_tm(source_id, PusTmVariant::InStore(addr))
}
}
impl<Sender: EcssTmSenderCore> EcssTmSenderCore for TmInSharedPoolSender<Sender> {
fn send_tm(&self, source_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> {
if let PusTmVariant::Direct(tm) = tm {
return self.send_direct_tm(source_id, tm);
}
self.sender.send_tm(source_id, tm)
}
}
impl<Sender: EcssTmSenderCore> TmInSharedPoolSender<Sender> {
pub fn new(shared_tm_store: SharedTmPool, sender: Sender) -> Self {
Self {
shared_tm_store,
sender,
}
}
}
pub type MpscTmInSharedPoolSender = TmInSharedPoolSender<mpsc::Sender<PusTmInPool>>;
pub type MpscTmInSharedPoolSenderBounded = TmInSharedPoolSender<mpsc::SyncSender<PusTmInPool>>;
pub type MpscTcReceiver = mpsc::Receiver<EcssTcAndToken>; pub type MpscTcReceiver = mpsc::Receiver<EcssTcAndToken>;
impl EcssTcReceiverCore for MpscTcReceiver { impl EcssTcReceiver for MpscTcReceiver {
fn recv_tc(&self) -> Result<EcssTcAndToken, TryRecvTmtcError> { fn recv_tc(&self) -> Result<EcssTcAndToken, TryRecvTmtcError> {
self.try_recv().map_err(|e| match e { self.try_recv().map_err(|e| match e {
TryRecvError::Empty => TryRecvTmtcError::Empty, TryRecvError::Empty => TryRecvTmtcError::Empty,
@ -819,16 +771,14 @@ pub mod std_mod {
use super::*; use super::*;
use crossbeam_channel as cb; use crossbeam_channel as cb;
pub type TmInSharedPoolSenderWithCrossbeam = TmInSharedPoolSender<cb::Sender<PusTmInPool>>; impl From<cb::SendError<PoolAddr>> for EcssTmtcError {
fn from(_: cb::SendError<PoolAddr>) -> Self {
impl From<cb::SendError<StoreAddr>> for EcssTmtcError {
fn from(_: cb::SendError<StoreAddr>) -> Self {
Self::Send(GenericSendError::RxDisconnected) Self::Send(GenericSendError::RxDisconnected)
} }
} }
impl From<cb::TrySendError<StoreAddr>> for EcssTmtcError { impl From<cb::TrySendError<PoolAddr>> for EcssTmtcError {
fn from(value: cb::TrySendError<StoreAddr>) -> Self { fn from(value: cb::TrySendError<PoolAddr>) -> Self {
match value { match value {
cb::TrySendError::Full(_) => Self::Send(GenericSendError::QueueFull(None)), cb::TrySendError::Full(_) => Self::Send(GenericSendError::QueueFull(None)),
cb::TrySendError::Disconnected(_) => { cb::TrySendError::Disconnected(_) => {
@ -838,37 +788,31 @@ pub mod std_mod {
} }
} }
impl EcssTmSenderCore for cb::Sender<PusTmInPool> { impl EcssTmSender for cb::Sender<PacketInPool> {
fn send_tm( fn send_tm(
&self, &self,
source_id: ComponentId, sender_id: ComponentId,
tm: PusTmVariant, tm: PusTmVariant,
) -> Result<(), EcssTmtcError> { ) -> Result<(), EcssTmtcError> {
match tm { match tm {
PusTmVariant::InStore(addr) => self PusTmVariant::InStore(addr) => self
.try_send(PusTmInPool { .try_send(PacketInPool::new(sender_id, addr))
source_id,
store_addr: addr,
})
.map_err(|e| EcssTmtcError::Send(e.into()))?, .map_err(|e| EcssTmtcError::Send(e.into()))?,
PusTmVariant::Direct(_) => return Err(EcssTmtcError::CantSendDirectTm), PusTmVariant::Direct(_) => return Err(EcssTmtcError::CantSendDirectTm),
}; };
Ok(()) Ok(())
} }
} }
impl EcssTmSenderCore for cb::Sender<PusTmAsVec> { impl EcssTmSender for cb::Sender<PacketAsVec> {
fn send_tm( fn send_tm(
&self, &self,
source_id: ComponentId, sender_id: ComponentId,
tm: PusTmVariant, tm: PusTmVariant,
) -> Result<(), EcssTmtcError> { ) -> Result<(), EcssTmtcError> {
match tm { match tm {
PusTmVariant::InStore(addr) => return Err(EcssTmtcError::CantSendAddr(addr)), PusTmVariant::InStore(addr) => return Err(EcssTmtcError::CantSendAddr(addr)),
PusTmVariant::Direct(tm) => self PusTmVariant::Direct(tm) => self
.send(PusTmAsVec { .send(PacketAsVec::new(sender_id, tm.to_vec()?))
source_id,
packet: tm.to_vec()?,
})
.map_err(|e| EcssTmtcError::Send(e.into()))?, .map_err(|e| EcssTmtcError::Send(e.into()))?,
}; };
Ok(()) Ok(())
@ -1010,6 +954,8 @@ pub mod std_mod {
fn tc_slice_raw(&self) -> &[u8]; fn tc_slice_raw(&self) -> &[u8];
fn sender_id(&self) -> Option<ComponentId>;
fn cache_and_convert( fn cache_and_convert(
&mut self, &mut self,
possible_packet: &TcInMemory, possible_packet: &TcInMemory,
@ -1032,6 +978,7 @@ pub mod std_mod {
/// [SharedStaticMemoryPool]. /// [SharedStaticMemoryPool].
#[derive(Default, Clone)] #[derive(Default, Clone)]
pub struct EcssTcInVecConverter { pub struct EcssTcInVecConverter {
sender_id: Option<ComponentId>,
pub pus_tc_raw: Option<Vec<u8>>, pub pus_tc_raw: Option<Vec<u8>>,
} }
@ -1039,16 +986,21 @@ pub mod std_mod {
fn cache(&mut self, tc_in_memory: &TcInMemory) -> Result<(), PusTcFromMemError> { fn cache(&mut self, tc_in_memory: &TcInMemory) -> Result<(), PusTcFromMemError> {
self.pus_tc_raw = None; self.pus_tc_raw = None;
match tc_in_memory { match tc_in_memory {
super::TcInMemory::StoreAddr(_) => { super::TcInMemory::Pool(_packet_in_pool) => {
return Err(PusTcFromMemError::InvalidFormat(tc_in_memory.clone())); return Err(PusTcFromMemError::InvalidFormat(tc_in_memory.clone()));
} }
super::TcInMemory::Vec(vec) => { super::TcInMemory::Vec(packet_with_sender) => {
self.pus_tc_raw = Some(vec.clone()); self.pus_tc_raw = Some(packet_with_sender.packet.clone());
self.sender_id = Some(packet_with_sender.sender_id);
} }
}; };
Ok(()) Ok(())
} }
fn sender_id(&self) -> Option<ComponentId> {
self.sender_id
}
fn tc_slice_raw(&self) -> &[u8] { fn tc_slice_raw(&self) -> &[u8] {
if self.pus_tc_raw.is_none() { if self.pus_tc_raw.is_none() {
return &[]; return &[];
@ -1062,6 +1014,7 @@ pub mod std_mod {
/// packets should be avoided. Please note that this structure is not able to convert TCs which /// packets should be avoided. Please note that this structure is not able to convert TCs which
/// are stored as a `Vec<u8>`. /// are stored as a `Vec<u8>`.
pub struct EcssTcInSharedStoreConverter { pub struct EcssTcInSharedStoreConverter {
sender_id: Option<ComponentId>,
shared_tc_store: SharedStaticMemoryPool, shared_tc_store: SharedStaticMemoryPool,
pus_buf: Vec<u8>, pus_buf: Vec<u8>,
} }
@ -1069,15 +1022,16 @@ pub mod std_mod {
impl EcssTcInSharedStoreConverter { impl EcssTcInSharedStoreConverter {
pub fn new(shared_tc_store: SharedStaticMemoryPool, max_expected_tc_size: usize) -> Self { pub fn new(shared_tc_store: SharedStaticMemoryPool, max_expected_tc_size: usize) -> Self {
Self { Self {
sender_id: None,
shared_tc_store, shared_tc_store,
pus_buf: alloc::vec![0; max_expected_tc_size], pus_buf: alloc::vec![0; max_expected_tc_size],
} }
} }
pub fn copy_tc_to_buf(&mut self, addr: StoreAddr) -> Result<(), PusTcFromMemError> { pub fn copy_tc_to_buf(&mut self, addr: PoolAddr) -> Result<(), PusTcFromMemError> {
// Keep locked section as short as possible. // Keep locked section as short as possible.
let mut tc_pool = self.shared_tc_store.write().map_err(|_| { let mut tc_pool = self.shared_tc_store.write().map_err(|_| {
PusTcFromMemError::EcssTmtc(EcssTmtcError::Store(StoreError::LockError)) PusTcFromMemError::EcssTmtc(EcssTmtcError::Store(PoolError::LockError))
})?; })?;
let tc_size = tc_pool.len_of_data(&addr).map_err(EcssTmtcError::Store)?; let tc_size = tc_pool.len_of_data(&addr).map_err(EcssTmtcError::Store)?;
if tc_size > self.pus_buf.len() { if tc_size > self.pus_buf.len() {
@ -1099,8 +1053,9 @@ pub mod std_mod {
impl EcssTcInMemConverter for EcssTcInSharedStoreConverter { impl EcssTcInMemConverter for EcssTcInSharedStoreConverter {
fn cache(&mut self, tc_in_memory: &TcInMemory) -> Result<(), PusTcFromMemError> { fn cache(&mut self, tc_in_memory: &TcInMemory) -> Result<(), PusTcFromMemError> {
match tc_in_memory { match tc_in_memory {
super::TcInMemory::StoreAddr(addr) => { super::TcInMemory::Pool(packet_in_pool) => {
self.copy_tc_to_buf(*addr)?; self.copy_tc_to_buf(packet_in_pool.store_addr)?;
self.sender_id = Some(packet_in_pool.sender_id);
} }
super::TcInMemory::Vec(_) => { super::TcInMemory::Vec(_) => {
return Err(PusTcFromMemError::InvalidFormat(tc_in_memory.clone())); return Err(PusTcFromMemError::InvalidFormat(tc_in_memory.clone()));
@ -1112,11 +1067,15 @@ pub mod std_mod {
fn tc_slice_raw(&self) -> &[u8] { fn tc_slice_raw(&self) -> &[u8] {
self.pus_buf.as_ref() self.pus_buf.as_ref()
} }
fn sender_id(&self) -> Option<ComponentId> {
self.sender_id
}
} }
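A sketch of the new `sender_id` bookkeeping (pool setup assumed; import paths are assumptions): after caching a pool-stored TC, the converter also exposes which component sent the packet:

```rust
use satrs::pus::{EcssTcInMemConverter, EcssTcInSharedStoreConverter, PusTcFromMemError, TcInMemory};
use satrs::tmtc::PacketInPool;

fn convert(
    converter: &mut EcssTcInSharedStoreConverter,
    packet_in_pool: PacketInPool,
) -> Result<(), PusTcFromMemError> {
    let expected_sender = packet_in_pool.sender_id;
    // Copies the TC out of the shared pool and records the sender.
    converter.cache(&TcInMemory::Pool(packet_in_pool))?;
    assert_eq!(converter.sender_id(), Some(expected_sender));
    let _raw_tc = converter.tc_slice_raw();
    Ok(())
}
```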
pub struct PusServiceBase< pub struct PusServiceBase<
TcReceiver: EcssTcReceiverCore, TcReceiver: EcssTcReceiver,
TmSender: EcssTmSenderCore, TmSender: EcssTmSender,
VerificationReporter: VerificationReportingProvider, VerificationReporter: VerificationReportingProvider,
> { > {
pub id: ComponentId, pub id: ComponentId,
@ -1135,8 +1094,8 @@ pub mod std_mod {
/// by using the [EcssTcInMemConverter] abstraction. This object provides some convenience /// by using the [EcssTcInMemConverter] abstraction. This object provides some convenience
/// methods to make the generic parts of TC handling easier. /// methods to make the generic parts of TC handling easier.
pub struct PusServiceHelper< pub struct PusServiceHelper<
TcReceiver: EcssTcReceiverCore, TcReceiver: EcssTcReceiver,
TmSender: EcssTmSenderCore, TmSender: EcssTmSender,
TcInMemConverter: EcssTcInMemConverter, TcInMemConverter: EcssTcInMemConverter,
VerificationReporter: VerificationReportingProvider, VerificationReporter: VerificationReportingProvider,
> { > {
@ -1145,8 +1104,8 @@ pub mod std_mod {
} }
impl< impl<
TcReceiver: EcssTcReceiverCore, TcReceiver: EcssTcReceiver,
TmSender: EcssTmSenderCore, TmSender: EcssTmSender,
TcInMemConverter: EcssTcInMemConverter, TcInMemConverter: EcssTcInMemConverter,
VerificationReporter: VerificationReportingProvider, VerificationReporter: VerificationReportingProvider,
> PusServiceHelper<TcReceiver, TmSender, TcInMemConverter, VerificationReporter> > PusServiceHelper<TcReceiver, TmSender, TcInMemConverter, VerificationReporter>
@ -1177,7 +1136,7 @@ pub mod std_mod {
&self.common.tm_sender &self.common.tm_sender
} }
/// This function can be used to poll the internal [EcssTcReceiverCore] object for the next /// This function can be used to poll the internal [EcssTcReceiver] object for the next
/// telecommand packet. It will return `Ok(None)` if there are no packets available. /// In any other case, it will perform the acceptance of the ECSS TC packet using the
/// In any other case, it will perform the acceptance of the ECSS TC packet using the /// In any other case, it will perform the acceptance of the ECSS TC packet using the
/// internal [VerificationReportingProvider] object. It will then return the telecommand /// internal [VerificationReportingProvider] object. It will then return the telecommand
@ -1236,14 +1195,14 @@ pub mod std_mod {
pub type PusServiceHelperStaticWithMpsc<TcInMemConverter, VerificationReporter> = pub type PusServiceHelperStaticWithMpsc<TcInMemConverter, VerificationReporter> =
PusServiceHelper< PusServiceHelper<
MpscTcReceiver, MpscTcReceiver,
MpscTmInSharedPoolSender, PacketSenderWithSharedPool,
TcInMemConverter, TcInMemConverter,
VerificationReporter, VerificationReporter,
>; >;
pub type PusServiceHelperStaticWithBoundedMpsc<TcInMemConverter, VerificationReporter> = pub type PusServiceHelperStaticWithBoundedMpsc<TcInMemConverter, VerificationReporter> =
PusServiceHelper< PusServiceHelper<
MpscTcReceiver, MpscTcReceiver,
MpscTmInSharedPoolSenderBounded, PacketSenderWithSharedPool,
TcInMemConverter, TcInMemConverter,
VerificationReporter, VerificationReporter,
>; >;
@ -1313,7 +1272,7 @@ pub mod tests {
use crate::pool::{PoolProvider, SharedStaticMemoryPool, StaticMemoryPool, StaticPoolConfig}; use crate::pool::{PoolProvider, SharedStaticMemoryPool, StaticMemoryPool, StaticPoolConfig};
use crate::pus::verification::{RequestId, VerificationReporter}; use crate::pus::verification::{RequestId, VerificationReporter};
use crate::tmtc::tm_helper::SharedTmPool; use crate::tmtc::{PacketAsVec, PacketInPool, PacketSenderWithSharedPool, SharedPacketPool};
use crate::ComponentId; use crate::ComponentId;
use super::test_util::{TEST_APID, TEST_COMPONENT_ID_0}; use super::test_util::{TEST_APID, TEST_COMPONENT_ID_0};
@ -1370,14 +1329,14 @@ pub mod tests {
pus_buf: RefCell<[u8; 2048]>, pus_buf: RefCell<[u8; 2048]>,
tm_buf: [u8; 2048], tm_buf: [u8; 2048],
tc_pool: SharedStaticMemoryPool, tc_pool: SharedStaticMemoryPool,
tm_pool: SharedTmPool, tm_pool: SharedPacketPool,
tc_sender: mpsc::SyncSender<EcssTcAndToken>, tc_sender: mpsc::SyncSender<EcssTcAndToken>,
tm_receiver: mpsc::Receiver<PusTmInPool>, tm_receiver: mpsc::Receiver<PacketInPool>,
} }
pub type PusServiceHelperStatic = PusServiceHelper< pub type PusServiceHelperStatic = PusServiceHelper<
MpscTcReceiver, MpscTcReceiver,
MpscTmInSharedPoolSenderBounded, PacketSenderWithSharedPool,
EcssTcInSharedStoreConverter, EcssTcInSharedStoreConverter,
VerificationReporter, VerificationReporter,
>; >;
@ -1392,14 +1351,16 @@ pub mod tests {
let tc_pool = StaticMemoryPool::new(pool_cfg.clone()); let tc_pool = StaticMemoryPool::new(pool_cfg.clone());
let tm_pool = StaticMemoryPool::new(pool_cfg); let tm_pool = StaticMemoryPool::new(pool_cfg);
let shared_tc_pool = SharedStaticMemoryPool::new(RwLock::new(tc_pool)); let shared_tc_pool = SharedStaticMemoryPool::new(RwLock::new(tc_pool));
let shared_tm_pool = SharedTmPool::new(tm_pool); let shared_tm_pool = SharedStaticMemoryPool::new(RwLock::new(tm_pool));
let shared_tm_pool_wrapper = SharedPacketPool::new(&shared_tm_pool);
let (test_srv_tc_tx, test_srv_tc_rx) = mpsc::sync_channel(10); let (test_srv_tc_tx, test_srv_tc_rx) = mpsc::sync_channel(10);
let (tm_tx, tm_rx) = mpsc::sync_channel(10); let (tm_tx, tm_rx) = mpsc::sync_channel(10);
let verif_cfg = VerificationReporterCfg::new(TEST_APID, 1, 2, 8).unwrap(); let verif_cfg = VerificationReporterCfg::new(TEST_APID, 1, 2, 8).unwrap();
let verification_handler = let verification_handler =
VerificationReporter::new(TEST_COMPONENT_ID_0.id(), &verif_cfg); VerificationReporter::new(TEST_COMPONENT_ID_0.id(), &verif_cfg);
let test_srv_tm_sender = TmInSharedPoolSender::new(shared_tm_pool.clone(), tm_tx); let test_srv_tm_sender =
PacketSenderWithSharedPool::new(tm_tx, shared_tm_pool_wrapper.clone());
let in_store_converter = let in_store_converter =
EcssTcInSharedStoreConverter::new(shared_tc_pool.clone(), 2048); EcssTcInSharedStoreConverter::new(shared_tc_pool.clone(), 2048);
( (
@ -1407,7 +1368,7 @@ pub mod tests {
pus_buf: RefCell::new([0; 2048]), pus_buf: RefCell::new([0; 2048]),
tm_buf: [0; 2048], tm_buf: [0; 2048],
tc_pool: shared_tc_pool, tc_pool: shared_tc_pool,
tm_pool: shared_tm_pool, tm_pool: shared_tm_pool_wrapper,
tc_sender: test_srv_tc_tx, tc_sender: test_srv_tc_tx,
tm_receiver: tm_rx, tm_receiver: tm_rx,
}, },
@ -1420,7 +1381,12 @@ pub mod tests {
), ),
) )
} }
pub fn send_tc(&self, token: &VerificationToken<TcStateAccepted>, tc: &PusTcCreator) { pub fn send_tc(
&self,
sender_id: ComponentId,
token: &VerificationToken<TcStateAccepted>,
tc: &PusTcCreator,
) {
let mut mut_buf = self.pus_buf.borrow_mut(); let mut mut_buf = self.pus_buf.borrow_mut();
let tc_size = tc.write_to_bytes(mut_buf.as_mut_slice()).unwrap(); let tc_size = tc.write_to_bytes(mut_buf.as_mut_slice()).unwrap();
let mut tc_pool = self.tc_pool.write().unwrap(); let mut tc_pool = self.tc_pool.write().unwrap();
@ -1428,7 +1394,10 @@ pub mod tests {
drop(tc_pool); drop(tc_pool);
// Send accepted TC to test service handler. // Send accepted TC to test service handler.
self.tc_sender self.tc_sender
.send(EcssTcAndToken::new(addr, *token)) .send(EcssTcAndToken::new(
PacketInPool::new(sender_id, addr),
*token,
))
.expect("sending tc failed"); .expect("sending tc failed");
} }
@ -1469,7 +1438,7 @@ pub mod tests {
pub struct PusServiceHandlerWithVecCommon { pub struct PusServiceHandlerWithVecCommon {
current_tm: Option<Vec<u8>>, current_tm: Option<Vec<u8>>,
tc_sender: mpsc::Sender<EcssTcAndToken>, tc_sender: mpsc::Sender<EcssTcAndToken>,
tm_receiver: mpsc::Receiver<PusTmAsVec>, tm_receiver: mpsc::Receiver<PacketAsVec>,
} }
pub type PusServiceHelperDynamic = PusServiceHelper< pub type PusServiceHelperDynamic = PusServiceHelper<
MpscTcReceiver, MpscTcReceiver,
@ -1542,11 +1511,19 @@ pub mod tests {
} }
impl PusServiceHandlerWithVecCommon { impl PusServiceHandlerWithVecCommon {
pub fn send_tc(&self, token: &VerificationToken<TcStateAccepted>, tc: &PusTcCreator) { pub fn send_tc(
&self,
sender_id: ComponentId,
token: &VerificationToken<TcStateAccepted>,
tc: &PusTcCreator,
) {
// Send accepted TC to test service handler. // Send accepted TC to test service handler.
self.tc_sender self.tc_sender
.send(EcssTcAndToken::new( .send(EcssTcAndToken::new(
TcInMemory::Vec(tc.to_vec().expect("pus tc conversion to vec failed")), TcInMemory::Vec(PacketAsVec::new(
sender_id,
tc.to_vec().expect("pus tc conversion to vec failed"),
)),
*token, *token,
)) ))
.expect("sending tc failed"); .expect("sending tc failed");


@ -14,7 +14,7 @@ use spacepackets::{ByteConversionError, CcsdsPacket};
#[cfg(feature = "std")] #[cfg(feature = "std")]
use std::error::Error; use std::error::Error;
use crate::pool::{PoolProvider, StoreError}; use crate::pool::{PoolError, PoolProvider};
#[cfg(feature = "alloc")] #[cfg(feature = "alloc")]
pub use alloc_mod::*; pub use alloc_mod::*;
@ -151,7 +151,7 @@ pub enum ScheduleError {
}, },
/// Nested time-tagged commands are not allowed. /// Nested time-tagged commands are not allowed.
NestedScheduledTc, NestedScheduledTc,
StoreError(StoreError), StoreError(PoolError),
TcDataEmpty, TcDataEmpty,
TimestampError(TimestampError), TimestampError(TimestampError),
WrongSubservice(u8), WrongSubservice(u8),
@ -206,8 +206,8 @@ impl From<PusError> for ScheduleError {
} }
} }
impl From<StoreError> for ScheduleError { impl From<PoolError> for ScheduleError {
fn from(e: StoreError) -> Self { fn from(e: PoolError) -> Self {
Self::StoreError(e) Self::StoreError(e)
} }
} }
@ -240,7 +240,7 @@ impl Error for ScheduleError {
pub trait PusSchedulerProvider { pub trait PusSchedulerProvider {
type TimeProvider: CcsdsTimeProvider + TimeReader; type TimeProvider: CcsdsTimeProvider + TimeReader;
fn reset(&mut self, store: &mut (impl PoolProvider + ?Sized)) -> Result<(), StoreError>; fn reset(&mut self, store: &mut (impl PoolProvider + ?Sized)) -> Result<(), PoolError>;
fn is_enabled(&self) -> bool; fn is_enabled(&self) -> bool;
@ -345,12 +345,9 @@ pub mod alloc_mod {
}, },
vec::Vec, vec::Vec,
}; };
use spacepackets::time::{ use spacepackets::time::cds::{self, DaysLen24Bits};
cds::{self, DaysLen24Bits},
UnixTime,
};
use crate::pool::StoreAddr; use crate::pool::PoolAddr;
use super::*; use super::*;
@ -371,8 +368,8 @@ pub mod alloc_mod {
} }
enum DeletionResult { enum DeletionResult {
WithoutStoreDeletion(Option<StoreAddr>), WithoutStoreDeletion(Option<PoolAddr>),
WithStoreDeletion(Result<bool, StoreError>), WithStoreDeletion(Result<bool, PoolError>),
} }
/// This is the core data structure for scheduling PUS telecommands with [alloc] support. /// This is the core data structure for scheduling PUS telecommands with [alloc] support.
@ -528,7 +525,7 @@ pub mod alloc_mod {
&mut self, &mut self,
time_window: TimeWindow<TimeProvider>, time_window: TimeWindow<TimeProvider>,
pool: &mut (impl PoolProvider + ?Sized), pool: &mut (impl PoolProvider + ?Sized),
) -> Result<u64, (u64, StoreError)> { ) -> Result<u64, (u64, PoolError)> {
let range = self.retrieve_by_time_filter(time_window); let range = self.retrieve_by_time_filter(time_window);
let mut del_packets = 0; let mut del_packets = 0;
let mut res_if_fails = None; let mut res_if_fails = None;
@ -558,7 +555,7 @@ pub mod alloc_mod {
pub fn delete_all( pub fn delete_all(
&mut self, &mut self,
pool: &mut (impl PoolProvider + ?Sized), pool: &mut (impl PoolProvider + ?Sized),
) -> Result<u64, (u64, StoreError)> { ) -> Result<u64, (u64, PoolError)> {
self.delete_by_time_filter(TimeWindow::<cds::CdsTime>::new_select_all(), pool) self.delete_by_time_filter(TimeWindow::<cds::CdsTime>::new_select_all(), pool)
} }
@ -604,7 +601,7 @@ pub mod alloc_mod {
/// Please note that this function will stop on the first telecommand with a request ID match. /// Please note that this function will stop on the first telecommand with a request ID match.
/// In case of duplicate IDs (which should generally not happen), this function needs to be /// In case of duplicate IDs (which should generally not happen), this function needs to be
/// called repeatedly. /// called repeatedly.
pub fn delete_by_request_id(&mut self, req_id: &RequestId) -> Option<StoreAddr> { pub fn delete_by_request_id(&mut self, req_id: &RequestId) -> Option<PoolAddr> {
if let DeletionResult::WithoutStoreDeletion(v) = if let DeletionResult::WithoutStoreDeletion(v) =
self.delete_by_request_id_internal_without_store_deletion(req_id) self.delete_by_request_id_internal_without_store_deletion(req_id)
{ {
@ -618,7 +615,7 @@ pub mod alloc_mod {
&mut self, &mut self,
req_id: &RequestId, req_id: &RequestId,
pool: &mut (impl PoolProvider + ?Sized), pool: &mut (impl PoolProvider + ?Sized),
) -> Result<bool, StoreError> { ) -> Result<bool, PoolError> {
if let DeletionResult::WithStoreDeletion(v) = if let DeletionResult::WithStoreDeletion(v) =
self.delete_by_request_id_internal_with_store_deletion(req_id, pool) self.delete_by_request_id_internal_with_store_deletion(req_id, pool)
{ {
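The doc comment above notes that deletion stops on the first request ID match, so duplicates require repeated calls. A hedged sketch of that pattern, assuming the `PusScheduler` type from this module (import paths are assumptions):

```rust
use satrs::pus::scheduler::{PusScheduler, RequestId};

fn purge_duplicates(scheduler: &mut PusScheduler, req_id: &RequestId) {
    // delete_by_request_id removes at most one scheduled TC per call, so loop
    // until no further match with this request ID is reported.
    while scheduler.delete_by_request_id(req_id).is_some() {}
}
```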
@ -696,7 +693,7 @@ pub mod alloc_mod {
releaser: R, releaser: R,
tc_store: &mut (impl PoolProvider + ?Sized), tc_store: &mut (impl PoolProvider + ?Sized),
tc_buf: &mut [u8], tc_buf: &mut [u8],
) -> Result<u64, (u64, StoreError)> { ) -> Result<u64, (u64, PoolError)> {
self.release_telecommands_internal(releaser, tc_store, Some(tc_buf)) self.release_telecommands_internal(releaser, tc_store, Some(tc_buf))
} }
@ -710,7 +707,7 @@ pub mod alloc_mod {
&mut self, &mut self,
releaser: R, releaser: R,
tc_store: &mut (impl PoolProvider + ?Sized), tc_store: &mut (impl PoolProvider + ?Sized),
) -> Result<u64, (u64, StoreError)> { ) -> Result<u64, (u64, PoolError)> {
self.release_telecommands_internal(releaser, tc_store, None) self.release_telecommands_internal(releaser, tc_store, None)
} }
@ -719,7 +716,7 @@ pub mod alloc_mod {
mut releaser: R, mut releaser: R,
tc_store: &mut (impl PoolProvider + ?Sized), tc_store: &mut (impl PoolProvider + ?Sized),
mut tc_buf: Option<&mut [u8]>, mut tc_buf: Option<&mut [u8]>,
) -> Result<u64, (u64, StoreError)> { ) -> Result<u64, (u64, PoolError)> {
let tcs_to_release = self.telecommands_to_release(); let tcs_to_release = self.telecommands_to_release();
let mut released_tcs = 0; let mut released_tcs = 0;
let mut store_error = Ok(()); let mut store_error = Ok(());
@ -765,7 +762,7 @@ pub mod alloc_mod {
mut releaser: R, mut releaser: R,
tc_store: &(impl PoolProvider + ?Sized), tc_store: &(impl PoolProvider + ?Sized),
tc_buf: &mut [u8], tc_buf: &mut [u8],
) -> Result<alloc::vec::Vec<TcInfo>, (alloc::vec::Vec<TcInfo>, StoreError)> { ) -> Result<alloc::vec::Vec<TcInfo>, (alloc::vec::Vec<TcInfo>, PoolError)> {
let tcs_to_release = self.telecommands_to_release(); let tcs_to_release = self.telecommands_to_release();
let mut released_tcs = alloc::vec::Vec::new(); let mut released_tcs = alloc::vec::Vec::new();
for tc in tcs_to_release { for tc in tcs_to_release {
@ -796,7 +793,7 @@ pub mod alloc_mod {
/// The holding store for the telecommands needs to be passed so all the stored telecommands /// The holding store for the telecommands needs to be passed so all the stored telecommands
/// can be deleted to avoid a memory leak. If at least one deletion operation fails, the error /// can be deleted to avoid a memory leak. If at least one deletion operation fails, the error
/// will be returned but the method will still try to delete all the commands in the schedule. /// will be returned but the method will still try to delete all the commands in the schedule.
fn reset(&mut self, store: &mut (impl PoolProvider + ?Sized)) -> Result<(), StoreError> { fn reset(&mut self, store: &mut (impl PoolProvider + ?Sized)) -> Result<(), PoolError> {
self.enabled = false; self.enabled = false;
let mut deletion_ok = Ok(()); let mut deletion_ok = Ok(());
for tc_lists in &mut self.tc_map { for tc_lists in &mut self.tc_map {
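A sketch of the reset contract described above (hypothetical scheduler and pool; trait and import paths assumed): the call disables the scheduler and frees every stored TC so the backing pool does not leak:

```rust
use satrs::pool::{PoolError, StaticMemoryPool};
use satrs::pus::scheduler::{PusScheduler, PusSchedulerProvider};

fn wipe_schedule(
    scheduler: &mut PusScheduler,
    tc_pool: &mut StaticMemoryPool,
) -> Result<(), PoolError> {
    // Disables the scheduler and deletes all scheduled TCs from the pool.
    scheduler.reset(tc_pool)?;
    assert!(!scheduler.is_enabled());
    Ok(())
}
```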
@ -854,7 +851,7 @@ pub mod alloc_mod {
mod tests { mod tests {
use super::*; use super::*;
use crate::pool::{ use crate::pool::{
PoolProvider, StaticMemoryPool, StaticPoolAddr, StaticPoolConfig, StoreAddr, StoreError, PoolAddr, PoolError, PoolProvider, StaticMemoryPool, StaticPoolAddr, StaticPoolConfig,
}; };
use alloc::collections::btree_map::Range; use alloc::collections::btree_map::Range;
use spacepackets::ecss::tc::{PusTcCreator, PusTcReader, PusTcSecondaryHeader}; use spacepackets::ecss::tc::{PusTcCreator, PusTcReader, PusTcSecondaryHeader};
@ -993,7 +990,7 @@ mod tests {
.insert_unwrapped_and_stored_tc( .insert_unwrapped_and_stored_tc(
UnixTime::new_only_secs(100), UnixTime::new_only_secs(100),
TcInfo::new( TcInfo::new(
StoreAddr::from(StaticPoolAddr { PoolAddr::from(StaticPoolAddr {
pool_idx: 0, pool_idx: 0,
packet_idx: 1, packet_idx: 1,
}), }),
@ -1010,7 +1007,7 @@ mod tests {
.insert_unwrapped_and_stored_tc( .insert_unwrapped_and_stored_tc(
UnixTime::new_only_secs(100), UnixTime::new_only_secs(100),
TcInfo::new( TcInfo::new(
StoreAddr::from(StaticPoolAddr { PoolAddr::from(StaticPoolAddr {
pool_idx: 0, pool_idx: 0,
packet_idx: 2, packet_idx: 2,
}), }),
@ -1054,8 +1051,8 @@ mod tests {
fn common_check( fn common_check(
enabled: bool, enabled: bool,
store_addr: &StoreAddr, store_addr: &PoolAddr,
expected_store_addrs: Vec<StoreAddr>, expected_store_addrs: Vec<PoolAddr>,
counter: &mut usize, counter: &mut usize,
) { ) {
assert!(enabled); assert!(enabled);
@ -1064,8 +1061,8 @@ mod tests {
} }
fn common_check_disabled( fn common_check_disabled(
enabled: bool, enabled: bool,
store_addr: &StoreAddr, store_addr: &PoolAddr,
expected_store_addrs: Vec<StoreAddr>, expected_store_addrs: Vec<PoolAddr>,
counter: &mut usize, counter: &mut usize,
) { ) {
assert!(!enabled); assert!(!enabled);
@ -1519,7 +1516,7 @@ mod tests {
// TC could not even be read.. // TC could not even be read..
assert_eq!(err.0, 0); assert_eq!(err.0, 0);
match err.1 { match err.1 {
StoreError::DataDoesNotExist(addr) => { PoolError::DataDoesNotExist(addr) => {
assert_eq!(tc_info_0.addr(), addr); assert_eq!(tc_info_0.addr(), addr);
} }
_ => panic!("unexpected error {}", err.1), _ => panic!("unexpected error {}", err.1),
@ -1542,7 +1539,7 @@ mod tests {
assert!(reset_res.is_err()); assert!(reset_res.is_err());
let err = reset_res.unwrap_err(); let err = reset_res.unwrap_err();
match err { match err {
StoreError::DataDoesNotExist(addr) => { PoolError::DataDoesNotExist(addr) => {
assert_eq!(addr, tc_info_0.addr()); assert_eq!(addr, tc_info_0.addr());
} }
_ => panic!("unexpected error {err}"), _ => panic!("unexpected error {err}"),
@ -1644,7 +1641,7 @@ mod tests {
let err = insert_res.unwrap_err(); let err = insert_res.unwrap_err();
match err { match err {
ScheduleError::StoreError(e) => match e { ScheduleError::StoreError(e) => match e {
StoreError::StoreFull(_) => {} PoolError::StoreFull(_) => {}
_ => panic!("unexpected store error {e}"), _ => panic!("unexpected store error {e}"),
}, },
_ => panic!("unexpected error {err}"), _ => panic!("unexpected error {err}"),


@ -1,12 +1,12 @@
use super::scheduler::PusSchedulerProvider; use super::scheduler::PusSchedulerProvider;
use super::verification::{VerificationReporter, VerificationReportingProvider}; use super::verification::{VerificationReporter, VerificationReportingProvider};
use super::{ use super::{
EcssTcInMemConverter, EcssTcInSharedStoreConverter, EcssTcInVecConverter, EcssTcReceiverCore, EcssTcInMemConverter, EcssTcInSharedStoreConverter, EcssTcInVecConverter, EcssTcReceiver,
EcssTmSenderCore, MpscTcReceiver, MpscTmInSharedPoolSender, MpscTmInSharedPoolSenderBounded, EcssTmSender, MpscTcReceiver, PusServiceHelper,
PusServiceHelper, PusTmAsVec,
}; };
use crate::pool::PoolProvider; use crate::pool::PoolProvider;
use crate::pus::{PusPacketHandlerResult, PusPacketHandlingError}; use crate::pus::{PusPacketHandlerResult, PusPacketHandlingError};
use crate::tmtc::{PacketAsVec, PacketSenderWithSharedPool};
use alloc::string::ToString; use alloc::string::ToString;
use spacepackets::ecss::{scheduling, PusPacket}; use spacepackets::ecss::{scheduling, PusPacket};
use spacepackets::time::cds::CdsTime; use spacepackets::time::cds::CdsTime;
@ -21,8 +21,8 @@ use std::sync::mpsc;
/// [Self::scheduler] and [Self::scheduler_mut] function and then use the scheduler API to release /// [Self::scheduler] and [Self::scheduler_mut] function and then use the scheduler API to release
/// telecommands when applicable. /// telecommands when applicable.
pub struct PusSchedServiceHandler< pub struct PusSchedServiceHandler<
TcReceiver: EcssTcReceiverCore, TcReceiver: EcssTcReceiver,
TmSender: EcssTmSenderCore, TmSender: EcssTmSender,
TcInMemConverter: EcssTcInMemConverter, TcInMemConverter: EcssTcInMemConverter,
VerificationReporter: VerificationReportingProvider, VerificationReporter: VerificationReportingProvider,
PusScheduler: PusSchedulerProvider, PusScheduler: PusSchedulerProvider,
@ -33,8 +33,8 @@ pub struct PusSchedServiceHandler<
} }
impl< impl<
TcReceiver: EcssTcReceiverCore, TcReceiver: EcssTcReceiver,
TmSender: EcssTmSenderCore, TmSender: EcssTmSender,
TcInMemConverter: EcssTcInMemConverter, TcInMemConverter: EcssTcInMemConverter,
VerificationReporter: VerificationReportingProvider, VerificationReporter: VerificationReportingProvider,
Scheduler: PusSchedulerProvider, Scheduler: PusSchedulerProvider,
@ -212,7 +212,7 @@ impl<
/// mpsc queues. /// mpsc queues.
 pub type PusService11SchedHandlerDynWithMpsc<PusScheduler> = PusSchedServiceHandler<
     MpscTcReceiver,
-    mpsc::Sender<PusTmAsVec>,
+    mpsc::Sender<PacketAsVec>,
     EcssTcInVecConverter,
     VerificationReporter,
     PusScheduler,
@@ -221,7 +221,7 @@ pub type PusService11SchedHandlerDynWithMpsc<PusScheduler> = PusSchedServiceHand
 /// queues.
 pub type PusService11SchedHandlerDynWithBoundedMpsc<PusScheduler> = PusSchedServiceHandler<
     MpscTcReceiver,
-    mpsc::SyncSender<PusTmAsVec>,
+    mpsc::SyncSender<PacketAsVec>,
     EcssTcInVecConverter,
     VerificationReporter,
     PusScheduler,
@@ -230,7 +230,7 @@ pub type PusService11SchedHandlerDynWithBoundedMpsc<PusScheduler> = PusSchedServ
 /// mpsc queues.
 pub type PusService11SchedHandlerStaticWithMpsc<PusScheduler> = PusSchedServiceHandler<
     MpscTcReceiver,
-    MpscTmInSharedPoolSender,
+    PacketSenderWithSharedPool,
     EcssTcInSharedStoreConverter,
     VerificationReporter,
     PusScheduler,
@@ -239,7 +239,7 @@ pub type PusService11SchedHandlerStaticWithMpsc<PusScheduler> = PusSchedServiceH
 /// mpsc queues.
 pub type PusService11SchedHandlerStaticWithBoundedMpsc<PusScheduler> = PusSchedServiceHandler<
     MpscTcReceiver,
-    MpscTmInSharedPoolSenderBounded,
+    PacketSenderWithSharedPool,
     EcssTcInSharedStoreConverter,
     VerificationReporter,
     PusScheduler,
@@ -257,10 +257,8 @@ mod tests {
         verification::{RequestId, TcStateAccepted, VerificationToken},
         EcssTcInSharedStoreConverter,
     };
-    use crate::pus::{
-        MpscTcReceiver, MpscTmInSharedPoolSenderBounded, PusPacketHandlerResult,
-        PusPacketHandlingError,
-    };
+    use crate::pus::{MpscTcReceiver, PusPacketHandlerResult, PusPacketHandlingError};
+    use crate::tmtc::PacketSenderWithSharedPool;
     use alloc::collections::VecDeque;
     use delegate::delegate;
     use spacepackets::ecss::scheduling::Subservice;
@@ -279,7 +277,7 @@ mod tests {
         common: PusServiceHandlerWithSharedStoreCommon,
         handler: PusSchedServiceHandler<
             MpscTcReceiver,
-            MpscTmInSharedPoolSenderBounded,
+            PacketSenderWithSharedPool,
             EcssTcInSharedStoreConverter,
             VerificationReporter,
             TestScheduler,
@@ -317,9 +315,13 @@ mod tests {
                 .expect("acceptance success failure")
         }

+        fn send_tc(&self, token: &VerificationToken<TcStateAccepted>, tc: &PusTcCreator) {
+            self.common
+                .send_tc(self.handler.service_helper.id(), token, tc);
+        }
+
         delegate! {
             to self.common {
-                fn send_tc(&self, token: &VerificationToken<TcStateAccepted>, tc: &PusTcCreator);
                 fn read_next_tm(&mut self) -> PusTmReader<'_>;
                 fn check_no_tm_available(&self) -> bool;
                 fn check_next_verification_tm(&self, subservice: u8, expected_request_id: RequestId);
@@ -342,7 +344,7 @@ mod tests {
         fn reset(
             &mut self,
             _store: &mut (impl crate::pool::PoolProvider + ?Sized),
-        ) -> Result<(), crate::pool::StoreError> {
+        ) -> Result<(), crate::pool::PoolError> {
             self.reset_count += 1;
             Ok(())
         }
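
The handler rework above swaps the TM-specific queue payload types for the generic PacketAsVec message. A minimal sketch of feeding such a queue, assuming the satrs::tmtc re-exports introduced later in this commit and a purely illustrative sender ID:

    use std::sync::mpsc;

    use satrs::tmtc::{PacketAsVec, PacketSenderRaw};

    fn main() {
        // Unbounded channel carrying heap-allocated packets, as used by the
        // *DynWithMpsc handler variants above.
        let (tm_tx, tm_rx) = mpsc::channel::<PacketAsVec>();
        // The PacketSenderRaw implementation for mpsc senders pairs the raw
        // bytes with the sending component's ID (1 is purely illustrative).
        tm_tx.send_packet(1, &[0x17, 0x2a]).expect("sending TM failed");
        let received = tm_rx.try_recv().unwrap();
        assert_eq!(received.sender_id, 1);
        assert_eq!(received.packet, vec![0x17, 0x2a]);
    }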

View File

@@ -1,7 +1,7 @@
 use crate::pus::{
-    PartialPusHandlingError, PusPacketHandlerResult, PusPacketHandlingError, PusTmAsVec,
-    PusTmInPool, PusTmVariant,
+    PartialPusHandlingError, PusPacketHandlerResult, PusPacketHandlingError, PusTmVariant,
 };
+use crate::tmtc::{PacketAsVec, PacketSenderWithSharedPool};
 use spacepackets::ecss::tm::{PusTmCreator, PusTmSecondaryHeader};
 use spacepackets::ecss::PusPacket;
 use spacepackets::SpHeader;
@@ -9,16 +9,15 @@ use std::sync::mpsc;
 use super::verification::{VerificationReporter, VerificationReportingProvider};
 use super::{
-    EcssTcInMemConverter, EcssTcInSharedStoreConverter, EcssTcInVecConverter, EcssTcReceiverCore,
-    EcssTmSenderCore, GenericConversionError, MpscTcReceiver, MpscTmInSharedPoolSender,
-    MpscTmInSharedPoolSenderBounded, PusServiceHelper,
+    EcssTcInMemConverter, EcssTcInSharedStoreConverter, EcssTcInVecConverter, EcssTcReceiver,
+    EcssTmSender, GenericConversionError, MpscTcReceiver, PusServiceHelper,
 };

 /// This is a helper class for [std] environments to handle generic PUS 17 (test service) packets.
 /// This handler only processes ping requests and generates a ping reply for them accordingly.
 pub struct PusService17TestHandler<
-    TcReceiver: EcssTcReceiverCore,
-    TmSender: EcssTmSenderCore,
+    TcReceiver: EcssTcReceiver,
+    TmSender: EcssTmSender,
     TcInMemConverter: EcssTcInMemConverter,
     VerificationReporter: VerificationReportingProvider,
 > {
@@ -27,8 +26,8 @@ pub struct PusService17TestHandler<
 }

 impl<
-        TcReceiver: EcssTcReceiverCore,
-        TmSender: EcssTmSenderCore,
+        TcReceiver: EcssTcReceiver,
+        TmSender: EcssTmSender,
         TcInMemConverter: EcssTcInMemConverter,
         VerificationReporter: VerificationReportingProvider,
     > PusService17TestHandler<TcReceiver, TmSender, TcInMemConverter, VerificationReporter>
@@ -127,7 +126,7 @@ impl<
 /// mpsc queues.
 pub type PusService17TestHandlerDynWithMpsc = PusService17TestHandler<
     MpscTcReceiver,
-    mpsc::Sender<PusTmAsVec>,
+    mpsc::Sender<PacketAsVec>,
     EcssTcInVecConverter,
     VerificationReporter,
 >;
@@ -135,23 +134,15 @@ pub type PusService17TestHandlerDynWithMpsc = PusService17TestHandler<
 /// queues.
 pub type PusService17TestHandlerDynWithBoundedMpsc = PusService17TestHandler<
     MpscTcReceiver,
-    mpsc::SyncSender<PusTmInPool>,
+    mpsc::SyncSender<PacketAsVec>,
     EcssTcInVecConverter,
     VerificationReporter,
 >;

-/// Helper type definition for a PUS 17 handler with a shared store TMTC memory backend and regular
-/// mpsc queues.
-pub type PusService17TestHandlerStaticWithMpsc = PusService17TestHandler<
-    MpscTcReceiver,
-    MpscTmInSharedPoolSender,
-    EcssTcInSharedStoreConverter,
-    VerificationReporter,
->;
-
 /// Helper type definition for a PUS 17 handler with a shared store TMTC memory backend and bounded
 /// mpsc queues.
 pub type PusService17TestHandlerStaticWithBoundedMpsc = PusService17TestHandler<
     MpscTcReceiver,
-    MpscTmInSharedPoolSenderBounded,
+    PacketSenderWithSharedPool,
     EcssTcInSharedStoreConverter,
     VerificationReporter,
 >;
@@ -168,9 +159,9 @@ mod tests {
     use crate::pus::verification::{TcStateAccepted, VerificationToken};
     use crate::pus::{
         EcssTcInSharedStoreConverter, EcssTcInVecConverter, GenericConversionError, MpscTcReceiver,
-        MpscTmAsVecSender, MpscTmInSharedPoolSenderBounded, PusPacketHandlerResult,
-        PusPacketHandlingError,
+        MpscTmAsVecSender, PusPacketHandlerResult, PusPacketHandlingError,
     };
+    use crate::tmtc::PacketSenderWithSharedPool;
     use crate::ComponentId;
     use delegate::delegate;
     use spacepackets::ecss::tc::{PusTcCreator, PusTcSecondaryHeader};
@@ -185,7 +176,7 @@ mod tests {
         common: PusServiceHandlerWithSharedStoreCommon,
         handler: PusService17TestHandler<
             MpscTcReceiver,
-            MpscTmInSharedPoolSenderBounded,
+            PacketSenderWithSharedPool,
             EcssTcInSharedStoreConverter,
             VerificationReporter,
         >,
@@ -212,10 +203,14 @@ mod tests {
                 .expect("acceptance success failure")
         }

+        fn send_tc(&self, token: &VerificationToken<TcStateAccepted>, tc: &PusTcCreator) {
+            self.common
+                .send_tc(self.handler.service_helper.id(), token, tc);
+        }
+
         delegate! {
             to self.common {
                 fn read_next_tm(&mut self) -> PusTmReader<'_>;
-                fn send_tc(&self, token: &VerificationToken<TcStateAccepted>, tc: &PusTcCreator);
                 fn check_no_tm_available(&self) -> bool;
                 fn check_next_verification_tm(
                     &self,
@@ -263,9 +258,13 @@ mod tests {
                 .expect("acceptance success failure")
         }

+        fn send_tc(&self, token: &VerificationToken<TcStateAccepted>, tc: &PusTcCreator) {
+            self.common
+                .send_tc(self.handler.service_helper.id(), token, tc);
+        }
+
         delegate! {
             to self.common {
-                fn send_tc(&self, token: &VerificationToken<TcStateAccepted>, tc: &PusTcCreator);
                 fn read_next_tm(&mut self) -> PusTmReader<'_>;
                 fn check_no_tm_available(&self) -> bool;
                 fn check_next_verification_tm(
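
Since PusService17TestHandler only processes ping requests, the telecommand it expects can be built with spacepackets roughly as follows; a sketch with illustrative APID and sequence count:

    use spacepackets::ecss::tc::PusTcCreator;
    use spacepackets::ecss::WritablePusPacket;
    use spacepackets::SpHeader;

    fn main() {
        let sp_header = SpHeader::new_for_unseg_tc(0x002, 0x34, 0);
        // PUS service 17, subservice 1: the ping request which the handler
        // answers with a ping reply.
        let ping_tc = PusTcCreator::new_simple(sp_header, 17, 1, &[], true);
        let tc_vec = ping_tc.to_vec().expect("writing ping TC failed");
        assert_eq!(tc_vec.len(), 13);
    }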

View File

@@ -19,10 +19,9 @@
 //! use satrs::pus::verification::{
 //!     VerificationReportingProvider, VerificationReporterCfg, VerificationReporter
 //! };
+//! use satrs::tmtc::{SharedStaticMemoryPool, PacketSenderWithSharedPool};
 //! use satrs::seq_count::SeqCountProviderSimple;
 //! use satrs::request::UniqueApidTargetId;
-//! use satrs::pus::MpscTmInSharedPoolSender;
-//! use satrs::tmtc::tm_helper::SharedTmPool;
 //! use spacepackets::ecss::PusPacket;
 //! use spacepackets::SpHeader;
 //! use spacepackets::ecss::tc::{PusTcCreator, PusTcSecondaryHeader};
@@ -34,10 +33,9 @@
 //!
 //! let pool_cfg = StaticPoolConfig::new(vec![(10, 32), (10, 64), (10, 128), (10, 1024)], false);
 //! let tm_pool = StaticMemoryPool::new(pool_cfg.clone());
-//! let shared_tm_store = SharedTmPool::new(tm_pool);
-//! let tm_store = shared_tm_store.clone_backing_pool();
-//! let (verif_tx, verif_rx) = mpsc::channel();
-//! let sender = MpscTmInSharedPoolSender::new(shared_tm_store, verif_tx);
+//! let shared_tm_pool = SharedStaticMemoryPool::new(RwLock::new(tm_pool));
+//! let (verif_tx, verif_rx) = mpsc::sync_channel(10);
+//! let sender = PacketSenderWithSharedPool::new_with_shared_packet_pool(verif_tx, &shared_tm_pool);
 //! let cfg = VerificationReporterCfg::new(TEST_APID, 1, 2, 8).unwrap();
 //! let mut reporter = VerificationReporter::new(TEST_COMPONENT_ID.id(), &cfg);
 //!
@@ -61,7 +59,7 @@
 //! let tm_in_store = verif_rx.recv_timeout(Duration::from_millis(10)).unwrap();
 //! let tm_len;
 //! {
-//!     let mut rg = tm_store.write().expect("Error locking shared pool");
+//!     let mut rg = shared_tm_pool.write().expect("Error locking shared pool");
 //!     let store_guard = rg.read_with_guard(tm_in_store.store_addr);
 //!     tm_len = store_guard.read(&mut tm_buf).expect("Error reading TM slice");
 //! }
@@ -81,7 +79,7 @@
 //! The [integration test](https://egit.irs.uni-stuttgart.de/rust/fsrc-launchpad/src/branch/main/fsrc-core/tests/verification_test.rs)
 //! for the verification module contains examples how this module could be used in a more complex
 //! context involving multiple threads
-use crate::pus::{source_buffer_large_enough, EcssTmSenderCore, EcssTmtcError};
+use crate::pus::{source_buffer_large_enough, EcssTmSender, EcssTmtcError};
 use core::fmt::{Debug, Display, Formatter};
 use core::hash::{Hash, Hasher};
 use core::marker::PhantomData;
@@ -425,35 +423,35 @@ pub trait VerificationReportingProvider {
     fn acceptance_success(
         &self,
-        sender: &(impl EcssTmSenderCore + ?Sized),
+        sender: &(impl EcssTmSender + ?Sized),
         token: VerificationToken<TcStateNone>,
         time_stamp: &[u8],
     ) -> Result<VerificationToken<TcStateAccepted>, EcssTmtcError>;

     fn acceptance_failure(
         &self,
-        sender: &(impl EcssTmSenderCore + ?Sized),
+        sender: &(impl EcssTmSender + ?Sized),
         token: VerificationToken<TcStateNone>,
         params: FailParams,
     ) -> Result<(), EcssTmtcError>;

     fn start_success(
         &self,
-        sender: &(impl EcssTmSenderCore + ?Sized),
+        sender: &(impl EcssTmSender + ?Sized),
         token: VerificationToken<TcStateAccepted>,
         time_stamp: &[u8],
     ) -> Result<VerificationToken<TcStateStarted>, EcssTmtcError>;

     fn start_failure(
         &self,
-        sender: &(impl EcssTmSenderCore + ?Sized),
+        sender: &(impl EcssTmSender + ?Sized),
         token: VerificationToken<TcStateAccepted>,
         params: FailParams,
     ) -> Result<(), EcssTmtcError>;

     fn step_success(
         &self,
-        sender: &(impl EcssTmSenderCore + ?Sized),
+        sender: &(impl EcssTmSender + ?Sized),
         token: &VerificationToken<TcStateStarted>,
         time_stamp: &[u8],
         step: impl EcssEnumeration,
@@ -461,21 +459,21 @@ pub trait VerificationReportingProvider {
     fn step_failure(
         &self,
-        sender: &(impl EcssTmSenderCore + ?Sized),
+        sender: &(impl EcssTmSender + ?Sized),
         token: VerificationToken<TcStateStarted>,
         params: FailParamsWithStep,
     ) -> Result<(), EcssTmtcError>;

     fn completion_success<TcState: WasAtLeastAccepted + Copy>(
         &self,
-        sender: &(impl EcssTmSenderCore + ?Sized),
+        sender: &(impl EcssTmSender + ?Sized),
         token: VerificationToken<TcState>,
         time_stamp: &[u8],
     ) -> Result<(), EcssTmtcError>;

     fn completion_failure<TcState: WasAtLeastAccepted + Copy>(
         &self,
-        sender: &(impl EcssTmSenderCore + ?Sized),
+        sender: &(impl EcssTmSender + ?Sized),
         token: VerificationToken<TcState>,
         params: FailParams,
     ) -> Result<(), EcssTmtcError>;
@@ -886,7 +884,7 @@ pub mod alloc_mod {
     use spacepackets::ecss::PusError;

     use super::*;
-    use crate::{pus::PusTmVariant, ComponentId};
+    use crate::pus::PusTmVariant;
     use core::cell::RefCell;

     #[derive(Clone)]
@@ -1027,7 +1025,7 @@ pub mod alloc_mod {
         /// Package and send a PUS TM\[1, 1\] packet, see 8.1.2.1 of the PUS standard
         fn acceptance_success(
             &self,
-            sender: &(impl EcssTmSenderCore + ?Sized),
+            sender: &(impl EcssTmSender + ?Sized),
             token: VerificationToken<TcStateNone>,
             time_stamp: &[u8],
         ) -> Result<VerificationToken<TcStateAccepted>, EcssTmtcError> {
@@ -1044,7 +1042,7 @@ pub mod alloc_mod {
         /// Package and send a PUS TM\[1, 2\] packet, see 8.1.2.2 of the PUS standard
         fn acceptance_failure(
             &self,
-            sender: &(impl EcssTmSenderCore + ?Sized),
+            sender: &(impl EcssTmSender + ?Sized),
             token: VerificationToken<TcStateNone>,
             params: FailParams,
         ) -> Result<(), EcssTmtcError> {
@@ -1063,7 +1061,7 @@ pub mod alloc_mod {
         /// Requires a token previously acquired by calling [Self::acceptance_success].
         fn start_success(
             &self,
-            sender: &(impl EcssTmSenderCore + ?Sized),
+            sender: &(impl EcssTmSender + ?Sized),
             token: VerificationToken<TcStateAccepted>,
             time_stamp: &[u8],
         ) -> Result<VerificationToken<TcStateStarted>, EcssTmtcError> {
@@ -1083,7 +1081,7 @@ pub mod alloc_mod {
         /// the token because verification handling is done.
         fn start_failure(
             &self,
-            sender: &(impl EcssTmSenderCore + ?Sized),
+            sender: &(impl EcssTmSender + ?Sized),
             token: VerificationToken<TcStateAccepted>,
             params: FailParams,
         ) -> Result<(), EcssTmtcError> {
@@ -1102,7 +1100,7 @@ pub mod alloc_mod {
         /// Requires a token previously acquired by calling [Self::start_success].
         fn step_success(
             &self,
-            sender: &(impl EcssTmSenderCore + ?Sized),
+            sender: &(impl EcssTmSender + ?Sized),
             token: &VerificationToken<TcStateStarted>,
             time_stamp: &[u8],
             step: impl EcssEnumeration,
@@ -1123,7 +1121,7 @@ pub mod alloc_mod {
         /// token because verification handling is done.
         fn step_failure(
             &self,
-            sender: &(impl EcssTmSenderCore + ?Sized),
+            sender: &(impl EcssTmSender + ?Sized),
             token: VerificationToken<TcStateStarted>,
             params: FailParamsWithStep,
         ) -> Result<(), EcssTmtcError> {
@@ -1144,7 +1142,7 @@ pub mod alloc_mod {
         fn completion_success<TcState: WasAtLeastAccepted + Copy>(
             &self,
             // sender_id: ComponentId,
-            sender: &(impl EcssTmSenderCore + ?Sized),
+            sender: &(impl EcssTmSender + ?Sized),
             token: VerificationToken<TcState>,
             time_stamp: &[u8],
         ) -> Result<(), EcssTmtcError> {
@@ -1164,7 +1162,7 @@ pub mod alloc_mod {
         /// token because verification handling is done.
         fn completion_failure<TcState: WasAtLeastAccepted + Copy>(
             &self,
-            sender: &(impl EcssTmSenderCore + ?Sized),
+            sender: &(impl EcssTmSender + ?Sized),
             token: VerificationToken<TcState>,
             params: FailParams,
         ) -> Result<(), EcssTmtcError> {
@@ -1269,7 +1267,7 @@ pub mod test_util {
         fn acceptance_success(
             &self,
-            _sender: &(impl EcssTmSenderCore + ?Sized),
+            _sender: &(impl EcssTmSender + ?Sized),
             token: VerificationToken<TcStateNone>,
             time_stamp: &[u8],
         ) -> Result<VerificationToken<TcStateAccepted>, EcssTmtcError> {
@@ -1288,7 +1286,7 @@ pub mod test_util {
         fn acceptance_failure(
             &self,
-            _sender: &(impl EcssTmSenderCore + ?Sized),
+            _sender: &(impl EcssTmSender + ?Sized),
             token: VerificationToken<TcStateNone>,
             params: FailParams,
         ) -> Result<(), EcssTmtcError> {
@@ -1306,7 +1304,7 @@ pub mod test_util {
         fn start_success(
             &self,
-            _sender: &(impl EcssTmSenderCore + ?Sized),
+            _sender: &(impl EcssTmSender + ?Sized),
             token: VerificationToken<TcStateAccepted>,
             time_stamp: &[u8],
         ) -> Result<VerificationToken<TcStateStarted>, EcssTmtcError> {
@@ -1325,7 +1323,7 @@ pub mod test_util {
         fn start_failure(
             &self,
-            _sender: &(impl EcssTmSenderCore + ?Sized),
+            _sender: &(impl EcssTmSender + ?Sized),
             token: VerificationToken<super::TcStateAccepted>,
             params: FailParams,
         ) -> Result<(), EcssTmtcError> {
@@ -1343,7 +1341,7 @@ pub mod test_util {
         fn step_success(
             &self,
-            _sender: &(impl EcssTmSenderCore + ?Sized),
+            _sender: &(impl EcssTmSender + ?Sized),
             token: &VerificationToken<TcStateStarted>,
             time_stamp: &[u8],
             step: impl EcssEnumeration,
@@ -1363,7 +1361,7 @@ pub mod test_util {
         fn step_failure(
             &self,
-            _sender: &(impl EcssTmSenderCore + ?Sized),
+            _sender: &(impl EcssTmSender + ?Sized),
             token: VerificationToken<TcStateStarted>,
             params: FailParamsWithStep,
         ) -> Result<(), EcssTmtcError> {
@@ -1381,7 +1379,7 @@ pub mod test_util {
         fn completion_success<TcState: super::WasAtLeastAccepted + Copy>(
             &self,
-            _sender: &(impl EcssTmSenderCore + ?Sized),
+            _sender: &(impl EcssTmSender + ?Sized),
             token: VerificationToken<TcState>,
             time_stamp: &[u8],
         ) -> Result<(), EcssTmtcError> {
@@ -1397,7 +1395,7 @@ pub mod test_util {
         fn completion_failure<TcState: WasAtLeastAccepted + Copy>(
             &self,
-            _sender: &(impl EcssTmSenderCore + ?Sized),
+            _sender: &(impl EcssTmSender + ?Sized),
             token: VerificationToken<TcState>,
             params: FailParams,
         ) -> Result<(), EcssTmtcError> {
@@ -1636,17 +1634,17 @@ pub mod test_util {
 #[cfg(test)]
 pub mod tests {
-    use crate::pool::{StaticMemoryPool, StaticPoolConfig};
+    use crate::pool::{SharedStaticMemoryPool, StaticMemoryPool, StaticPoolConfig};
     use crate::pus::test_util::{TEST_APID, TEST_COMPONENT_ID_0};
     use crate::pus::tests::CommonTmInfo;
     use crate::pus::verification::{
-        EcssTmSenderCore, EcssTmtcError, FailParams, FailParamsWithStep, RequestId, TcStateNone,
+        EcssTmSender, EcssTmtcError, FailParams, FailParamsWithStep, RequestId, TcStateNone,
         VerificationReporter, VerificationReporterCfg, VerificationToken,
     };
-    use crate::pus::{ChannelWithId, MpscTmInSharedPoolSender, PusTmVariant};
+    use crate::pus::{ChannelWithId, PusTmVariant};
     use crate::request::MessageMetadata;
     use crate::seq_count::{CcsdsSimpleSeqCountProvider, SequenceCountProviderCore};
-    use crate::tmtc::tm_helper::SharedTmPool;
+    use crate::tmtc::{PacketSenderWithSharedPool, SharedPacketPool};
     use crate::ComponentId;
     use alloc::format;
     use spacepackets::ecss::tc::{PusTcCreator, PusTcReader, PusTcSecondaryHeader};
@@ -1658,7 +1656,7 @@ pub mod tests {
     use spacepackets::{ByteConversionError, SpHeader};
     use std::cell::RefCell;
     use std::collections::VecDeque;
-    use std::sync::mpsc;
+    use std::sync::{mpsc, RwLock};
     use std::vec;
     use std::vec::Vec;
@@ -1694,7 +1692,7 @@ pub mod tests {
         }
     }

-    impl EcssTmSenderCore for TestSender {
+    impl EcssTmSender for TestSender {
         fn send_tm(&self, sender_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> {
             match tm {
                 PusTmVariant::InStore(_) => {
@@ -2128,9 +2126,10 @@ pub mod tests {
     #[test]
     fn test_mpsc_verif_send() {
         let pool = StaticMemoryPool::new(StaticPoolConfig::new(vec![(8, 8)], false));
-        let shared_tm_store = SharedTmPool::new(pool);
-        let (tx, _) = mpsc::channel();
-        let mpsc_verif_sender = MpscTmInSharedPoolSender::new(shared_tm_store, tx);
+        let shared_tm_store =
+            SharedPacketPool::new(&SharedStaticMemoryPool::new(RwLock::new(pool)));
+        let (tx, _) = mpsc::sync_channel(10);
+        let mpsc_verif_sender = PacketSenderWithSharedPool::new(tx, shared_tm_store);
         is_send(&mpsc_verif_sender);
     }
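
The updated doc example amounts to the following construction pattern for a pool-backed verification sender; a standalone sketch assuming the satrs::tmtc re-exports shown in this commit, with arbitrary pool dimensions and queue depth:

    use std::sync::{mpsc, RwLock};

    use satrs::pool::{StaticMemoryPool, StaticPoolConfig};
    use satrs::tmtc::{PacketSenderWithSharedPool, SharedStaticMemoryPool};

    fn main() {
        // Static pool with two bucket sizes; the dimensions are arbitrary.
        let pool_cfg = StaticPoolConfig::new(vec![(10, 32), (10, 64)], false);
        let tm_pool = StaticMemoryPool::new(pool_cfg);
        let shared_tm_pool = SharedStaticMemoryPool::new(RwLock::new(tm_pool));
        // Bounded channel: a full queue maps to GenericSendError::QueueFull.
        let (verif_tx, _verif_rx) = mpsc::sync_channel(10);
        let _sender =
            PacketSenderWithSharedPool::new_with_shared_packet_pool(verif_tx, &shared_tm_pool);
    }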

View File

@@ -193,8 +193,6 @@ impl<MSG, R: MessageReceiver<MSG>> MessageReceiverWithId<MSG, R> {
 #[cfg(feature = "alloc")]
 pub mod alloc_mod {
-    use core::marker::PhantomData;
-
     use crate::queue::GenericSendError;

     use super::*;
@@ -333,7 +331,7 @@ pub mod std_mod {
     use super::*;
     use std::sync::mpsc;

-    use crate::queue::{GenericReceiveError, GenericSendError, GenericTargetedMessagingError};
+    use crate::queue::{GenericReceiveError, GenericSendError};

     impl<MSG: Send> MessageSender<MSG> for mpsc::Sender<GenericMessage<MSG>> {
         fn send(&self, message: GenericMessage<MSG>) -> Result<(), GenericTargetedMessagingError> {

View File

@@ -1,403 +0,0 @@
//! CCSDS packet routing components.
//!
//! The routing components consist of two core components:
//! 1. [CcsdsDistributor] component which dispatches received packets to a user-provided handler
//! 2. [CcsdsPacketHandler] trait which should be implemented by the user-provided packet handler.
//!
//! The [CcsdsDistributor] implements the [ReceivesCcsdsTc] and [ReceivesTcCore] trait which allows to
//! pass raw or CCSDS packets to it. Upon receiving a packet, it performs the following steps:
//!
//! 1. It tries to identify the target Application Process Identifier (APID) based on the
//! respective CCSDS space packet header field. If that process fails, a [ByteConversionError] is
//! returned to the user
//! 2. If a valid APID is found and matches one of the APIDs provided by
//! [CcsdsPacketHandler::valid_apids], it will pass the packet to the user provided
//! [CcsdsPacketHandler::handle_known_apid] function. If no valid APID is found, the packet
//! will be passed to the [CcsdsPacketHandler::handle_unknown_apid] function.
//!
//! # Example
//!
//! ```rust
//! use satrs::ValidatorU16Id;
//! use satrs::tmtc::ccsds_distrib::{CcsdsPacketHandler, CcsdsDistributor};
//! use satrs::tmtc::{ReceivesTc, ReceivesTcCore};
//! use spacepackets::{CcsdsPacket, SpHeader};
//! use spacepackets::ecss::WritablePusPacket;
//! use spacepackets::ecss::tc::PusTcCreator;
//!
//! #[derive (Default)]
//! struct ConcreteApidHandler {
//! known_call_count: u32,
//! unknown_call_count: u32
//! }
//!
//! impl ConcreteApidHandler {
//! fn mutable_foo(&mut self) {}
//! }
//!
//! impl ValidatorU16Id for ConcreteApidHandler {
//! fn validate(&self, apid: u16) -> bool { apid == 0x0002 }
//! }
//!
//! impl CcsdsPacketHandler for ConcreteApidHandler {
//! type Error = ();
//! fn handle_packet_with_valid_apid(&mut self, sp_header: &SpHeader, tc_raw: &[u8]) -> Result<(), Self::Error> {
//! assert_eq!(sp_header.apid(), 0x002);
//! assert_eq!(tc_raw.len(), 13);
//! self.known_call_count += 1;
//! Ok(())
//! }
//! fn handle_packet_with_unknown_apid(&mut self, sp_header: &SpHeader, tc_raw: &[u8]) -> Result<(), Self::Error> {
//! assert_eq!(sp_header.apid(), 0x003);
//! assert_eq!(tc_raw.len(), 13);
//! self.unknown_call_count += 1;
//! Ok(())
//! }
//! }
//!
//! let apid_handler = ConcreteApidHandler::default();
//! let mut ccsds_distributor = CcsdsDistributor::new(apid_handler);
//!
//! // Create and pass PUS telecommand with a valid APID
//! let sp_header = SpHeader::new_for_unseg_tc(0x002, 0x34, 0);
//! let mut pus_tc = PusTcCreator::new_simple(sp_header, 17, 1, &[], true);
//! let mut test_buf: [u8; 32] = [0; 32];
//! let mut size = pus_tc
//! .write_to_bytes(test_buf.as_mut_slice())
//! .expect("Error writing TC to buffer");
//! let tc_slice = &test_buf[0..size];
//! ccsds_distributor.pass_tc(&tc_slice).expect("Passing TC slice failed");
//!
//! // Now pass a packet with an unknown APID to the distributor
//! pus_tc.set_apid(0x003);
//! size = pus_tc
//! .write_to_bytes(test_buf.as_mut_slice())
//! .expect("Error writing TC to buffer");
//! let tc_slice = &test_buf[0..size];
//! ccsds_distributor.pass_tc(&tc_slice).expect("Passing TC slice failed");
//!
//! // Retrieve the APID handler.
//! let handler_ref = ccsds_distributor.packet_handler();
//! assert_eq!(handler_ref.known_call_count, 1);
//! assert_eq!(handler_ref.unknown_call_count, 1);
//!
//! // Mutable access to the handler.
//! let mutable_handler_ref = ccsds_distributor.packet_handler_mut();
//! mutable_handler_ref.mutable_foo();
//! ```
use crate::{
tmtc::{ReceivesCcsdsTc, ReceivesTcCore},
ValidatorU16Id,
};
use core::fmt::{Display, Formatter};
use spacepackets::{ByteConversionError, CcsdsPacket, SpHeader};
#[cfg(feature = "std")]
use std::error::Error;
/// Generic trait for a handler or dispatcher object handling CCSDS packets.
///
/// Users should implement this trait on their custom CCSDS packet handler and then pass a boxed
/// instance of this handler to the [CcsdsDistributor]. The distributor will use the trait
/// interface to dispatch received packets to the user based on the Application Process Identifier
/// (APID) field of the CCSDS packet. The APID will be checked using the generic [ValidatorU16Id]
/// trait.
pub trait CcsdsPacketHandler: ValidatorU16Id {
type Error;
fn handle_packet_with_valid_apid(
&mut self,
sp_header: &SpHeader,
tc_raw: &[u8],
) -> Result<(), Self::Error>;
fn handle_packet_with_unknown_apid(
&mut self,
sp_header: &SpHeader,
tc_raw: &[u8],
) -> Result<(), Self::Error>;
}
/// The CCSDS distributor dispatches received CCSDS packets to a user provided packet handler.
pub struct CcsdsDistributor<PacketHandler: CcsdsPacketHandler<Error = E>, E> {
/// User provided APID handler stored as a generic trait object.
/// It can be cast back to the original concrete type using [Self::packet_handler] or
/// the [Self::packet_handler_mut] method.
packet_handler: PacketHandler,
}
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub enum CcsdsError<E> {
CustomError(E),
ByteConversionError(ByteConversionError),
}
impl<E: Display> Display for CcsdsError<E> {
fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result {
match self {
Self::CustomError(e) => write!(f, "{e}"),
Self::ByteConversionError(e) => write!(f, "{e}"),
}
}
}
#[cfg(feature = "std")]
impl<E: Error> Error for CcsdsError<E> {
fn source(&self) -> Option<&(dyn Error + 'static)> {
match self {
Self::CustomError(e) => e.source(),
Self::ByteConversionError(e) => e.source(),
}
}
}
impl<PacketHandler: CcsdsPacketHandler<Error = E>, E: 'static> ReceivesCcsdsTc
for CcsdsDistributor<PacketHandler, E>
{
type Error = CcsdsError<E>;
fn pass_ccsds(&mut self, header: &SpHeader, tc_raw: &[u8]) -> Result<(), Self::Error> {
self.dispatch_ccsds(header, tc_raw)
}
}
impl<PacketHandler: CcsdsPacketHandler<Error = E>, E: 'static> ReceivesTcCore
for CcsdsDistributor<PacketHandler, E>
{
type Error = CcsdsError<E>;
fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error> {
if tc_raw.len() < 7 {
return Err(CcsdsError::ByteConversionError(
ByteConversionError::FromSliceTooSmall {
found: tc_raw.len(),
expected: 7,
},
));
}
let (sp_header, _) =
SpHeader::from_be_bytes(tc_raw).map_err(|e| CcsdsError::ByteConversionError(e))?;
self.dispatch_ccsds(&sp_header, tc_raw)
}
}
impl<PacketHandler: CcsdsPacketHandler<Error = E>, E: 'static> CcsdsDistributor<PacketHandler, E> {
pub fn new(packet_handler: PacketHandler) -> Self {
CcsdsDistributor { packet_handler }
}
pub fn packet_handler(&self) -> &PacketHandler {
&self.packet_handler
}
pub fn packet_handler_mut(&mut self) -> &mut PacketHandler {
&mut self.packet_handler
}
fn dispatch_ccsds(&mut self, sp_header: &SpHeader, tc_raw: &[u8]) -> Result<(), CcsdsError<E>> {
let valid_apid = self.packet_handler().validate(sp_header.apid());
if valid_apid {
self.packet_handler
.handle_packet_with_valid_apid(sp_header, tc_raw)
.map_err(|e| CcsdsError::CustomError(e))?;
return Ok(());
}
self.packet_handler
.handle_packet_with_unknown_apid(sp_header, tc_raw)
.map_err(|e| CcsdsError::CustomError(e))
}
}
#[cfg(test)]
pub(crate) mod tests {
use super::*;
use crate::tmtc::ccsds_distrib::{CcsdsDistributor, CcsdsPacketHandler};
use spacepackets::ecss::tc::PusTcCreator;
use spacepackets::ecss::WritablePusPacket;
use spacepackets::CcsdsPacket;
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::vec::Vec;
fn is_send<T: Send>(_: &T) {}
pub fn generate_ping_tc(buf: &mut [u8]) -> &[u8] {
let sph = SpHeader::new_for_unseg_tc(0x002, 0x34, 0);
let pus_tc = PusTcCreator::new_simple(sph, 17, 1, &[], true);
let size = pus_tc
.write_to_bytes(buf)
.expect("Error writing TC to buffer");
assert_eq!(size, 13);
&buf[0..size]
}
pub fn generate_ping_tc_as_vec() -> Vec<u8> {
let sph = SpHeader::new_for_unseg_tc(0x002, 0x34, 0);
PusTcCreator::new_simple(sph, 17, 1, &[], true)
.to_vec()
.unwrap()
}
type SharedPacketQueue = Arc<Mutex<VecDeque<(u16, Vec<u8>)>>>;
pub struct BasicApidHandlerSharedQueue {
pub known_packet_queue: SharedPacketQueue,
pub unknown_packet_queue: SharedPacketQueue,
}
#[derive(Default)]
pub struct BasicApidHandlerOwnedQueue {
pub known_packet_queue: VecDeque<(u16, Vec<u8>)>,
pub unknown_packet_queue: VecDeque<(u16, Vec<u8>)>,
}
impl ValidatorU16Id for BasicApidHandlerSharedQueue {
fn validate(&self, packet_id: u16) -> bool {
[0x000, 0x002].contains(&packet_id)
}
}
impl CcsdsPacketHandler for BasicApidHandlerSharedQueue {
type Error = ();
fn handle_packet_with_valid_apid(
&mut self,
sp_header: &SpHeader,
tc_raw: &[u8],
) -> Result<(), Self::Error> {
let mut vec = Vec::new();
vec.extend_from_slice(tc_raw);
self.known_packet_queue
.lock()
.unwrap()
.push_back((sp_header.apid(), vec));
Ok(())
}
fn handle_packet_with_unknown_apid(
&mut self,
sp_header: &SpHeader,
tc_raw: &[u8],
) -> Result<(), Self::Error> {
let mut vec = Vec::new();
vec.extend_from_slice(tc_raw);
self.unknown_packet_queue
.lock()
.unwrap()
.push_back((sp_header.apid(), vec));
Ok(())
}
}
impl ValidatorU16Id for BasicApidHandlerOwnedQueue {
fn validate(&self, packet_id: u16) -> bool {
[0x000, 0x002].contains(&packet_id)
}
}
impl CcsdsPacketHandler for BasicApidHandlerOwnedQueue {
type Error = ();
fn handle_packet_with_valid_apid(
&mut self,
sp_header: &SpHeader,
tc_raw: &[u8],
) -> Result<(), Self::Error> {
let mut vec = Vec::new();
vec.extend_from_slice(tc_raw);
self.known_packet_queue.push_back((sp_header.apid(), vec));
Ok(())
}
fn handle_packet_with_unknown_apid(
&mut self,
sp_header: &SpHeader,
tc_raw: &[u8],
) -> Result<(), Self::Error> {
let mut vec = Vec::new();
vec.extend_from_slice(tc_raw);
self.unknown_packet_queue.push_back((sp_header.apid(), vec));
Ok(())
}
}
#[test]
fn test_distribs_known_apid() {
let known_packet_queue = Arc::new(Mutex::default());
let unknown_packet_queue = Arc::new(Mutex::default());
let apid_handler = BasicApidHandlerSharedQueue {
known_packet_queue: known_packet_queue.clone(),
unknown_packet_queue: unknown_packet_queue.clone(),
};
let mut ccsds_distrib = CcsdsDistributor::new(apid_handler);
is_send(&ccsds_distrib);
let mut test_buf: [u8; 32] = [0; 32];
let tc_slice = generate_ping_tc(test_buf.as_mut_slice());
ccsds_distrib.pass_tc(tc_slice).expect("Passing TC failed");
let recvd = known_packet_queue.lock().unwrap().pop_front();
assert!(unknown_packet_queue.lock().unwrap().is_empty());
assert!(recvd.is_some());
let (apid, packet) = recvd.unwrap();
assert_eq!(apid, 0x002);
assert_eq!(packet, tc_slice);
}
#[test]
fn test_unknown_apid_handling() {
let apid_handler = BasicApidHandlerOwnedQueue::default();
let mut ccsds_distrib = CcsdsDistributor::new(apid_handler);
let sph = SpHeader::new_for_unseg_tc(0x004, 0x34, 0);
let pus_tc = PusTcCreator::new_simple(sph, 17, 1, &[], true);
let mut test_buf: [u8; 32] = [0; 32];
pus_tc
.write_to_bytes(test_buf.as_mut_slice())
.expect("Error writing TC to buffer");
ccsds_distrib.pass_tc(&test_buf).expect("Passing TC failed");
assert!(ccsds_distrib.packet_handler().known_packet_queue.is_empty());
let apid_handler = ccsds_distrib.packet_handler_mut();
let recvd = apid_handler.unknown_packet_queue.pop_front();
assert!(recvd.is_some());
let (apid, packet) = recvd.unwrap();
assert_eq!(apid, 0x004);
assert_eq!(packet.as_slice(), test_buf);
}
#[test]
fn test_ccsds_distribution() {
let mut ccsds_distrib = CcsdsDistributor::new(BasicApidHandlerOwnedQueue::default());
let sph = SpHeader::new_for_unseg_tc(0x002, 0x34, 0);
let pus_tc = PusTcCreator::new_simple(sph, 17, 1, &[], true);
let tc_vec = pus_tc.to_vec().unwrap();
ccsds_distrib
.pass_ccsds(&sph, &tc_vec)
.expect("passing CCSDS TC failed");
let recvd = ccsds_distrib
.packet_handler_mut()
.known_packet_queue
.pop_front();
assert!(recvd.is_some());
let recvd = recvd.unwrap();
assert_eq!(recvd.0, 0x002);
assert_eq!(recvd.1, tc_vec);
}
#[test]
fn test_distribution_short_packet_fails() {
let mut ccsds_distrib = CcsdsDistributor::new(BasicApidHandlerOwnedQueue::default());
let sph = SpHeader::new_for_unseg_tc(0x002, 0x34, 0);
let pus_tc = PusTcCreator::new_simple(sph, 17, 1, &[], true);
let tc_vec = pus_tc.to_vec().unwrap();
let result = ccsds_distrib.pass_tc(&tc_vec[0..6]);
assert!(result.is_err());
let error = result.unwrap_err();
if let CcsdsError::ByteConversionError(ByteConversionError::FromSliceTooSmall {
found,
expected,
}) = error
{
assert_eq!(found, 6);
assert_eq!(expected, 7);
} else {
panic!("Unexpected error variant");
}
}
}
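
With the distributor modules removed, CCSDS telecommands are handed to the sender traits of the reworked tmtc module instead. A sketch of the mpsc-based path, assuming the PacketSenderCcsds implementation for mpsc senders added below; the sender ID is illustrative:

    use std::sync::mpsc;

    use satrs::tmtc::{PacketAsVec, PacketSenderCcsds};
    use spacepackets::ecss::tc::PusTcCreator;
    use spacepackets::ecss::WritablePusPacket;
    use spacepackets::SpHeader;

    fn main() {
        let (tc_tx, tc_rx) = mpsc::channel::<PacketAsVec>();
        let sp_header = SpHeader::new_for_unseg_tc(0x002, 0x34, 0);
        let pus_tc = PusTcCreator::new_simple(sp_header, 17, 1, &[], true);
        let tc_vec = pus_tc.to_vec().unwrap();
        // The mpsc implementation ignores the parsed header and forwards the
        // raw bytes together with the sender ID.
        tc_tx
            .send_ccsds(42, &sp_header, &tc_vec)
            .expect("sending CCSDS TC failed");
        assert_eq!(tc_rx.try_recv().unwrap().packet, tc_vec);
    }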

View File

@@ -1,115 +1,651 @@
 //! Telemetry and Telecommanding (TMTC) module. Contains packet routing components with special
 //! support for CCSDS and ECSS packets.
 //!
-//! The distributor modules provided by this module use trait objects provided by the user to
-//! directly dispatch received packets to packet listeners based on packet fields like the CCSDS
-//! Application Process ID (APID) or the ECSS PUS service type. This allows for fast packet
-//! routing without the overhead and complication of using message queues. However, it also requires
+//! It is recommended to read the [sat-rs book chapter](https://absatsw.irs.uni-stuttgart.de/projects/sat-rs/book/communication.html)
+//! about communication first. The TMTC abstractions provided by this framework are based on the
+//! assumption that all telemetry is sent to a special handler object called the TM sink while
+//! all received telecommands are sent to a special handler object called TC source. Using
+//! a design like this makes it simpler to add new TC packet sources or new telemetry generators:
+//! They only need to send the received and generated data to these objects.
+use crate::queue::GenericSendError;
+use crate::{
+    pool::{PoolAddr, PoolError},
+    ComponentId,
+};
+
+#[cfg(feature = "std")]
+pub use alloc_mod::*;
 #[cfg(feature = "alloc")]
 use downcast_rs::{impl_downcast, Downcast};
-use spacepackets::SpHeader;
+use spacepackets::{
+    ecss::{
+        tc::PusTcReader,
+        tm::{PusTmCreator, PusTmReader},
+    },
+    SpHeader,
+};
+#[cfg(feature = "std")]
+use std::sync::mpsc;
+#[cfg(feature = "std")]
+pub use std_mod::*;

-#[cfg(feature = "alloc")]
-pub mod ccsds_distrib;
-#[cfg(feature = "alloc")]
-pub mod pus_distrib;
 pub mod tm_helper;

-#[cfg(feature = "alloc")]
-pub use ccsds_distrib::{CcsdsDistributor, CcsdsError, CcsdsPacketHandler};
-#[cfg(feature = "alloc")]
-pub use pus_distrib::{PusDistributor, PusServiceDistributor};
+/// Simple type modelling packet stored inside a pool structure. This structure is intended to
+/// be used when sending a packet via a message queue, so it also contains the sender ID.
+#[derive(Debug, PartialEq, Eq, Clone)]
+pub struct PacketInPool {
+    pub sender_id: ComponentId,
+    pub store_addr: PoolAddr,
+}
+
+impl PacketInPool {
+    pub fn new(sender_id: ComponentId, store_addr: PoolAddr) -> Self {
+        Self {
+            sender_id,
+            store_addr,
+        }
+    }
+}

-/// Generic trait for object which can receive any telecommands in form of a raw bytestream, with
+/// Generic trait for object which can send any packets in form of a raw bytestream, with
 /// no assumptions about the received protocol.
-///
-/// This trait is implemented by both the [crate::tmtc::pus_distrib::PusDistributor] and the
-/// [crate::tmtc::ccsds_distrib::CcsdsDistributor] which allows to pass the respective packets in
-/// raw byte format into them.
-pub trait ReceivesTcCore {
+pub trait PacketSenderRaw: Send {
     type Error;
-    fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error>;
+    fn send_packet(&self, sender_id: ComponentId, packet: &[u8]) -> Result<(), Self::Error>;
 }

-/// Extension trait of [ReceivesTcCore] which allows downcasting by implementing [Downcast] and
-/// is also sendable.
+/// Extension trait of [PacketSenderRaw] which allows downcasting by implementing [Downcast].
 #[cfg(feature = "alloc")]
-pub trait ReceivesTc: ReceivesTcCore + Downcast + Send {
+pub trait PacketSenderRawExt: PacketSenderRaw + Downcast {
     // Remove this once trait upcasting coercion has been implemented.
     // Tracking issue: https://github.com/rust-lang/rust/issues/65991
-    fn upcast(&self) -> &dyn ReceivesTcCore<Error = Self::Error>;
+    fn upcast(&self) -> &dyn PacketSenderRaw<Error = Self::Error>;
     // Remove this once trait upcasting coercion has been implemented.
     // Tracking issue: https://github.com/rust-lang/rust/issues/65991
-    fn upcast_mut(&mut self) -> &mut dyn ReceivesTcCore<Error = Self::Error>;
+    fn upcast_mut(&mut self) -> &mut dyn PacketSenderRaw<Error = Self::Error>;
 }

-/// Blanket implementation to automatically implement [ReceivesTc] when the [alloc] feature
-/// is enabled.
+/// Blanket implementation to automatically implement [PacketSenderRawExt] when the [alloc]
+/// feature is enabled.
 #[cfg(feature = "alloc")]
-impl<T> ReceivesTc for T
+impl<T> PacketSenderRawExt for T
 where
-    T: ReceivesTcCore + Send + 'static,
+    T: PacketSenderRaw + Send + 'static,
 {
     // Remove this once trait upcasting coercion has been implemented.
     // Tracking issue: https://github.com/rust-lang/rust/issues/65991
-    fn upcast(&self) -> &dyn ReceivesTcCore<Error = Self::Error> {
+    fn upcast(&self) -> &dyn PacketSenderRaw<Error = Self::Error> {
         self
     }
     // Remove this once trait upcasting coercion has been implemented.
     // Tracking issue: https://github.com/rust-lang/rust/issues/65991
-    fn upcast_mut(&mut self) -> &mut dyn ReceivesTcCore<Error = Self::Error> {
+    fn upcast_mut(&mut self) -> &mut dyn PacketSenderRaw<Error = Self::Error> {
         self
     }
 }

 #[cfg(feature = "alloc")]
-impl_downcast!(ReceivesTc assoc Error);
+impl_downcast!(PacketSenderRawExt assoc Error);

-/// Generic trait for object which can receive CCSDS space packets, for example ECSS PUS packets
-/// for CCSDS File Delivery Protocol (CFDP) packets.
-///
-/// This trait is implemented by both the [crate::tmtc::pus_distrib::PusDistributor] and the
-/// [crate::tmtc::ccsds_distrib::CcsdsDistributor] which allows
-/// to pass the respective packets in raw byte format or in CCSDS format into them.
-pub trait ReceivesCcsdsTc {
+/// Generic trait for object which can send CCSDS space packets, for example ECSS PUS packets
+/// or CCSDS File Delivery Protocol (CFDP) packets wrapped in space packets.
+pub trait PacketSenderCcsds: Send {
     type Error;
-    fn pass_ccsds(&mut self, header: &SpHeader, tc_raw: &[u8]) -> Result<(), Self::Error>;
+    fn send_ccsds(
+        &self,
+        sender_id: ComponentId,
+        header: &SpHeader,
+        tc_raw: &[u8],
+    ) -> Result<(), Self::Error>;
 }

-/// Generic trait for a TM packet source, with no restrictions on the type of TM.
+#[cfg(feature = "std")]
+impl PacketSenderCcsds for mpsc::Sender<PacketAsVec> {
+    type Error = GenericSendError;
+
+    fn send_ccsds(
+        &self,
+        sender_id: ComponentId,
+        _: &SpHeader,
+        tc_raw: &[u8],
+    ) -> Result<(), Self::Error> {
+        self.send(PacketAsVec::new(sender_id, tc_raw.to_vec()))
+            .map_err(|_| GenericSendError::RxDisconnected)
+    }
+}
+
+#[cfg(feature = "std")]
+impl PacketSenderCcsds for mpsc::SyncSender<PacketAsVec> {
+    type Error = GenericSendError;
+
+    fn send_ccsds(
+        &self,
+        sender_id: ComponentId,
+        _: &SpHeader,
+        packet_raw: &[u8],
+    ) -> Result<(), Self::Error> {
+        self.try_send(PacketAsVec::new(sender_id, packet_raw.to_vec()))
+            .map_err(|e| match e {
+                mpsc::TrySendError::Full(_) => GenericSendError::QueueFull(None),
+                mpsc::TrySendError::Disconnected(_) => GenericSendError::RxDisconnected,
+            })
+    }
+}
+
+/// Generic trait for a packet receiver, with no restrictions on the type of packet.
 /// Implementors write the telemetry into the provided buffer and return the size of the telemetry.
-pub trait TmPacketSourceCore {
+pub trait PacketSource: Send {
     type Error;
     fn retrieve_packet(&mut self, buffer: &mut [u8]) -> Result<usize, Self::Error>;
 }

-/// Extension trait of [TmPacketSourceCore] which allows downcasting by implementing [Downcast] and
-/// is also sendable.
+/// Extension trait of [PacketSource] which allows downcasting by implementing [Downcast].
 #[cfg(feature = "alloc")]
-pub trait TmPacketSource: TmPacketSourceCore + Downcast + Send {
+pub trait PacketSourceExt: PacketSource + Downcast {
     // Remove this once trait upcasting coercion has been implemented.
     // Tracking issue: https://github.com/rust-lang/rust/issues/65991
-    fn upcast(&self) -> &dyn TmPacketSourceCore<Error = Self::Error>;
+    fn upcast(&self) -> &dyn PacketSource<Error = Self::Error>;
     // Remove this once trait upcasting coercion has been implemented.
     // Tracking issue: https://github.com/rust-lang/rust/issues/65991
-    fn upcast_mut(&mut self) -> &mut dyn TmPacketSourceCore<Error = Self::Error>;
+    fn upcast_mut(&mut self) -> &mut dyn PacketSource<Error = Self::Error>;
 }

-/// Blanket implementation to automatically implement [ReceivesTc] when the [alloc] feature
+/// Blanket implementation to automatically implement [PacketSourceExt] when the [alloc] feature
 /// is enabled.
 #[cfg(feature = "alloc")]
-impl<T> TmPacketSource for T
+impl<T> PacketSourceExt for T
 where
-    T: TmPacketSourceCore + Send + 'static,
+    T: PacketSource + 'static,
 {
     // Remove this once trait upcasting coercion has been implemented.
     // Tracking issue: https://github.com/rust-lang/rust/issues/65991
-    fn upcast(&self) -> &dyn TmPacketSourceCore<Error = Self::Error> {
+    fn upcast(&self) -> &dyn PacketSource<Error = Self::Error> {
         self
     }
     // Remove this once trait upcasting coercion has been implemented.
     // Tracking issue: https://github.com/rust-lang/rust/issues/65991
-    fn upcast_mut(&mut self) -> &mut dyn TmPacketSourceCore<Error = Self::Error> {
+    fn upcast_mut(&mut self) -> &mut dyn PacketSource<Error = Self::Error> {
         self
     }
 }
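
Before the pool helpers below, a quick illustration of the renamed source trait: a minimal PacketSource implementor might hand out a single buffered packet. The type name, error type and empty-source behaviour are illustrative only:

    use satrs::tmtc::PacketSource;

    struct SinglePacketSource {
        packet: Option<Vec<u8>>,
    }

    impl PacketSource for SinglePacketSource {
        type Error = ();

        fn retrieve_packet(&mut self, buffer: &mut [u8]) -> Result<usize, Self::Error> {
            if let Some(packet) = self.packet.take() {
                if packet.len() > buffer.len() {
                    // Illustrative choice: signal an error if the buffer is too small.
                    return Err(());
                }
                buffer[..packet.len()].copy_from_slice(&packet);
                return Ok(packet.len());
            }
            // No packet pending: report zero bytes.
            Ok(0)
        }
    }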
/// Helper trait for any generic (static) store which allows storing raw or CCSDS packets.
pub trait CcsdsPacketPool {
fn add_ccsds_tc(&mut self, _: &SpHeader, tc_raw: &[u8]) -> Result<PoolAddr, PoolError> {
self.add_raw_tc(tc_raw)
}
fn add_raw_tc(&mut self, tc_raw: &[u8]) -> Result<PoolAddr, PoolError>;
}
/// Helper trait for any generic (static) store which allows storing ECSS PUS Telecommand packets.
pub trait PusTcPool {
fn add_pus_tc(&mut self, pus_tc: &PusTcReader) -> Result<PoolAddr, PoolError>;
}
/// Helper trait for any generic (static) store which allows storing ECSS PUS Telemetry packets.
pub trait PusTmPool {
fn add_pus_tm_from_reader(&mut self, pus_tm: &PusTmReader) -> Result<PoolAddr, PoolError>;
fn add_pus_tm_from_creator(&mut self, pus_tm: &PusTmCreator) -> Result<PoolAddr, PoolError>;
}
/// Generic trait for any sender component able to send packets stored inside a pool structure.
pub trait PacketInPoolSender: Send {
fn send_packet(
&self,
sender_id: ComponentId,
store_addr: PoolAddr,
) -> Result<(), GenericSendError>;
}
#[cfg(feature = "alloc")]
pub mod alloc_mod {
use alloc::vec::Vec;
use super::*;
/// Simple type modelling packet stored in the heap. This structure is intended to
/// be used when sending a packet via a message queue, so it also contains the sender ID.
#[derive(Debug, PartialEq, Eq, Clone)]
pub struct PacketAsVec {
pub sender_id: ComponentId,
pub packet: Vec<u8>,
}
impl PacketAsVec {
pub fn new(sender_id: ComponentId, packet: Vec<u8>) -> Self {
Self { sender_id, packet }
}
}
}
#[cfg(feature = "std")]
pub mod std_mod {
use core::cell::RefCell;
#[cfg(feature = "crossbeam")]
use crossbeam_channel as cb;
use spacepackets::ecss::WritablePusPacket;
use thiserror::Error;
use crate::pool::PoolProvider;
use crate::pus::{EcssTmSender, EcssTmtcError, PacketSenderPusTc};
use super::*;
/// Newtype wrapper around the [SharedStaticMemoryPool] to enable extension helper traits on
/// top of the regular shared memory pool API.
#[derive(Clone)]
pub struct SharedPacketPool(pub SharedStaticMemoryPool);
impl SharedPacketPool {
pub fn new(pool: &SharedStaticMemoryPool) -> Self {
Self(pool.clone())
}
}
impl PusTcPool for SharedPacketPool {
fn add_pus_tc(&mut self, pus_tc: &PusTcReader) -> Result<PoolAddr, PoolError> {
let mut pg = self.0.write().map_err(|_| PoolError::LockError)?;
let addr = pg.free_element(pus_tc.len_packed(), |buf| {
buf[0..pus_tc.len_packed()].copy_from_slice(pus_tc.raw_data());
})?;
Ok(addr)
}
}
impl PusTmPool for SharedPacketPool {
fn add_pus_tm_from_reader(&mut self, pus_tm: &PusTmReader) -> Result<PoolAddr, PoolError> {
let mut pg = self.0.write().map_err(|_| PoolError::LockError)?;
let addr = pg.free_element(pus_tm.len_packed(), |buf| {
buf[0..pus_tm.len_packed()].copy_from_slice(pus_tm.raw_data());
})?;
Ok(addr)
}
fn add_pus_tm_from_creator(
&mut self,
pus_tm: &PusTmCreator,
) -> Result<PoolAddr, PoolError> {
let mut pg = self.0.write().map_err(|_| PoolError::LockError)?;
let mut result = Ok(0);
let addr = pg.free_element(pus_tm.len_written(), |buf| {
result = pus_tm.write_to_bytes(buf);
})?;
result?;
Ok(addr)
}
}
impl CcsdsPacketPool for SharedPacketPool {
fn add_raw_tc(&mut self, tc_raw: &[u8]) -> Result<PoolAddr, PoolError> {
let mut pg = self.0.write().map_err(|_| PoolError::LockError)?;
let addr = pg.free_element(tc_raw.len(), |buf| {
buf[0..tc_raw.len()].copy_from_slice(tc_raw);
})?;
Ok(addr)
}
}
#[cfg(feature = "std")]
impl PacketSenderRaw for mpsc::Sender<PacketAsVec> {
type Error = GenericSendError;
fn send_packet(&self, sender_id: ComponentId, packet: &[u8]) -> Result<(), Self::Error> {
self.send(PacketAsVec::new(sender_id, packet.to_vec()))
.map_err(|_| GenericSendError::RxDisconnected)
}
}
#[cfg(feature = "std")]
impl PacketSenderRaw for mpsc::SyncSender<PacketAsVec> {
type Error = GenericSendError;
fn send_packet(&self, sender_id: ComponentId, tc_raw: &[u8]) -> Result<(), Self::Error> {
self.try_send(PacketAsVec::new(sender_id, tc_raw.to_vec()))
.map_err(|e| match e {
mpsc::TrySendError::Full(_) => GenericSendError::QueueFull(None),
mpsc::TrySendError::Disconnected(_) => GenericSendError::RxDisconnected,
})
}
}
#[derive(Debug, Clone, PartialEq, Eq, Error)]
pub enum StoreAndSendError {
#[error("Store error: {0}")]
Store(#[from] PoolError),
#[error("Genreric send error: {0}")]
Send(#[from] GenericSendError),
}
pub use crate::pool::SharedStaticMemoryPool;
impl PacketInPoolSender for mpsc::Sender<PacketInPool> {
fn send_packet(
&self,
sender_id: ComponentId,
store_addr: PoolAddr,
) -> Result<(), GenericSendError> {
self.send(PacketInPool::new(sender_id, store_addr))
.map_err(|_| GenericSendError::RxDisconnected)
}
}
impl PacketInPoolSender for mpsc::SyncSender<PacketInPool> {
fn send_packet(
&self,
sender_id: ComponentId,
store_addr: PoolAddr,
) -> Result<(), GenericSendError> {
self.try_send(PacketInPool::new(sender_id, store_addr))
.map_err(|e| match e {
mpsc::TrySendError::Full(_) => GenericSendError::QueueFull(None),
mpsc::TrySendError::Disconnected(_) => GenericSendError::RxDisconnected,
})
}
}
#[cfg(feature = "crossbeam")]
impl PacketInPoolSender for cb::Sender<PacketInPool> {
fn send_packet(
&self,
sender_id: ComponentId,
store_addr: PoolAddr,
) -> Result<(), GenericSendError> {
self.try_send(PacketInPool::new(sender_id, store_addr))
.map_err(|e| match e {
cb::TrySendError::Full(_) => GenericSendError::QueueFull(None),
cb::TrySendError::Disconnected(_) => GenericSendError::RxDisconnected,
})
}
}
/// This is the primary structure used to send packets stored in a dedicated memory pool
/// structure.
#[derive(Clone)]
pub struct PacketSenderWithSharedPool<
Sender: PacketInPoolSender = mpsc::SyncSender<PacketInPool>,
PacketPool: CcsdsPacketPool = SharedPacketPool,
> {
pub sender: Sender,
pub shared_pool: RefCell<PacketPool>,
}
impl<Sender: PacketInPoolSender> PacketSenderWithSharedPool<Sender, SharedPacketPool> {
pub fn new_with_shared_packet_pool(
packet_sender: Sender,
shared_pool: &SharedStaticMemoryPool,
) -> Self {
Self {
sender: packet_sender,
shared_pool: RefCell::new(SharedPacketPool::new(shared_pool)),
}
}
}
impl<Sender: PacketInPoolSender, PacketStore: CcsdsPacketPool>
PacketSenderWithSharedPool<Sender, PacketStore>
{
pub fn new(packet_sender: Sender, shared_pool: PacketStore) -> Self {
Self {
sender: packet_sender,
shared_pool: RefCell::new(shared_pool),
}
}
}
impl<Sender: PacketInPoolSender, PacketStore: CcsdsPacketPool + Clone>
PacketSenderWithSharedPool<Sender, PacketStore>
{
pub fn shared_packet_store(&self) -> PacketStore {
let pool = self.shared_pool.borrow();
pool.clone()
}
}
impl<Sender: PacketInPoolSender, PacketStore: CcsdsPacketPool + Send> PacketSenderRaw
for PacketSenderWithSharedPool<Sender, PacketStore>
{
type Error = StoreAndSendError;
fn send_packet(&self, sender_id: ComponentId, packet: &[u8]) -> Result<(), Self::Error> {
let mut shared_pool = self.shared_pool.borrow_mut();
let store_addr = shared_pool.add_raw_tc(packet)?;
drop(shared_pool);
self.sender
.send_packet(sender_id, store_addr)
.map_err(StoreAndSendError::Send)?;
Ok(())
}
}
impl<Sender: PacketInPoolSender, PacketStore: CcsdsPacketPool + PusTcPool + Send>
PacketSenderPusTc for PacketSenderWithSharedPool<Sender, PacketStore>
{
type Error = StoreAndSendError;
fn send_pus_tc(
&self,
sender_id: ComponentId,
_: &SpHeader,
pus_tc: &PusTcReader,
) -> Result<(), Self::Error> {
let mut shared_pool = self.shared_pool.borrow_mut();
let store_addr = shared_pool.add_raw_tc(pus_tc.raw_data())?;
drop(shared_pool);
self.sender
.send_packet(sender_id, store_addr)
.map_err(StoreAndSendError::Send)?;
Ok(())
}
}
impl<Sender: PacketInPoolSender, PacketStore: CcsdsPacketPool + Send> PacketSenderCcsds
for PacketSenderWithSharedPool<Sender, PacketStore>
{
type Error = StoreAndSendError;
fn send_ccsds(
&self,
sender_id: ComponentId,
_sp_header: &SpHeader,
tc_raw: &[u8],
) -> Result<(), Self::Error> {
self.send_packet(sender_id, tc_raw)
}
}
impl<Sender: PacketInPoolSender, PacketStore: CcsdsPacketPool + PusTmPool + Send> EcssTmSender
for PacketSenderWithSharedPool<Sender, PacketStore>
{
fn send_tm(
&self,
sender_id: crate::ComponentId,
tm: crate::pus::PusTmVariant,
) -> Result<(), crate::pus::EcssTmtcError> {
let send_addr = |store_addr: PoolAddr| {
self.sender
.send_packet(sender_id, store_addr)
.map_err(EcssTmtcError::Send)
};
match tm {
crate::pus::PusTmVariant::InStore(store_addr) => send_addr(store_addr),
crate::pus::PusTmVariant::Direct(tm_creator) => {
let mut pool = self.shared_pool.borrow_mut();
let store_addr = pool.add_pus_tm_from_creator(&tm_creator)?;
send_addr(store_addr)
}
}
}
}
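// Illustrative sketch of the two TM variants handled by the send_tm implementation above
// (sender, SENDER_ID, store_addr and tm_creator are placeholders):
//
// // TM which already resides inside the pool: only the pool address is forwarded.
// sender.send_tm(SENDER_ID, PusTmVariant::InStore(store_addr))?;
// // Directly passed TM creator: written into the pool first, then the address is forwarded.
// sender.send_tm(SENDER_ID, PusTmVariant::Direct(tm_creator))?;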
}
#[cfg(test)]
pub(crate) mod tests {
use alloc::vec;
use std::sync::RwLock;
use crate::pool::{PoolProviderWithGuards, StaticMemoryPool, StaticPoolConfig};
use super::*;
use std::sync::mpsc;
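// Generic helper which exercises the PacketSenderRaw abstraction, including unsized senders
// such as trait objects.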
pub(crate) fn send_with_sender<SendError>(
sender_id: ComponentId,
packet_sender: &(impl PacketSenderRaw<Error = SendError> + ?Sized),
packet: &[u8],
) -> Result<(), SendError> {
packet_sender.send_packet(sender_id, packet)
}
#[test]
fn test_basic_mpsc_channel_sender_unbounded() {
let (tx, rx) = mpsc::channel();
let some_packet = vec![1, 2, 3, 4, 5];
send_with_sender(1, &tx, &some_packet).expect("failed to send packet");
let rx_packet = rx.try_recv().unwrap();
assert_eq!(some_packet, rx_packet.packet);
assert_eq!(1, rx_packet.sender_id);
}
#[test]
fn test_basic_mpsc_channel_receiver_dropped() {
let (tx, rx) = mpsc::channel();
let some_packet = vec![1, 2, 3, 4, 5];
drop(rx);
let result = send_with_sender(2, &tx, &some_packet);
assert!(result.is_err());
assert!(matches!(result.unwrap_err(), GenericSendError::RxDisconnected));
}
#[test]
fn test_basic_mpsc_sync_sender() {
let (tx, rx) = mpsc::sync_channel(3);
let some_packet = vec![1, 2, 3, 4, 5];
send_with_sender(3, &tx, &some_packet).expect("failed to send packet");
let rx_packet = rx.try_recv().unwrap();
assert_eq!(some_packet, rx_packet.packet);
assert_eq!(3, rx_packet.sender_id);
}
#[test]
fn test_basic_mpsc_sync_sender_receiver_dropped() {
let (tx, rx) = mpsc::sync_channel(3);
let some_packet = vec![1, 2, 3, 4, 5];
drop(rx);
let result = send_with_sender(0, &tx, &some_packet);
assert!(result.is_err());
assert!(matches!(result.unwrap_err(), GenericSendError::RxDisconnected));
}
#[test]
fn test_basic_mpsc_sync_sender_queue_full() {
let (tx, rx) = mpsc::sync_channel(1);
let some_packet = vec![1, 2, 3, 4, 5];
send_with_sender(0, &tx, &some_packet).expect("failed to send packet");
let result = send_with_sender(1, &tx, &some_packet);
assert!(result.is_err());
assert!(matches!(result.unwrap_err(), GenericSendError::QueueFull(None)));
let rx_packet = rx.try_recv().unwrap();
assert_eq!(some_packet, rx_packet.packet);
}
#[test]
fn test_basic_shared_store_sender_unbounded_sender() {
let (tc_tx, tc_rx) = mpsc::channel();
let pool_cfg = StaticPoolConfig::new(vec![(2, 8)], true);
let shared_pool = SharedPacketPool::new(&SharedStaticMemoryPool::new(RwLock::new(
StaticMemoryPool::new(pool_cfg),
)));
let some_packet = vec![1, 2, 3, 4, 5];
let tc_sender = PacketSenderWithSharedPool::new(tc_tx, shared_pool.clone());
send_with_sender(5, &tc_sender, &some_packet).expect("failed to send packet");
let packet_in_pool = tc_rx.try_recv().unwrap();
let mut pool = shared_pool.0.write().unwrap();
let read_guard = pool.read_with_guard(packet_in_pool.store_addr);
assert_eq!(read_guard.read_as_vec().unwrap(), some_packet);
assert_eq!(packet_in_pool.sender_id, 5)
}
#[test]
fn test_basic_shared_store_sender() {
let (tc_tx, tc_rx) = mpsc::sync_channel(10);
let pool_cfg = StaticPoolConfig::new(vec![(2, 8)], true);
let shared_pool = SharedPacketPool::new(&SharedStaticMemoryPool::new(RwLock::new(
StaticMemoryPool::new(pool_cfg),
)));
let some_packet = vec![1, 2, 3, 4, 5];
let tc_sender = PacketSenderWithSharedPool::new(tc_tx, shared_pool.clone());
send_with_sender(5, &tc_sender, &some_packet).expect("failed to send packet");
let packet_in_pool = tc_rx.try_recv().unwrap();
let mut pool = shared_pool.0.write().unwrap();
let read_guard = pool.read_with_guard(packet_in_pool.store_addr);
assert_eq!(read_guard.read_as_vec().unwrap(), some_packet);
assert_eq!(packet_in_pool.sender_id, 5)
}
#[test]
fn test_basic_shared_store_sender_rx_dropped() {
let (tc_tx, tc_rx) = mpsc::sync_channel(10);
let pool_cfg = StaticPoolConfig::new(vec![(2, 8)], true);
let shared_pool = SharedPacketPool::new(&SharedStaticMemoryPool::new(RwLock::new(
StaticMemoryPool::new(pool_cfg),
)));
let some_packet = vec![1, 2, 3, 4, 5];
drop(tc_rx);
let tc_sender = PacketSenderWithSharedPool::new(tc_tx, shared_pool.clone());
let result = send_with_sender(2, &tc_sender, &some_packet);
assert!(result.is_err());
assert!(matches!(
result.unwrap_err(),
StoreAndSendError::Send(GenericSendError::RxDisconnected)
));
}
#[test]
fn test_basic_shared_store_sender_queue_full() {
let (tc_tx, tc_rx) = mpsc::sync_channel(1);
let pool_cfg = StaticPoolConfig::new(vec![(2, 8)], true);
let shared_pool = SharedPacketPool::new(&SharedStaticMemoryPool::new(RwLock::new(
StaticMemoryPool::new(pool_cfg),
)));
let some_packet = vec![1, 2, 3, 4, 5];
let tc_sender = PacketSenderWithSharedPool::new(tc_tx, shared_pool.clone());
send_with_sender(3, &tc_sender, &some_packet).expect("failed to send packet");
let result = send_with_sender(3, &tc_sender, &some_packet);
assert!(result.is_err());
assert!(matches!(
result.unwrap_err(),
StoreAndSendError::Send(GenericSendError::QueueFull(None))
));
let packet_in_pool = tc_rx.try_recv().unwrap();
let mut pool = shared_pool.0.write().unwrap();
let read_guard = pool.read_with_guard(packet_in_pool.store_addr);
assert_eq!(read_guard.read_as_vec().unwrap(), some_packet);
assert_eq!(packet_in_pool.sender_id, 3);
}
#[test]
fn test_basic_shared_store_store_error() {
let (tc_tx, tc_rx) = mpsc::sync_channel(1);
let pool_cfg = StaticPoolConfig::new(vec![(1, 8)], true);
let shared_pool = SharedPacketPool::new(&SharedStaticMemoryPool::new(RwLock::new(
StaticMemoryPool::new(pool_cfg),
)));
let some_packet = vec![1, 2, 3, 4, 5];
let tc_sender = PacketSenderWithSharedPool::new(tc_tx, shared_pool.clone());
send_with_sender(4, &tc_sender, &some_packet).expect("failed to send packet");
let result = send_with_sender(4, &tc_sender, &some_packet);
assert!(result.is_err());
assert!(matches!(
result.unwrap_err(),
StoreAndSendError::Store(PoolError::StoreFull(..))
));
let packet_in_pool = tc_rx.try_recv().unwrap();
let mut pool = shared_pool.0.write().unwrap();
let read_guard = pool.read_with_guard(packet_in_pool.store_addr);
assert_eq!(read_guard.read_as_vec().unwrap(), some_packet);
assert_eq!(packet_in_pool.sender_id, 4);
}
}

View File

@ -1,414 +0,0 @@
//! ECSS PUS packet routing components.
//!
//! The routing components consist of two core components:
//! 1. [PusDistributor] component which dispatches received packets to a user-provided handler.
//! 2. [PusServiceDistributor] trait which should be implemented by the user-provided PUS packet
//! handler.
//!
//! The [PusDistributor] implements the [ReceivesEcssPusTc], [ReceivesCcsdsTc] and the
//! [ReceivesTcCore] traits, which allow passing raw packets, CCSDS packets and PUS TC packets
//! into it. Upon receiving a packet, it performs the following steps:
//!
//! 1. It tries to extract the [SpHeader] and [spacepackets::ecss::tc::PusTcReader] objects from
//! the raw bytestream. If this process fails, a [PusDistribError::PusError] is returned to the
//! user.
//! 2. If it was possible to extract both components, the packet will be passed to the
//! [PusServiceDistributor::distribute_packet] method provided by the user.
//!
//! # Example
//!
//! ```rust
//! use spacepackets::ecss::WritablePusPacket;
//! use satrs::tmtc::pus_distrib::{PusDistributor, PusServiceDistributor};
//! use satrs::tmtc::{ReceivesTc, ReceivesTcCore};
//! use spacepackets::SpHeader;
//! use spacepackets::ecss::tc::{PusTcCreator, PusTcReader};
//!
//! struct ConcretePusHandler {
//! handler_call_count: u32
//! }
//!
//! // This is a very simple service provider. It increments an internal call count field,
//! // which is used to verify that the handler was called.
//! impl PusServiceDistributor for ConcretePusHandler {
//! type Error = ();
//! fn distribute_packet(&mut self, service: u8, header: &SpHeader, pus_tc: &PusTcReader) -> Result<(), Self::Error> {
//! assert_eq!(service, 17);
//! assert_eq!(pus_tc.len_packed(), 13);
//! self.handler_call_count += 1;
//! Ok(())
//! }
//! }
//!
//! let service_handler = ConcretePusHandler {
//! handler_call_count: 0
//! };
//! let mut pus_distributor = PusDistributor::new(service_handler);
//!
//! // Create and pass PUS ping telecommand with a valid APID
//! let sp_header = SpHeader::new_for_unseg_tc(0x002, 0x34, 0);
//! let pus_tc = PusTcCreator::new_simple(sp_header, 17, 1, &[], true);
//! let mut test_buf: [u8; 32] = [0; 32];
//! let size = pus_tc
//! .write_to_bytes(test_buf.as_mut_slice())
//! .expect("Error writing TC to buffer");
//! let tc_slice = &test_buf[0..size];
//!
//! pus_distributor.pass_tc(tc_slice).expect("Passing PUS telecommand failed");
//!
//! // User helper function to retrieve concrete class. We check the call count here to verify
//! // that the PUS ping telecommand was routed successfully.
//! let concrete_handler = pus_distributor.service_distributor();
//! assert_eq!(concrete_handler.handler_call_count, 1);
//! ```
use crate::pus::ReceivesEcssPusTc;
use crate::tmtc::{ReceivesCcsdsTc, ReceivesTcCore};
use core::fmt::{Display, Formatter};
use spacepackets::ecss::tc::PusTcReader;
use spacepackets::ecss::{PusError, PusPacket};
use spacepackets::SpHeader;
#[cfg(feature = "std")]
use std::error::Error;
/// Trait for a generic distributor object which can distribute PUS packets based on packet
/// properties like the PUS service, space packet header or any other content of the PUS packet.
pub trait PusServiceDistributor {
type Error;
fn distribute_packet(
&mut self,
service: u8,
header: &SpHeader,
pus_tc: &PusTcReader,
) -> Result<(), Self::Error>;
}
/// Generic distributor object which dispatches received packets to a user provided handler.
pub struct PusDistributor<ServiceDistributor: PusServiceDistributor<Error = E>, E> {
service_distributor: ServiceDistributor,
}
impl<ServiceDistributor: PusServiceDistributor<Error = E>, E>
PusDistributor<ServiceDistributor, E>
{
pub fn new(service_provider: ServiceDistributor) -> Self {
PusDistributor {
service_distributor: service_provider,
}
}
}
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub enum PusDistribError<E> {
CustomError(E),
PusError(PusError),
}
impl<E: Display> Display for PusDistribError<E> {
fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result {
match self {
PusDistribError::CustomError(e) => write!(f, "pus distribution error: {e}"),
PusDistribError::PusError(e) => write!(f, "pus distribution error: {e}"),
}
}
}
#[cfg(feature = "std")]
impl<E: Error> Error for PusDistribError<E> {
fn source(&self) -> Option<&(dyn Error + 'static)> {
match self {
Self::CustomError(e) => e.source(),
Self::PusError(e) => e.source(),
}
}
}
impl<ServiceDistributor: PusServiceDistributor<Error = E>, E: 'static> ReceivesTcCore
for PusDistributor<ServiceDistributor, E>
{
type Error = PusDistribError<E>;
fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error> {
// Parse the space packet header, then forward the packet to pass_ccsds.
let (sp_header, _) = SpHeader::from_be_bytes(tc_raw)
.map_err(|e| PusDistribError::PusError(PusError::ByteConversion(e)))?;
self.pass_ccsds(&sp_header, tc_raw)
}
}
impl<ServiceDistributor: PusServiceDistributor<Error = E>, E: 'static> ReceivesCcsdsTc
for PusDistributor<ServiceDistributor, E>
{
type Error = PusDistribError<E>;
fn pass_ccsds(&mut self, header: &SpHeader, tc_raw: &[u8]) -> Result<(), Self::Error> {
let (tc, _) = PusTcReader::new(tc_raw).map_err(PusDistribError::PusError)?;
self.pass_pus_tc(header, &tc)
}
}
impl<ServiceDistributor: PusServiceDistributor<Error = E>, E: 'static> ReceivesEcssPusTc
for PusDistributor<ServiceDistributor, E>
{
type Error = PusDistribError<E>;
fn pass_pus_tc(&mut self, header: &SpHeader, pus_tc: &PusTcReader) -> Result<(), Self::Error> {
self.service_distributor
.distribute_packet(pus_tc.service(), header, pus_tc)
.map_err(|e| PusDistribError::CustomError(e))
}
}
impl<ServiceDistributor: PusServiceDistributor<Error = E>, E: 'static>
PusDistributor<ServiceDistributor, E>
{
pub fn service_distributor(&self) -> &ServiceDistributor {
&self.service_distributor
}
pub fn service_distributor_mut(&mut self) -> &mut ServiceDistributor {
&mut self.service_distributor
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::queue::GenericSendError;
use crate::tmtc::ccsds_distrib::tests::{
generate_ping_tc, generate_ping_tc_as_vec, BasicApidHandlerOwnedQueue,
BasicApidHandlerSharedQueue,
};
use crate::tmtc::ccsds_distrib::{CcsdsDistributor, CcsdsPacketHandler};
use crate::ValidatorU16Id;
use alloc::format;
use alloc::vec::Vec;
use spacepackets::ecss::PusError;
use spacepackets::CcsdsPacket;
#[cfg(feature = "std")]
use std::collections::VecDeque;
#[cfg(feature = "std")]
use std::sync::{Arc, Mutex};
fn is_send<T: Send>(_: &T) {}
pub struct PacketInfo {
pub service: u8,
pub apid: u16,
pub packet: Vec<u8>,
}
struct PusHandlerSharedQueue(Arc<Mutex<VecDeque<PacketInfo>>>);
#[derive(Default)]
struct PusHandlerOwnedQueue(VecDeque<PacketInfo>);
impl PusServiceDistributor for PusHandlerSharedQueue {
type Error = PusError;
fn distribute_packet(
&mut self,
service: u8,
sp_header: &SpHeader,
pus_tc: &PusTcReader,
) -> Result<(), Self::Error> {
let mut packet: Vec<u8> = Vec::new();
packet.extend_from_slice(pus_tc.raw_data());
self.0
.lock()
.expect("Mutex lock failed")
.push_back(PacketInfo {
service,
apid: sp_header.apid(),
packet,
});
Ok(())
}
}
impl PusServiceDistributor for PusHandlerOwnedQueue {
type Error = PusError;
fn distribute_packet(
&mut self,
service: u8,
sp_header: &SpHeader,
pus_tc: &PusTcReader,
) -> Result<(), Self::Error> {
let mut packet: Vec<u8> = Vec::new();
packet.extend_from_slice(pus_tc.raw_data());
self.0.push_back(PacketInfo {
service,
apid: sp_header.apid(),
packet,
});
Ok(())
}
}
struct ApidHandlerShared {
pub pus_distrib: PusDistributor<PusHandlerSharedQueue, PusError>,
pub handler_base: BasicApidHandlerSharedQueue,
}
struct ApidHandlerOwned {
pub pus_distrib: PusDistributor<PusHandlerOwnedQueue, PusError>,
handler_base: BasicApidHandlerOwnedQueue,
}
macro_rules! apid_handler_impl {
() => {
type Error = PusError;
fn handle_packet_with_valid_apid(
&mut self,
sp_header: &SpHeader,
tc_raw: &[u8],
) -> Result<(), Self::Error> {
self.handler_base
.handle_packet_with_valid_apid(&sp_header, tc_raw)
.ok()
.expect("Unexpected error");
match self.pus_distrib.pass_ccsds(&sp_header, tc_raw) {
Ok(_) => Ok(()),
Err(e) => match e {
PusDistribError::CustomError(_) => Ok(()),
PusDistribError::PusError(e) => Err(e),
},
}
}
fn handle_packet_with_unknown_apid(
&mut self,
sp_header: &SpHeader,
tc_raw: &[u8],
) -> Result<(), Self::Error> {
self.handler_base
.handle_packet_with_unknown_apid(&sp_header, tc_raw)
.ok()
.expect("Unexpected error");
Ok(())
}
};
}
impl ValidatorU16Id for ApidHandlerOwned {
fn validate(&self, packet_id: u16) -> bool {
[0x000, 0x002].contains(&packet_id)
}
}
impl ValidatorU16Id for ApidHandlerShared {
fn validate(&self, packet_id: u16) -> bool {
[0x000, 0x002].contains(&packet_id)
}
}
impl CcsdsPacketHandler for ApidHandlerOwned {
apid_handler_impl!();
}
impl CcsdsPacketHandler for ApidHandlerShared {
apid_handler_impl!();
}
#[test]
fn test_pus_distribution_as_raw_packet() {
let mut pus_distrib = PusDistributor::new(PusHandlerOwnedQueue::default());
let tc = generate_ping_tc_as_vec();
let result = pus_distrib.pass_tc(&tc);
assert!(result.is_ok());
assert_eq!(pus_distrib.service_distributor_mut().0.len(), 1);
let packet_info = pus_distrib.service_distributor_mut().0.pop_front().unwrap();
assert_eq!(packet_info.service, 17);
assert_eq!(packet_info.apid, 0x002);
assert_eq!(packet_info.packet, tc);
}
#[test]
fn test_pus_distribution_combined_handler() {
let known_packet_queue = Arc::new(Mutex::default());
let unknown_packet_queue = Arc::new(Mutex::default());
let pus_queue = Arc::new(Mutex::default());
let pus_handler = PusHandlerSharedQueue(pus_queue.clone());
let handler_base = BasicApidHandlerSharedQueue {
known_packet_queue: known_packet_queue.clone(),
unknown_packet_queue: unknown_packet_queue.clone(),
};
let pus_distrib = PusDistributor::new(pus_handler);
is_send(&pus_distrib);
let apid_handler = ApidHandlerShared {
pus_distrib,
handler_base,
};
let mut ccsds_distrib = CcsdsDistributor::new(apid_handler);
let mut test_buf: [u8; 32] = [0; 32];
let tc_slice = generate_ping_tc(test_buf.as_mut_slice());
// Pass packet to distributor
ccsds_distrib
.pass_tc(tc_slice)
.expect("Passing TC slice failed");
let recvd_ccsds = known_packet_queue.lock().unwrap().pop_front();
assert!(unknown_packet_queue.lock().unwrap().is_empty());
assert!(recvd_ccsds.is_some());
let (apid, packet) = recvd_ccsds.unwrap();
assert_eq!(apid, 0x002);
assert_eq!(packet.as_slice(), tc_slice);
let recvd_pus = pus_queue.lock().unwrap().pop_front();
assert!(recvd_pus.is_some());
let packet_info = recvd_pus.unwrap();
assert_eq!(packet_info.service, 17);
assert_eq!(packet_info.apid, 0x002);
assert_eq!(packet_info.packet, tc_slice);
}
#[test]
fn test_accessing_combined_distributor() {
let pus_handler = PusHandlerOwnedQueue::default();
let handler_base = BasicApidHandlerOwnedQueue::default();
let pus_distrib = PusDistributor::new(pus_handler);
let apid_handler = ApidHandlerOwned {
pus_distrib,
handler_base,
};
let mut ccsds_distrib = CcsdsDistributor::new(apid_handler);
let mut test_buf: [u8; 32] = [0; 32];
let tc_slice = generate_ping_tc(test_buf.as_mut_slice());
ccsds_distrib
.pass_tc(tc_slice)
.expect("Passing TC slice failed");
let apid_handler_casted_back = ccsds_distrib.packet_handler_mut();
assert!(!apid_handler_casted_back
.handler_base
.known_packet_queue
.is_empty());
let handler_owned_queue = apid_handler_casted_back
.pus_distrib
.service_distributor_mut();
assert!(!handler_owned_queue.0.is_empty());
let packet_info = handler_owned_queue.0.pop_front().unwrap();
assert_eq!(packet_info.service, 17);
assert_eq!(packet_info.apid, 0x002);
assert_eq!(packet_info.packet, tc_slice);
}
#[test]
fn test_pus_distrib_error_custom_error() {
let error = PusDistribError::CustomError(GenericSendError::RxDisconnected);
let error_string = format!("{}", error);
assert_eq!(
error_string,
"pus distribution error: rx side has disconnected"
);
}
#[test]
fn test_pus_distrib_error_pus_error() {
let error = PusDistribError::<GenericSendError>::PusError(PusError::CrcCalculationMissing);
let error_string = format!("{}", error);
assert_eq!(
error_string,
"pus distribution error: crc16 was not calculated"
);
}
}

View File

@ -3,50 +3,6 @@ use spacepackets::time::cds::CdsTime
 use spacepackets::time::TimeWriter;
 use spacepackets::SpHeader;
-#[cfg(feature = "std")]
-pub use std_mod::*;
-#[cfg(feature = "std")]
-pub mod std_mod {
-use crate::pool::{
-PoolProvider, SharedStaticMemoryPool, StaticMemoryPool, StoreAddr, StoreError,
-};
-use crate::pus::EcssTmtcError;
-use spacepackets::ecss::tm::PusTmCreator;
-use spacepackets::ecss::WritablePusPacket;
-use std::sync::{Arc, RwLock};
-#[derive(Clone)]
-pub struct SharedTmPool(pub SharedStaticMemoryPool);
-impl SharedTmPool {
-pub fn new(shared_pool: StaticMemoryPool) -> Self {
-Self(Arc::new(RwLock::new(shared_pool)))
-}
-pub fn clone_backing_pool(&self) -> SharedStaticMemoryPool {
-self.0.clone()
-}
-pub fn shared_pool(&self) -> &SharedStaticMemoryPool {
-&self.0
-}
-pub fn shared_pool_mut(&mut self) -> &mut SharedStaticMemoryPool {
-&mut self.0
-}
-pub fn add_pus_tm(&self, pus_tm: &PusTmCreator) -> Result<StoreAddr, EcssTmtcError> {
-let mut pg = self.0.write().map_err(|_| StoreError::LockError)?;
-let addr = pg.free_element(pus_tm.len_written(), |buf| {
-pus_tm
-.write_to_bytes(buf)
-.expect("writing PUS TM to store failed");
-})?;
-Ok(addr)
-}
-}
-}
 pub struct PusTmWithCdsShortHelper {
 apid: u16,
 cds_short_buf: [u8; 7],

View File

@ -1,4 +1,4 @@
-use satrs::pool::{PoolGuard, PoolProvider, StaticMemoryPool, StaticPoolConfig, StoreAddr};
+use satrs::pool::{PoolAddr, PoolGuard, PoolProvider, StaticMemoryPool, StaticPoolConfig};
 use std::ops::DerefMut;
 use std::sync::mpsc;
 use std::sync::mpsc::{Receiver, Sender};
@ -12,7 +12,7 @@ fn threaded_usage() {
 let pool_cfg = StaticPoolConfig::new(vec![(16, 6), (32, 3), (8, 12)], false);
 let shared_pool = Arc::new(RwLock::new(StaticMemoryPool::new(pool_cfg)));
 let shared_clone = shared_pool.clone();
-let (tx, rx): (Sender<StoreAddr>, Receiver<StoreAddr>) = mpsc::channel();
+let (tx, rx): (Sender<PoolAddr>, Receiver<PoolAddr>) = mpsc::channel();
 let jh0 = thread::spawn(move || {
 let mut dummy = shared_pool.write().unwrap();
 let addr = dummy.add(&DUMMY_DATA).expect("Writing data failed");

View File

@ -7,8 +7,8 @@ use satrs::params::U32Pair
 use satrs::params::{Params, ParamsHeapless, WritableToBeBytes};
 use satrs::pus::event_man::{DefaultPusEventMgmtBackend, EventReporter, PusEventDispatcher};
 use satrs::pus::test_util::TEST_COMPONENT_ID_0;
-use satrs::pus::PusTmAsVec;
 use satrs::request::UniqueApidTargetId;
+use satrs::tmtc::PacketAsVec;
 use spacepackets::ecss::tm::PusTmReader;
 use spacepackets::ecss::{PusError, PusPacket};
 use std::sync::mpsc::{self, SendError, TryRecvError};
@ -37,7 +37,7 @@ fn test_threaded_usage() {
 let pus_event_man_send_provider = EventU32SenderMpsc::new(1, pus_event_man_tx);
 event_man.subscribe_all(pus_event_man_send_provider.target_id());
 event_man.add_sender(pus_event_man_send_provider);
-let (event_tx, event_rx) = mpsc::channel::<PusTmAsVec>();
+let (event_tx, event_rx) = mpsc::channel::<PacketAsVec>();
 let reporter =
 EventReporter::new(TEST_ID.raw(), 0x02, 0, 128).expect("Creating event reporter failed");
 let pus_event_man = PusEventDispatcher::new(reporter, DefaultPusEventMgmtBackend::default());

View File

@ -7,13 +7,12 @@ pub mod crossbeam_test {
 FailParams, RequestId, VerificationReporter, VerificationReporterCfg,
 VerificationReportingProvider,
 };
-use satrs::pus::TmInSharedPoolSenderWithCrossbeam;
-use satrs::tmtc::tm_helper::SharedTmPool;
+use satrs::tmtc::{PacketSenderWithSharedPool, SharedStaticMemoryPool};
 use spacepackets::ecss::tc::{PusTcCreator, PusTcReader, PusTcSecondaryHeader};
 use spacepackets::ecss::tm::PusTmReader;
 use spacepackets::ecss::{EcssEnumU16, EcssEnumU8, PusPacket, WritablePusPacket};
 use spacepackets::SpHeader;
-use std::sync::{Arc, RwLock};
+use std::sync::RwLock;
 use std::thread;
 use std::time::Duration;
@ -36,12 +35,15 @@ pub mod crossbeam_test {
 // Shared pool object to store the verification PUS telemetry
 let pool_cfg =
 StaticPoolConfig::new(vec![(10, 32), (10, 64), (10, 128), (10, 1024)], false);
-let shared_tm_pool = SharedTmPool::new(StaticMemoryPool::new(pool_cfg.clone()));
-let shared_tc_pool_0 = Arc::new(RwLock::new(StaticMemoryPool::new(pool_cfg)));
-let shared_tc_pool_1 = shared_tc_pool_0.clone();
+let shared_tm_pool =
+SharedStaticMemoryPool::new(RwLock::new(StaticMemoryPool::new(pool_cfg.clone())));
+let shared_tc_pool =
+SharedStaticMemoryPool::new(RwLock::new(StaticMemoryPool::new(pool_cfg)));
+let shared_tc_pool_1 = shared_tc_pool.clone();
 let (tx, rx) = crossbeam_channel::bounded(10);
-let sender_0 = TmInSharedPoolSenderWithCrossbeam::new(shared_tm_pool.clone(), tx.clone());
-let sender_1 = sender_0.clone();
+let sender =
+PacketSenderWithSharedPool::new_with_shared_packet_pool(tx.clone(), &shared_tm_pool);
+let sender_1 = sender.clone();
 let mut reporter_with_sender_0 = VerificationReporter::new(TEST_COMPONENT_ID_0.id(), &cfg);
 let mut reporter_with_sender_1 = reporter_with_sender_0.clone();
 // For test purposes, we retrieve the request ID from the TCs and pass them to the receiver
@ -52,7 +54,7 @@ pub mod crossbeam_test {
 let (tx_tc_0, rx_tc_0) = crossbeam_channel::bounded(3);
 let (tx_tc_1, rx_tc_1) = crossbeam_channel::bounded(3);
 {
-let mut tc_guard = shared_tc_pool_0.write().unwrap();
+let mut tc_guard = shared_tc_pool.write().unwrap();
 let sph = SpHeader::new_for_unseg_tc(TEST_APID, 0, 0);
 let tc_header = PusTcSecondaryHeader::new_simple(17, 1);
 let pus_tc_0 = PusTcCreator::new_no_app_data(sph, tc_header, true);
@ -81,7 +83,7 @@ pub mod crossbeam_test {
 .expect("Receive timeout");
 let tc_len;
 {
-let mut tc_guard = shared_tc_pool_0.write().unwrap();
+let mut tc_guard = shared_tc_pool.write().unwrap();
 let pg = tc_guard.read_with_guard(tc_addr);
 tc_len = pg.read(&mut tc_buf).unwrap();
 }
@ -89,24 +91,24 @@ pub mod crossbeam_test {
 let token = reporter_with_sender_0.add_tc_with_req_id(req_id_0);
 let accepted_token = reporter_with_sender_0
-.acceptance_success(&sender_0, token, &FIXED_STAMP)
+.acceptance_success(&sender, token, &FIXED_STAMP)
 .expect("Acceptance success failed");
 // Do some start handling here
 let started_token = reporter_with_sender_0
-.start_success(&sender_0, accepted_token, &FIXED_STAMP)
+.start_success(&sender, accepted_token, &FIXED_STAMP)
 .expect("Start success failed");
 // Do some step handling here
 reporter_with_sender_0
-.step_success(&sender_0, &started_token, &FIXED_STAMP, EcssEnumU8::new(0))
+.step_success(&sender, &started_token, &FIXED_STAMP, EcssEnumU8::new(0))
 .expect("Step success failed");
 // Finish up
 reporter_with_sender_0
-.step_success(&sender_0, &started_token, &FIXED_STAMP, EcssEnumU8::new(1))
+.step_success(&sender, &started_token, &FIXED_STAMP, EcssEnumU8::new(1))
 .expect("Step success failed");
 reporter_with_sender_0
-.completion_success(&sender_0, started_token, &FIXED_STAMP)
+.completion_success(&sender, started_token, &FIXED_STAMP)
 .expect("Completion success failed");
 });
@ -145,9 +147,8 @@ pub mod crossbeam_test {
 .recv_timeout(Duration::from_millis(50))
 .expect("Packet reception timeout");
 let tm_len;
-let shared_tm_store = shared_tm_pool.clone_backing_pool();
 {
-let mut rg = shared_tm_store.write().expect("Error locking shared pool");
+let mut rg = shared_tm_pool.write().expect("Error locking shared pool");
 let store_guard = rg.read_with_guard(tm_in_pool.store_addr);
 tm_len = store_guard
 .read(&mut tm_buf)

View File

@ -17,22 +17,26 @@ use core::{
 use std::{
 io::{Read, Write},
 net::{IpAddr, Ipv4Addr, SocketAddr, TcpStream},
-sync::Mutex,
+sync::{mpsc, Mutex},
 thread,
 };
 use hashbrown::HashSet;
 use satrs::{
-encoding::cobs::encode_packet_with_cobs,
+encoding::{
+ccsds::{SpValidity, SpacePacketValidator},
+cobs::encode_packet_with_cobs,
+},
 hal::std::tcp_server::{
 ConnectionResult, HandledConnectionHandler, HandledConnectionInfo, ServerConfig,
 TcpSpacepacketsServer, TcpTmtcInCobsServer,
 },
-tmtc::{ReceivesTcCore, TmPacketSourceCore},
+tmtc::PacketSource,
+ComponentId,
 };
 use spacepackets::{
 ecss::{tc::PusTcCreator, WritablePusPacket},
-PacketId, SpHeader,
+CcsdsPacket, PacketId, SpHeader,
 };
 use std::{collections::VecDeque, sync::Arc, vec::Vec};
@ -61,21 +65,6 @@ impl ConnectionFinishedHandler {
 assert!(self.connection_info.is_empty());
 }
 }
-#[derive(Default, Clone)]
-struct SyncTcCacher {
-tc_queue: Arc<Mutex<VecDeque<Vec<u8>>>>,
-}
-impl ReceivesTcCore for SyncTcCacher {
-type Error = ();
-fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error> {
-let mut tc_queue = self.tc_queue.lock().expect("tc forwarder failed");
-println!("Received TC: {:x?}", tc_raw);
-tc_queue.push_back(tc_raw.to_vec());
-Ok(())
-}
-}
 #[derive(Default, Clone)]
 struct SyncTmSource {
@ -89,7 +78,7 @@ impl SyncTmSource {
 }
 }
-impl TmPacketSourceCore for SyncTmSource {
+impl PacketSource for SyncTmSource {
 type Error = ();
 fn retrieve_packet(&mut self, buffer: &mut [u8]) -> Result<usize, Self::Error> {
@ -111,20 +100,27 @@ impl TmPacketSourceCore for SyncTmSource {
 }
 }
+const TCP_SERVER_ID: ComponentId = 0x05;
 const SIMPLE_PACKET: [u8; 5] = [1, 2, 3, 4, 5];
 const INVERTED_PACKET: [u8; 5] = [5, 4, 3, 4, 1];
 const AUTO_PORT_ADDR: SocketAddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0);
 #[test]
 fn test_cobs_server() {
-let tc_receiver = SyncTcCacher::default();
+let (tc_sender, tc_receiver) = mpsc::channel();
 let mut tm_source = SyncTmSource::default();
 // Insert a telemetry packet which will be read back by the client at a later stage.
 tm_source.add_tm(&INVERTED_PACKET);
 let mut tcp_server = TcpTmtcInCobsServer::new(
-ServerConfig::new(AUTO_PORT_ADDR, Duration::from_millis(2), 1024, 1024),
+ServerConfig::new(
+TCP_SERVER_ID,
+AUTO_PORT_ADDR,
+Duration::from_millis(2),
+1024,
+1024,
+),
 tm_source,
-tc_receiver.clone(),
+tc_sender.clone(),
 ConnectionFinishedHandler::default(),
 None,
 )
@ -137,7 +133,7 @@ fn test_cobs_server() {
 // Call the connection handler in separate thread, does block.
 thread::spawn(move || {
-let result = tcp_server.handle_next_connection(Some(Duration::from_millis(400)));
+let result = tcp_server.handle_all_connections(Some(Duration::from_millis(400)));
 if result.is_err() {
 panic!("handling connection failed: {:?}", result.unwrap_err());
 }
@ -190,32 +186,53 @@ fn test_cobs_server() {
 panic!("connection was not handled properly");
 }
 // Check that the packet was received and decoded successfully.
-let mut tc_queue = tc_receiver
-.tc_queue
-.lock()
-.expect("locking tc queue failed");
-assert_eq!(tc_queue.len(), 1);
-assert_eq!(tc_queue.pop_front().unwrap(), &SIMPLE_PACKET);
-drop(tc_queue);
+let tc_with_sender = tc_receiver.try_recv().expect("no TC received");
+assert_eq!(tc_with_sender.packet, SIMPLE_PACKET);
+assert_eq!(tc_with_sender.sender_id, TCP_SERVER_ID);
+assert!(matches!(tc_receiver.try_recv(), Err(mpsc::TryRecvError::Empty)));
 }
 const TEST_APID_0: u16 = 0x02;
 const TEST_PACKET_ID_0: PacketId = PacketId::new_for_tc(true, TEST_APID_0);
+#[derive(Default)]
+pub struct SimpleVerificator {
+pub valid_ids: HashSet<PacketId>,
+}
+impl SpacePacketValidator for SimpleVerificator {
+fn validate(
+&self,
+sp_header: &SpHeader,
+_raw_buf: &[u8],
+) -> satrs::encoding::ccsds::SpValidity {
+if self.valid_ids.contains(&sp_header.packet_id()) {
+return SpValidity::Valid;
+}
+SpValidity::Skip
+}
+}
 #[test]
 fn test_ccsds_server() {
-let tc_receiver = SyncTcCacher::default();
+let (tc_sender, tc_receiver) = mpsc::channel();
 let mut tm_source = SyncTmSource::default();
 let sph = SpHeader::new_for_unseg_tc(TEST_APID_0, 0, 0);
 let verif_tm = PusTcCreator::new_simple(sph, 1, 1, &[], true);
 let tm_0 = verif_tm.to_vec().expect("tm generation failed");
 tm_source.add_tm(&tm_0);
-let mut packet_id_lookup = HashSet::new();
-packet_id_lookup.insert(TEST_PACKET_ID_0);
+let mut packet_id_lookup = SimpleVerificator::default();
+packet_id_lookup.valid_ids.insert(TEST_PACKET_ID_0);
 let mut tcp_server = TcpSpacepacketsServer::new(
-ServerConfig::new(AUTO_PORT_ADDR, Duration::from_millis(2), 1024, 1024),
+ServerConfig::new(
+TCP_SERVER_ID,
+AUTO_PORT_ADDR,
+Duration::from_millis(2),
+1024,
+1024,
+),
 tm_source,
-tc_receiver.clone(),
+tc_sender,
 packet_id_lookup,
 ConnectionFinishedHandler::default(),
 None,
@ -228,7 +245,7 @@ fn test_ccsds_server() {
 let set_if_done = conn_handled.clone();
 // Call the connection handler in separate thread, does block.
 thread::spawn(move || {
-let result = tcp_server.handle_next_connection(Some(Duration::from_millis(500)));
+let result = tcp_server.handle_all_connections(Some(Duration::from_millis(500)));
 if result.is_err() {
 panic!("handling connection failed: {:?}", result.unwrap_err());
 }
@ -282,7 +299,8 @@ fn test_ccsds_server() {
 panic!("connection was not handled properly");
 }
 // Check that TC has arrived.
-let mut tc_queue = tc_receiver.tc_queue.lock().unwrap();
-assert_eq!(tc_queue.len(), 1);
-assert_eq!(tc_queue.pop_front().unwrap(), tc_0);
+let tc_with_sender = tc_receiver.try_recv().expect("no TC received");
+assert_eq!(tc_with_sender.packet, tc_0);
+assert_eq!(tc_with_sender.sender_id, TCP_SERVER_ID);
+assert!(matches!(tc_receiver.try_recv(), Err(mpsc::TryRecvError::Empty)));
 }
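// Illustrative sketch of the reworked TC flow exercised by the tests above: the TCP servers
// forward received packets through a regular mpsc sender, and a downstream handler drains the
// receiving end. route_tc is a hypothetical routing function, not part of the crate:
//
// let (tc_sender, tc_receiver) = mpsc::channel();
// // ... construct the server with tc_sender and handle connections as in test_cobs_server ...
// while let Ok(packet_with_sender) = tc_receiver.try_recv() {
//     route_tc(packet_with_sender.sender_id, &packet_with_sender.packet);
// }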