diff --git a/README.md b/README.md index b1e76c9..350c2c3 100644 --- a/README.md +++ b/README.md @@ -24,11 +24,6 @@ A lot of the architecture and general design considerations are based on the through the 2 missions [FLP](https://www.irs.uni-stuttgart.de/en/research/satellitetechnology-and-instruments/smallsatelliteprogram/flying-laptop/) and [EIVE](https://www.irs.uni-stuttgart.de/en/research/satellitetechnology-and-instruments/smallsatelliteprogram/EIVE/). -This framework is in the early stages of development. Important features are missing. New releases -with breaking changes are released regularly, with all changes documented inside respective -changelog files. You should only use this framework if your are willing to work in this -environment. - # Overview This project currently contains following crates: diff --git a/satrs-book/src/communication.md b/satrs-book/src/communication.md index f102f6b..5ab55e6 100644 --- a/satrs-book/src/communication.md +++ b/satrs-book/src/communication.md @@ -17,7 +17,7 @@ it is still centered around small packets. `sat-rs` provides support for these E standards and also attempts to fill the gap to the internet protocol by providing the following components. -1. [UDP TMTC Server](https://docs.rs/satrs/latest/satrs/hal/host/udp_server/index.html). +1. [UDP TMTC Server](https://docs.rs/satrs/latest/satrs/hal/std/udp_server/index.html). UDP is already packet based which makes it an excellent fit for exchanging space packets. 2. [TCP TMTC Server Components](https://docs.rs/satrs/latest/satrs/hal/std/tcp_server/index.html). TCP is a stream based protocol, so the library provides building blocks to parse telemetry @@ -39,8 +39,12 @@ task might be to store all arriving telemetry persistently. This is especially i space systems which do not have permanent contact like low-earth-orbit (LEO) satellites. The most important task of a TC source is to deliver the telecommands to the correct recipients. -For modern component oriented software using message passing, this usually includes staged -demultiplexing components to determine where a command needs to be sent. +For component-oriented software using message passing, this usually includes staged demultiplexing +components to determine where a command needs to be sent. + +Using a generic concept of a TC source and a TM sink as part of the software design increases +the flexibility of the TMTC infrastructure: Newly added TM generators and TC receivers only have to +forward their generated or received packets to those handler objects. # Low-level protocols and the bridge to the communication subsystem diff --git a/satrs-book/src/modes-and-health.md b/satrs-book/src/modes-and-health.md index 4cb6878..e5b0193 100644 --- a/satrs-book/src/modes-and-health.md +++ b/satrs-book/src/modes-and-health.md @@ -1,11 +1,11 @@ # Modes -Modes are an extremely useful concept for complex system in general. They also allow simplified -system reasoning for both system operators and OBSW developers. They model the behaviour of a -component and also provide observability of a system. A few examples of how to model -different components of a space system with modes will be given. +Modes are an extremely useful concept to model complex systems. They allow simplified +system reasoning for both system operators and OBSW developers. They also provide a way to alter +the behaviour of a component and provide observability of a system.
A few examples of how to +model the modes of different components within a space system will be given. -## Modelling a pyhsical devices with modes +## Physical device components with modes The following simple mode scheme with the following three mode @@ -13,7 +13,8 @@ The following simple mode scheme with the following three mode - `ON` - `NORMAL` -can be applied to a large number of simpler devices of a remote system, for example sensors. +can be applied to a large number of simpler device controllers of a remote system, for example +sensors. 1. `OFF` means that a device is physically switched off, and the corresponding software component does not poll the device regularly. @@ -31,7 +32,7 @@ for the majority of devices: 2. `NORMAL` or `ON` to `OFF`: Any important shutdown configuration or handling must be performed before powering off the device. -## Modelling a controller with modes +## Controller components with modes Controller components are not modelling physical devices, but a mode scheme is still the best way to model most of these components. diff --git a/satrs-example/satrs-tmtc/.gitignore b/satrs-example/pytmtc/.gitignore similarity index 100% rename from satrs-example/satrs-tmtc/.gitignore rename to satrs-example/pytmtc/.gitignore diff --git a/satrs-example/satrs-tmtc/common.py b/satrs-example/pytmtc/common.py similarity index 100% rename from satrs-example/satrs-tmtc/common.py rename to satrs-example/pytmtc/common.py diff --git a/satrs-example/satrs-tmtc/main.py b/satrs-example/pytmtc/main.py similarity index 92% rename from satrs-example/satrs-tmtc/main.py rename to satrs-example/pytmtc/main.py index a3e0caf..23f10b0 100755 --- a/satrs-example/satrs-tmtc/main.py +++ b/satrs-example/pytmtc/main.py @@ -103,7 +103,9 @@ class PusHandler(GenericApidHandlerBase): def handle_tm(self, apid: int, packet: bytes, _user_args: Any): try: - pus_tm = PusTelemetry.unpack(packet, time_reader=CdsShortTimestamp.empty()) + pus_tm = PusTelemetry.unpack( + packet, timestamp_len=CdsShortTimestamp.TIMESTAMP_SIZE + ) except ValueError as e: _LOGGER.warning("Could not generate PUS TM object from raw data") _LOGGER.warning(f"Raw Packet: [{packet.hex(sep=',')}], REPR: {packet!r}") @@ -111,7 +113,7 @@ class PusHandler(GenericApidHandlerBase): service = pus_tm.service if service == 1: tm_packet = Service1Tm.unpack( - data=packet, params=UnpackParams(CdsShortTimestamp.empty(), 1, 2) + data=packet, params=UnpackParams(CdsShortTimestamp.TIMESTAMP_SIZE, 1, 2) ) res = self.verif_wrapper.add_tm(tm_packet) if res is None: @@ -128,7 +130,9 @@ class PusHandler(GenericApidHandlerBase): elif service == 3: _LOGGER.info("No handling for HK packets implemented") _LOGGER.info(f"Raw packet: 0x[{packet.hex(sep=',')}]") - pus_tm = PusTelemetry.unpack(packet, time_reader=CdsShortTimestamp.empty()) + pus_tm = PusTelemetry.unpack( + packet, timestamp_len=CdsShortTimestamp.TIMESTAMP_SIZE + ) if pus_tm.subservice == 25: if len(pus_tm.source_data) < 8: raise ValueError("No addressable ID in HK packet") @@ -136,7 +140,7 @@ class PusHandler(GenericApidHandlerBase): _LOGGER.info(json_str) elif service == 5: tm_packet = PusTelemetry.unpack( - packet, time_reader=CdsShortTimestamp.empty() + packet, timestamp_len=CdsShortTimestamp.TIMESTAMP_SIZE ) src_data = tm_packet.source_data event_u32 = EventU32.unpack(src_data) @@ -145,7 +149,7 @@ class PusHandler(GenericApidHandlerBase): _LOGGER.info("Received test event") elif service == 17: tm_packet = Service17Tm.unpack( - packet, time_reader=CdsShortTimestamp.empty() + packet,
timestamp_len=CdsShortTimestamp.TIMESTAMP_SIZE ) if tm_packet.subservice == 2: self.file_logger.info("Received Ping Reply TM[17,2]") @@ -162,7 +166,7 @@ class PusHandler(GenericApidHandlerBase): f"The service {service} is not implemented in Telemetry Factory" ) tm_packet = PusTelemetry.unpack( - packet, time_reader=CdsShortTimestamp.empty() + packet, timestamp_len=CdsShortTimestamp.TIMESTAMP_SIZE ) self.raw_logger.log_tm(pus_tm) @@ -197,15 +201,15 @@ class TcHandler(TcHandlerBase): _LOGGER.info(log_entry.log_str) def queue_finished_cb(self, info: ProcedureWrapper): - if info.proc_type == TcProcedureType.DEFAULT: - def_proc = info.to_def_procedure() + if info.proc_type == TcProcedureType.TREE_COMMANDING: + def_proc = info.to_tree_commanding_procedure() _LOGGER.info(f"Queue handling finished for command {def_proc.cmd_path}") def feed_cb(self, info: ProcedureWrapper, wrapper: FeedWrapper): q = self.queue_helper q.queue_wrapper = wrapper.queue_wrapper - if info.proc_type == TcProcedureType.DEFAULT: - def_proc = info.to_def_procedure() + if info.proc_type == TcProcedureType.TREE_COMMANDING: + def_proc = info.to_tree_commanding_procedure() assert def_proc.cmd_path is not None pus_tc.pack_pus_telecommands(q, def_proc.cmd_path) @@ -256,6 +260,7 @@ def main(): while True: state = tmtc_backend.periodic_op(None) if state.request == BackendRequest.TERMINATION_NO_ERROR: + tmtc_backend.close_com_if() sys.exit(0) elif state.request == BackendRequest.DELAY_IDLE: _LOGGER.info("TMTC Client in IDLE mode") @@ -270,6 +275,7 @@ def main(): elif state.request == BackendRequest.CALL_NEXT: pass except KeyboardInterrupt: + tmtc_backend.close_com_if() sys.exit(0) diff --git a/satrs-example/satrs-tmtc/pus_tc.py b/satrs-example/pytmtc/pus_tc.py similarity index 100% rename from satrs-example/satrs-tmtc/pus_tc.py rename to satrs-example/pytmtc/pus_tc.py diff --git a/satrs-example/satrs-tmtc/pus_tm.py b/satrs-example/pytmtc/pus_tm.py similarity index 100% rename from satrs-example/satrs-tmtc/pus_tm.py rename to satrs-example/pytmtc/pus_tm.py diff --git a/satrs-example/satrs-tmtc/requirements.txt b/satrs-example/pytmtc/requirements.txt similarity index 83% rename from satrs-example/satrs-tmtc/requirements.txt rename to satrs-example/pytmtc/requirements.txt index b3f6f2a..325615c 100644 --- a/satrs-example/satrs-tmtc/requirements.txt +++ b/satrs-example/pytmtc/requirements.txt @@ -1,2 +1,2 @@ -tmtccmd == 8.0.0rc1 +tmtccmd == 8.0.0rc2 # -e git+https://github.com/robamu-org/tmtccmd@97e5e51101a08b21472b3ddecc2063359f7e307a#egg=tmtccmd diff --git a/satrs-example/satrs-tmtc/tc_definitions.py b/satrs-example/pytmtc/tc_definitions.py similarity index 100% rename from satrs-example/satrs-tmtc/tc_definitions.py rename to satrs-example/pytmtc/tc_definitions.py diff --git a/satrs-example/satrs-tmtc/tmtc_conf.json b/satrs-example/pytmtc/tmtc_conf.json similarity index 100% rename from satrs-example/satrs-tmtc/tmtc_conf.json rename to satrs-example/pytmtc/tmtc_conf.json diff --git a/satrs-example/src/acs/mgm.rs b/satrs-example/src/acs/mgm.rs index 1cb7eee..d50bc6d 100644 --- a/satrs-example/src/acs/mgm.rs +++ b/satrs-example/src/acs/mgm.rs @@ -11,7 +11,7 @@ use std::sync::{Arc, Mutex}; use satrs::mode::{ ModeAndSubmode, ModeError, ModeProvider, ModeReply, ModeRequest, ModeRequestHandler, }; -use satrs::pus::{EcssTmSenderCore, PusTmVariant}; +use satrs::pus::{EcssTmSender, PusTmVariant}; use satrs::request::{GenericMessage, MessageMetadata, UniqueApidTargetId}; use satrs_example::config::components::PUS_MODE_SERVICE; @@ -64,7 +64,7 @@ 
pub struct MpscModeLeafInterface { /// Example MGM device handler strongly based on the LIS3MDL MEMS device. #[derive(new)] #[allow(clippy::too_many_arguments)] -pub struct MgmHandlerLis3Mdl { +pub struct MgmHandlerLis3Mdl { id: UniqueApidTargetId, dev_str: &'static str, mode_interface: MpscModeLeafInterface, @@ -85,9 +85,7 @@ pub struct MgmHandlerLis3Mdl - MgmHandlerLis3Mdl -{ +impl MgmHandlerLis3Mdl { pub fn periodic_operation(&mut self) { self.stamp_helper.update_from_now(); // Handle requests. @@ -203,7 +201,7 @@ impl } } -impl ModeProvider +impl ModeProvider for MgmHandlerLis3Mdl { fn mode_and_submode(&self) -> ModeAndSubmode { @@ -211,7 +209,7 @@ impl ModeProvider } } -impl ModeRequestHandler +impl ModeRequestHandler for MgmHandlerLis3Mdl { type Error = ModeError; diff --git a/satrs-example/src/ccsds.rs b/satrs-example/src/ccsds.rs deleted file mode 100644 index 1841d17..0000000 --- a/satrs-example/src/ccsds.rs +++ /dev/null @@ -1,53 +0,0 @@ -use satrs::pus::ReceivesEcssPusTc; -use satrs::spacepackets::{CcsdsPacket, SpHeader}; -use satrs::tmtc::{CcsdsPacketHandler, ReceivesCcsdsTc}; -use satrs::ValidatorU16Id; -use satrs_example::config::components::Apid; -use satrs_example::config::APID_VALIDATOR; - -#[derive(Clone)] -pub struct CcsdsReceiver< - TcSource: ReceivesCcsdsTc + ReceivesEcssPusTc + Clone, - E, -> { - pub tc_source: TcSource, -} - -impl< - TcSource: ReceivesCcsdsTc + ReceivesEcssPusTc + Clone + 'static, - E: 'static, - > ValidatorU16Id for CcsdsReceiver -{ - fn validate(&self, apid: u16) -> bool { - APID_VALIDATOR.contains(&apid) - } -} - -impl< - TcSource: ReceivesCcsdsTc + ReceivesEcssPusTc + Clone + 'static, - E: 'static, - > CcsdsPacketHandler for CcsdsReceiver -{ - type Error = E; - - fn handle_packet_with_valid_apid( - &mut self, - sp_header: &SpHeader, - tc_raw: &[u8], - ) -> Result<(), Self::Error> { - if sp_header.apid() == Apid::Cfdp as u16 { - } else { - return self.tc_source.pass_ccsds(sp_header, tc_raw); - } - Ok(()) - } - - fn handle_packet_with_unknown_apid( - &mut self, - sp_header: &SpHeader, - _tc_raw: &[u8], - ) -> Result<(), Self::Error> { - log::warn!("unknown APID 0x{:x?} detected", sp_header.apid()); - Ok(()) - } -} diff --git a/satrs-example/src/config.rs b/satrs-example/src/config.rs index 7e474e9..5168927 100644 --- a/satrs-example/src/config.rs +++ b/satrs-example/src/config.rs @@ -132,6 +132,7 @@ pub mod components { GenericPus = 2, Acs = 3, Cfdp = 4, + Tmtc = 5, } // Component IDs for components with the PUS APID. 
@@ -150,6 +151,12 @@ pub mod components { Mgm0 = 0, } + #[derive(Copy, Clone, PartialEq, Eq)] + pub enum TmtcId { + UdpServer = 0, + TcpServer = 1, + } + pub const PUS_ACTION_SERVICE: UniqueApidTargetId = UniqueApidTargetId::new(Apid::GenericPus as u16, PusId::PusAction as u32); pub const PUS_EVENT_MANAGEMENT: UniqueApidTargetId = @@ -166,6 +173,10 @@ pub mod components { UniqueApidTargetId::new(Apid::Sched as u16, 0); pub const MGM_HANDLER_0: UniqueApidTargetId = UniqueApidTargetId::new(Apid::Acs as u16, AcsId::Mgm0 as u32); + pub const UDP_SERVER: UniqueApidTargetId = + UniqueApidTargetId::new(Apid::Tmtc as u16, TmtcId::UdpServer as u32); + pub const TCP_SERVER: UniqueApidTargetId = + UniqueApidTargetId::new(Apid::Tmtc as u16, TmtcId::TcpServer as u32); } pub mod pool { diff --git a/satrs-example/src/events.rs b/satrs-example/src/events.rs index 4d7ea9f..5d1bdaf 100644 --- a/satrs-example/src/events.rs +++ b/satrs-example/src/events.rs @@ -5,7 +5,7 @@ use satrs::event_man::{EventMessageU32, EventRoutingError}; use satrs::params::WritableToBeBytes; use satrs::pus::event::EventTmHookProvider; use satrs::pus::verification::VerificationReporter; -use satrs::pus::EcssTmSenderCore; +use satrs::pus::EcssTmSender; use satrs::request::UniqueApidTargetId; use satrs::{ event_man::{ @@ -38,7 +38,7 @@ impl EventTmHookProvider for EventApidSetter { /// The PUS event handler subscribes for all events and converts them into ECSS PUS 5 event /// packets. It also handles the verification completion of PUS event service requests. -pub struct PusEventHandler { +pub struct PusEventHandler { event_request_rx: mpsc::Receiver, pus_event_dispatcher: DefaultPusEventU32Dispatcher<()>, pus_event_man_rx: mpsc::Receiver, @@ -49,7 +49,7 @@ pub struct PusEventHandler { event_apid_setter: EventApidSetter, } -impl PusEventHandler { +impl PusEventHandler { pub fn new( tm_sender: TmSender, verif_handler: VerificationReporter, @@ -177,12 +177,12 @@ impl EventManagerWrapper { } } -pub struct EventHandler { +pub struct EventHandler { pub event_man_wrapper: EventManagerWrapper, pub pus_event_handler: PusEventHandler, } -impl EventHandler { +impl EventHandler { pub fn new( tm_sender: TmSender, event_request_rx: mpsc::Receiver, diff --git a/satrs-example/src/interface/mod.rs b/satrs-example/src/interface/mod.rs new file mode 100644 index 0000000..d10d73f --- /dev/null +++ b/satrs-example/src/interface/mod.rs @@ -0,0 +1,3 @@ +//! This module contains all component related to the direct interface of the example. 
+pub mod tcp; +pub mod udp; diff --git a/satrs-example/src/interface/tcp.rs b/satrs-example/src/interface/tcp.rs new file mode 100644 index 0000000..021ad31 --- /dev/null +++ b/satrs-example/src/interface/tcp.rs @@ -0,0 +1,154 @@ +use std::time::Duration; +use std::{ + collections::{HashSet, VecDeque}, + fmt::Debug, + marker::PhantomData, + sync::{Arc, Mutex}, +}; + +use log::{info, warn}; +use satrs::{ + encoding::ccsds::{SpValidity, SpacePacketValidator}, + hal::std::tcp_server::{HandledConnectionHandler, ServerConfig, TcpSpacepacketsServer}, + spacepackets::{CcsdsPacket, PacketId}, + tmtc::{PacketSenderRaw, PacketSource}, +}; + +#[derive(Default)] +pub struct ConnectionFinishedHandler {} + +pub struct SimplePacketValidator { + pub valid_ids: HashSet, +} + +impl SpacePacketValidator for SimplePacketValidator { + fn validate( + &self, + sp_header: &satrs::spacepackets::SpHeader, + _raw_buf: &[u8], + ) -> satrs::encoding::ccsds::SpValidity { + if self.valid_ids.contains(&sp_header.packet_id()) { + return SpValidity::Valid; + } + log::warn!("ignoring space packet with header {:?}", sp_header); + // We could perform a CRC check.. but lets keep this simple and assume that TCP ensures + // data integrity. + SpValidity::Skip + } +} + +impl HandledConnectionHandler for ConnectionFinishedHandler { + fn handled_connection(&mut self, info: satrs::hal::std::tcp_server::HandledConnectionInfo) { + info!( + "Served {} TMs and {} TCs for client {:?}", + info.num_sent_tms, info.num_received_tcs, info.addr + ); + } +} + +#[derive(Default, Clone)] +pub struct SyncTcpTmSource { + tm_queue: Arc>>>, + max_packets_stored: usize, + pub silent_packet_overwrite: bool, +} + +impl SyncTcpTmSource { + pub fn new(max_packets_stored: usize) -> Self { + Self { + tm_queue: Arc::default(), + max_packets_stored, + silent_packet_overwrite: true, + } + } + + pub fn add_tm(&mut self, tm: &[u8]) { + let mut tm_queue = self.tm_queue.lock().expect("locking tm queue failec"); + if tm_queue.len() > self.max_packets_stored { + if !self.silent_packet_overwrite { + warn!("TPC TM source is full, deleting oldest packet"); + } + tm_queue.pop_front(); + } + tm_queue.push_back(tm.to_vec()); + } +} + +impl PacketSource for SyncTcpTmSource { + type Error = (); + + fn retrieve_packet(&mut self, buffer: &mut [u8]) -> Result { + let mut tm_queue = self.tm_queue.lock().expect("locking tm queue failed"); + if !tm_queue.is_empty() { + let next_vec = tm_queue.front().unwrap(); + if buffer.len() < next_vec.len() { + panic!( + "provided buffer too small, must be at least {} bytes", + next_vec.len() + ); + } + let next_vec = tm_queue.pop_front().unwrap(); + buffer[0..next_vec.len()].copy_from_slice(&next_vec); + if next_vec.len() > 9 { + let service = next_vec[7]; + let subservice = next_vec[8]; + info!("Sending PUS TM[{service},{subservice}]") + } else { + info!("Sending PUS TM"); + } + return Ok(next_vec.len()); + } + Ok(0) + } +} + +pub type TcpServer = TcpSpacepacketsServer< + SyncTcpTmSource, + ReceivesTc, + SimplePacketValidator, + ConnectionFinishedHandler, + (), + SendError, +>; + +pub struct TcpTask, SendError: Debug + 'static>( + pub TcpServer, + PhantomData, +); + +impl, SendError: Debug + 'static> + TcpTask +{ + pub fn new( + cfg: ServerConfig, + tm_source: SyncTcpTmSource, + tc_sender: TcSender, + valid_ids: HashSet, + ) -> Result { + Ok(Self( + TcpSpacepacketsServer::new( + cfg, + tm_source, + tc_sender, + SimplePacketValidator { valid_ids }, + ConnectionFinishedHandler::default(), + None, + )?, + PhantomData, + )) + } + + pub fn 
periodic_operation(&mut self) { + loop { + let result = self + .0 + .handle_all_connections(Some(Duration::from_millis(400))); + match result { + Ok(_conn_result) => (), + Err(e) => { + warn!("TCP server error: {e:?}"); + } + } + } + } +} diff --git a/satrs-example/src/udp.rs b/satrs-example/src/interface/udp.rs similarity index 63% rename from satrs-example/src/udp.rs rename to satrs-example/src/interface/udp.rs index 2cb4823..d7816e2 100644 --- a/satrs-example/src/udp.rs +++ b/satrs-example/src/interface/udp.rs @@ -1,20 +1,22 @@ +use core::fmt::Debug; use std::net::{SocketAddr, UdpSocket}; use std::sync::mpsc; use log::{info, warn}; -use satrs::pus::{PusTmAsVec, PusTmInPool}; +use satrs::tmtc::{PacketAsVec, PacketInPool, PacketSenderRaw}; use satrs::{ hal::std::udp_server::{ReceiveResult, UdpTcServer}, pool::{PoolProviderWithGuards, SharedStaticMemoryPool}, - tmtc::CcsdsError, }; +use crate::pus::HandlingStatus; + pub trait UdpTmHandler { fn send_tm_to_udp_client(&mut self, socket: &UdpSocket, recv_addr: &SocketAddr); } pub struct StaticUdpTmHandler { - pub tm_rx: mpsc::Receiver, + pub tm_rx: mpsc::Receiver, pub tm_store: SharedStaticMemoryPool, } @@ -43,7 +45,7 @@ impl UdpTmHandler for StaticUdpTmHandler { } pub struct DynamicUdpTmHandler { - pub tm_rx: mpsc::Receiver, + pub tm_rx: mpsc::Receiver, } impl UdpTmHandler for DynamicUdpTmHandler { @@ -64,49 +66,57 @@ impl UdpTmHandler for DynamicUdpTmHandler { } } -pub struct UdpTmtcServer { - pub udp_tc_server: UdpTcServer>, +pub struct UdpTmtcServer< + TcSender: PacketSenderRaw, + TmHandler: UdpTmHandler, + SendError, +> { + pub udp_tc_server: UdpTcServer, pub tm_handler: TmHandler, } -impl - UdpTmtcServer +impl< + TcSender: PacketSenderRaw, + TmHandler: UdpTmHandler, + SendError: Debug + 'static, + > UdpTmtcServer { pub fn periodic_operation(&mut self) { - while self.poll_tc_server() {} + loop { + if self.poll_tc_server() == HandlingStatus::Empty { + break; + } + } if let Some(recv_addr) = self.udp_tc_server.last_sender() { self.tm_handler .send_tm_to_udp_client(&self.udp_tc_server.socket, &recv_addr); } } - fn poll_tc_server(&mut self) -> bool { + fn poll_tc_server(&mut self) -> HandlingStatus { match self.udp_tc_server.try_recv_tc() { - Ok(_) => true, - Err(e) => match e { - ReceiveResult::ReceiverError(e) => match e { - CcsdsError::ByteConversionError(e) => { - warn!("packet error: {e:?}"); - true + Ok(_) => HandlingStatus::HandledOne, + Err(e) => { + match e { + ReceiveResult::NothingReceived => (), + ReceiveResult::Io(e) => { + warn!("IO error {e}"); } - CcsdsError::CustomError(e) => { - warn!("mpsc custom error {e:?}"); - true + ReceiveResult::Send(send_error) => { + warn!("send error {send_error:?}"); } - }, - ReceiveResult::IoError(e) => { - warn!("IO error {e}"); - false } - ReceiveResult::NothingReceived => false, - }, + HandlingStatus::Empty + } } } } #[cfg(test)] mod tests { + use std::net::Ipv4Addr; use std::{ + cell::RefCell, collections::VecDeque, net::IpAddr, sync::{Arc, Mutex}, @@ -117,21 +127,26 @@ mod tests { ecss::{tc::PusTcCreator, WritablePusPacket}, SpHeader, }, - tmtc::ReceivesTcCore, + tmtc::PacketSenderRaw, + ComponentId, }; use satrs_example::config::{components, OBSW_SERVER_ADDR}; use super::*; - #[derive(Default, Debug, Clone)] - pub struct TestReceiver { - tc_vec: Arc>>>, + const UDP_SERVER_ID: ComponentId = 0x05; + + #[derive(Default, Debug)] + pub struct TestSender { + tc_vec: RefCell>, } - impl ReceivesTcCore for TestReceiver { - type Error = CcsdsError<()>; - fn pass_tc(&mut self, tc_raw: &[u8]) -> 
Result<(), Self::Error> { - self.tc_vec.lock().unwrap().push_back(tc_raw.to_vec()); + impl PacketSenderRaw for TestSender { + type Error = (); + + fn send_packet(&self, sender_id: ComponentId, tc_raw: &[u8]) -> Result<(), Self::Error> { + let mut mut_queue = self.tc_vec.borrow_mut(); + mut_queue.push_back(PacketAsVec::new(sender_id, tc_raw.to_vec())); Ok(()) } } @@ -150,9 +165,10 @@ mod tests { #[test] fn test_basic() { let sock_addr = SocketAddr::new(IpAddr::V4(OBSW_SERVER_ADDR), 0); - let test_receiver = TestReceiver::default(); - let tc_queue = test_receiver.tc_vec.clone(); - let udp_tc_server = UdpTcServer::new(sock_addr, 2048, Box::new(test_receiver)).unwrap(); + let test_receiver = TestSender::default(); + // let tc_queue = test_receiver.tc_vec.clone(); + let udp_tc_server = + UdpTcServer::new(UDP_SERVER_ID, sock_addr, 2048, test_receiver).unwrap(); let tm_handler = TestTmHandler::default(); let tm_handler_calls = tm_handler.addrs_to_send_to.clone(); let mut udp_dyn_server = UdpTmtcServer { @@ -160,16 +176,18 @@ mod tests { tm_handler, }; udp_dyn_server.periodic_operation(); - assert!(tc_queue.lock().unwrap().is_empty()); + let queue = udp_dyn_server.udp_tc_server.tc_sender.tc_vec.borrow(); + assert!(queue.is_empty()); assert!(tm_handler_calls.lock().unwrap().is_empty()); } #[test] fn test_transactions() { - let sock_addr = SocketAddr::new(IpAddr::V4(OBSW_SERVER_ADDR), 0); - let test_receiver = TestReceiver::default(); - let tc_queue = test_receiver.tc_vec.clone(); - let udp_tc_server = UdpTcServer::new(sock_addr, 2048, Box::new(test_receiver)).unwrap(); + let sock_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 0); + let test_receiver = TestSender::default(); + // let tc_queue = test_receiver.tc_vec.clone(); + let udp_tc_server = + UdpTcServer::new(UDP_SERVER_ID, sock_addr, 2048, test_receiver).unwrap(); let server_addr = udp_tc_server.socket.local_addr().unwrap(); let tm_handler = TestTmHandler::default(); let tm_handler_calls = tm_handler.addrs_to_send_to.clone(); @@ -183,14 +201,15 @@ mod tests { .unwrap(); let client = UdpSocket::bind("127.0.0.1:0").expect("Connecting to UDP server failed"); let client_addr = client.local_addr().unwrap(); - client.connect(server_addr).unwrap(); - client.send(&ping_tc).unwrap(); + println!("{}", server_addr); + client.send_to(&ping_tc, server_addr).unwrap(); udp_dyn_server.periodic_operation(); { - let mut tc_queue = tc_queue.lock().unwrap(); - assert!(!tc_queue.is_empty()); - let received_tc = tc_queue.pop_front().unwrap(); - assert_eq!(received_tc, ping_tc); + let mut queue = udp_dyn_server.udp_tc_server.tc_sender.tc_vec.borrow_mut(); + assert!(!queue.is_empty()); + let packet_with_sender = queue.pop_front().unwrap(); + assert_eq!(packet_with_sender.packet, ping_tc); + assert_eq!(packet_with_sender.sender_id, UDP_SERVER_ID); } { @@ -201,7 +220,9 @@ mod tests { assert_eq!(received_addr, client_addr); } udp_dyn_server.periodic_operation(); - assert!(tc_queue.lock().unwrap().is_empty()); + let queue = udp_dyn_server.udp_tc_server.tc_sender.tc_vec.borrow(); + assert!(queue.is_empty()); + drop(queue); // Still tries to send to the same client. 
{ let mut tm_handler_calls = tm_handler_calls.lock().unwrap(); diff --git a/satrs-example/src/main.rs b/satrs-example/src/main.rs index a6456d6..cf8e050 100644 --- a/satrs-example/src/main.rs +++ b/satrs-example/src/main.rs @@ -1,34 +1,32 @@ mod acs; -mod ccsds; mod events; mod hk; +mod interface; mod logger; mod pus; mod requests; -mod tcp; -mod tm_funnel; mod tmtc; -mod udp; use crate::events::EventHandler; +use crate::interface::udp::DynamicUdpTmHandler; use crate::pus::stack::PusStack; -use crate::tm_funnel::{TmFunnelDynamic, TmFunnelStatic}; +use crate::tmtc::tc_source::{TcSourceTaskDynamic, TcSourceTaskStatic}; +use crate::tmtc::tm_sink::{TmFunnelDynamic, TmFunnelStatic}; use log::info; use pus::test::create_test_service_dynamic; use satrs::hal::std::tcp_server::ServerConfig; use satrs::hal::std::udp_server::UdpTcServer; use satrs::request::GenericMessage; -use satrs::tmtc::tm_helper::SharedTmPool; +use satrs::tmtc::{PacketSenderWithSharedPool, SharedPacketPool}; use satrs_example::config::pool::{create_sched_tc_pool, create_static_pools}; use satrs_example::config::tasks::{ FREQ_MS_AOCS, FREQ_MS_EVENT_HANDLING, FREQ_MS_PUS_STACK, FREQ_MS_UDP_TMTC, }; use satrs_example::config::{OBSW_SERVER_ADDR, PACKET_ID_VALIDATOR, SERVER_PORT}; -use tmtc::PusTcSourceProviderDynamic; -use udp::DynamicUdpTmHandler; use crate::acs::mgm::{MgmHandlerLis3Mdl, MpscModeLeafInterface, SpiDummyInterface}; -use crate::ccsds::CcsdsReceiver; +use crate::interface::tcp::{SyncTcpTmSource, TcpTask}; +use crate::interface::udp::{StaticUdpTmHandler, UdpTmtcServer}; use crate::logger::setup_logger; use crate::pus::action::{create_action_service_dynamic, create_action_service_static}; use crate::pus::event::{create_event_service_dynamic, create_event_service_static}; @@ -36,19 +34,12 @@ use crate::pus::hk::{create_hk_service_dynamic, create_hk_service_static}; use crate::pus::mode::{create_mode_service_dynamic, create_mode_service_static}; use crate::pus::scheduler::{create_scheduler_service_dynamic, create_scheduler_service_static}; use crate::pus::test::create_test_service_static; -use crate::pus::{PusReceiver, PusTcMpscRouter}; +use crate::pus::{PusTcDistributor, PusTcMpscRouter}; use crate::requests::{CompositeRequest, GenericRequestRouter}; -use crate::tcp::{SyncTcpTmSource, TcpTask}; -use crate::tmtc::{ - PusTcSourceProviderSharedPool, SharedTcPool, TcSourceTaskDynamic, TcSourceTaskStatic, -}; -use crate::udp::{StaticUdpTmHandler, UdpTmtcServer}; use satrs::mode::ModeRequest; use satrs::pus::event_man::EventRequestWithToken; -use satrs::pus::TmInSharedPoolSender; use satrs::spacepackets::{time::cds::CdsTime, time::TimeWriter}; -use satrs::tmtc::CcsdsDistributor; -use satrs_example::config::components::MGM_HANDLER_0; +use satrs_example::config::components::{MGM_HANDLER_0, TCP_SERVER, UDP_SERVER}; use std::net::{IpAddr, SocketAddr}; use std::sync::mpsc; use std::sync::{Arc, RwLock}; @@ -58,16 +49,16 @@ use std::time::Duration; #[allow(dead_code)] fn static_tmtc_pool_main() { let (tm_pool, tc_pool) = create_static_pools(); - let shared_tm_pool = SharedTmPool::new(tm_pool); - let shared_tc_pool = SharedTcPool { - pool: Arc::new(RwLock::new(tc_pool)), - }; + let shared_tm_pool = Arc::new(RwLock::new(tm_pool)); + let shared_tc_pool = Arc::new(RwLock::new(tc_pool)); + let shared_tm_pool_wrapper = SharedPacketPool::new(&shared_tm_pool); + let shared_tc_pool_wrapper = SharedPacketPool::new(&shared_tc_pool); let (tc_source_tx, tc_source_rx) = mpsc::sync_channel(50); let (tm_funnel_tx, tm_funnel_rx) = 
mpsc::sync_channel(50); let (tm_server_tx, tm_server_rx) = mpsc::sync_channel(50); let tm_funnel_tx_sender = - TmInSharedPoolSender::new(shared_tm_pool.clone(), tm_funnel_tx.clone()); + PacketSenderWithSharedPool::new(tm_funnel_tx.clone(), shared_tm_pool_wrapper.clone()); let (mgm_handler_composite_tx, mgm_handler_composite_rx) = mpsc::channel::>(); @@ -84,10 +75,7 @@ fn static_tmtc_pool_main() { // This helper structure is used by all telecommand providers which need to send telecommands // to the TC source. - let tc_source = PusTcSourceProviderSharedPool { - shared_pool: shared_tc_pool.clone(), - tc_source: tc_source_tx, - }; + let tc_source = PacketSenderWithSharedPool::new(tc_source_tx, shared_tc_pool_wrapper.clone()); // Create event handling components // These sender handles are used to send event requests, for example to enable or disable @@ -119,7 +107,7 @@ fn static_tmtc_pool_main() { }; let pus_test_service = create_test_service_static( tm_funnel_tx_sender.clone(), - shared_tc_pool.pool.clone(), + shared_tc_pool.clone(), event_handler.clone_event_sender(), pus_test_rx, ); @@ -131,27 +119,27 @@ fn static_tmtc_pool_main() { ); let pus_event_service = create_event_service_static( tm_funnel_tx_sender.clone(), - shared_tc_pool.pool.clone(), + shared_tc_pool.clone(), pus_event_rx, event_request_tx, ); let pus_action_service = create_action_service_static( tm_funnel_tx_sender.clone(), - shared_tc_pool.pool.clone(), + shared_tc_pool.clone(), pus_action_rx, request_map.clone(), pus_action_reply_rx, ); let pus_hk_service = create_hk_service_static( tm_funnel_tx_sender.clone(), - shared_tc_pool.pool.clone(), + shared_tc_pool.clone(), pus_hk_rx, request_map.clone(), pus_hk_reply_rx, ); let pus_mode_service = create_mode_service_static( tm_funnel_tx_sender.clone(), - shared_tc_pool.pool.clone(), + shared_tc_pool.clone(), pus_mode_rx, request_map, pus_mode_reply_rx, @@ -165,38 +153,41 @@ fn static_tmtc_pool_main() { pus_mode_service, ); - let ccsds_receiver = CcsdsReceiver { tc_source }; let mut tmtc_task = TcSourceTaskStatic::new( - shared_tc_pool.clone(), + shared_tc_pool_wrapper.clone(), tc_source_rx, - PusReceiver::new(tm_funnel_tx_sender, pus_router), + PusTcDistributor::new(tm_funnel_tx_sender, pus_router), ); let sock_addr = SocketAddr::new(IpAddr::V4(OBSW_SERVER_ADDR), SERVER_PORT); - let udp_ccsds_distributor = CcsdsDistributor::new(ccsds_receiver.clone()); - let udp_tc_server = UdpTcServer::new(sock_addr, 2048, Box::new(udp_ccsds_distributor)) + let udp_tc_server = UdpTcServer::new(UDP_SERVER.id(), sock_addr, 2048, tc_source.clone()) .expect("creating UDP TMTC server failed"); let mut udp_tmtc_server = UdpTmtcServer { udp_tc_server, tm_handler: StaticUdpTmHandler { tm_rx: tm_server_rx, - tm_store: shared_tm_pool.clone_backing_pool(), + tm_store: shared_tm_pool.clone(), }, }; - let tcp_ccsds_distributor = CcsdsDistributor::new(ccsds_receiver); - let tcp_server_cfg = ServerConfig::new(sock_addr, Duration::from_millis(400), 4096, 8192); + let tcp_server_cfg = ServerConfig::new( + TCP_SERVER.id(), + sock_addr, + Duration::from_millis(400), + 4096, + 8192, + ); let sync_tm_tcp_source = SyncTcpTmSource::new(200); let mut tcp_server = TcpTask::new( tcp_server_cfg, sync_tm_tcp_source.clone(), - tcp_ccsds_distributor, + tc_source.clone(), PACKET_ID_VALIDATOR.clone(), ) .expect("tcp server creation failed"); let mut tm_funnel = TmFunnelStatic::new( - shared_tm_pool, + shared_tm_pool_wrapper, sync_tm_tcp_source, tm_funnel_rx, tm_server_tx, @@ -225,7 +216,7 @@ fn static_tmtc_pool_main() { 
info!("Starting TMTC and UDP task"); let jh_udp_tmtc = thread::Builder::new() - .name("TMTC and UDP".to_string()) + .name("SATRS tmtc-udp".to_string()) .spawn(move || { info!("Running UDP server on port {SERVER_PORT}"); loop { @@ -238,7 +229,7 @@ fn static_tmtc_pool_main() { info!("Starting TCP task"); let jh_tcp = thread::Builder::new() - .name("TCP".to_string()) + .name("sat-rs tcp".to_string()) .spawn(move || { info!("Running TCP server on port {SERVER_PORT}"); loop { @@ -257,7 +248,7 @@ fn static_tmtc_pool_main() { info!("Starting event handling task"); let jh_event_handling = thread::Builder::new() - .name("Event".to_string()) + .name("sat-rs events".to_string()) .spawn(move || loop { event_handler.periodic_operation(); thread::sleep(Duration::from_millis(FREQ_MS_EVENT_HANDLING)); @@ -266,7 +257,7 @@ fn static_tmtc_pool_main() { info!("Starting AOCS thread"); let jh_aocs = thread::Builder::new() - .name("AOCS".to_string()) + .name("sat-rs aocs".to_string()) .spawn(move || loop { mgm_handler.periodic_operation(); thread::sleep(Duration::from_millis(FREQ_MS_AOCS)); @@ -275,7 +266,7 @@ fn static_tmtc_pool_main() { info!("Starting PUS handler thread"); let jh_pus_handler = thread::Builder::new() - .name("PUS".to_string()) + .name("sat-rs pus".to_string()) .spawn(move || loop { pus_stack.periodic_operation(); thread::sleep(Duration::from_millis(FREQ_MS_PUS_STACK)); @@ -320,8 +311,6 @@ fn dyn_tmtc_pool_main() { .mode_router_map .insert(MGM_HANDLER_0.raw(), mgm_handler_mode_tx); - let tc_source = PusTcSourceProviderDynamic(tc_source_tx); - // Create event handling components // These sender handles are used to send event requests, for example to enable or disable // certain events. @@ -357,7 +346,7 @@ fn dyn_tmtc_pool_main() { ); let pus_scheduler_service = create_scheduler_service_dynamic( tm_funnel_tx.clone(), - tc_source.0.clone(), + tc_source_tx.clone(), pus_sched_rx, create_sched_tc_pool(), ); @@ -391,16 +380,13 @@ fn dyn_tmtc_pool_main() { pus_mode_service, ); - let ccsds_receiver = CcsdsReceiver { tc_source }; - let mut tmtc_task = TcSourceTaskDynamic::new( tc_source_rx, - PusReceiver::new(tm_funnel_tx.clone(), pus_router), + PusTcDistributor::new(tm_funnel_tx.clone(), pus_router), ); let sock_addr = SocketAddr::new(IpAddr::V4(OBSW_SERVER_ADDR), SERVER_PORT); - let udp_ccsds_distributor = CcsdsDistributor::new(ccsds_receiver.clone()); - let udp_tc_server = UdpTcServer::new(sock_addr, 2048, Box::new(udp_ccsds_distributor)) + let udp_tc_server = UdpTcServer::new(UDP_SERVER.id(), sock_addr, 2048, tc_source_tx.clone()) .expect("creating UDP TMTC server failed"); let mut udp_tmtc_server = UdpTmtcServer { udp_tc_server, @@ -409,13 +395,18 @@ fn dyn_tmtc_pool_main() { }, }; - let tcp_ccsds_distributor = CcsdsDistributor::new(ccsds_receiver); - let tcp_server_cfg = ServerConfig::new(sock_addr, Duration::from_millis(400), 4096, 8192); + let tcp_server_cfg = ServerConfig::new( + TCP_SERVER.id(), + sock_addr, + Duration::from_millis(400), + 4096, + 8192, + ); let sync_tm_tcp_source = SyncTcpTmSource::new(200); let mut tcp_server = TcpTask::new( tcp_server_cfg, sync_tm_tcp_source.clone(), - tcp_ccsds_distributor, + tc_source_tx.clone(), PACKET_ID_VALIDATOR.clone(), ) .expect("tcp server creation failed"); @@ -444,7 +435,7 @@ fn dyn_tmtc_pool_main() { info!("Starting TMTC and UDP task"); let jh_udp_tmtc = thread::Builder::new() - .name("TMTC and UDP".to_string()) + .name("sat-rs tmtc-udp".to_string()) .spawn(move || { info!("Running UDP server on port {SERVER_PORT}"); loop { @@ -457,7 +448,7 @@ fn 
dyn_tmtc_pool_main() { info!("Starting TCP task"); let jh_tcp = thread::Builder::new() - .name("TCP".to_string()) + .name("sat-rs tcp".to_string()) .spawn(move || { info!("Running TCP server on port {SERVER_PORT}"); loop { @@ -468,7 +459,7 @@ fn dyn_tmtc_pool_main() { info!("Starting TM funnel task"); let jh_tm_funnel = thread::Builder::new() - .name("TM Funnel".to_string()) + .name("sat-rs tm-funnel".to_string()) .spawn(move || loop { tm_funnel.operation(); }) @@ -476,7 +467,7 @@ fn dyn_tmtc_pool_main() { info!("Starting event handling task"); let jh_event_handling = thread::Builder::new() - .name("Event".to_string()) + .name("sat-rs events".to_string()) .spawn(move || loop { event_handler.periodic_operation(); thread::sleep(Duration::from_millis(FREQ_MS_EVENT_HANDLING)); @@ -485,7 +476,7 @@ fn dyn_tmtc_pool_main() { info!("Starting AOCS thread"); let jh_aocs = thread::Builder::new() - .name("AOCS".to_string()) + .name("sat-rs aocs".to_string()) .spawn(move || loop { mgm_handler.periodic_operation(); thread::sleep(Duration::from_millis(FREQ_MS_AOCS)); @@ -494,7 +485,7 @@ fn dyn_tmtc_pool_main() { info!("Starting PUS handler thread"); let jh_pus_handler = thread::Builder::new() - .name("PUS".to_string()) + .name("sat-rs pus".to_string()) .spawn(move || loop { pus_stack.periodic_operation(); thread::sleep(Duration::from_millis(FREQ_MS_PUS_STACK)); diff --git a/satrs-example/src/pus/action.rs b/satrs-example/src/pus/action.rs index 22b6b93..dcdb345 100644 --- a/satrs-example/src/pus/action.rs +++ b/satrs-example/src/pus/action.rs @@ -3,7 +3,7 @@ use satrs::action::{ActionRequest, ActionRequestVariant}; use satrs::params::WritableToBeBytes; use satrs::pool::SharedStaticMemoryPool; use satrs::pus::action::{ - ActionReplyVariant, ActivePusActionRequestStd, DefaultActiveActionRequestMap, PusActionReply, + ActionReplyPus, ActionReplyVariant, ActivePusActionRequestStd, DefaultActiveActionRequestMap, }; use satrs::pus::verification::{ FailParams, FailParamsWithStep, TcStateAccepted, TcStateStarted, VerificationReporter, @@ -11,13 +11,14 @@ use satrs::pus::verification::{ }; use satrs::pus::{ ActiveRequestProvider, EcssTcAndToken, EcssTcInMemConverter, EcssTcInSharedStoreConverter, - EcssTcInVecConverter, EcssTmSenderCore, EcssTmtcError, GenericConversionError, MpscTcReceiver, - MpscTmAsVecSender, MpscTmInSharedPoolSenderBounded, PusPacketHandlerResult, PusReplyHandler, - PusServiceHelper, PusTcToRequestConverter, PusTmAsVec, PusTmInPool, TmInSharedPoolSender, + EcssTcInVecConverter, EcssTmSender, EcssTmtcError, GenericConversionError, MpscTcReceiver, + MpscTmAsVecSender, PusPacketHandlerResult, PusReplyHandler, PusServiceHelper, + PusTcToRequestConverter, }; use satrs::request::{GenericMessage, UniqueApidTargetId}; use satrs::spacepackets::ecss::tc::PusTcReader; use satrs::spacepackets::ecss::{EcssEnumU16, PusPacket}; +use satrs::tmtc::{PacketAsVec, PacketSenderWithSharedPool}; use satrs_example::config::components::PUS_ACTION_SERVICE; use satrs_example::config::tmtc_err; use std::sync::mpsc; @@ -42,13 +43,13 @@ impl Default for ActionReplyHandler { } } -impl PusReplyHandler for ActionReplyHandler { +impl PusReplyHandler for ActionReplyHandler { type Error = EcssTmtcError; fn handle_unrequested_reply( &mut self, - reply: &GenericMessage, - _tm_sender: &impl EcssTmSenderCore, + reply: &GenericMessage, + _tm_sender: &impl EcssTmSender, ) -> Result<(), Self::Error> { warn!("received unexpected reply for service 8: {reply:?}"); Ok(()) @@ -56,9 +57,9 @@ impl PusReplyHandler for ActionReplyH fn 
handle_reply( &mut self, - reply: &GenericMessage, + reply: &GenericMessage, active_request: &ActivePusActionRequestStd, - tm_sender: &(impl EcssTmSenderCore + ?Sized), + tm_sender: &(impl EcssTmSender + ?Sized), verification_handler: &impl VerificationReportingProvider, time_stamp: &[u8], ) -> Result { @@ -121,7 +122,7 @@ impl PusReplyHandler for ActionReplyH fn handle_request_timeout( &mut self, active_request: &ActivePusActionRequestStd, - tm_sender: &impl EcssTmSenderCore, + tm_sender: &impl EcssTmSender, verification_handler: &impl VerificationReportingProvider, time_stamp: &[u8], ) -> Result<(), Self::Error> { @@ -145,7 +146,7 @@ impl PusTcToRequestConverter for Actio &mut self, token: VerificationToken, tc: &PusTcReader, - tm_sender: &(impl EcssTmSenderCore + ?Sized), + tm_sender: &(impl EcssTmSender + ?Sized), verif_reporter: &impl VerificationReportingProvider, time_stamp: &[u8], ) -> Result<(ActivePusActionRequestStd, ActionRequest), Self::Error> { @@ -195,12 +196,12 @@ impl PusTcToRequestConverter for Actio } pub fn create_action_service_static( - tm_sender: TmInSharedPoolSender>, + tm_sender: PacketSenderWithSharedPool, tc_pool: SharedStaticMemoryPool, pus_action_rx: mpsc::Receiver, action_router: GenericRequestRouter, - reply_receiver: mpsc::Receiver>, -) -> ActionServiceWrapper { + reply_receiver: mpsc::Receiver>, +) -> ActionServiceWrapper { let action_request_handler = PusTargetedRequestService::new( PusServiceHelper::new( PUS_ACTION_SERVICE.id(), @@ -223,10 +224,10 @@ pub fn create_action_service_static( } pub fn create_action_service_dynamic( - tm_funnel_tx: mpsc::Sender, + tm_funnel_tx: mpsc::Sender, pus_action_rx: mpsc::Receiver, action_router: GenericRequestRouter, - reply_receiver: mpsc::Receiver>, + reply_receiver: mpsc::Receiver>, ) -> ActionServiceWrapper { let action_request_handler = PusTargetedRequestService::new( PusServiceHelper::new( @@ -247,8 +248,7 @@ pub fn create_action_service_dynamic( } } -pub struct ActionServiceWrapper -{ +pub struct ActionServiceWrapper { pub(crate) service: PusTargetedRequestService< MpscTcReceiver, TmSender, @@ -259,15 +259,15 @@ pub struct ActionServiceWrapper, } -impl TargetedPusService +impl TargetedPusService for ActionServiceWrapper { /// Returns [true] if the packet handling is finished. - fn poll_and_handle_next_tc(&mut self, time_stamp: &[u8]) -> bool { + fn poll_and_handle_next_tc(&mut self, time_stamp: &[u8]) -> HandlingStatus { match self.service.poll_and_handle_next_tc(time_stamp) { Ok(result) => match result { PusPacketHandlerResult::RequestHandled => {} @@ -280,15 +280,15 @@ impl Targete PusPacketHandlerResult::SubserviceNotImplemented(subservice, _) => { warn!("PUS 8 subservice {subservice} not implemented"); } - PusPacketHandlerResult::Empty => { - return true; - } + PusPacketHandlerResult::Empty => return HandlingStatus::Empty, }, Err(error) => { - error!("PUS packet handling error: {error:?}") + error!("PUS packet handling error: {error:?}"); + // To avoid permanent loops on error cases. 
+ return HandlingStatus::Empty; } } - false + HandlingStatus::HandledOne } fn poll_and_handle_next_reply(&mut self, time_stamp: &[u8]) -> HandlingStatus { @@ -341,7 +341,7 @@ mod tests { DefaultActiveActionRequestMap, ActivePusActionRequestStd, ActionRequest, - PusActionReply, + ActionReplyPus, > { pub fn new_for_action(owner_id: ComponentId, target_id: ComponentId) -> Self { @@ -465,7 +465,10 @@ mod tests { .verif_reporter() .check_next_is_acceptance_success(id, accepted_token.request_id()); self.pus_packet_tx - .send(EcssTcAndToken::new(tc.to_vec().unwrap(), accepted_token)) + .send(EcssTcAndToken::new( + PacketAsVec::new(self.service.service_helper.id(), tc.to_vec().unwrap()), + accepted_token, + )) .unwrap(); } } @@ -497,7 +500,7 @@ mod tests { if let CompositeRequest::Action(action_req) = req.message { assert_eq!(action_req.action_id, action_id); assert_eq!(action_req.variant, ActionRequestVariant::NoData); - let action_reply = PusActionReply::new(action_id, ActionReplyVariant::Completed); + let action_reply = ActionReplyPus::new(action_id, ActionReplyVariant::Completed); testbench .reply_tx .send(GenericMessage::new(req.requestor_info, action_reply)) @@ -617,7 +620,7 @@ mod tests { let (req_id, active_req) = testbench.add_tc(TEST_APID, TEST_UNIQUE_ID_0, &[]); let active_action_req = ActivePusActionRequestStd::new_from_common_req(action_id, active_req); - let reply = PusActionReply::new(action_id, ActionReplyVariant::Completed); + let reply = ActionReplyPus::new(action_id, ActionReplyVariant::Completed); let generic_reply = GenericMessage::new(MessageMetadata::new(req_id.into(), 0), reply); let result = testbench.handle_reply(&generic_reply, &active_action_req, &[]); assert!(result.is_ok()); @@ -638,7 +641,7 @@ mod tests { let active_action_req = ActivePusActionRequestStd::new_from_common_req(action_id, active_req); let error_code = ResultU16::new(2, 3); - let reply = PusActionReply::new( + let reply = ActionReplyPus::new( action_id, ActionReplyVariant::CompletionFailed { error_code, @@ -665,7 +668,7 @@ mod tests { let (req_id, active_req) = testbench.add_tc(TEST_APID, TEST_UNIQUE_ID_0, &[]); let active_action_req = ActivePusActionRequestStd::new_from_common_req(action_id, active_req); - let reply = PusActionReply::new(action_id, ActionReplyVariant::StepSuccess { step: 1 }); + let reply = ActionReplyPus::new(action_id, ActionReplyVariant::StepSuccess { step: 1 }); let generic_reply = GenericMessage::new(MessageMetadata::new(req_id.into(), 0), reply); let result = testbench.handle_reply(&generic_reply, &active_action_req, &[]); assert!(result.is_ok()); @@ -692,7 +695,7 @@ mod tests { let active_action_req = ActivePusActionRequestStd::new_from_common_req(action_id, active_req); let error_code = ResultU16::new(2, 3); - let reply = PusActionReply::new( + let reply = ActionReplyPus::new( action_id, ActionReplyVariant::StepFailed { error_code, @@ -722,7 +725,7 @@ mod tests { fn reply_handling_unrequested_reply() { let mut testbench = ReplyHandlerTestbench::new(TEST_COMPONENT_ID_0.id(), ActionReplyHandler::default()); - let action_reply = PusActionReply::new(5_u32, ActionReplyVariant::Completed); + let action_reply = ActionReplyPus::new(5_u32, ActionReplyVariant::Completed); let unrequested_reply = GenericMessage::new(MessageMetadata::new(10_u32, 15_u64), action_reply); // Right now this function does not do a lot. 
We simply check that it does not panic or do diff --git a/satrs-example/src/pus/event.rs b/satrs-example/src/pus/event.rs index 865b1f1..4726ba0 100644 --- a/satrs-example/src/pus/event.rs +++ b/satrs-example/src/pus/event.rs @@ -8,17 +8,19 @@ use satrs::pus::event_srv::PusEventServiceHandler; use satrs::pus::verification::VerificationReporter; use satrs::pus::{ EcssTcAndToken, EcssTcInMemConverter, EcssTcInSharedStoreConverter, EcssTcInVecConverter, - EcssTmSenderCore, MpscTcReceiver, MpscTmAsVecSender, MpscTmInSharedPoolSenderBounded, - PusPacketHandlerResult, PusServiceHelper, PusTmAsVec, PusTmInPool, TmInSharedPoolSender, + EcssTmSender, MpscTcReceiver, MpscTmAsVecSender, PusPacketHandlerResult, PusServiceHelper, }; +use satrs::tmtc::{PacketAsVec, PacketSenderWithSharedPool}; use satrs_example::config::components::PUS_EVENT_MANAGEMENT; +use super::HandlingStatus; + pub fn create_event_service_static( - tm_sender: TmInSharedPoolSender>, + tm_sender: PacketSenderWithSharedPool, tc_pool: SharedStaticMemoryPool, pus_event_rx: mpsc::Receiver, event_request_tx: mpsc::Sender, -) -> EventServiceWrapper { +) -> EventServiceWrapper { let pus_5_handler = PusEventServiceHandler::new( PusServiceHelper::new( PUS_EVENT_MANAGEMENT.id(), @@ -35,7 +37,7 @@ pub fn create_event_service_static( } pub fn create_event_service_dynamic( - tm_funnel_tx: mpsc::Sender, + tm_funnel_tx: mpsc::Sender, pus_event_rx: mpsc::Receiver, event_request_tx: mpsc::Sender, ) -> EventServiceWrapper { @@ -54,15 +56,15 @@ pub fn create_event_service_dynamic( } } -pub struct EventServiceWrapper { +pub struct EventServiceWrapper { pub handler: PusEventServiceHandler, } -impl +impl EventServiceWrapper { - pub fn poll_and_handle_next_tc(&mut self, time_stamp: &[u8]) -> bool { + pub fn poll_and_handle_next_tc(&mut self, time_stamp: &[u8]) -> HandlingStatus { match self.handler.poll_and_handle_next_tc(time_stamp) { Ok(result) => match result { PusPacketHandlerResult::RequestHandled => {} @@ -75,14 +77,12 @@ impl PusPacketHandlerResult::SubserviceNotImplemented(subservice, _) => { warn!("PUS 5 subservice {subservice} not implemented"); } - PusPacketHandlerResult::Empty => { - return true; - } + PusPacketHandlerResult::Empty => return HandlingStatus::Empty, }, Err(error) => { error!("PUS packet handling error: {error:?}") } } - false + HandlingStatus::HandledOne } } diff --git a/satrs-example/src/pus/hk.rs b/satrs-example/src/pus/hk.rs index cb3ebb9..bbecf19 100644 --- a/satrs-example/src/pus/hk.rs +++ b/satrs-example/src/pus/hk.rs @@ -8,14 +8,14 @@ use satrs::pus::verification::{ }; use satrs::pus::{ ActivePusRequestStd, ActiveRequestProvider, DefaultActiveRequestMap, EcssTcAndToken, - EcssTcInMemConverter, EcssTcInSharedStoreConverter, EcssTcInVecConverter, EcssTmSenderCore, + EcssTcInMemConverter, EcssTcInSharedStoreConverter, EcssTcInVecConverter, EcssTmSender, EcssTmtcError, GenericConversionError, MpscTcReceiver, MpscTmAsVecSender, - MpscTmInSharedPoolSenderBounded, PusPacketHandlerResult, PusReplyHandler, PusServiceHelper, - PusTcToRequestConverter, PusTmAsVec, PusTmInPool, TmInSharedPoolSender, + PusPacketHandlerResult, PusReplyHandler, PusServiceHelper, PusTcToRequestConverter, }; use satrs::request::{GenericMessage, UniqueApidTargetId}; use satrs::spacepackets::ecss::tc::PusTcReader; use satrs::spacepackets::ecss::{hk, PusPacket}; +use satrs::tmtc::{PacketAsVec, PacketSenderWithSharedPool}; use satrs_example::config::components::PUS_HK_SERVICE; use satrs_example::config::{hk_err, tmtc_err}; use std::sync::mpsc; @@ -46,7 +46,7 
@@ impl PusReplyHandler for HkReplyHandler { fn handle_unrequested_reply( &mut self, reply: &GenericMessage, - _tm_sender: &impl EcssTmSenderCore, + _tm_sender: &impl EcssTmSender, ) -> Result<(), Self::Error> { log::warn!("received unexpected reply for service 3: {reply:?}"); Ok(()) @@ -56,7 +56,7 @@ impl PusReplyHandler for HkReplyHandler { &mut self, reply: &GenericMessage, active_request: &ActivePusRequestStd, - tm_sender: &impl EcssTmSenderCore, + tm_sender: &impl EcssTmSender, verification_handler: &impl VerificationReportingProvider, time_stamp: &[u8], ) -> Result { @@ -77,7 +77,7 @@ impl PusReplyHandler for HkReplyHandler { fn handle_request_timeout( &mut self, active_request: &ActivePusRequestStd, - tm_sender: &impl EcssTmSenderCore, + tm_sender: &impl EcssTmSender, verification_handler: &impl VerificationReportingProvider, time_stamp: &[u8], ) -> Result<(), Self::Error> { @@ -111,7 +111,7 @@ impl PusTcToRequestConverter for HkRequestConver &mut self, token: VerificationToken, tc: &PusTcReader, - tm_sender: &(impl EcssTmSenderCore + ?Sized), + tm_sender: &(impl EcssTmSender + ?Sized), verif_reporter: &impl VerificationReportingProvider, time_stamp: &[u8], ) -> Result<(ActivePusRequestStd, HkRequest), Self::Error> { @@ -232,12 +232,12 @@ impl PusTcToRequestConverter for HkRequestConver } pub fn create_hk_service_static( - tm_sender: TmInSharedPoolSender>, + tm_sender: PacketSenderWithSharedPool, tc_pool: SharedStaticMemoryPool, pus_hk_rx: mpsc::Receiver, request_router: GenericRequestRouter, reply_receiver: mpsc::Receiver>, -) -> HkServiceWrapper { +) -> HkServiceWrapper { let pus_3_handler = PusTargetedRequestService::new( PusServiceHelper::new( PUS_HK_SERVICE.id(), @@ -258,7 +258,7 @@ pub fn create_hk_service_static( } pub fn create_hk_service_dynamic( - tm_funnel_tx: mpsc::Sender, + tm_funnel_tx: mpsc::Sender, pus_hk_rx: mpsc::Receiver, request_router: GenericRequestRouter, reply_receiver: mpsc::Receiver>, @@ -282,7 +282,7 @@ pub fn create_hk_service_dynamic( } } -pub struct HkServiceWrapper { +pub struct HkServiceWrapper { pub(crate) service: PusTargetedRequestService< MpscTcReceiver, TmSender, @@ -297,10 +297,10 @@ pub struct HkServiceWrapper, } -impl +impl HkServiceWrapper { - pub fn poll_and_handle_next_tc(&mut self, time_stamp: &[u8]) -> bool { + pub fn poll_and_handle_next_tc(&mut self, time_stamp: &[u8]) -> HandlingStatus { match self.service.poll_and_handle_next_tc(time_stamp) { Ok(result) => match result { PusPacketHandlerResult::RequestHandled => {} @@ -313,15 +313,15 @@ impl PusPacketHandlerResult::SubserviceNotImplemented(subservice, _) => { warn!("PUS 3 subservice {subservice} not implemented"); } - PusPacketHandlerResult::Empty => { - return true; - } + PusPacketHandlerResult::Empty => return HandlingStatus::Empty, }, Err(error) => { - error!("PUS packet handling error: {error:?}") + error!("PUS packet handling error: {error:?}"); + // To avoid permanent loops on error cases. 
+ return HandlingStatus::Empty; } } - false + HandlingStatus::HandledOne } pub fn poll_and_handle_next_reply(&mut self, time_stamp: &[u8]) -> HandlingStatus { diff --git a/satrs-example/src/pus/mod.rs b/satrs-example/src/pus/mod.rs index 83bd34a..6e5ec37 100644 --- a/satrs-example/src/pus/mod.rs +++ b/satrs-example/src/pus/mod.rs @@ -1,20 +1,21 @@ use crate::requests::GenericRequestRouter; -use crate::tmtc::MpscStoreAndSendError; use log::warn; +use satrs::pool::PoolAddr; use satrs::pus::verification::{ self, FailParams, TcStateAccepted, TcStateStarted, VerificationReporter, VerificationReporterCfg, VerificationReportingProvider, VerificationToken, }; use satrs::pus::{ ActiveRequestMapProvider, ActiveRequestProvider, EcssTcAndToken, EcssTcInMemConverter, - EcssTcReceiverCore, EcssTmSenderCore, EcssTmtcError, GenericConversionError, - GenericRoutingError, PusPacketHandlerResult, PusPacketHandlingError, PusReplyHandler, - PusRequestRouter, PusServiceHelper, PusTcToRequestConverter, TcInMemory, + EcssTcReceiver, EcssTmSender, EcssTmtcError, GenericConversionError, GenericRoutingError, + PusPacketHandlerResult, PusPacketHandlingError, PusReplyHandler, PusRequestRouter, + PusServiceHelper, PusTcToRequestConverter, TcInMemory, }; -use satrs::queue::GenericReceiveError; +use satrs::queue::{GenericReceiveError, GenericSendError}; use satrs::request::{Apid, GenericMessage, MessageMetadata}; use satrs::spacepackets::ecss::tc::PusTcReader; -use satrs::spacepackets::ecss::PusServiceId; +use satrs::spacepackets::ecss::{PusPacket, PusServiceId}; +use satrs::tmtc::{PacketAsVec, PacketInPool}; use satrs::ComponentId; use satrs_example::config::components::PUS_ROUTING_SERVICE; use satrs_example::config::{tmtc_err, CustomPusServiceId}; @@ -53,7 +54,7 @@ pub struct PusTcMpscRouter { pub mode_tc_sender: Sender, } -pub struct PusReceiver { +pub struct PusTcDistributor { pub id: ComponentId, pub tm_sender: TmSender, pub verif_reporter: VerificationReporter, @@ -61,7 +62,7 @@ pub struct PusReceiver { stamp_helper: TimeStampHelper, } -impl PusReceiver { +impl PusTcDistributor { pub fn new(tm_sender: TmSender, pus_router: PusTcMpscRouter) -> Self { Self { id: PUS_ROUTING_SERVICE.raw(), @@ -75,19 +76,54 @@ impl PusReceiver { } } - pub fn handle_tc_packet( + pub fn handle_tc_packet_vec( &mut self, - tc_in_memory: TcInMemory, - service: u8, - pus_tc: &PusTcReader, - ) -> Result { - let init_token = self.verif_reporter.add_tc(pus_tc); + packet_as_vec: PacketAsVec, + ) -> Result { + self.handle_tc_generic(packet_as_vec.sender_id, None, &packet_as_vec.packet) + } + + pub fn handle_tc_packet_in_store( + &mut self, + packet_in_pool: PacketInPool, + pus_tc_copy: &[u8], + ) -> Result { + self.handle_tc_generic( + packet_in_pool.sender_id, + Some(packet_in_pool.store_addr), + pus_tc_copy, + ) + } + + pub fn handle_tc_generic( + &mut self, + sender_id: ComponentId, + addr_opt: Option, + raw_tc: &[u8], + ) -> Result { + let pus_tc_result = PusTcReader::new(raw_tc); + if pus_tc_result.is_err() { + log::warn!( + "error creating PUS TC from raw data received from {}: {}", + sender_id, + pus_tc_result.unwrap_err() + ); + log::warn!("raw data: {:x?}", raw_tc); + return Ok(PusPacketHandlerResult::RequestHandled); + } + let pus_tc = pus_tc_result.unwrap().0; + let init_token = self.verif_reporter.add_tc(&pus_tc); self.stamp_helper.update_from_now(); let accepted_token = self .verif_reporter .acceptance_success(&self.tm_sender, init_token, self.stamp_helper.stamp()) .expect("Acceptance success failure"); - let service = 
PusServiceId::try_from(service); + let service = PusServiceId::try_from(pus_tc.service()); + let tc_in_memory: TcInMemory = if let Some(store_addr) = addr_opt { + PacketInPool::new(sender_id, store_addr).into() + } else { + PacketAsVec::new(sender_id, Vec::from(raw_tc)).into() + }; match service { Ok(standard_service) => match standard_service { PusServiceId::Test => self.pus_router.test_tc_sender.send(EcssTcAndToken { @@ -128,12 +164,14 @@ impl PusReceiver { Err(e) => { if let Ok(custom_service) = CustomPusServiceId::try_from(e.number) { match custom_service { - CustomPusServiceId::Mode => { - self.pus_router.mode_tc_sender.send(EcssTcAndToken { + CustomPusServiceId::Mode => self + .pus_router + .mode_tc_sender + .send(EcssTcAndToken { tc_in_memory, token: Some(accepted_token.into()), - })? - } + }) + .map_err(|_| GenericSendError::RxDisconnected)?, CustomPusServiceId::Health => {} } } else { @@ -157,7 +195,7 @@ impl PusReceiver { pub trait TargetedPusService { /// Returns [true] if the packet handling is finished. - fn poll_and_handle_next_tc(&mut self, time_stamp: &[u8]) -> bool; + fn poll_and_handle_next_tc(&mut self, time_stamp: &[u8]) -> HandlingStatus; fn poll_and_handle_next_reply(&mut self, time_stamp: &[u8]) -> HandlingStatus; fn check_for_request_timeouts(&mut self); } @@ -179,12 +217,13 @@ pub trait TargetedPusService { /// /// The handler exposes the following API: /// -/// 1. [Self::handle_one_tc] which tries to poll and handle one TC packet, covering steps 1-5. -/// 2. [Self::check_one_reply] which tries to poll and handle one reply, covering step 6. +/// 1. [Self::poll_and_handle_next_tc] which tries to poll and handle one TC packet, covering +/// steps 1-5. +/// 2. [Self::poll_and_check_next_reply] which tries to poll and handle one reply, covering step 6. /// 3. [Self::check_for_request_timeouts] which checks for request timeouts, covering step 7. pub struct PusTargetedRequestService< - TcReceiver: EcssTcReceiverCore, - TmSender: EcssTmSenderCore, + TcReceiver: EcssTcReceiver, + TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter, VerificationReporter: VerificationReportingProvider, RequestConverter: PusTcToRequestConverter, @@ -205,8 +244,8 @@ pub struct PusTargetedRequestService< } impl< - TcReceiver: EcssTcReceiverCore, - TmSender: EcssTmSenderCore, + TcReceiver: EcssTcReceiver, + TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter, VerificationReporter: VerificationReportingProvider, RequestConverter: PusTcToRequestConverter, @@ -435,7 +474,7 @@ where /// Generic timeout handling: Handle the verification failure with a dedicated return code /// and also log the error. 
pub fn generic_pus_request_timeout_handler( - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), active_request: &(impl ActiveRequestProvider + Debug), verification_handler: &impl VerificationReportingProvider, time_stamp: &[u8], @@ -459,7 +498,7 @@ pub(crate) mod tests { use std::time::Duration; use satrs::pus::test_util::TEST_COMPONENT_ID_0; - use satrs::pus::{MpscTmAsVecSender, PusTmAsVec, PusTmVariant}; + use satrs::pus::{MpscTmAsVecSender, PusTmVariant}; use satrs::request::RequestId; use satrs::{ pus::{ @@ -489,7 +528,7 @@ pub(crate) mod tests { pub id: ComponentId, pub verif_reporter: TestVerificationReporter, pub reply_handler: ReplyHandler, - pub tm_receiver: mpsc::Receiver, + pub tm_receiver: mpsc::Receiver, pub default_timeout: Duration, tm_sender: MpscTmAsVecSender, phantom: std::marker::PhantomData<(ActiveRequestInfo, Reply)>, @@ -589,7 +628,7 @@ pub(crate) mod tests { /// Dummy sender component which does nothing on the [Self::send_tm] call. /// /// Useful for unit tests. - impl EcssTmSenderCore for DummySender { + impl EcssTmSender for DummySender { fn send_tm(&self, _source_id: ComponentId, _tm: PusTmVariant) -> Result<(), EcssTmtcError> { Ok(()) } @@ -696,7 +735,7 @@ pub(crate) mod tests { ReplyType, >, pub request_id: Option, - pub tm_funnel_rx: mpsc::Receiver, + pub tm_funnel_rx: mpsc::Receiver, pub pus_packet_tx: mpsc::Sender, pub reply_tx: mpsc::Sender>, pub request_rx: mpsc::Receiver>, diff --git a/satrs-example/src/pus/mode.rs b/satrs-example/src/pus/mode.rs index 4f2ff13..5f3c0ff 100644 --- a/satrs-example/src/pus/mode.rs +++ b/satrs-example/src/pus/mode.rs @@ -1,5 +1,6 @@ use derive_new::new; use log::{error, warn}; +use satrs::tmtc::{PacketAsVec, PacketSenderWithSharedPool}; use std::sync::mpsc; use std::time::Duration; @@ -8,8 +9,8 @@ use satrs::pool::SharedStaticMemoryPool; use satrs::pus::verification::VerificationReporter; use satrs::pus::{ DefaultActiveRequestMap, EcssTcAndToken, EcssTcInMemConverter, EcssTcInSharedStoreConverter, - EcssTcInVecConverter, MpscTcReceiver, MpscTmAsVecSender, MpscTmInSharedPoolSenderBounded, - PusPacketHandlerResult, PusServiceHelper, PusTmAsVec, PusTmInPool, TmInSharedPoolSender, + EcssTcInVecConverter, MpscTcReceiver, MpscTmAsVecSender, PusPacketHandlerResult, + PusServiceHelper, }; use satrs::request::GenericMessage; use satrs::{ @@ -20,7 +21,7 @@ use satrs::{ self, FailParams, TcStateAccepted, TcStateStarted, VerificationReportingProvider, VerificationToken, }, - ActivePusRequestStd, ActiveRequestProvider, EcssTmSenderCore, EcssTmtcError, + ActivePusRequestStd, ActiveRequestProvider, EcssTmSender, EcssTmtcError, GenericConversionError, PusReplyHandler, PusTcToRequestConverter, PusTmVariant, }, request::UniqueApidTargetId, @@ -53,7 +54,7 @@ impl PusReplyHandler for ModeReplyHandler { fn handle_unrequested_reply( &mut self, reply: &GenericMessage, - _tm_sender: &impl EcssTmSenderCore, + _tm_sender: &impl EcssTmSender, ) -> Result<(), Self::Error> { log::warn!("received unexpected reply for mode service 5: {reply:?}"); Ok(()) @@ -63,7 +64,7 @@ impl PusReplyHandler for ModeReplyHandler { &mut self, reply: &GenericMessage, active_request: &ActivePusRequestStd, - tm_sender: &impl EcssTmSenderCore, + tm_sender: &impl EcssTmSender, verification_handler: &impl VerificationReportingProvider, time_stamp: &[u8], ) -> Result { @@ -117,7 +118,7 @@ impl PusReplyHandler for ModeReplyHandler { fn handle_request_timeout( &mut self, active_request: &ActivePusRequestStd, - tm_sender: &impl EcssTmSenderCore, + 
tm_sender: &impl EcssTmSender, verification_handler: &impl VerificationReportingProvider, time_stamp: &[u8], ) -> Result<(), Self::Error> { @@ -142,7 +143,7 @@ impl PusTcToRequestConverter for ModeRequestCo &mut self, token: VerificationToken, tc: &PusTcReader, - tm_sender: &(impl EcssTmSenderCore + ?Sized), + tm_sender: &(impl EcssTmSender + ?Sized), verif_reporter: &impl VerificationReportingProvider, time_stamp: &[u8], ) -> Result<(ActivePusRequestStd, ModeRequest), Self::Error> { @@ -203,12 +204,12 @@ impl PusTcToRequestConverter for ModeRequestCo } pub fn create_mode_service_static( - tm_sender: TmInSharedPoolSender>, + tm_sender: PacketSenderWithSharedPool, tc_pool: SharedStaticMemoryPool, pus_action_rx: mpsc::Receiver, mode_router: GenericRequestRouter, reply_receiver: mpsc::Receiver>, -) -> ModeServiceWrapper { +) -> ModeServiceWrapper { let mode_request_handler = PusTargetedRequestService::new( PusServiceHelper::new( PUS_MODE_SERVICE.id(), @@ -229,7 +230,7 @@ pub fn create_mode_service_static( } pub fn create_mode_service_dynamic( - tm_funnel_tx: mpsc::Sender, + tm_funnel_tx: mpsc::Sender, pus_action_rx: mpsc::Receiver, mode_router: GenericRequestRouter, reply_receiver: mpsc::Receiver>, @@ -253,7 +254,7 @@ pub fn create_mode_service_dynamic( } } -pub struct ModeServiceWrapper { +pub struct ModeServiceWrapper { pub(crate) service: PusTargetedRequestService< MpscTcReceiver, TmSender, @@ -268,11 +269,11 @@ pub struct ModeServiceWrapper, } -impl TargetedPusService +impl TargetedPusService for ModeServiceWrapper { /// Returns [true] if the packet handling is finished. - fn poll_and_handle_next_tc(&mut self, time_stamp: &[u8]) -> bool { + fn poll_and_handle_next_tc(&mut self, time_stamp: &[u8]) -> HandlingStatus { match self.service.poll_and_handle_next_tc(time_stamp) { Ok(result) => match result { PusPacketHandlerResult::RequestHandled => {} @@ -285,15 +286,15 @@ impl Targete PusPacketHandlerResult::SubserviceNotImplemented(subservice, _) => { warn!("PUS mode service: {subservice} not implemented"); } - PusPacketHandlerResult::Empty => { - return true; - } + PusPacketHandlerResult::Empty => return HandlingStatus::Empty, }, Err(error) => { - error!("PUS mode service: packet handling error: {error:?}") + error!("PUS mode service: packet handling error: {error:?}"); + // To avoid permanent loops on error cases. 
+ return HandlingStatus::Empty; } } - false + HandlingStatus::HandledOne } fn poll_and_handle_next_reply(&mut self, time_stamp: &[u8]) -> HandlingStatus { diff --git a/satrs-example/src/pus/scheduler.rs b/satrs-example/src/pus/scheduler.rs index d75c666..5346e19 100644 --- a/satrs-example/src/pus/scheduler.rs +++ b/satrs-example/src/pus/scheduler.rs @@ -9,51 +9,62 @@ use satrs::pus::scheduler_srv::PusSchedServiceHandler; use satrs::pus::verification::VerificationReporter; use satrs::pus::{ EcssTcAndToken, EcssTcInMemConverter, EcssTcInSharedStoreConverter, EcssTcInVecConverter, - EcssTmSenderCore, MpscTcReceiver, MpscTmAsVecSender, MpscTmInSharedPoolSenderBounded, - PusPacketHandlerResult, PusServiceHelper, PusTmAsVec, PusTmInPool, TmInSharedPoolSender, + EcssTmSender, MpscTcReceiver, MpscTmAsVecSender, PusPacketHandlerResult, PusServiceHelper, }; +use satrs::tmtc::{PacketAsVec, PacketInPool, PacketSenderWithSharedPool}; +use satrs::ComponentId; use satrs_example::config::components::PUS_SCHED_SERVICE; -use crate::tmtc::PusTcSourceProviderSharedPool; +use super::HandlingStatus; pub trait TcReleaser { - fn release(&mut self, enabled: bool, info: &TcInfo, tc: &[u8]) -> bool; + fn release(&mut self, sender_id: ComponentId, enabled: bool, info: &TcInfo, tc: &[u8]) -> bool; } -impl TcReleaser for PusTcSourceProviderSharedPool { - fn release(&mut self, enabled: bool, _info: &TcInfo, tc: &[u8]) -> bool { +impl TcReleaser for PacketSenderWithSharedPool { + fn release( + &mut self, + sender_id: ComponentId, + enabled: bool, + _info: &TcInfo, + tc: &[u8], + ) -> bool { if enabled { + let shared_pool = self.shared_pool.get_mut(); // Transfer TC from scheduler TC pool to shared TC pool. - let released_tc_addr = self - .shared_pool - .pool + let released_tc_addr = shared_pool + .0 .write() .expect("locking pool failed") .add(tc) .expect("adding TC to shared pool failed"); - self.tc_source - .send(released_tc_addr) + self.sender + .send(PacketInPool::new(sender_id, released_tc_addr)) .expect("sending TC to TC source failed"); } true } } -impl TcReleaser for mpsc::Sender> { - fn release(&mut self, enabled: bool, _info: &TcInfo, tc: &[u8]) -> bool { +impl TcReleaser for mpsc::Sender { + fn release( + &mut self, + sender_id: ComponentId, + enabled: bool, + _info: &TcInfo, + tc: &[u8], + ) -> bool { if enabled { // Send released TC to centralized TC source. 
- self.send(tc.to_vec()) + self.send(PacketAsVec::new(sender_id, tc.to_vec())) .expect("sending TC to TC source failed"); } true } } -pub struct SchedulingServiceWrapper< - TmSender: EcssTmSenderCore, - TcInMemConverter: EcssTcInMemConverter, -> { +pub struct SchedulingServiceWrapper +{ pub pus_11_handler: PusSchedServiceHandler< MpscTcReceiver, TmSender, @@ -66,12 +77,13 @@ pub struct SchedulingServiceWrapper< pub tc_releaser: Box, } -impl +impl SchedulingServiceWrapper { pub fn release_tcs(&mut self) { + let id = self.pus_11_handler.service_helper.id(); let releaser = |enabled: bool, info: &TcInfo, tc: &[u8]| -> bool { - self.tc_releaser.release(enabled, info, tc) + self.tc_releaser.release(id, enabled, info, tc) }; self.pus_11_handler @@ -92,7 +104,7 @@ impl } } - pub fn poll_and_handle_next_tc(&mut self, time_stamp: &[u8]) -> bool { + pub fn poll_and_handle_next_tc(&mut self, time_stamp: &[u8]) -> HandlingStatus { match self .pus_11_handler .poll_and_handle_next_tc(time_stamp, &mut self.sched_tc_pool) @@ -108,24 +120,22 @@ impl PusPacketHandlerResult::SubserviceNotImplemented(subservice, _) => { warn!("PUS11: Subservice {subservice} not implemented"); } - PusPacketHandlerResult::Empty => { - return true; - } + PusPacketHandlerResult::Empty => return HandlingStatus::Empty, }, Err(error) => { error!("PUS packet handling error: {error:?}") } } - false + HandlingStatus::HandledOne } } pub fn create_scheduler_service_static( - tm_sender: TmInSharedPoolSender>, - tc_releaser: PusTcSourceProviderSharedPool, + tm_sender: PacketSenderWithSharedPool, + tc_releaser: PacketSenderWithSharedPool, pus_sched_rx: mpsc::Receiver, sched_tc_pool: StaticMemoryPool, -) -> SchedulingServiceWrapper { +) -> SchedulingServiceWrapper { let scheduler = PusScheduler::new_with_current_init_time(Duration::from_secs(5)) .expect("Creating PUS Scheduler failed"); let pus_11_handler = PusSchedServiceHandler::new( @@ -134,7 +144,7 @@ pub fn create_scheduler_service_static( pus_sched_rx, tm_sender, create_verification_reporter(PUS_SCHED_SERVICE.id(), PUS_SCHED_SERVICE.apid), - EcssTcInSharedStoreConverter::new(tc_releaser.clone_backing_pool(), 2048), + EcssTcInSharedStoreConverter::new(tc_releaser.shared_packet_store().0.clone(), 2048), ), scheduler, ); @@ -147,8 +157,8 @@ pub fn create_scheduler_service_static( } pub fn create_scheduler_service_dynamic( - tm_funnel_tx: mpsc::Sender, - tc_source_sender: mpsc::Sender>, + tm_funnel_tx: mpsc::Sender, + tc_source_sender: mpsc::Sender, pus_sched_rx: mpsc::Receiver, sched_tc_pool: StaticMemoryPool, ) -> SchedulingServiceWrapper { diff --git a/satrs-example/src/pus/stack.rs b/satrs-example/src/pus/stack.rs index a11463c..fac9bce 100644 --- a/satrs-example/src/pus/stack.rs +++ b/satrs-example/src/pus/stack.rs @@ -1,7 +1,7 @@ use crate::pus::mode::ModeServiceWrapper; use derive_new::new; use satrs::{ - pus::{EcssTcInMemConverter, EcssTmSenderCore}, + pus::{EcssTcInMemConverter, EcssTmSender}, spacepackets::time::{cds, TimeWriter}, }; @@ -12,7 +12,7 @@ use super::{ }; #[derive(new)] -pub struct PusStack { +pub struct PusStack { test_srv: TestCustomServiceWrapper, hk_srv_wrapper: HkServiceWrapper, event_srv: EventServiceWrapper, @@ -21,7 +21,7 @@ pub struct PusStack, } -impl +impl PusStack { pub fn periodic_operation(&mut self) { @@ -35,18 +35,29 @@ impl loop { let mut nothing_to_do = true; let mut is_srv_finished = - |tc_handling_done: bool, reply_handling_done: Option| { - if !tc_handling_done - || (reply_handling_done.is_some() - && reply_handling_done.unwrap() == 
HandlingStatus::Empty) + |_srv_id: u8, + tc_handling_status: HandlingStatus, + reply_handling_status: Option| { + if tc_handling_status == HandlingStatus::HandledOne + || (reply_handling_status.is_some() + && reply_handling_status.unwrap() == HandlingStatus::HandledOne) { nothing_to_do = false; } }; - is_srv_finished(self.test_srv.poll_and_handle_next_packet(&time_stamp), None); - is_srv_finished(self.schedule_srv.poll_and_handle_next_tc(&time_stamp), None); - is_srv_finished(self.event_srv.poll_and_handle_next_tc(&time_stamp), None); is_srv_finished( + 17, + self.test_srv.poll_and_handle_next_packet(&time_stamp), + None, + ); + is_srv_finished( + 11, + self.schedule_srv.poll_and_handle_next_tc(&time_stamp), + None, + ); + is_srv_finished(5, self.event_srv.poll_and_handle_next_tc(&time_stamp), None); + is_srv_finished( + 8, self.action_srv_wrapper.poll_and_handle_next_tc(&time_stamp), Some( self.action_srv_wrapper @@ -54,10 +65,12 @@ impl ), ); is_srv_finished( + 3, self.hk_srv_wrapper.poll_and_handle_next_tc(&time_stamp), Some(self.hk_srv_wrapper.poll_and_handle_next_reply(&time_stamp)), ); is_srv_finished( + 200, self.mode_srv.poll_and_handle_next_tc(&time_stamp), Some(self.mode_srv.poll_and_handle_next_reply(&time_stamp)), ); diff --git a/satrs-example/src/pus/test.rs b/satrs-example/src/pus/test.rs index 0111026..583b72c 100644 --- a/satrs-example/src/pus/test.rs +++ b/satrs-example/src/pus/test.rs @@ -6,24 +6,26 @@ use satrs::pus::test::PusService17TestHandler; use satrs::pus::verification::{FailParams, VerificationReporter, VerificationReportingProvider}; use satrs::pus::EcssTcInSharedStoreConverter; use satrs::pus::{ - EcssTcAndToken, EcssTcInMemConverter, EcssTcInVecConverter, EcssTmSenderCore, MpscTcReceiver, - MpscTmAsVecSender, MpscTmInSharedPoolSenderBounded, PusPacketHandlerResult, PusServiceHelper, - PusTmAsVec, PusTmInPool, TmInSharedPoolSender, + EcssTcAndToken, EcssTcInMemConverter, EcssTcInVecConverter, EcssTmSender, MpscTcReceiver, + MpscTmAsVecSender, PusPacketHandlerResult, PusServiceHelper, }; use satrs::spacepackets::ecss::tc::PusTcReader; use satrs::spacepackets::ecss::PusPacket; use satrs::spacepackets::time::cds::CdsTime; use satrs::spacepackets::time::TimeWriter; +use satrs::tmtc::{PacketAsVec, PacketSenderWithSharedPool}; use satrs_example::config::components::PUS_TEST_SERVICE; use satrs_example::config::{tmtc_err, TEST_EVENT}; use std::sync::mpsc; +use super::HandlingStatus; + pub fn create_test_service_static( - tm_sender: TmInSharedPoolSender>, + tm_sender: PacketSenderWithSharedPool, tc_pool: SharedStaticMemoryPool, event_sender: mpsc::Sender, pus_test_rx: mpsc::Receiver, -) -> TestCustomServiceWrapper { +) -> TestCustomServiceWrapper { let pus17_handler = PusService17TestHandler::new(PusServiceHelper::new( PUS_TEST_SERVICE.id(), pus_test_rx, @@ -38,7 +40,7 @@ pub fn create_test_service_static( } pub fn create_test_service_dynamic( - tm_funnel_tx: mpsc::Sender, + tm_funnel_tx: mpsc::Sender, event_sender: mpsc::Sender, pus_test_rx: mpsc::Receiver, ) -> TestCustomServiceWrapper { @@ -55,23 +57,21 @@ pub fn create_test_service_dynamic( } } -pub struct TestCustomServiceWrapper< - TmSender: EcssTmSenderCore, - TcInMemConverter: EcssTcInMemConverter, -> { +pub struct TestCustomServiceWrapper +{ pub handler: PusService17TestHandler, pub test_srv_event_sender: mpsc::Sender, } -impl +impl TestCustomServiceWrapper { - pub fn poll_and_handle_next_packet(&mut self, time_stamp: &[u8]) -> bool { + pub fn poll_and_handle_next_packet(&mut self, time_stamp: &[u8]) -> 
HandlingStatus { let res = self.handler.poll_and_handle_next_tc(time_stamp); if res.is_err() { warn!("PUS17 handler failed with error {:?}", res.unwrap_err()); - return true; + return HandlingStatus::HandledOne; } match res.unwrap() { PusPacketHandlerResult::RequestHandled => { @@ -135,10 +135,8 @@ impl .expect("Sending start failure verification failed"); } } - PusPacketHandlerResult::Empty => { - return true; - } + PusPacketHandlerResult::Empty => return HandlingStatus::Empty, } - false + HandlingStatus::HandledOne } } diff --git a/satrs-example/src/queue.rs b/satrs-example/src/queue.rs deleted file mode 100644 index 65e2fde..0000000 --- a/satrs-example/src/queue.rs +++ /dev/null @@ -1,45 +0,0 @@ -/// Generic error type for sending something via a message queue. -#[derive(Debug, Copy, Clone)] -pub enum GenericSendError { - RxDisconnected, - QueueFull(Option), -} - -impl Display for GenericSendError { - fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result { - match self { - GenericSendError::RxDisconnected => { - write!(f, "rx side has disconnected") - } - GenericSendError::QueueFull(max_cap) => { - write!(f, "queue with max capacity of {max_cap:?} is full") - } - } - } -} - -#[cfg(feature = "std")] -impl Error for GenericSendError {} - -/// Generic error type for sending something via a message queue. -#[derive(Debug, Copy, Clone)] -pub enum GenericRecvError { - Empty, - TxDisconnected, -} - -impl Display for GenericRecvError { - fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result { - match self { - Self::TxDisconnected => { - write!(f, "tx side has disconnected") - } - Self::Empty => { - write!(f, "nothing to receive") - } - } - } -} - -#[cfg(feature = "std")] -impl Error for GenericRecvError {} diff --git a/satrs-example/src/requests.rs b/satrs-example/src/requests.rs index 498be3f..445e05e 100644 --- a/satrs-example/src/requests.rs +++ b/satrs-example/src/requests.rs @@ -8,7 +8,7 @@ use satrs::mode::ModeRequest; use satrs::pus::verification::{ FailParams, TcStateAccepted, VerificationReportingProvider, VerificationToken, }; -use satrs::pus::{ActiveRequestProvider, EcssTmSenderCore, GenericRoutingError, PusRequestRouter}; +use satrs::pus::{ActiveRequestProvider, EcssTmSender, GenericRoutingError, PusRequestRouter}; use satrs::queue::GenericSendError; use satrs::request::{GenericMessage, MessageMetadata, UniqueApidTargetId}; use satrs::spacepackets::ecss::tc::PusTcReader; @@ -47,7 +47,7 @@ impl GenericRequestRouter { active_request: &impl ActiveRequestProvider, tc: &PusTcReader, error: GenericRoutingError, - tm_sender: &(impl EcssTmSenderCore + ?Sized), + tm_sender: &(impl EcssTmSender + ?Sized), verif_reporter: &impl VerificationReportingProvider, time_stamp: &[u8], ) { diff --git a/satrs-example/src/tcp.rs b/satrs-example/src/tcp.rs deleted file mode 100644 index 04bb136..0000000 --- a/satrs-example/src/tcp.rs +++ /dev/null @@ -1,127 +0,0 @@ -use std::{ - collections::{HashSet, VecDeque}, - sync::{Arc, Mutex}, -}; - -use log::{info, warn}; -use satrs::{ - hal::std::tcp_server::{ServerConfig, TcpSpacepacketsServer}, - pus::ReceivesEcssPusTc, - spacepackets::PacketId, - tmtc::{CcsdsDistributor, CcsdsError, ReceivesCcsdsTc, TmPacketSourceCore}, -}; - -use crate::ccsds::CcsdsReceiver; - -#[derive(Default, Clone)] -pub struct SyncTcpTmSource { - tm_queue: Arc>>>, - max_packets_stored: usize, - pub silent_packet_overwrite: bool, -} - -impl SyncTcpTmSource { - pub fn new(max_packets_stored: usize) -> Self { - Self { - tm_queue: Arc::default(), - max_packets_stored, - 
silent_packet_overwrite: true, - } - } - - pub fn add_tm(&mut self, tm: &[u8]) { - let mut tm_queue = self.tm_queue.lock().expect("locking tm queue failec"); - if tm_queue.len() > self.max_packets_stored { - if !self.silent_packet_overwrite { - warn!("TPC TM source is full, deleting oldest packet"); - } - tm_queue.pop_front(); - } - tm_queue.push_back(tm.to_vec()); - } -} - -impl TmPacketSourceCore for SyncTcpTmSource { - type Error = (); - - fn retrieve_packet(&mut self, buffer: &mut [u8]) -> Result { - let mut tm_queue = self.tm_queue.lock().expect("locking tm queue failed"); - if !tm_queue.is_empty() { - let next_vec = tm_queue.front().unwrap(); - if buffer.len() < next_vec.len() { - panic!( - "provided buffer too small, must be at least {} bytes", - next_vec.len() - ); - } - let next_vec = tm_queue.pop_front().unwrap(); - buffer[0..next_vec.len()].copy_from_slice(&next_vec); - if next_vec.len() > 9 { - let service = next_vec[7]; - let subservice = next_vec[8]; - info!("Sending PUS TM[{service},{subservice}]") - } else { - info!("Sending PUS TM"); - } - return Ok(next_vec.len()); - } - Ok(0) - } -} - -pub type TcpServerType = TcpSpacepacketsServer< - (), - CcsdsError, - SyncTcpTmSource, - CcsdsDistributor, MpscErrorType>, - HashSet, ->; - -pub struct TcpTask< - TcSource: ReceivesCcsdsTc - + ReceivesEcssPusTc - + Clone - + Send - + 'static, - MpscErrorType: 'static, -> { - server: TcpServerType, -} - -impl< - TcSource: ReceivesCcsdsTc - + ReceivesEcssPusTc - + Clone - + Send - + 'static, - MpscErrorType: 'static + core::fmt::Debug, - > TcpTask -{ - pub fn new( - cfg: ServerConfig, - tm_source: SyncTcpTmSource, - tc_receiver: CcsdsDistributor, MpscErrorType>, - packet_id_lookup: HashSet, - ) -> Result { - Ok(Self { - server: TcpSpacepacketsServer::new(cfg, tm_source, tc_receiver, packet_id_lookup)?, - }) - } - - pub fn periodic_operation(&mut self) { - loop { - let result = self.server.handle_next_connection(); - match result { - Ok(conn_result) => { - info!( - "Served {} TMs and {} TCs for client {:?}", - conn_result.num_sent_tms, conn_result.num_received_tcs, conn_result.addr - ); - } - Err(e) => { - warn!("TCP server error: {e:?}"); - } - } - } - } -} diff --git a/satrs-example/src/tmtc.rs b/satrs-example/src/tmtc.rs deleted file mode 100644 index 43d5889..0000000 --- a/satrs-example/src/tmtc.rs +++ /dev/null @@ -1,212 +0,0 @@ -use log::warn; -use satrs::pus::{ - EcssTcAndToken, MpscTmAsVecSender, MpscTmInSharedPoolSenderBounded, ReceivesEcssPusTc, -}; -use satrs::spacepackets::SpHeader; -use std::sync::mpsc::{self, Receiver, SendError, Sender, SyncSender, TryRecvError}; -use thiserror::Error; - -use crate::pus::PusReceiver; -use satrs::pool::{PoolProvider, SharedStaticMemoryPool, StoreAddr, StoreError}; -use satrs::spacepackets::ecss::tc::PusTcReader; -use satrs::spacepackets::ecss::PusPacket; -use satrs::tmtc::ReceivesCcsdsTc; - -#[derive(Debug, Clone, PartialEq, Eq, Error)] -pub enum MpscStoreAndSendError { - #[error("Store error: {0}")] - Store(#[from] StoreError), - #[error("TC send error: {0}")] - TcSend(#[from] SendError), - #[error("TMTC send error: {0}")] - TmTcSend(#[from] SendError), -} - -#[derive(Clone)] -pub struct SharedTcPool { - pub pool: SharedStaticMemoryPool, -} - -impl SharedTcPool { - pub fn add_pus_tc(&mut self, pus_tc: &PusTcReader) -> Result { - let mut pg = self.pool.write().expect("error locking TC store"); - let addr = pg.free_element(pus_tc.len_packed(), |buf| { - buf[0..pus_tc.len_packed()].copy_from_slice(pus_tc.raw_data()); - })?; - Ok(addr) - } -} - 
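The `SharedTcPool` helper deleted above (and the `PusTcSourceProviderSharedPool` removed right below) stored an incoming TC in the shared static pool and forwarded only its store address. With the reworked `satrs::tmtc` types introduced elsewhere in this diff, the same store-and-forward step looks roughly like the sketch below; the function name, the sender ID constant and the use of a bounded `SyncSender` are assumptions.

```rust
use std::sync::mpsc;

use satrs::pool::PoolProvider;
use satrs::tmtc::{PacketInPool, SharedPacketPool};
use satrs::ComponentId;

const TC_SOURCE_ID: ComponentId = 0x42; // hypothetical ID of the receiving interface component

fn store_and_forward_tc(
    shared_pool: &SharedPacketPool,
    tc_tx: &mpsc::SyncSender<PacketInPool>,
    raw_tc: &[u8],
) {
    // Store the raw TC in the shared pool and keep only its address.
    let addr = shared_pool
        .0
        .write()
        .expect("locking shared pool failed")
        .add(raw_tc)
        .expect("adding TC to shared pool failed");
    // Forward the pool address together with the sender ID to the TC source task.
    tc_tx
        .send(PacketInPool::new(TC_SOURCE_ID, addr))
        .expect("sending TC to TC source failed");
}
```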
-#[derive(Clone)] -pub struct PusTcSourceProviderSharedPool { - pub tc_source: SyncSender, - pub shared_pool: SharedTcPool, -} - -impl PusTcSourceProviderSharedPool { - #[allow(dead_code)] - pub fn clone_backing_pool(&self) -> SharedStaticMemoryPool { - self.shared_pool.pool.clone() - } -} - -impl ReceivesEcssPusTc for PusTcSourceProviderSharedPool { - type Error = MpscStoreAndSendError; - - fn pass_pus_tc(&mut self, _: &SpHeader, pus_tc: &PusTcReader) -> Result<(), Self::Error> { - let addr = self.shared_pool.add_pus_tc(pus_tc)?; - self.tc_source.send(addr)?; - Ok(()) - } -} - -impl ReceivesCcsdsTc for PusTcSourceProviderSharedPool { - type Error = MpscStoreAndSendError; - - fn pass_ccsds(&mut self, _: &SpHeader, tc_raw: &[u8]) -> Result<(), Self::Error> { - let mut pool = self.shared_pool.pool.write().expect("locking pool failed"); - let addr = pool.add(tc_raw)?; - drop(pool); - self.tc_source.send(addr)?; - Ok(()) - } -} - -// Newtype, can not implement necessary traits on MPSC sender directly because of orphan rules. -#[derive(Clone)] -pub struct PusTcSourceProviderDynamic(pub Sender>); - -impl ReceivesEcssPusTc for PusTcSourceProviderDynamic { - type Error = SendError>; - - fn pass_pus_tc(&mut self, _: &SpHeader, pus_tc: &PusTcReader) -> Result<(), Self::Error> { - self.0.send(pus_tc.raw_data().to_vec())?; - Ok(()) - } -} - -impl ReceivesCcsdsTc for PusTcSourceProviderDynamic { - type Error = mpsc::SendError>; - - fn pass_ccsds(&mut self, _: &SpHeader, tc_raw: &[u8]) -> Result<(), Self::Error> { - self.0.send(tc_raw.to_vec())?; - Ok(()) - } -} - -// TC source components where static pools are the backing memory of the received telecommands. -pub struct TcSourceTaskStatic { - shared_tc_pool: SharedTcPool, - tc_receiver: Receiver, - tc_buf: [u8; 4096], - pus_receiver: PusReceiver, -} - -impl TcSourceTaskStatic { - pub fn new( - shared_tc_pool: SharedTcPool, - tc_receiver: Receiver, - pus_receiver: PusReceiver, - ) -> Self { - Self { - shared_tc_pool, - tc_receiver, - tc_buf: [0; 4096], - pus_receiver, - } - } - - pub fn periodic_operation(&mut self) { - self.poll_tc(); - } - - pub fn poll_tc(&mut self) -> bool { - match self.tc_receiver.try_recv() { - Ok(addr) => { - let pool = self - .shared_tc_pool - .pool - .read() - .expect("locking tc pool failed"); - pool.read(&addr, &mut self.tc_buf) - .expect("reading pool failed"); - drop(pool); - match PusTcReader::new(&self.tc_buf) { - Ok((pus_tc, _)) => { - self.pus_receiver - .handle_tc_packet( - satrs::pus::TcInMemory::StoreAddr(addr), - pus_tc.service(), - &pus_tc, - ) - .ok(); - true - } - Err(e) => { - warn!("error creating PUS TC from raw data: {e}"); - warn!("raw data: {:x?}", self.tc_buf); - true - } - } - } - Err(e) => match e { - TryRecvError::Empty => false, - TryRecvError::Disconnected => { - warn!("tmtc thread: sender disconnected"); - false - } - }, - } - } -} - -// TC source components where the heap is the backing memory of the received telecommands. 
-pub struct TcSourceTaskDynamic { - pub tc_receiver: Receiver>, - pus_receiver: PusReceiver, -} - -impl TcSourceTaskDynamic { - pub fn new( - tc_receiver: Receiver>, - pus_receiver: PusReceiver, - ) -> Self { - Self { - tc_receiver, - pus_receiver, - } - } - - pub fn periodic_operation(&mut self) { - self.poll_tc(); - } - - pub fn poll_tc(&mut self) -> bool { - match self.tc_receiver.try_recv() { - Ok(tc) => match PusTcReader::new(&tc) { - Ok((pus_tc, _)) => { - self.pus_receiver - .handle_tc_packet( - satrs::pus::TcInMemory::Vec(tc.clone()), - pus_tc.service(), - &pus_tc, - ) - .ok(); - true - } - Err(e) => { - warn!("error creating PUS TC from raw data: {e}"); - warn!("raw data: {:x?}", tc); - true - } - }, - Err(e) => match e { - TryRecvError::Empty => false, - TryRecvError::Disconnected => { - warn!("tmtc thread: sender disconnected"); - false - } - }, - } - } -} diff --git a/satrs-example/src/tmtc/mod.rs b/satrs-example/src/tmtc/mod.rs new file mode 100644 index 0000000..bfd24c5 --- /dev/null +++ b/satrs-example/src/tmtc/mod.rs @@ -0,0 +1,2 @@ +pub mod tc_source; +pub mod tm_sink; diff --git a/satrs-example/src/tmtc/tc_source.rs b/satrs-example/src/tmtc/tc_source.rs new file mode 100644 index 0000000..bd99fb2 --- /dev/null +++ b/satrs-example/src/tmtc/tc_source.rs @@ -0,0 +1,106 @@ +use satrs::{ + pool::PoolProvider, + tmtc::{PacketAsVec, PacketInPool, PacketSenderWithSharedPool, SharedPacketPool}, +}; +use std::sync::mpsc::{self, TryRecvError}; + +use satrs::pus::MpscTmAsVecSender; + +use crate::pus::{HandlingStatus, PusTcDistributor}; + +// TC source components where static pools are the backing memory of the received telecommands. +pub struct TcSourceTaskStatic { + shared_tc_pool: SharedPacketPool, + tc_receiver: mpsc::Receiver, + tc_buf: [u8; 4096], + pus_distributor: PusTcDistributor, +} + +impl TcSourceTaskStatic { + pub fn new( + shared_tc_pool: SharedPacketPool, + tc_receiver: mpsc::Receiver, + pus_receiver: PusTcDistributor, + ) -> Self { + Self { + shared_tc_pool, + tc_receiver, + tc_buf: [0; 4096], + pus_distributor: pus_receiver, + } + } + + pub fn periodic_operation(&mut self) { + self.poll_tc(); + } + + pub fn poll_tc(&mut self) -> HandlingStatus { + // Right now, we only expect ECSS PUS packets. + // If packets like CFDP are expected, we might have to check the APID first. + match self.tc_receiver.try_recv() { + Ok(packet_in_pool) => { + let pool = self + .shared_tc_pool + .0 + .read() + .expect("locking tc pool failed"); + pool.read(&packet_in_pool.store_addr, &mut self.tc_buf) + .expect("reading pool failed"); + drop(pool); + self.pus_distributor + .handle_tc_packet_in_store(packet_in_pool, &self.tc_buf) + .ok(); + HandlingStatus::HandledOne + } + Err(e) => match e { + TryRecvError::Empty => HandlingStatus::Empty, + TryRecvError::Disconnected => { + log::warn!("tmtc thread: sender disconnected"); + HandlingStatus::Empty + } + }, + } + } +} + +// TC source components where the heap is the backing memory of the received telecommands. +pub struct TcSourceTaskDynamic { + pub tc_receiver: mpsc::Receiver, + pus_distributor: PusTcDistributor, +} + +impl TcSourceTaskDynamic { + pub fn new( + tc_receiver: mpsc::Receiver, + pus_receiver: PusTcDistributor, + ) -> Self { + Self { + tc_receiver, + pus_distributor: pus_receiver, + } + } + + pub fn periodic_operation(&mut self) { + self.poll_tc(); + } + + pub fn poll_tc(&mut self) -> HandlingStatus { + // Right now, we only expect ECSS PUS packets. + // If packets like CFDP are expected, we might have to check the APID first. 
+ match self.tc_receiver.try_recv() { + Ok(packet_as_vec) => { + self.pus_distributor + .handle_tc_packet_vec(packet_as_vec) + .ok(); + HandlingStatus::HandledOne + } + Err(e) => match e { + TryRecvError::Empty => HandlingStatus::Empty, + TryRecvError::Disconnected => { + log::warn!("tmtc thread: sender disconnected"); + HandlingStatus::Empty + } + }, + } + } +} diff --git a/satrs-example/src/tm_funnel.rs b/satrs-example/src/tmtc/tm_sink.rs similarity index 87% rename from satrs-example/src/tm_funnel.rs rename to satrs-example/src/tmtc/tm_sink.rs index 61cddd1..955a997 100644 --- a/satrs-example/src/tm_funnel.rs +++ b/satrs-example/src/tmtc/tm_sink.rs @@ -4,7 +4,7 @@ use std::{ }; use log::info; -use satrs::pus::{PusTmAsVec, PusTmInPool}; +use satrs::tmtc::{PacketAsVec, PacketInPool, SharedPacketPool}; use satrs::{ pool::PoolProvider, seq_count::{CcsdsSimpleSeqCountProvider, SequenceCountProviderCore}, @@ -13,10 +13,9 @@ use satrs::{ time::cds::MIN_CDS_FIELD_LEN, CcsdsPacket, }, - tmtc::tm_helper::SharedTmPool, }; -use crate::tcp::SyncTcpTmSource; +use crate::interface::tcp::SyncTcpTmSource; #[derive(Default)] pub struct CcsdsSeqCounterMap { @@ -77,17 +76,17 @@ impl TmFunnelCommon { pub struct TmFunnelStatic { common: TmFunnelCommon, - shared_tm_store: SharedTmPool, - tm_funnel_rx: mpsc::Receiver, - tm_server_tx: mpsc::SyncSender, + shared_tm_store: SharedPacketPool, + tm_funnel_rx: mpsc::Receiver, + tm_server_tx: mpsc::SyncSender, } impl TmFunnelStatic { pub fn new( - shared_tm_store: SharedTmPool, + shared_tm_store: SharedPacketPool, sync_tm_tcp_source: SyncTcpTmSource, - tm_funnel_rx: mpsc::Receiver, - tm_server_tx: mpsc::SyncSender, + tm_funnel_rx: mpsc::Receiver, + tm_server_tx: mpsc::SyncSender, ) -> Self { Self { common: TmFunnelCommon::new(sync_tm_tcp_source), @@ -101,7 +100,7 @@ impl TmFunnelStatic { if let Ok(pus_tm_in_pool) = self.tm_funnel_rx.recv() { // Read the TM, set sequence counter and message counter, and finally update // the CRC. - let shared_pool = self.shared_tm_store.clone_backing_pool(); + let shared_pool = self.shared_tm_store.0.clone(); let mut pool_guard = shared_pool.write().expect("Locking TM pool failed"); let mut tm_copy = Vec::new(); pool_guard @@ -124,15 +123,15 @@ impl TmFunnelStatic { pub struct TmFunnelDynamic { common: TmFunnelCommon, - tm_funnel_rx: mpsc::Receiver, - tm_server_tx: mpsc::Sender, + tm_funnel_rx: mpsc::Receiver, + tm_server_tx: mpsc::Sender, } impl TmFunnelDynamic { pub fn new( sync_tm_tcp_source: SyncTcpTmSource, - tm_funnel_rx: mpsc::Receiver, - tm_server_tx: mpsc::Sender, + tm_funnel_rx: mpsc::Receiver, + tm_server_tx: mpsc::Sender, ) -> Self { Self { common: TmFunnelCommon::new(sync_tm_tcp_source), diff --git a/satrs-mib/CHANGELOG.md b/satrs-mib/CHANGELOG.md index 2b03e74..4e27226 100644 --- a/satrs-mib/CHANGELOG.md +++ b/satrs-mib/CHANGELOG.md @@ -8,6 +8,10 @@ and this project adheres to [Semantic Versioning](http://semver.org/). 
# [unreleased] +# [v0.1.2] 2024-04-17 + +Allow `satrs-shared` from `v0.1.3` to `"] @@ -23,13 +23,12 @@ version = "1" optional = true [dependencies.satrs-shared] -path = "../satrs-shared" -version = "0.1.3" +version = ">=0.1.3, <0.2" features = ["serde"] [dependencies.satrs-mib-codegen] path = "codegen" -version = "0.1.1" +version = "0.1.2" [dependencies.serde] version = "1" diff --git a/satrs-mib/codegen/Cargo.toml b/satrs-mib/codegen/Cargo.toml index 43ba785..584a66e 100644 --- a/satrs-mib/codegen/Cargo.toml +++ b/satrs-mib/codegen/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "satrs-mib-codegen" -version = "0.1.1" +version = "0.1.2" edition = "2021" description = "satrs-mib proc macro implementation" homepage = "https://egit.irs.uni-stuttgart.de/rust/sat-rs" @@ -28,8 +28,7 @@ features = ["full"] trybuild = { version = "1", features = ["diff"] } [dev-dependencies.satrs-shared] -version = "0.1.3" -path = "../../satrs-shared" +version = ">=0.1.3, <0.2" [dev-dependencies.satrs-mib] path = ".." diff --git a/satrs-shared/CHANGELOG.md b/satrs-shared/CHANGELOG.md index b1656ea..0c62de9 100644 --- a/satrs-shared/CHANGELOG.md +++ b/satrs-shared/CHANGELOG.md @@ -8,6 +8,10 @@ and this project adheres to [Semantic Versioning](http://semver.org/). # [unreleased] +# [v0.1.3] 2024-04-16 + +Allow `spacepackets` range starting with v0.10 and v0.11. + # [v0.1.2] 2024-02-17 - Bumped `spacepackets` to v0.10.0 for `UnsignedEnum` trait change. diff --git a/satrs-shared/Cargo.toml b/satrs-shared/Cargo.toml index fd110fd..2ed8f26 100644 --- a/satrs-shared/Cargo.toml +++ b/satrs-shared/Cargo.toml @@ -18,7 +18,7 @@ default-features = false optional = true [dependencies.spacepackets] -version = "0.11.0-rc.2" +version = ">0.9, <=0.11" default-features = false [features] diff --git a/satrs/CHANGELOG.md b/satrs/CHANGELOG.md index b539457..eeab142 100644 --- a/satrs/CHANGELOG.md +++ b/satrs/CHANGELOG.md @@ -8,7 +8,34 @@ and this project adheres to [Semantic Versioning](http://semver.org/). # [unreleased] -- `spacepackets` v0.11.0 +# [v0.2.0-rc.4] 2024-04-23 + +## Changed + +- The `parse_for_ccsds_space_packets` method now expects a non-mutable slice and does not copy + broken tail packets anymore. It also does not expect a mutable `next_write_idx` argument anymore. + Instead, a `ParseResult` structure is returned which contains the `packets_found` and an + optional `incomplete_tail_start` value. + +## Fixed + +- `parse_for_ccsds_space_packets` did not detect CCSDS space packets at the buffer end with the + smallest possible size of 7 bytes. +- TCP server component now re-registers the internal `mio::Poll` object if the client reset + the connection unexpectedly. Not doing so prevented the server from functioning properly + after a re-connect. + +# [v0.2.0-rc.3] 2024-04-17 + +docs-rs hotfix 2 + +# [v0.2.0-rc.2] 2024-04-17 + +docs-rs hotfix + +# [v0.2.0-rc.1] 2024-04-17 + +- `spacepackets` v0.11 ## Added @@ -21,9 +48,20 @@ and this project adheres to [Semantic Versioning](http://semver.org/). - Introduced generic `EventMessage` which is generic over the event type and the additional parameter type. This message also contains the sender ID which can be useful for debugging or application layer / FDIR logic. +- Stop signal handling for the TCP servers. +- TCP server now uses `mio` crate to allow non-blocking operation. The server can now handle + multiple connections at once, and the context information about handled transfers is + passed via a callback which is inserted as a generic as well. 
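The `## Changed` entry above notes that the space packet parser (the `parse_buffer_for_ccsds_space_packets` function reworked further down in this diff) no longer copies a broken tail packet to the front of the buffer; the caller receives a `ParseResult` with an optional `incomplete_tail_start` instead. A rough sketch of the resulting caller-side handling, with the module path and the helper name as assumptions:

```rust
use satrs::encoding::ccsds::{
    parse_buffer_for_ccsds_space_packets, ParseResult, SpacePacketValidator,
};
use satrs::tmtc::PacketSenderRaw;
use satrs::ComponentId;

/// Parse freshly read bytes and return how many tail bytes were carried over to the
/// start of the buffer for the next read cycle.
fn parse_and_carry_tail<SendError>(
    buf: &mut [u8],
    read_len: usize,
    validator: &impl SpacePacketValidator,
    sender_id: ComponentId,
    tc_sender: &impl PacketSenderRaw<Error = SendError>,
) -> Result<usize, SendError> {
    let result: ParseResult =
        parse_buffer_for_ccsds_space_packets(&buf[..read_len], validator, sender_id, tc_sender)?;
    if let Some(tail_start) = result.incomplete_tail_start {
        // The parser only reports the broken tail now; moving it is the caller's job.
        buf.copy_within(tail_start..read_len, 0);
        return Ok(read_len - tail_start);
    }
    Ok(0)
}
```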
## Changed +- Renamed `ReceivesTcCore` to `PacketSenderRaw` to better show its primary purpose. It now contains + a `send_raw_tc` method which is not mutable anymore. +- Renamed `TmPacketSourceCore` to `TmPacketSource`. +- Renamed `EcssTmSenderCore` to `EcssTmSender`. +- Renamed `StoreAddr` to `PoolAddr`. +- Reanmed `StoreError` to `PoolError`. +- TCP server generics order. The error generics come last now. - `encoding::ccsds::PacketIdValidator` renamed to `ValidatorU16Id`, which lives in the crate root. It can be used for both CCSDS packet ID and CCSDS APID validation. - `EventManager::try_event_handling` not expects a mutable error handling closure instead of @@ -71,6 +109,9 @@ and this project adheres to [Semantic Versioning](http://semver.org/). ## Removed - Remove `objects` module. +- Removed CCSDS and PUS distributor modules. Their worth is questionable in an architecture + where routing traits are sufficient and the core logic to demultiplex and distribute packets + is simple enough to be application code. # [v0.2.0-rc.0] 2024-02-21 diff --git a/satrs/Cargo.toml b/satrs/Cargo.toml index 7f913e6..dcf8d42 100644 --- a/satrs/Cargo.toml +++ b/satrs/Cargo.toml @@ -1,8 +1,8 @@ [package] name = "satrs" -version = "0.2.0-rc.0" +version = "0.2.0-rc.4" edition = "2021" -rust-version = "1.61" +rust-version = "1.71.1" authors = ["Robin Mueller "] description = "A framework to build software for remote systems" homepage = "https://absatsw.irs.uni-stuttgart.de/projects/sat-rs/" @@ -19,13 +19,26 @@ smallvec = "1" crc = "3" [dependencies.satrs-shared] -version = "0.1.3" -path = "../satrs-shared" +version = ">=0.1.3, <0.2" [dependencies.num_enum] version = ">0.5, <=0.7" default-features = false +[dependencies.spacepackets] +version = "0.11" +default-features = false + +[dependencies.cobs] +git = "https://github.com/robamu/cobs.rs.git" +version = "0.2.3" +branch = "all_features" +default-features = false + +[dependencies.num-traits] +version = "0.2" +default-features = false + [dependencies.dyn-clone] version = "1" optional = true @@ -38,10 +51,6 @@ optional = true version = "0.7" optional = true -[dependencies.num-traits] -version = "0.2" -default-features = false - [dependencies.downcast-rs] version = "1.2" default-features = false @@ -70,16 +79,10 @@ version = "0.5.4" features = ["all"] optional = true -[dependencies.spacepackets] -# git = "https://egit.irs.uni-stuttgart.de/rust/spacepackets.git" -version = "0.11.0-rc.2" -default-features = false - -[dependencies.cobs] -git = "https://github.com/robamu/cobs.rs.git" -version = "0.2.3" -branch = "all_features" -default-features = false +[dependencies.mio] +version = "0.8" +features = ["os-poll", "net"] +optional = true [dev-dependencies] serde = "1" @@ -104,7 +107,8 @@ std = [ "spacepackets/std", "num_enum/std", "thiserror", - "socket2" + "socket2", + "mio" ] alloc = [ "serde/alloc", @@ -122,4 +126,4 @@ doc-images = [] [package.metadata.docs.rs] all-features = true -rustdoc-args = ["--cfg", "doc_cfg", "--generate-link-to-definition"] +rustdoc-args = ["--cfg", "docs_rs", "--generate-link-to-definition"] diff --git a/satrs/src/action.rs b/satrs/src/action.rs index 4aea9f1..95f3a41 100644 --- a/satrs/src/action.rs +++ b/satrs/src/action.rs @@ -1,4 +1,4 @@ -use crate::{params::Params, pool::StoreAddr}; +use crate::{params::Params, pool::PoolAddr}; #[cfg(feature = "alloc")] pub use alloc_mod::*; @@ -21,7 +21,7 @@ impl ActionRequest { #[derive(Clone, Eq, PartialEq, Debug)] pub enum ActionRequestVariant { NoData, - StoreData(StoreAddr), + 
StoreData(PoolAddr), #[cfg(feature = "alloc")] VecData(alloc::vec::Vec), } diff --git a/satrs/src/encoding/ccsds.rs b/satrs/src/encoding/ccsds.rs index 30adccf..1f21426 100644 --- a/satrs/src/encoding/ccsds.rs +++ b/satrs/src/encoding/ccsds.rs @@ -1,70 +1,133 @@ -use crate::{tmtc::ReceivesTcCore, ValidatorU16Id}; +use spacepackets::{CcsdsPacket, SpHeader}; + +use crate::{tmtc::PacketSenderRaw, ComponentId}; + +#[derive(Debug, Copy, Clone, PartialEq, Eq)] +pub enum SpValidity { + Valid, + /// The space packet can be assumed to have a valid format, but the packet should + /// be skipped. + Skip, + /// The space packet or space packet header has an invalid format, for example a CRC check + /// failed. In that case, the parser loses the packet synchronization and needs to check for + /// the start of a new space packet header start again. The space packet header + /// [spacepackets::PacketId] can be used as a synchronization marker to detect the start + /// of a possible valid packet again. + Invalid, +} + +/// Simple trait to allow user code to check the validity of a space packet. +pub trait SpacePacketValidator { + fn validate(&self, sp_header: &SpHeader, raw_buf: &[u8]) -> SpValidity; +} + +#[derive(Default, Debug, PartialEq, Eq)] +pub struct ParseResult { + pub packets_found: u32, + /// If an incomplete space packet was found, its start index is indicated by this value. + pub incomplete_tail_start: Option, +} /// This function parses a given buffer for tightly packed CCSDS space packets. It uses the -/// [PacketId] field of the CCSDS packets to detect the start of a CCSDS space packet and then -/// uses the length field of the packet to extract CCSDS packets. +/// [spacepackets::SpHeader] of the CCSDS packets and a user provided [SpacePacketValidator] +/// to check whether a received space packet is relevant for processing. /// /// This function is also able to deal with broken tail packets at the end as long a the parser /// can read the full 7 bytes which constitue a space packet header plus one byte minimal size. /// If broken tail packets are detected, they are moved to the front of the buffer, and the write /// index for future write operations will be written to the `next_write_idx` argument. /// -/// The parser will write all packets which were decoded successfully to the given `tc_receiver` -/// and return the number of packets found. If the [ReceivesTcCore::pass_tc] calls fails, the -/// error will be returned. -pub fn parse_buffer_for_ccsds_space_packets( - buf: &mut [u8], - packet_id_validator: &(impl ValidatorU16Id + ?Sized), - tc_receiver: &mut (impl ReceivesTcCore + ?Sized), - next_write_idx: &mut usize, -) -> Result { - *next_write_idx = 0; - let mut packets_found = 0; +/// The parses will behave differently based on the [SpValidity] returned from the user provided +/// [SpacePacketValidator]: +/// +/// 1. [SpValidity::Valid]: The parser will forward all packets to the given `packet_sender` and +/// return the number of packets found.If the [PacketSenderRaw::send_packet] calls fails, the +/// error will be returned. +/// 2. [SpValidity::Invalid]: The parser assumes that the synchronization is lost and tries to +/// find the start of a new space packet header by scanning all the following bytes. +/// 3. [SpValidity::Skip]: The parser skips the packet using the packet length determined from the +/// space packet header. 
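A `SpacePacketValidator` implementation can be as small as an APID check. The sketch below is analogous to the `SimpleVerificator` test helper added further down in this hunk; the struct name, the single-APID design and the import paths are assumptions.

```rust
use satrs::encoding::ccsds::{SpValidity, SpacePacketValidator};
use satrs::spacepackets::{CcsdsPacket, SpHeader};

/// Hypothetical validator which only accepts packets with one known APID.
pub struct SingleApidValidator {
    pub allowed_apid: u16,
}

impl SpacePacketValidator for SingleApidValidator {
    fn validate(&self, sp_header: &SpHeader, _raw_buf: &[u8]) -> SpValidity {
        if sp_header.apid() == self.allowed_apid {
            return SpValidity::Valid;
        }
        // Unknown APID: trust the length field and skip over the packet.
        SpValidity::Skip
    }
}
```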
+pub fn parse_buffer_for_ccsds_space_packets( + buf: &[u8], + packet_validator: &(impl SpacePacketValidator + ?Sized), + sender_id: ComponentId, + packet_sender: &(impl PacketSenderRaw + ?Sized), +) -> Result { + let mut parse_result = ParseResult::default(); let mut current_idx = 0; let buf_len = buf.len(); loop { - if current_idx + 7 >= buf.len() { + if current_idx + 7 > buf.len() { break; } - let packet_id = u16::from_be_bytes(buf[current_idx..current_idx + 2].try_into().unwrap()); - if packet_id_validator.validate(packet_id) { - let length_field = - u16::from_be_bytes(buf[current_idx + 4..current_idx + 6].try_into().unwrap()); - let packet_size = length_field + 7; - if (current_idx + packet_size as usize) <= buf_len { - tc_receiver.pass_tc(&buf[current_idx..current_idx + packet_size as usize])?; - packets_found += 1; - } else { - // Move packet to start of buffer if applicable. - if current_idx > 0 { - buf.copy_within(current_idx.., 0); - *next_write_idx = buf.len() - current_idx; + let sp_header = SpHeader::from_be_bytes(&buf[current_idx..]).unwrap().0; + match packet_validator.validate(&sp_header, &buf[current_idx..]) { + SpValidity::Valid => { + let packet_size = sp_header.total_len(); + if (current_idx + packet_size) <= buf_len { + packet_sender + .send_packet(sender_id, &buf[current_idx..current_idx + packet_size])?; + parse_result.packets_found += 1; + } else { + // Move packet to start of buffer if applicable. + parse_result.incomplete_tail_start = Some(current_idx); } + current_idx += packet_size; + continue; + } + SpValidity::Skip => { + current_idx += sp_header.total_len(); + } + // We might have lost sync. Try to find the start of a new space packet header. + SpValidity::Invalid => { + current_idx += 1; } - current_idx += packet_size as usize; - continue; } - current_idx += 1; } - Ok(packets_found) + Ok(parse_result) } #[cfg(test)] mod tests { use spacepackets::{ ecss::{tc::PusTcCreator, WritablePusPacket}, - PacketId, SpHeader, + CcsdsPacket, PacketId, PacketSequenceCtrl, PacketType, SequenceFlags, SpHeader, }; - use crate::encoding::tests::TcCacher; + use crate::{encoding::tests::TcCacher, ComponentId}; - use super::parse_buffer_for_ccsds_space_packets; + use super::{parse_buffer_for_ccsds_space_packets, SpValidity, SpacePacketValidator}; + const PARSER_ID: ComponentId = 0x05; const TEST_APID_0: u16 = 0x02; const TEST_APID_1: u16 = 0x10; const TEST_PACKET_ID_0: PacketId = PacketId::new_for_tc(true, TEST_APID_0); const TEST_PACKET_ID_1: PacketId = PacketId::new_for_tc(true, TEST_APID_1); + #[derive(Default)] + struct SimpleVerificator { + pub enable_second_id: bool, + } + + impl SimpleVerificator { + pub fn new_with_second_id() -> Self { + Self { + enable_second_id: true, + } + } + } + + impl SpacePacketValidator for SimpleVerificator { + fn validate(&self, sp_header: &SpHeader, _raw_buf: &[u8]) -> super::SpValidity { + if sp_header.packet_id() == TEST_PACKET_ID_0 + || (self.enable_second_id && sp_header.packet_id() == TEST_PACKET_ID_1) + { + return SpValidity::Valid; + } + SpValidity::Skip + } + } + #[test] fn test_basic() { let sph = SpHeader::new_from_apid(TEST_APID_0); @@ -73,23 +136,21 @@ mod tests { let packet_len = ping_tc .write_to_bytes(&mut buffer) .expect("writing packet failed"); - let valid_packet_ids = [TEST_PACKET_ID_0]; - let mut tc_cacher = TcCacher::default(); - let mut next_write_idx = 0; + let tc_cacher = TcCacher::default(); let parse_result = parse_buffer_for_ccsds_space_packets( - &mut buffer, - valid_packet_ids.as_slice(), - &mut tc_cacher, - &mut 
next_write_idx, + &buffer, + &SimpleVerificator::default(), + PARSER_ID, + &tc_cacher, ); assert!(parse_result.is_ok()); - let parsed_packets = parse_result.unwrap(); - assert_eq!(parsed_packets, 1); - assert_eq!(tc_cacher.tc_queue.len(), 1); - assert_eq!( - tc_cacher.tc_queue.pop_front().unwrap(), - buffer[..packet_len] - ); + let parse_result = parse_result.unwrap(); + assert_eq!(parse_result.packets_found, 1); + let mut queue = tc_cacher.tc_queue.borrow_mut(); + assert_eq!(queue.len(), 1); + let packet_with_sender = queue.pop_front().unwrap(); + assert_eq!(packet_with_sender.packet, buffer[..packet_len]); + assert_eq!(packet_with_sender.sender_id, PARSER_ID); } #[test] @@ -104,25 +165,25 @@ mod tests { let packet_len_action = action_tc .write_to_bytes(&mut buffer[packet_len_ping..]) .expect("writing packet failed"); - let valid_packet_ids = [TEST_PACKET_ID_0]; - let mut tc_cacher = TcCacher::default(); - let mut next_write_idx = 0; + let tc_cacher = TcCacher::default(); let parse_result = parse_buffer_for_ccsds_space_packets( - &mut buffer, - valid_packet_ids.as_slice(), - &mut tc_cacher, - &mut next_write_idx, + &buffer, + &SimpleVerificator::default(), + PARSER_ID, + &tc_cacher, ); assert!(parse_result.is_ok()); - let parsed_packets = parse_result.unwrap(); - assert_eq!(parsed_packets, 2); - assert_eq!(tc_cacher.tc_queue.len(), 2); + let parse_result = parse_result.unwrap(); + assert_eq!(parse_result.packets_found, 2); + let mut queue = tc_cacher.tc_queue.borrow_mut(); + assert_eq!(queue.len(), 2); + let packet_with_addr = queue.pop_front().unwrap(); + assert_eq!(packet_with_addr.packet, buffer[..packet_len_ping]); + assert_eq!(packet_with_addr.sender_id, PARSER_ID); + let packet_with_addr = queue.pop_front().unwrap(); + assert_eq!(packet_with_addr.sender_id, PARSER_ID); assert_eq!( - tc_cacher.tc_queue.pop_front().unwrap(), - buffer[..packet_len_ping] - ); - assert_eq!( - tc_cacher.tc_queue.pop_front().unwrap(), + packet_with_addr.packet, buffer[packet_len_ping..packet_len_ping + packet_len_action] ); } @@ -140,25 +201,20 @@ mod tests { let packet_len_action = action_tc .write_to_bytes(&mut buffer[packet_len_ping..]) .expect("writing packet failed"); - let valid_packet_ids = [TEST_PACKET_ID_0, TEST_PACKET_ID_1]; - let mut tc_cacher = TcCacher::default(); - let mut next_write_idx = 0; - let parse_result = parse_buffer_for_ccsds_space_packets( - &mut buffer, - valid_packet_ids.as_slice(), - &mut tc_cacher, - &mut next_write_idx, - ); + let tc_cacher = TcCacher::default(); + let verificator = SimpleVerificator::new_with_second_id(); + let parse_result = + parse_buffer_for_ccsds_space_packets(&buffer, &verificator, PARSER_ID, &tc_cacher); assert!(parse_result.is_ok()); - let parsed_packets = parse_result.unwrap(); - assert_eq!(parsed_packets, 2); - assert_eq!(tc_cacher.tc_queue.len(), 2); + let parse_result = parse_result.unwrap(); + assert_eq!(parse_result.packets_found, 2); + let mut queue = tc_cacher.tc_queue.borrow_mut(); + assert_eq!(queue.len(), 2); + let packet_with_addr = queue.pop_front().unwrap(); + assert_eq!(packet_with_addr.packet, buffer[..packet_len_ping]); + let packet_with_addr = queue.pop_front().unwrap(); assert_eq!( - tc_cacher.tc_queue.pop_front().unwrap(), - buffer[..packet_len_ping] - ); - assert_eq!( - tc_cacher.tc_queue.pop_front().unwrap(), + packet_with_addr.packet, buffer[packet_len_ping..packet_len_ping + packet_len_action] ); } @@ -176,22 +232,25 @@ mod tests { let packet_len_action = action_tc .write_to_bytes(&mut buffer[packet_len_ping..]) .expect("writing 
packet failed"); - let valid_packet_ids = [TEST_PACKET_ID_0, TEST_PACKET_ID_1]; - let mut tc_cacher = TcCacher::default(); - let mut next_write_idx = 0; + let tc_cacher = TcCacher::default(); + let verificator = SimpleVerificator::new_with_second_id(); let parse_result = parse_buffer_for_ccsds_space_packets( - &mut buffer[..packet_len_ping + packet_len_action - 4], - valid_packet_ids.as_slice(), - &mut tc_cacher, - &mut next_write_idx, + &buffer[..packet_len_ping + packet_len_action - 4], + &verificator, + PARSER_ID, + &tc_cacher, ); assert!(parse_result.is_ok()); - let parsed_packets = parse_result.unwrap(); - assert_eq!(parsed_packets, 1); - assert_eq!(tc_cacher.tc_queue.len(), 1); + let parse_result = parse_result.unwrap(); + assert_eq!(parse_result.packets_found, 1); + assert!(parse_result.incomplete_tail_start.is_some()); + let incomplete_tail_idx = parse_result.incomplete_tail_start.unwrap(); + assert_eq!(incomplete_tail_idx, packet_len_ping); + + let queue = tc_cacher.tc_queue.borrow(); + assert_eq!(queue.len(), 1); // The broken packet was moved to the start, so the next write index should be after the // last segment missing 4 bytes. - assert_eq!(next_write_idx, packet_len_action - 4); } #[test] @@ -202,19 +261,39 @@ mod tests { let packet_len_ping = ping_tc .write_to_bytes(&mut buffer) .expect("writing packet failed"); - let valid_packet_ids = [TEST_PACKET_ID_0, TEST_PACKET_ID_1]; - let mut tc_cacher = TcCacher::default(); - let mut next_write_idx = 0; + let tc_cacher = TcCacher::default(); + + let verificator = SimpleVerificator::new_with_second_id(); let parse_result = parse_buffer_for_ccsds_space_packets( - &mut buffer[..packet_len_ping - 4], - valid_packet_ids.as_slice(), - &mut tc_cacher, - &mut next_write_idx, + &buffer[..packet_len_ping - 4], + &verificator, + PARSER_ID, + &tc_cacher, ); - assert_eq!(next_write_idx, 0); assert!(parse_result.is_ok()); - let parsed_packets = parse_result.unwrap(); - assert_eq!(parsed_packets, 0); - assert_eq!(tc_cacher.tc_queue.len(), 0); + let parse_result = parse_result.unwrap(); + assert_eq!(parse_result.packets_found, 0); + let queue = tc_cacher.tc_queue.borrow(); + assert_eq!(queue.len(), 0); + } + + #[test] + fn test_smallest_packet() { + let ccsds_header_only = SpHeader::new( + PacketId::new(PacketType::Tc, true, TEST_APID_0), + PacketSequenceCtrl::new(SequenceFlags::Unsegmented, 0), + 0, + ); + let mut buf: [u8; 7] = [0; 7]; + ccsds_header_only + .write_to_be_bytes(&mut buf) + .expect("writing failed"); + let verificator = SimpleVerificator::default(); + let tc_cacher = TcCacher::default(); + let parse_result = + parse_buffer_for_ccsds_space_packets(&buf, &verificator, PARSER_ID, &tc_cacher); + assert!(parse_result.is_ok()); + let parse_result = parse_result.unwrap(); + assert_eq!(parse_result.packets_found, 1); } } diff --git a/satrs/src/encoding/cobs.rs b/satrs/src/encoding/cobs.rs index 6953c3b..f4377a2 100644 --- a/satrs/src/encoding/cobs.rs +++ b/satrs/src/encoding/cobs.rs @@ -1,4 +1,4 @@ -use crate::tmtc::ReceivesTcCore; +use crate::{tmtc::PacketSenderRaw, ComponentId}; use cobs::{decode_in_place, encode, max_encoding_length}; /// This function encodes the given packet with COBS and also wraps the encoded packet with @@ -55,11 +55,12 @@ pub fn encode_packet_with_cobs( /// future write operations will be written to the `next_write_idx` argument. /// /// The parser will write all packets which were decoded successfully to the given `tc_receiver`. 
-pub fn parse_buffer_for_cobs_encoded_packets( +pub fn parse_buffer_for_cobs_encoded_packets( buf: &mut [u8], - tc_receiver: &mut dyn ReceivesTcCore, + sender_id: ComponentId, + packet_sender: &(impl PacketSenderRaw + ?Sized), next_write_idx: &mut usize, -) -> Result { +) -> Result { let mut start_index_packet = 0; let mut start_found = false; let mut last_byte = false; @@ -78,8 +79,10 @@ pub fn parse_buffer_for_cobs_encoded_packets( let decode_result = decode_in_place(&mut buf[start_index_packet..i]); if let Ok(packet_len) = decode_result { packets_found += 1; - tc_receiver - .pass_tc(&buf[start_index_packet..start_index_packet + packet_len])?; + packet_sender.send_packet( + sender_id, + &buf[start_index_packet..start_index_packet + packet_len], + )?; } start_found = false; } else { @@ -100,32 +103,39 @@ pub fn parse_buffer_for_cobs_encoded_packets( pub(crate) mod tests { use cobs::encode; - use crate::encoding::tests::{encode_simple_packet, TcCacher, INVERTED_PACKET, SIMPLE_PACKET}; + use crate::{ + encoding::tests::{encode_simple_packet, TcCacher, INVERTED_PACKET, SIMPLE_PACKET}, + ComponentId, + }; use super::parse_buffer_for_cobs_encoded_packets; + const PARSER_ID: ComponentId = 0x05; + #[test] fn test_parsing_simple_packet() { - let mut test_sender = TcCacher::default(); + let test_sender = TcCacher::default(); let mut encoded_buf: [u8; 16] = [0; 16]; let mut current_idx = 0; encode_simple_packet(&mut encoded_buf, &mut current_idx); let mut next_read_idx = 0; let packets = parse_buffer_for_cobs_encoded_packets( &mut encoded_buf[0..current_idx], - &mut test_sender, + PARSER_ID, + &test_sender, &mut next_read_idx, ) .unwrap(); assert_eq!(packets, 1); - assert_eq!(test_sender.tc_queue.len(), 1); - let packet = &test_sender.tc_queue[0]; - assert_eq!(packet, &SIMPLE_PACKET); + let queue = test_sender.tc_queue.borrow(); + assert_eq!(queue.len(), 1); + let packet = &queue[0]; + assert_eq!(packet.packet, &SIMPLE_PACKET); } #[test] fn test_parsing_consecutive_packets() { - let mut test_sender = TcCacher::default(); + let test_sender = TcCacher::default(); let mut encoded_buf: [u8; 16] = [0; 16]; let mut current_idx = 0; encode_simple_packet(&mut encoded_buf, &mut current_idx); @@ -139,21 +149,23 @@ pub(crate) mod tests { let mut next_read_idx = 0; let packets = parse_buffer_for_cobs_encoded_packets( &mut encoded_buf[0..current_idx], - &mut test_sender, + PARSER_ID, + &test_sender, &mut next_read_idx, ) .unwrap(); assert_eq!(packets, 2); - assert_eq!(test_sender.tc_queue.len(), 2); - let packet0 = &test_sender.tc_queue[0]; - assert_eq!(packet0, &SIMPLE_PACKET); - let packet1 = &test_sender.tc_queue[1]; - assert_eq!(packet1, &INVERTED_PACKET); + let queue = test_sender.tc_queue.borrow(); + assert_eq!(queue.len(), 2); + let packet0 = &queue[0]; + assert_eq!(packet0.packet, &SIMPLE_PACKET); + let packet1 = &queue[1]; + assert_eq!(packet1.packet, &INVERTED_PACKET); } #[test] fn test_split_tail_packet_only() { - let mut test_sender = TcCacher::default(); + let test_sender = TcCacher::default(); let mut encoded_buf: [u8; 16] = [0; 16]; let mut current_idx = 0; encode_simple_packet(&mut encoded_buf, &mut current_idx); @@ -161,17 +173,19 @@ pub(crate) mod tests { let packets = parse_buffer_for_cobs_encoded_packets( // Cut off the sentinel byte at the end. 
&mut encoded_buf[0..current_idx - 1], - &mut test_sender, + PARSER_ID, + &test_sender, &mut next_read_idx, ) .unwrap(); assert_eq!(packets, 0); - assert_eq!(test_sender.tc_queue.len(), 0); + let queue = test_sender.tc_queue.borrow(); + assert_eq!(queue.len(), 0); assert_eq!(next_read_idx, 0); } fn generic_test_split_packet(cut_off: usize) { - let mut test_sender = TcCacher::default(); + let test_sender = TcCacher::default(); let mut encoded_buf: [u8; 16] = [0; 16]; assert!(cut_off < INVERTED_PACKET.len() + 1); let mut current_idx = 0; @@ -193,13 +207,15 @@ pub(crate) mod tests { let packets = parse_buffer_for_cobs_encoded_packets( // Cut off the sentinel byte at the end. &mut encoded_buf[0..current_idx - cut_off], - &mut test_sender, + PARSER_ID, + &test_sender, &mut next_write_idx, ) .unwrap(); assert_eq!(packets, 1); - assert_eq!(test_sender.tc_queue.len(), 1); - assert_eq!(&test_sender.tc_queue[0], &SIMPLE_PACKET); + let queue = test_sender.tc_queue.borrow(); + assert_eq!(queue.len(), 1); + assert_eq!(&queue[0].packet, &SIMPLE_PACKET); assert_eq!(next_write_idx, next_expected_write_idx); assert_eq!(encoded_buf[..next_expected_write_idx], expected_at_start); } @@ -221,7 +237,7 @@ pub(crate) mod tests { #[test] fn test_zero_at_end() { - let mut test_sender = TcCacher::default(); + let test_sender = TcCacher::default(); let mut encoded_buf: [u8; 16] = [0; 16]; let mut next_write_idx = 0; let mut current_idx = 0; @@ -233,31 +249,35 @@ pub(crate) mod tests { let packets = parse_buffer_for_cobs_encoded_packets( // Cut off the sentinel byte at the end. &mut encoded_buf[0..current_idx], - &mut test_sender, + PARSER_ID, + &test_sender, &mut next_write_idx, ) .unwrap(); assert_eq!(packets, 1); - assert_eq!(test_sender.tc_queue.len(), 1); - assert_eq!(&test_sender.tc_queue[0], &SIMPLE_PACKET); + let queue = test_sender.tc_queue.borrow_mut(); + assert_eq!(queue.len(), 1); + assert_eq!(&queue[0].packet, &SIMPLE_PACKET); assert_eq!(next_write_idx, 1); assert_eq!(encoded_buf[0], 0); } #[test] fn test_all_zeroes() { - let mut test_sender = TcCacher::default(); + let test_sender = TcCacher::default(); let mut all_zeroes: [u8; 5] = [0; 5]; let mut next_write_idx = 0; let packets = parse_buffer_for_cobs_encoded_packets( // Cut off the sentinel byte at the end. 
&mut all_zeroes, - &mut test_sender, + PARSER_ID, + &test_sender, &mut next_write_idx, ) .unwrap(); assert_eq!(packets, 0); - assert!(test_sender.tc_queue.is_empty()); + let queue = test_sender.tc_queue.borrow(); + assert!(queue.is_empty()); assert_eq!(next_write_idx, 0); } } diff --git a/satrs/src/encoding/mod.rs b/satrs/src/encoding/mod.rs index 94e3dee..99ce51d 100644 --- a/satrs/src/encoding/mod.rs +++ b/satrs/src/encoding/mod.rs @@ -6,9 +6,14 @@ pub use crate::encoding::cobs::{encode_packet_with_cobs, parse_buffer_for_cobs_e #[cfg(test)] pub(crate) mod tests { - use alloc::{collections::VecDeque, vec::Vec}; + use core::cell::RefCell; - use crate::tmtc::ReceivesTcCore; + use alloc::collections::VecDeque; + + use crate::{ + tmtc::{PacketAsVec, PacketSenderRaw}, + ComponentId, + }; use super::cobs::encode_packet_with_cobs; @@ -17,14 +22,15 @@ pub(crate) mod tests { #[derive(Default)] pub(crate) struct TcCacher { - pub(crate) tc_queue: VecDeque>, + pub(crate) tc_queue: RefCell>, } - impl ReceivesTcCore for TcCacher { + impl PacketSenderRaw for TcCacher { type Error = (); - fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error> { - self.tc_queue.push_back(tc_raw.to_vec()); + fn send_packet(&self, sender_id: ComponentId, tc_raw: &[u8]) -> Result<(), Self::Error> { + let mut mut_queue = self.tc_queue.borrow_mut(); + mut_queue.push_back(PacketAsVec::new(sender_id, tc_raw.to_vec())); Ok(()) } } diff --git a/satrs/src/hal/std/tcp_cobs_server.rs b/satrs/src/hal/std/tcp_cobs_server.rs index 7e7036f..a545016 100644 --- a/satrs/src/hal/std/tcp_cobs_server.rs +++ b/satrs/src/hal/std/tcp_cobs_server.rs @@ -1,19 +1,25 @@ +use alloc::sync::Arc; use alloc::vec; use cobs::encode; +use core::sync::atomic::AtomicBool; +use core::time::Duration; use delegate::delegate; +use mio::net::{TcpListener, TcpStream}; use std::io::Write; use std::net::SocketAddr; -use std::net::TcpListener; -use std::net::TcpStream; use std::vec::Vec; use crate::encoding::parse_buffer_for_cobs_encoded_packets; -use crate::tmtc::ReceivesTc; -use crate::tmtc::TmPacketSource; +use crate::tmtc::PacketSenderRaw; +use crate::tmtc::PacketSource; use crate::hal::std::tcp_server::{ ConnectionResult, ServerConfig, TcpTcParser, TcpTmSender, TcpTmtcError, TcpTmtcGenericServer, }; +use crate::ComponentId; + +use super::tcp_server::HandledConnectionHandler; +use super::tcp_server::HandledConnectionInfo; /// Concrete [TcpTcParser] implementation for the [TcpTmtcInCobsServer]. 
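For orientation, the reworked parser API above can also be exercised outside of the unit tests. The sketch below mirrors the `TcCacher` test helper from this patch: a caller-side packet sink implementing `PacketSenderRaw`, fed by `parse_buffer_for_cobs_encoded_packets`. The `DemoSink` type, the `DEMO_PARSER_ID` constant and the direct use of the `cobs` crate for framing are illustrative assumptions and not part of the patch itself.

```rust
use std::cell::RefCell;
use std::collections::VecDeque;

use satrs::encoding::parse_buffer_for_cobs_encoded_packets;
use satrs::tmtc::{PacketAsVec, PacketSenderRaw};
use satrs::ComponentId;

// Hypothetical component ID, only used for this sketch.
const DEMO_PARSER_ID: ComponentId = 0x42;

// Caller-side packet sink, modelled after the `TcCacher` test helper.
#[derive(Default)]
struct DemoSink {
    packets: RefCell<VecDeque<PacketAsVec>>,
}

impl PacketSenderRaw for DemoSink {
    type Error = ();

    fn send_packet(&self, sender_id: ComponentId, tc_raw: &[u8]) -> Result<(), Self::Error> {
        self.packets
            .borrow_mut()
            .push_back(PacketAsVec::new(sender_id, tc_raw.to_vec()));
        Ok(())
    }
}

fn main() {
    let payload = [1, 2, 3, 4];
    // COBS-encode the payload and wrap it with the 0 sentinel bytes the parser expects.
    let mut framed = [0u8; 16];
    framed[0] = 0;
    let encoded_len = cobs::encode(&payload, &mut framed[1..]);
    framed[1 + encoded_len] = 0;
    let frame_len = encoded_len + 2;

    let sink = DemoSink::default();
    let mut next_write_idx = 0;
    let num_packets = parse_buffer_for_cobs_encoded_packets(
        &mut framed[..frame_len],
        DEMO_PARSER_ID,
        &sink,
        &mut next_write_idx,
    )
    .expect("COBS parsing failed");
    assert_eq!(num_packets, 1);

    let queue = sink.packets.borrow();
    assert_eq!(queue[0].sender_id, DEMO_PARSER_ID);
    assert_eq!(queue[0].packet.as_slice(), &payload[..]);
}
```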
#[derive(Default)] @@ -23,14 +29,16 @@ impl TcpTcParser for CobsTcParser { fn handle_tc_parsing( &mut self, tc_buffer: &mut [u8], - tc_receiver: &mut (impl ReceivesTc + ?Sized), - conn_result: &mut ConnectionResult, + sender_id: ComponentId, + tc_sender: &(impl PacketSenderRaw + ?Sized), + conn_result: &mut HandledConnectionInfo, current_write_idx: usize, next_write_idx: &mut usize, ) -> Result<(), TcpTmtcError> { conn_result.num_received_tcs += parse_buffer_for_cobs_encoded_packets( &mut tc_buffer[..current_write_idx], - tc_receiver.upcast_mut(), + sender_id, + tc_sender, next_write_idx, ) .map_err(|e| TcpTmtcError::TcError(e))?; @@ -57,8 +65,8 @@ impl TcpTmSender for CobsTmSender { fn handle_tm_sending( &mut self, tm_buffer: &mut [u8], - tm_source: &mut (impl TmPacketSource + ?Sized), - conn_result: &mut ConnectionResult, + tm_source: &mut (impl PacketSource + ?Sized), + conn_result: &mut HandledConnectionInfo, stream: &mut TcpStream, ) -> Result> { let mut tm_was_sent = false; @@ -96,7 +104,7 @@ impl TcpTmSender for CobsTmSender { /// Telemetry will be encoded with the COBS protocol using [cobs::encode] in addition to being /// wrapped with the sentinel value 0 as the packet delimiter as well before being sent back to /// the client. Please note that the server will send as much data as it can retrieve from the -/// [TmPacketSource] in its current implementation. +/// [PacketSource] in its current implementation. /// /// Using a framing protocol like COBS imposes minimal restrictions on the type of TMTC data /// exchanged while also allowing packets with flexible size and a reliable way to reconstruct full @@ -110,21 +118,30 @@ impl TcpTmSender for CobsTmSender { /// The [TCP integration tests](https://egit.irs.uni-stuttgart.de/rust/sat-rs/src/branch/main/satrs/tests/tcp_servers.rs) /// test also serves as the example application for this module. pub struct TcpTmtcInCobsServer< + TmSource: PacketSource, + TcSender: PacketSenderRaw, + HandledConnection: HandledConnectionHandler, TmError, - TcError: 'static, - TmSource: TmPacketSource, - TcReceiver: ReceivesTc, + SendError: 'static, > { - generic_server: - TcpTmtcGenericServer, + pub generic_server: TcpTmtcGenericServer< + TmSource, + TcSender, + CobsTmSender, + CobsTcParser, + HandledConnection, + TmError, + SendError, + >, } impl< + TmSource: PacketSource, + TcReceiver: PacketSenderRaw, + HandledConnection: HandledConnectionHandler, TmError: 'static, TcError: 'static, - TmSource: TmPacketSource, - TcReceiver: ReceivesTc, - > TcpTmtcInCobsServer + > TcpTmtcInCobsServer { /// Create a new TCP TMTC server which exchanges TMTC packets encoded with /// [COBS protocol](https://en.wikipedia.org/wiki/Consistent_Overhead_Byte_Stuffing). @@ -140,6 +157,8 @@ impl< cfg: ServerConfig, tm_source: TmSource, tc_receiver: TcReceiver, + handled_connection: HandledConnection, + stop_signal: Option>, ) -> Result { Ok(Self { generic_server: TcpTmtcGenericServer::new( @@ -148,6 +167,8 @@ impl< CobsTmSender::new(cfg.tm_buffer_size), tm_source, tc_receiver, + handled_connection, + stop_signal, )?, }) } @@ -160,9 +181,10 @@ impl< /// useful if using the port number 0 for OS auto-assignment. pub fn local_addr(&self) -> std::io::Result; - /// Delegation to the [TcpTmtcGenericServer::handle_next_connection] call. - pub fn handle_next_connection( + /// Delegation to the [TcpTmtcGenericServer::handle_all_connections] call. 
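The new constructor argument list and the poll-based `handle_all_connections` call exercised by the tests further down can be combined into a minimal server setup roughly as follows. This is a sketch only: `NoTmSource`, `LogConnectionHandler`, `DEMO_SERVER_ID` and the port number are made up for illustration, and the generic error types are assumed to match the test code in this patch (`()` for the TM source, `GenericSendError` for the mpsc TC sender).

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{mpsc, Arc};
use std::time::Duration;

use satrs::hal::std::tcp_server::{
    ConnectionResult, HandledConnectionHandler, HandledConnectionInfo, ServerConfig,
    TcpTmtcInCobsServer,
};
use satrs::tmtc::{PacketAsVec, PacketSource};
use satrs::ComponentId;

// Hypothetical component ID, only used for this sketch.
const DEMO_SERVER_ID: ComponentId = 0x42;

// TM source which never has anything to send back.
struct NoTmSource;

impl PacketSource for NoTmSource {
    type Error = ();

    fn retrieve_packet(&mut self, _buffer: &mut [u8]) -> Result<usize, Self::Error> {
        Ok(0)
    }
}

// Trivial connection hook which just logs the per-connection statistics.
#[derive(Default)]
struct LogConnectionHandler;

impl HandledConnectionHandler for LogConnectionHandler {
    fn handled_connection(&mut self, info: HandledConnectionInfo) {
        println!(
            "handled connection from {}: {} TC(s) received, {} TM(s) sent",
            info.addr, info.num_received_tcs, info.num_sent_tms
        );
    }
}

fn main() {
    let addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 7301);
    // Received telecommands are simply forwarded through an mpsc channel.
    let (tc_sender, _tc_receiver) = mpsc::channel::<PacketAsVec>();
    let stop_signal = Arc::new(AtomicBool::new(false));
    let mut server = TcpTmtcInCobsServer::new(
        ServerConfig::new(DEMO_SERVER_ID, addr, Duration::from_millis(2), 1024, 1024),
        NoTmSource,
        tc_sender,
        LogConnectionHandler::default(),
        Some(stop_signal.clone()),
    )
    .expect("creating COBS TCP server failed");

    loop {
        // Poll for new client connections with a bounded timeout so the stop signal
        // is checked regularly even if no client ever connects.
        match server
            .handle_all_connections(Some(Duration::from_millis(400)))
            .expect("TCP server failure")
        {
            ConnectionResult::AcceptTimeout => (),
            ConnectionResult::HandledConnections(num) => println!("handled {num} connection(s)"),
        }
        if stop_signal.load(Ordering::Relaxed) {
            break;
        }
    }
}
```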
+ pub fn handle_all_connections( &mut self, + poll_duration: Option, ) -> Result>; } } @@ -177,21 +199,29 @@ mod tests { use std::{ io::{Read, Write}, net::{IpAddr, Ipv4Addr, SocketAddr, TcpStream}, + panic, + sync::mpsc, thread, + time::Instant, }; use crate::{ encoding::tests::{INVERTED_PACKET, SIMPLE_PACKET}, hal::std::tcp_server::{ - tests::{SyncTcCacher, SyncTmSource}, - ServerConfig, + tests::{ConnectionFinishedHandler, SyncTmSource}, + ConnectionResult, ServerConfig, }, + queue::GenericSendError, + tmtc::PacketAsVec, + ComponentId, }; use alloc::sync::Arc; use cobs::encode; use super::TcpTmtcInCobsServer; + const TCP_SERVER_ID: ComponentId = 0x05; + fn encode_simple_packet(encoded_buf: &mut [u8], current_idx: &mut usize) { encode_packet(&SIMPLE_PACKET, encoded_buf, current_idx) } @@ -210,13 +240,22 @@ mod tests { fn generic_tmtc_server( addr: &SocketAddr, - tc_receiver: SyncTcCacher, + tc_sender: mpsc::Sender, tm_source: SyncTmSource, - ) -> TcpTmtcInCobsServer<(), (), SyncTmSource, SyncTcCacher> { + stop_signal: Option>, + ) -> TcpTmtcInCobsServer< + SyncTmSource, + mpsc::Sender, + ConnectionFinishedHandler, + (), + GenericSendError, + > { TcpTmtcInCobsServer::new( - ServerConfig::new(*addr, Duration::from_millis(2), 1024, 1024), + ServerConfig::new(TCP_SERVER_ID, *addr, Duration::from_millis(2), 1024, 1024), tm_source, - tc_receiver, + tc_sender, + ConnectionFinishedHandler::default(), + stop_signal, ) .expect("TCP server generation failed") } @@ -224,9 +263,10 @@ mod tests { #[test] fn test_server_basic_no_tm() { let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0); - let tc_receiver = SyncTcCacher::default(); + let (tc_sender, tc_receiver) = mpsc::channel(); let tm_source = SyncTmSource::default(); - let mut tcp_server = generic_tmtc_server(&auto_port_addr, tc_receiver.clone(), tm_source); + let mut tcp_server = + generic_tmtc_server(&auto_port_addr, tc_sender.clone(), tm_source, None); let dest_addr = tcp_server .local_addr() .expect("retrieving dest addr failed"); @@ -234,13 +274,20 @@ mod tests { let set_if_done = conn_handled.clone(); // Call the connection handler in separate thread, does block. thread::spawn(move || { - let result = tcp_server.handle_next_connection(); + let result = tcp_server.handle_all_connections(Some(Duration::from_millis(100))); if result.is_err() { panic!("handling connection failed: {:?}", result.unwrap_err()); } - let conn_result = result.unwrap(); - assert_eq!(conn_result.num_received_tcs, 1); - assert_eq!(conn_result.num_sent_tms, 0); + let result = result.unwrap(); + assert_eq!(result, ConnectionResult::HandledConnections(1)); + tcp_server + .generic_server + .finished_handler + .check_last_connection(0, 1); + tcp_server + .generic_server + .finished_handler + .check_no_connections_left(); set_if_done.store(true, Ordering::Relaxed); }); // Send TC to server now. @@ -262,24 +309,20 @@ mod tests { panic!("connection was not handled properly"); } // Check that the packet was received and decoded successfully. 
- let mut tc_queue = tc_receiver - .tc_queue - .lock() - .expect("locking tc queue failed"); - assert_eq!(tc_queue.len(), 1); - assert_eq!(tc_queue.pop_front().unwrap(), &SIMPLE_PACKET); - drop(tc_queue); + let packet_with_sender = tc_receiver.recv().expect("receiving TC failed"); + assert_eq!(packet_with_sender.packet, &SIMPLE_PACKET); + matches!(tc_receiver.try_recv(), Err(mpsc::TryRecvError::Empty)); } #[test] fn test_server_basic_multi_tm_multi_tc() { let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0); - let tc_receiver = SyncTcCacher::default(); + let (tc_sender, tc_receiver) = mpsc::channel(); let mut tm_source = SyncTmSource::default(); tm_source.add_tm(&INVERTED_PACKET); tm_source.add_tm(&SIMPLE_PACKET); let mut tcp_server = - generic_tmtc_server(&auto_port_addr, tc_receiver.clone(), tm_source.clone()); + generic_tmtc_server(&auto_port_addr, tc_sender.clone(), tm_source.clone(), None); let dest_addr = tcp_server .local_addr() .expect("retrieving dest addr failed"); @@ -287,13 +330,20 @@ mod tests { let set_if_done = conn_handled.clone(); // Call the connection handler in separate thread, does block. thread::spawn(move || { - let result = tcp_server.handle_next_connection(); + let result = tcp_server.handle_all_connections(Some(Duration::from_millis(100))); if result.is_err() { panic!("handling connection failed: {:?}", result.unwrap_err()); } - let conn_result = result.unwrap(); - assert_eq!(conn_result.num_received_tcs, 2, "Not enough TCs received"); - assert_eq!(conn_result.num_sent_tms, 2, "Not enough TMs received"); + let result = result.unwrap(); + assert_eq!(result, ConnectionResult::HandledConnections(1)); + tcp_server + .generic_server + .finished_handler + .check_last_connection(2, 2); + tcp_server + .generic_server + .finished_handler + .check_no_connections_left(); set_if_done.store(true, Ordering::Relaxed); }); // Send TC to server now. @@ -367,13 +417,78 @@ mod tests { panic!("connection was not handled properly"); } // Check that the packet was received and decoded successfully. - let mut tc_queue = tc_receiver - .tc_queue - .lock() - .expect("locking tc queue failed"); - assert_eq!(tc_queue.len(), 2); - assert_eq!(tc_queue.pop_front().unwrap(), &SIMPLE_PACKET); - assert_eq!(tc_queue.pop_front().unwrap(), &INVERTED_PACKET); - drop(tc_queue); + let packet_with_sender = tc_receiver.recv().expect("receiving TC failed"); + let packet = &packet_with_sender.packet; + assert_eq!(packet, &SIMPLE_PACKET); + let packet_with_sender = tc_receiver.recv().expect("receiving TC failed"); + let packet = &packet_with_sender.packet; + assert_eq!(packet, &INVERTED_PACKET); + matches!(tc_receiver.try_recv(), Err(mpsc::TryRecvError::Empty)); + } + + #[test] + fn test_server_accept_timeout() { + let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0); + let (tc_sender, _tc_receiver) = mpsc::channel(); + let tm_source = SyncTmSource::default(); + let mut tcp_server = + generic_tmtc_server(&auto_port_addr, tc_sender.clone(), tm_source, None); + let start = Instant::now(); + // Call the connection handler in separate thread, does block. 
+ let thread_jh = thread::spawn(move || loop { + let result = tcp_server.handle_all_connections(Some(Duration::from_millis(20))); + if result.is_err() { + panic!("handling connection failed: {:?}", result.unwrap_err()); + } + let result = result.unwrap(); + if result == ConnectionResult::AcceptTimeout { + break; + } + if Instant::now() - start > Duration::from_millis(100) { + panic!("regular stop signal handling failed"); + } + }); + thread_jh.join().expect("thread join failed"); + } + + #[test] + fn test_server_stop_signal() { + let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0); + let (tc_sender, _tc_receiver) = mpsc::channel(); + let tm_source = SyncTmSource::default(); + let stop_signal = Arc::new(AtomicBool::new(false)); + let mut tcp_server = generic_tmtc_server( + &auto_port_addr, + tc_sender.clone(), + tm_source, + Some(stop_signal.clone()), + ); + let dest_addr = tcp_server + .local_addr() + .expect("retrieving dest addr failed"); + let stop_signal_copy = stop_signal.clone(); + let start = Instant::now(); + // Call the connection handler in separate thread, does block. + let thread_jh = thread::spawn(move || loop { + let result = tcp_server.handle_all_connections(Some(Duration::from_millis(20))); + if result.is_err() { + panic!("handling connection failed: {:?}", result.unwrap_err()); + } + let result = result.unwrap(); + if result == ConnectionResult::AcceptTimeout { + panic!("unexpected accept timeout"); + } + if stop_signal_copy.load(Ordering::Relaxed) { + break; + } + if Instant::now() - start > Duration::from_millis(100) { + panic!("regular stop signal handling failed"); + } + }); + // We connect but do not do anything. + let _stream = TcpStream::connect(dest_addr).expect("connecting to TCP server failed"); + stop_signal.store(true, Ordering::Relaxed); + // No need to drop the connection, the stop signal should take take of everything. + thread_jh.join().expect("thread join failed"); } } diff --git a/satrs/src/hal/std/tcp_server.rs b/satrs/src/hal/std/tcp_server.rs index deeb902..983702a 100644 --- a/satrs/src/hal/std/tcp_server.rs +++ b/satrs/src/hal/std/tcp_server.rs @@ -1,21 +1,23 @@ //! Generic TCP TMTC servers with different TMTC format flavours. +use alloc::sync::Arc; use alloc::vec; use alloc::vec::Vec; +use core::sync::atomic::AtomicBool; use core::time::Duration; +use mio::net::{TcpListener, TcpStream}; +use mio::{Events, Interest, Poll, Token}; use socket2::{Domain, Socket, Type}; -use std::io::Read; -use std::net::TcpListener; -use std::net::{SocketAddr, TcpStream}; +use std::io::{self, Read}; +use std::net::SocketAddr; use std::thread; -use crate::tmtc::{ReceivesTc, TmPacketSource}; +use crate::tmtc::{PacketSenderRaw, PacketSource}; +use crate::ComponentId; use thiserror::Error; // Re-export the TMTC in COBS server. pub use crate::hal::std::tcp_cobs_server::{CobsTcParser, CobsTmSender, TcpTmtcInCobsServer}; -pub use crate::hal::std::tcp_spacepackets_server::{ - SpacepacketsTcParser, SpacepacketsTmSender, TcpSpacepacketsServer, -}; +pub use crate::hal::std::tcp_spacepackets_server::{SpacepacketsTmSender, TcpSpacepacketsServer}; /// Configuration struct for the generic TCP TMTC server /// @@ -25,7 +27,7 @@ pub use crate::hal::std::tcp_spacepackets_server::{ /// * `inner_loop_delay` - If a client connects for a longer period, but no TC is received or /// no TM needs to be sent, the TCP server will delay for the specified amount of time /// to reduce CPU load. 
-/// * `tm_buffer_size` - Size of the TM buffer used to read TM from the [TmPacketSource] and +/// * `tm_buffer_size` - Size of the TM buffer used to read TM from the [PacketSource] and /// encoding of that data. This buffer should at large enough to hold the maximum expected /// TM size read from the packet source. /// * `tc_buffer_size` - Size of the TC buffer used to read encoded telecommands sent from @@ -41,6 +43,7 @@ pub use crate::hal::std::tcp_spacepackets_server::{ /// default. #[derive(Debug, Copy, Clone)] pub struct ServerConfig { + pub id: ComponentId, pub addr: SocketAddr, pub inner_loop_delay: Duration, pub tm_buffer_size: usize, @@ -51,12 +54,14 @@ pub struct ServerConfig { impl ServerConfig { pub fn new( + id: ComponentId, addr: SocketAddr, inner_loop_delay: Duration, tm_buffer_size: usize, tc_buffer_size: usize, ) -> Self { Self { + id, addr, inner_loop_delay, tm_buffer_size, @@ -79,37 +84,62 @@ pub enum TcpTmtcError { /// Result of one connection attempt. Contains the client address if a connection was established, /// in addition to the number of telecommands and telemetry packets exchanged. -#[derive(Debug, Default)] -pub struct ConnectionResult { - pub addr: Option, +#[derive(Debug, PartialEq, Eq)] +pub enum ConnectionResult { + AcceptTimeout, + HandledConnections(u32), +} + +#[derive(Debug)] +pub struct HandledConnectionInfo { + pub addr: SocketAddr, pub num_received_tcs: u32, pub num_sent_tms: u32, + /// The generic TCP server can be stopped using an external signal. If this happened, this + /// boolean will be set to true. + pub stopped_by_signal: bool, +} + +impl HandledConnectionInfo { + pub fn new(addr: SocketAddr) -> Self { + Self { + addr, + num_received_tcs: 0, + num_sent_tms: 0, + stopped_by_signal: false, + } + } +} + +pub trait HandledConnectionHandler { + fn handled_connection(&mut self, info: HandledConnectionInfo); } /// Generic parser abstraction for an object which can parse for telecommands given a raw -/// bytestream received from a TCP socket and send them to a generic [ReceivesTc] telecommand -/// receiver. This allows different encoding schemes for telecommands. -pub trait TcpTcParser { +/// bytestream received from a TCP socket and send them using a generic [PacketSenderRaw] +/// implementation. This allows different encoding schemes for telecommands. +pub trait TcpTcParser { fn handle_tc_parsing( &mut self, tc_buffer: &mut [u8], - tc_receiver: &mut (impl ReceivesTc + ?Sized), - conn_result: &mut ConnectionResult, + sender_id: ComponentId, + tc_sender: &(impl PacketSenderRaw + ?Sized), + conn_result: &mut HandledConnectionInfo, current_write_idx: usize, next_write_idx: &mut usize, - ) -> Result<(), TcpTmtcError>; + ) -> Result<(), TcpTmtcError>; } /// Generic sender abstraction for an object which can pull telemetry from a given TM source -/// using a [TmPacketSource] and then send them back to a client using a given [TcpStream]. +/// using a [PacketSource] and then send them back to a client using a given [TcpStream]. /// The concrete implementation can also perform any encoding steps which are necessary before /// sending back the data to a client. pub trait TcpTmSender { fn handle_tm_sending( &mut self, tm_buffer: &mut [u8], - tm_source: &mut (impl TmPacketSource + ?Sized), - conn_result: &mut ConnectionResult, + tm_source: &mut (impl PacketSource + ?Sized), + conn_result: &mut HandledConnectionInfo, stream: &mut TcpStream, ) -> Result>; } @@ -121,9 +151,9 @@ pub trait TcpTmSender { /// through the following 4 core abstractions: /// /// 1. 
[TcpTcParser] to parse for telecommands from the raw bytestream received from a client. -/// 2. Parsed telecommands will be sent to the [ReceivesTc] telecommand receiver. +/// 2. Parsed telecommands will be sent using the [PacketSenderRaw] object. /// 3. [TcpTmSender] to send telemetry pulled from a TM source back to the client. -/// 4. [TmPacketSource] as a generic TM source used by the [TcpTmSender]. +/// 4. [PacketSource] as a generic TM source used by the [TcpTmSender]. /// /// It is possible to specify custom abstractions to build a dedicated TCP TMTC server without /// having to re-implement common logic. @@ -131,32 +161,49 @@ pub trait TcpTmSender { /// Currently, this framework offers the following concrete implementations: /// /// 1. [TcpTmtcInCobsServer] to exchange TMTC wrapped inside the COBS framing protocol. +/// 2. [TcpSpacepacketsServer] to exchange space packets via TCP. pub struct TcpTmtcGenericServer< + TmSource: PacketSource, + TcSender: PacketSenderRaw, + TmSender: TcpTmSender, + TcParser: TcpTcParser, + HandledConnection: HandledConnectionHandler, TmError, - TcError, - TmSource: TmPacketSource, - TcReceiver: ReceivesTc, - TmSender: TcpTmSender, - TcParser: TcpTcParser, + TcSendError, > { + pub id: ComponentId, + pub finished_handler: HandledConnection, pub(crate) listener: TcpListener, pub(crate) inner_loop_delay: Duration, pub(crate) tm_source: TmSource, pub(crate) tm_buffer: Vec, - pub(crate) tc_receiver: TcReceiver, + pub(crate) tc_sender: TcSender, pub(crate) tc_buffer: Vec, - tc_handler: TcParser, - tm_handler: TmSender, + poll: Poll, + events: Events, + pub tc_handler: TcParser, + pub tm_handler: TmSender, + stop_signal: Option>, } impl< + TmSource: PacketSource, + TcSender: PacketSenderRaw, + TmSender: TcpTmSender, + TcParser: TcpTcParser, + HandledConnection: HandledConnectionHandler, TmError: 'static, - TcError: 'static, - TmSource: TmPacketSource, - TcReceiver: ReceivesTc, - TmSender: TcpTmSender, - TcParser: TcpTcParser, - > TcpTmtcGenericServer + TcSendError: 'static, + > + TcpTmtcGenericServer< + TmSource, + TcSender, + TmSender, + TcParser, + HandledConnection, + TmError, + TcSendError, + > { /// Create a new generic TMTC server instance. /// @@ -168,32 +215,58 @@ impl< /// * `tm_sender` - Sends back telemetry to the client using the specified TM source. /// * `tm_source` - Generic TM source used by the server to pull telemetry packets which are /// then sent back to the client. - /// * `tc_receiver` - Any received telecommand which was decoded successfully will be forwarded - /// to this TC receiver. + /// * `tc_sender` - Any received telecommand which was decoded successfully will be forwarded + /// using this TC sender. + /// * `stop_signal` - Can be used to stop the server even if a connection is ongoing. pub fn new( cfg: ServerConfig, tc_parser: TcParser, tm_sender: TmSender, tm_source: TmSource, - tc_receiver: TcReceiver, + tc_receiver: TcSender, + finished_handler: HandledConnection, + stop_signal: Option>, ) -> Result { // Create a TCP listener bound to two addresses. let socket = Socket::new(Domain::IPV4, Type::STREAM, None)?; + socket.set_reuse_address(cfg.reuse_addr)?; #[cfg(unix)] socket.set_reuse_port(cfg.reuse_port)?; + // MIO does not do this for us. We want the accept calls to be non-blocking. + socket.set_nonblocking(true)?; let addr = (cfg.addr).into(); socket.bind(&addr)?; socket.listen(128)?; + + // Create a poll instance. + let poll = Poll::new()?; + // Create storage for events. 
+ let events = Events::with_capacity(32); + let listener: std::net::TcpListener = socket.into(); + let mut mio_listener = TcpListener::from_std(listener); + + // Start listening for incoming connections. + poll.registry().register( + &mut mio_listener, + Token(0), + Interest::READABLE | Interest::WRITABLE, + )?; + Ok(Self { + id: cfg.id, tc_handler: tc_parser, tm_handler: tm_sender, - listener: socket.into(), + poll, + events, + listener: mio_listener, inner_loop_delay: cfg.inner_loop_delay, tm_source, tm_buffer: vec![0; cfg.tm_buffer_size], - tc_receiver, + tc_sender: tc_receiver, tc_buffer: vec![0; cfg.tc_buffer_size], + stop_signal, + finished_handler, }) } @@ -208,11 +281,11 @@ impl< self.listener.local_addr() } - /// This call is used to handle the next connection to a client. Right now, it performs + /// This call is used to handle all connections from clients. Right now, it performs /// the following steps: /// - /// 1. It calls the [std::net::TcpListener::accept] method internally using the blocking API - /// until a client connects. + /// 1. It calls the [std::net::TcpListener::accept] method until a client connects. An optional + /// timeout can be specified for non-blocking acceptance. /// 2. It reads all the telecommands from the client and parses all received data using the /// user specified [TcpTcParser]. /// 3. After reading and parsing all telecommands, it sends back all telemetry using the /// @@ -221,15 +294,66 @@ impl< /// The server will delay for a user-specified period if the client connects to the server /// for prolonged periods and there is no traffic for the server. This is the case if the /// client does not send any telecommands and no telemetry needs to be sent back to the client. - pub fn handle_next_connection( + pub fn handle_all_connections( &mut self, - ) -> Result> { - let mut connection_result = ConnectionResult::default(); + poll_timeout: Option, + ) -> Result> { + let mut handled_connections = 0; + // Poll Mio for events. + self.poll.poll(&mut self.events, poll_timeout)?; + let mut acceptable_connection = false; + // Process each event. + for event in self.events.iter() { + if event.token() == Token(0) { + acceptable_connection = true; + } else { + // Should never happen. + panic!("unexpected TCP event token"); + } + } + // I'd love to do this in the loop above, but there are issues with multiple borrows. + if acceptable_connection { + // There might be multiple connections available. Accept until all of them have + // been handled.
+ loop { + match self.listener.accept() { + Ok((stream, addr)) => { + if let Err(e) = self.handle_accepted_connection(stream, addr) { + self.reregister_poll_interest()?; + return Err(e); + } + handled_connections += 1; + } + Err(ref err) if err.kind() == io::ErrorKind::WouldBlock => break, + Err(err) => { + self.reregister_poll_interest()?; + return Err(TcpTmtcError::Io(err)); + } + } + } + } + if handled_connections > 0 { + return Ok(ConnectionResult::HandledConnections(handled_connections)); + } + Ok(ConnectionResult::AcceptTimeout) + } + + fn reregister_poll_interest(&mut self) -> io::Result<()> { + self.poll.registry().reregister( + &mut self.listener, + Token(0), + Interest::READABLE | Interest::WRITABLE, + ) + } + + fn handle_accepted_connection( + &mut self, + mut stream: TcpStream, + addr: SocketAddr, + ) -> Result<(), TcpTmtcError> { let mut current_write_idx; let mut next_write_idx = 0; - let (mut stream, addr) = self.listener.accept()?; - stream.set_nonblocking(true)?; - connection_result.addr = Some(addr); + let mut connection_result = HandledConnectionInfo::new(addr); current_write_idx = next_write_idx; loop { let read_result = stream.read(&mut self.tc_buffer[current_write_idx..]); @@ -240,7 +364,8 @@ impl< if current_write_idx > 0 { self.tc_handler.handle_tc_parsing( &mut self.tc_buffer, - &mut self.tc_receiver, + self.id, + &self.tc_sender, &mut connection_result, current_write_idx, &mut next_write_idx, @@ -254,7 +379,8 @@ impl< if current_write_idx == self.tc_buffer.capacity() { self.tc_handler.handle_tc_parsing( &mut self.tc_buffer, - &mut self.tc_receiver, + self.id, + &self.tc_sender, &mut connection_result, current_write_idx, &mut next_write_idx, @@ -268,7 +394,8 @@ impl< std::io::ErrorKind::WouldBlock | std::io::ErrorKind::TimedOut => { self.tc_handler.handle_tc_parsing( &mut self.tc_buffer, - &mut self.tc_receiver, + self.id, + &self.tc_sender, &mut connection_result, current_write_idx, &mut next_write_idx, @@ -284,6 +411,18 @@ impl< // No TC read, no TM was sent, but the client has not disconnected. // Perform an inner delay to avoid burning CPU time. thread::sleep(self.inner_loop_delay); + // Optional stop signal handling. 
+ if self.stop_signal.is_some() + && self + .stop_signal + .as_ref() + .unwrap() + .load(std::sync::atomic::Ordering::Relaxed) + { + connection_result.stopped_by_signal = true; + self.finished_handler.handled_connection(connection_result); + return Ok(()); + } } } _ => { @@ -298,7 +437,8 @@ impl< &mut connection_result, &mut stream, )?; - Ok(connection_result) + self.finished_handler.handled_connection(connection_result); + Ok(()) } } @@ -308,21 +448,9 @@ pub(crate) mod tests { use alloc::{collections::VecDeque, sync::Arc, vec::Vec}; - use crate::tmtc::{ReceivesTcCore, TmPacketSourceCore}; + use crate::tmtc::PacketSource; - #[derive(Default, Clone)] - pub(crate) struct SyncTcCacher { - pub(crate) tc_queue: Arc>>>, - } - impl ReceivesTcCore for SyncTcCacher { - type Error = (); - - fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error> { - let mut tc_queue = self.tc_queue.lock().expect("tc forwarder failed"); - tc_queue.push_back(tc_raw.to_vec()); - Ok(()) - } - } + use super::*; #[derive(Default, Clone)] pub(crate) struct SyncTmSource { @@ -336,7 +464,7 @@ pub(crate) mod tests { } } - impl TmPacketSourceCore for SyncTmSource { + impl PacketSource for SyncTmSource { type Error = (); fn retrieve_packet(&mut self, buffer: &mut [u8]) -> Result { @@ -356,4 +484,30 @@ pub(crate) mod tests { Ok(0) } } + + #[derive(Default)] + pub struct ConnectionFinishedHandler { + connection_info: VecDeque, + } + + impl HandledConnectionHandler for ConnectionFinishedHandler { + fn handled_connection(&mut self, info: HandledConnectionInfo) { + self.connection_info.push_back(info); + } + } + + impl ConnectionFinishedHandler { + pub fn check_last_connection(&mut self, num_tms: u32, num_tcs: u32) { + let last_conn_result = self + .connection_info + .pop_back() + .expect("no connection info available"); + assert_eq!(last_conn_result.num_received_tcs, num_tcs); + assert_eq!(last_conn_result.num_sent_tms, num_tms); + } + + pub fn check_no_connections_left(&self) { + assert!(self.connection_info.is_empty()); + } + } } diff --git a/satrs/src/hal/std/tcp_spacepackets_server.rs b/satrs/src/hal/std/tcp_spacepackets_server.rs index a33b137..854cc7c 100644 --- a/satrs/src/hal/std/tcp_spacepackets_server.rs +++ b/satrs/src/hal/std/tcp_spacepackets_server.rs @@ -1,49 +1,44 @@ +use alloc::sync::Arc; +use core::{sync::atomic::AtomicBool, time::Duration}; use delegate::delegate; -use std::{ - io::Write, - net::{SocketAddr, TcpListener, TcpStream}, -}; +use mio::net::{TcpListener, TcpStream}; +use std::{io::Write, net::SocketAddr}; use crate::{ - encoding::parse_buffer_for_ccsds_space_packets, - tmtc::{ReceivesTc, TmPacketSource}, - ValidatorU16Id, + encoding::{ccsds::SpacePacketValidator, parse_buffer_for_ccsds_space_packets}, + tmtc::{PacketSenderRaw, PacketSource}, + ComponentId, }; use super::tcp_server::{ - ConnectionResult, ServerConfig, TcpTcParser, TcpTmSender, TcpTmtcError, TcpTmtcGenericServer, + ConnectionResult, HandledConnectionHandler, HandledConnectionInfo, ServerConfig, TcpTcParser, + TcpTmSender, TcpTmtcError, TcpTmtcGenericServer, }; -/// Concrete [TcpTcParser] implementation for the [TcpSpacepacketsServer]. 
-pub struct SpacepacketsTcParser { - packet_id_lookup: PacketIdChecker, -} - -impl SpacepacketsTcParser { - pub fn new(packet_id_lookup: PacketIdChecker) -> Self { - Self { packet_id_lookup } - } -} - -impl TcpTcParser - for SpacepacketsTcParser -{ +impl TcpTcParser for T { fn handle_tc_parsing( &mut self, tc_buffer: &mut [u8], - tc_receiver: &mut (impl ReceivesTc + ?Sized), - conn_result: &mut ConnectionResult, + sender_id: ComponentId, + tc_sender: &(impl PacketSenderRaw + ?Sized), + conn_result: &mut HandledConnectionInfo, current_write_idx: usize, next_write_idx: &mut usize, ) -> Result<(), TcpTmtcError> { // Reader vec full, need to parse for packets. - conn_result.num_received_tcs += parse_buffer_for_ccsds_space_packets( - &mut tc_buffer[..current_write_idx], - &self.packet_id_lookup, - tc_receiver.upcast_mut(), - next_write_idx, + let parse_result = parse_buffer_for_ccsds_space_packets( + &tc_buffer[..current_write_idx], + self, + sender_id, + tc_sender, ) .map_err(|e| TcpTmtcError::TcError(e))?; + if let Some(broken_tail_start) = parse_result.incomplete_tail_start { + // Copy broken tail to front of buffer. + tc_buffer.copy_within(broken_tail_start..current_write_idx, 0); + *next_write_idx = current_write_idx - broken_tail_start; + } + conn_result.num_received_tcs += parse_result.packets_found; Ok(()) } } @@ -56,8 +51,8 @@ impl TcpTmSender for SpacepacketsTmSender { fn handle_tm_sending( &mut self, tm_buffer: &mut [u8], - tm_source: &mut (impl TmPacketSource + ?Sized), - conn_result: &mut ConnectionResult, + tm_source: &mut (impl PacketSource + ?Sized), + conn_result: &mut HandledConnectionInfo, stream: &mut TcpStream, ) -> Result> { let mut tm_was_sent = false; @@ -84,37 +79,41 @@ impl TcpTmSender for SpacepacketsTmSender { /// /// This serves only works if /// [CCSDS 133.0-B-2 space packets](https://public.ccsds.org/Pubs/133x0b2e1.pdf) are the only -/// packet type being exchanged. It uses the CCSDS [spacepackets::PacketId] as the packet delimiter -/// and start marker when parsing for packets. The user specifies a set of expected -/// [spacepackets::PacketId]s as part of the server configuration for that purpose. +/// packet type being exchanged. It uses the CCSDS space packet header [spacepackets::SpHeader] and +/// a user specified [SpacePacketValidator] to determine the space packets relevant for further +/// processing. /// /// ## Example +/// /// The [TCP server integration tests](https://egit.irs.uni-stuttgart.de/rust/sat-rs/src/branch/main/satrs/tests/tcp_servers.rs) /// also serves as the example application for this module. 
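A minimal validator in the spirit of the `SimpleValidator` used by the tests below could look like the sketch that follows. `AllowListValidator` and the APID value are illustrative names, and `std::collections::HashSet` is used instead of the `hashbrown` set from the test code.

```rust
use std::collections::HashSet;

use satrs::encoding::ccsds::{SpValidity, SpacePacketValidator};
use spacepackets::{CcsdsPacket, PacketId, SpHeader};

// Only packets whose packet ID is in the allow list are forwarded; everything else is skipped.
#[derive(Default)]
struct AllowListValidator {
    allowed: HashSet<PacketId>,
}

impl SpacePacketValidator for AllowListValidator {
    fn validate(&self, sp_header: &SpHeader, _raw_buf: &[u8]) -> SpValidity {
        if self.allowed.contains(&sp_header.packet_id()) {
            return SpValidity::Valid;
        }
        // Skip everything else. A stricter validator could flag broken packets instead,
        // depending on the available SpValidity variants.
        SpValidity::Skip
    }
}

fn main() {
    let mut validator = AllowListValidator::default();
    // Hypothetical APID, only used for this sketch.
    validator.allowed.insert(PacketId::new_for_tc(true, 0x02));
    // `validator` can now be passed to `TcpSpacepacketsServer::new` together with a TM source,
    // a TC sender, a connection handler and an optional stop signal.
}
```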
pub struct TcpSpacepacketsServer< + TmSource: PacketSource, + TcSender: PacketSenderRaw, + Validator: SpacePacketValidator, + HandledConnection: HandledConnectionHandler, TmError, - TcError: 'static, - TmSource: TmPacketSource, - TcReceiver: ReceivesTc, - PacketIdChecker: ValidatorU16Id, + SendError: 'static, > { - generic_server: TcpTmtcGenericServer< - TmError, - TcError, + pub generic_server: TcpTmtcGenericServer< TmSource, - TcReceiver, + TcSender, SpacepacketsTmSender, - SpacepacketsTcParser, + Validator, + HandledConnection, + TmError, + SendError, >, } impl< + TmSource: PacketSource, + TcSender: PacketSenderRaw, + Validator: SpacePacketValidator, + HandledConnection: HandledConnectionHandler, TmError: 'static, TcError: 'static, - TmSource: TmPacketSource, - TcReceiver: ReceivesTc, - PacketIdChecker: ValidatorU16Id, - > TcpSpacepacketsServer + > TcpSpacepacketsServer { /// /// ## Parameter @@ -122,23 +121,31 @@ impl< /// * `cfg` - Configuration of the server. /// * `tm_source` - Generic TM source used by the server to pull telemetry packets which are /// then sent back to the client. - /// * `tc_receiver` - Any received telecommands which were decoded successfully will be - /// forwarded to this TC receiver. - /// * `packet_id_lookup` - This lookup table contains the relevant packets IDs for packet - /// parsing. This mechanism is used to have a start marker for finding CCSDS packets. + /// * `tc_sender` - Any received telecommands which were decoded successfully will be + /// forwarded using this [PacketSenderRaw]. + /// * `validator` - Used to determine the space packets relevant for further processing and + /// to detect broken space packets. + /// * `handled_connection_hook` - Called to notify the user about a succesfully handled + /// connection. + /// * `stop_signal` - Can be used to shut down the TCP server even for longer running + /// connections. pub fn new( cfg: ServerConfig, tm_source: TmSource, - tc_receiver: TcReceiver, - packet_id_checker: PacketIdChecker, + tc_sender: TcSender, + validator: Validator, + handled_connection_hook: HandledConnection, + stop_signal: Option>, ) -> Result { Ok(Self { generic_server: TcpTmtcGenericServer::new( cfg, - SpacepacketsTcParser::new(packet_id_checker), + validator, SpacepacketsTmSender::default(), tm_source, - tc_receiver, + tc_sender, + handled_connection_hook, + stop_signal, )?, }) } @@ -151,9 +158,10 @@ impl< /// useful if using the port number 0 for OS auto-assignment. pub fn local_addr(&self) -> std::io::Result; - /// Delegation to the [TcpTmtcGenericServer::handle_next_connection] call. - pub fn handle_next_connection( + /// Delegation to the [TcpTmtcGenericServer::handle_all_connections] call. 
+ pub fn handle_all_connections( &mut self, + poll_timeout: Option ) -> Result>; } } @@ -170,6 +178,7 @@ mod tests { use std::{ io::{Read, Write}, net::{IpAddr, Ipv4Addr, SocketAddr, TcpStream}, + sync::mpsc, thread, }; @@ -177,32 +186,62 @@ mod tests { use hashbrown::HashSet; use spacepackets::{ ecss::{tc::PusTcCreator, WritablePusPacket}, - PacketId, SpHeader, + CcsdsPacket, PacketId, SpHeader, }; - use crate::hal::std::tcp_server::{ - tests::{SyncTcCacher, SyncTmSource}, - ServerConfig, + use crate::{ + encoding::ccsds::{SpValidity, SpacePacketValidator}, + hal::std::tcp_server::{ + tests::{ConnectionFinishedHandler, SyncTmSource}, + ConnectionResult, ServerConfig, + }, + queue::GenericSendError, + tmtc::PacketAsVec, + ComponentId, }; use super::TcpSpacepacketsServer; + const TCP_SERVER_ID: ComponentId = 0x05; const TEST_APID_0: u16 = 0x02; const TEST_PACKET_ID_0: PacketId = PacketId::new_for_tc(true, TEST_APID_0); const TEST_APID_1: u16 = 0x10; const TEST_PACKET_ID_1: PacketId = PacketId::new_for_tc(true, TEST_APID_1); + #[derive(Default)] + pub struct SimpleValidator(pub HashSet); + + impl SpacePacketValidator for SimpleValidator { + fn validate(&self, sp_header: &SpHeader, _raw_buf: &[u8]) -> SpValidity { + if self.0.contains(&sp_header.packet_id()) { + return SpValidity::Valid; + } + // Simple case: Assume that the interface always contains valid space packets. + SpValidity::Skip + } + } + fn generic_tmtc_server( addr: &SocketAddr, - tc_receiver: SyncTcCacher, + tc_sender: mpsc::Sender, tm_source: SyncTmSource, - packet_id_lookup: HashSet, - ) -> TcpSpacepacketsServer<(), (), SyncTmSource, SyncTcCacher, HashSet> { + validator: SimpleValidator, + stop_signal: Option>, + ) -> TcpSpacepacketsServer< + SyncTmSource, + mpsc::Sender, + SimpleValidator, + ConnectionFinishedHandler, + (), + GenericSendError, + > { TcpSpacepacketsServer::new( - ServerConfig::new(*addr, Duration::from_millis(2), 1024, 1024), + ServerConfig::new(TCP_SERVER_ID, *addr, Duration::from_millis(2), 1024, 1024), tm_source, - tc_receiver, - packet_id_lookup, + tc_sender, + validator, + ConnectionFinishedHandler::default(), + stop_signal, ) .expect("TCP server generation failed") } @@ -210,15 +249,16 @@ mod tests { #[test] fn test_basic_tc_only() { let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0); - let tc_receiver = SyncTcCacher::default(); + let (tc_sender, tc_receiver) = mpsc::channel(); let tm_source = SyncTmSource::default(); - let mut packet_id_lookup = HashSet::new(); - packet_id_lookup.insert(TEST_PACKET_ID_0); + let mut validator = SimpleValidator::default(); + validator.0.insert(TEST_PACKET_ID_0); let mut tcp_server = generic_tmtc_server( &auto_port_addr, - tc_receiver.clone(), + tc_sender.clone(), tm_source, - packet_id_lookup, + validator, + None, ); let dest_addr = tcp_server .local_addr() @@ -227,13 +267,20 @@ mod tests { let set_if_done = conn_handled.clone(); // Call the connection handler in separate thread, does block. 
thread::spawn(move || { - let result = tcp_server.handle_next_connection(); + let result = tcp_server.handle_all_connections(Some(Duration::from_millis(100))); if result.is_err() { panic!("handling connection failed: {:?}", result.unwrap_err()); } let conn_result = result.unwrap(); - assert_eq!(conn_result.num_received_tcs, 1); - assert_eq!(conn_result.num_sent_tms, 0); + matches!(conn_result, ConnectionResult::HandledConnections(1)); + tcp_server + .generic_server + .finished_handler + .check_last_connection(0, 1); + tcp_server + .generic_server + .finished_handler + .check_no_connections_left(); set_if_done.store(true, Ordering::Relaxed); }); let ping_tc = @@ -254,16 +301,15 @@ mod tests { if !conn_handled.load(Ordering::Relaxed) { panic!("connection was not handled properly"); } - // Check that TC has arrived. - let mut tc_queue = tc_receiver.tc_queue.lock().unwrap(); - assert_eq!(tc_queue.len(), 1); - assert_eq!(tc_queue.pop_front().unwrap(), tc_0); + let packet = tc_receiver.try_recv().expect("receiving TC failed"); + assert_eq!(packet.packet, tc_0); + matches!(tc_receiver.try_recv(), Err(mpsc::TryRecvError::Empty)); } #[test] fn test_multi_tc_multi_tm() { let auto_port_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0); - let tc_receiver = SyncTcCacher::default(); + let (tc_sender, tc_receiver) = mpsc::channel(); let mut tm_source = SyncTmSource::default(); // Add telemetry @@ -280,14 +326,15 @@ mod tests { tm_source.add_tm(&tm_1); // Set up server - let mut packet_id_lookup = HashSet::new(); - packet_id_lookup.insert(TEST_PACKET_ID_0); - packet_id_lookup.insert(TEST_PACKET_ID_1); + let mut validator = SimpleValidator::default(); + validator.0.insert(TEST_PACKET_ID_0); + validator.0.insert(TEST_PACKET_ID_1); let mut tcp_server = generic_tmtc_server( &auto_port_addr, - tc_receiver.clone(), + tc_sender.clone(), tm_source, - packet_id_lookup, + validator, + None, ); let dest_addr = tcp_server .local_addr() @@ -297,16 +344,20 @@ mod tests { // Call the connection handler in separate thread, does block. thread::spawn(move || { - let result = tcp_server.handle_next_connection(); + let result = tcp_server.handle_all_connections(Some(Duration::from_millis(100))); if result.is_err() { panic!("handling connection failed: {:?}", result.unwrap_err()); } let conn_result = result.unwrap(); - assert_eq!( - conn_result.num_received_tcs, 2, - "wrong number of received TCs" - ); - assert_eq!(conn_result.num_sent_tms, 2, "wrong number of sent TMs"); + matches!(conn_result, ConnectionResult::HandledConnections(1)); + tcp_server + .generic_server + .finished_handler + .check_last_connection(2, 2); + tcp_server + .generic_server + .finished_handler + .check_no_connections_left(); set_if_done.store(true, Ordering::Relaxed); }); let mut stream = TcpStream::connect(dest_addr).expect("connecting to TCP server failed"); @@ -357,9 +408,10 @@ mod tests { panic!("connection was not handled properly"); } // Check that TC has arrived. 
- let mut tc_queue = tc_receiver.tc_queue.lock().unwrap(); - assert_eq!(tc_queue.len(), 2); - assert_eq!(tc_queue.pop_front().unwrap(), tc_0); - assert_eq!(tc_queue.pop_front().unwrap(), tc_1); + let packet_0 = tc_receiver.try_recv().expect("receiving TC failed"); + assert_eq!(packet_0.packet, tc_0); + let packet_1 = tc_receiver.try_recv().expect("receiving TC failed"); + assert_eq!(packet_1.packet, tc_1); + matches!(tc_receiver.try_recv(), Err(mpsc::TryRecvError::Empty)); } } diff --git a/satrs/src/hal/std/udp_server.rs b/satrs/src/hal/std/udp_server.rs index 8f77c2b..4211c77 100644 --- a/satrs/src/hal/std/udp_server.rs +++ b/satrs/src/hal/std/udp_server.rs @@ -1,7 +1,8 @@ //! Generic UDP TC server. -use crate::tmtc::{ReceivesTc, ReceivesTcCore}; -use std::boxed::Box; -use std::io::{Error, ErrorKind}; +use crate::tmtc::PacketSenderRaw; +use crate::ComponentId; +use core::fmt::Debug; +use std::io::{self, ErrorKind}; use std::net::{SocketAddr, ToSocketAddrs, UdpSocket}; use std::vec; use std::vec::Vec; @@ -11,45 +12,46 @@ use std::vec::Vec; /// /// It caches all received telecomands into a vector. The maximum expected telecommand size should /// be declared upfront. This avoids dynamic allocation during run-time. The user can specify a TC -/// receiver in form of a special trait object which implements [ReceivesTc]. Please note that the -/// receiver should copy out the received data if it the data is required past the -/// [ReceivesTcCore::pass_tc] call. +/// sender in form of a special trait object which implements [PacketSenderRaw]. For example, this +/// can be used to send the telecommands to a centralized TC source component for further +/// processing and routing. /// /// # Examples /// /// ``` /// use std::net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket}; +/// use std::sync::mpsc; /// use spacepackets::ecss::WritablePusPacket; /// use satrs::hal::std::udp_server::UdpTcServer; -/// use satrs::tmtc::{ReceivesTc, ReceivesTcCore}; +/// use satrs::ComponentId; +/// use satrs::tmtc::PacketSenderRaw; /// use spacepackets::SpHeader; /// use spacepackets::ecss::tc::PusTcCreator; /// -/// #[derive (Default)] -/// struct PingReceiver {} -/// impl ReceivesTcCore for PingReceiver { -/// type Error = (); -/// fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error> { -/// assert_eq!(tc_raw.len(), 13); -/// Ok(()) -/// } -/// } +/// const UDP_SERVER_ID: ComponentId = 0x05; /// -/// let mut buf = [0; 32]; +/// let (packet_sender, packet_receiver) = mpsc::channel(); /// let dest_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 7777); -/// let ping_receiver = PingReceiver::default(); -/// let mut udp_tc_server = UdpTcServer::new(dest_addr, 2048, Box::new(ping_receiver)) +/// let mut udp_tc_server = UdpTcServer::new(UDP_SERVER_ID, dest_addr, 2048, packet_sender) /// .expect("Creating UDP TMTC server failed"); /// let sph = SpHeader::new_from_apid(0x02); /// let pus_tc = PusTcCreator::new_simple(sph, 17, 1, &[], true); -/// let len = pus_tc -/// .write_to_bytes(&mut buf) -/// .expect("Error writing PUS TC packet"); -/// assert_eq!(len, 13); -/// let client = UdpSocket::bind("127.0.0.1:7778").expect("Connecting to UDP server failed"); +/// // Can not fail. +/// let ping_tc_raw = pus_tc.to_vec().unwrap(); +/// +/// // Now create a UDP client and send the ping telecommand to the server. 
+/// let client = UdpSocket::bind("127.0.0.1:0").expect("creating UDP client failed"); /// client -/// .send_to(&buf[0..len], dest_addr) +/// .send_to(&ping_tc_raw, dest_addr) /// .expect("Error sending PUS TC via UDP"); +/// let recv_result = udp_tc_server.try_recv_tc(); +/// assert!(recv_result.is_ok()); +/// // The packet is received by the UDP TC server and sent via the mpsc channel. +/// let sent_packet_with_sender = packet_receiver.try_recv().expect("expected telecommand"); +/// assert_eq!(sent_packet_with_sender.packet, ping_tc_raw); +/// assert_eq!(sent_packet_with_sender.sender_id, UDP_SERVER_ID); +/// // No more packets received. +/// matches!(packet_receiver.try_recv(), Err(mpsc::TryRecvError::Empty)); /// ``` /// /// The [satrs-example crate](https://egit.irs.uni-stuttgart.de/rust/fsrc-launchpad/src/branch/main/satrs-example) @@ -57,65 +59,45 @@ use std::vec::Vec; /// [example code](https://egit.irs.uni-stuttgart.de/rust/sat-rs/src/branch/main/satrs-example/src/tmtc.rs#L67) /// on how to use this TC server. It uses the server to receive PUS telecommands on a specific port /// and then forwards them to a generic CCSDS packet receiver. -pub struct UdpTcServer { +pub struct UdpTcServer, SendError> { + pub id: ComponentId, pub socket: UdpSocket, recv_buf: Vec, sender_addr: Option, - tc_receiver: Box>, + pub tc_sender: TcSender, } -#[derive(Debug)] -pub enum ReceiveResult { +#[derive(Debug, thiserror::Error)] +pub enum ReceiveResult { + #[error("nothing was received")] NothingReceived, - IoError(Error), - ReceiverError(E), + #[error(transparent)] + Io(#[from] io::Error), + #[error(transparent)] + Send(SendError), } -impl From for ReceiveResult { - fn from(e: Error) -> Self { - ReceiveResult::IoError(e) - } -} - -impl PartialEq for ReceiveResult { - fn eq(&self, other: &Self) -> bool { - use ReceiveResult::*; - match (self, other) { - (IoError(ref e), IoError(ref other_e)) => e.kind() == other_e.kind(), - (NothingReceived, NothingReceived) => true, - (ReceiverError(e), ReceiverError(other_e)) => e == other_e, - _ => false, - } - } -} - -impl Eq for ReceiveResult {} - -impl ReceivesTcCore for UdpTcServer { - type Error = E; - - fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error> { - self.tc_receiver.pass_tc(tc_raw) - } -} - -impl UdpTcServer { +impl, SendError: Debug + 'static> + UdpTcServer +{ pub fn new( + id: ComponentId, addr: A, max_recv_size: usize, - tc_receiver: Box>, - ) -> Result { + tc_sender: TcSender, + ) -> Result { let server = Self { + id, socket: UdpSocket::bind(addr)?, recv_buf: vec![0; max_recv_size], sender_addr: None, - tc_receiver, + tc_sender, }; server.socket.set_nonblocking(true)?; Ok(server) } - pub fn try_recv_tc(&mut self) -> Result<(usize, SocketAddr), ReceiveResult> { + pub fn try_recv_tc(&mut self) -> Result<(usize, SocketAddr), ReceiveResult> { let res = match self.socket.recv_from(&mut self.recv_buf) { Ok(res) => res, Err(e) => { @@ -128,9 +110,9 @@ impl UdpTcServer { }; let (num_bytes, from) = res; self.sender_addr = Some(from); - self.tc_receiver - .pass_tc(&self.recv_buf[0..num_bytes]) - .map_err(|e| ReceiveResult::ReceiverError(e))?; + self.tc_sender + .send_packet(self.id, &self.recv_buf[0..num_bytes]) + .map_err(ReceiveResult::Send)?; Ok(res) } @@ -142,29 +124,35 @@ impl UdpTcServer { #[cfg(test)] mod tests { use crate::hal::std::udp_server::{ReceiveResult, UdpTcServer}; - use crate::tmtc::ReceivesTcCore; + use crate::queue::GenericSendError; + use crate::tmtc::PacketSenderRaw; + use crate::ComponentId; + use core::cell::RefCell; use 
spacepackets::ecss::tc::PusTcCreator; use spacepackets::ecss::WritablePusPacket; use spacepackets::SpHeader; - use std::boxed::Box; use std::collections::VecDeque; use std::net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket}; use std::vec::Vec; fn is_send(_: &T) {} + const UDP_SERVER_ID: ComponentId = 0x05; + #[derive(Default)] struct PingReceiver { - pub sent_cmds: VecDeque>, + pub sent_cmds: RefCell>>, } - impl ReceivesTcCore for PingReceiver { - type Error = (); + impl PacketSenderRaw for PingReceiver { + type Error = GenericSendError; - fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error> { + fn send_packet(&self, sender_id: ComponentId, tc_raw: &[u8]) -> Result<(), Self::Error> { + assert_eq!(sender_id, UDP_SERVER_ID); let mut sent_data = Vec::new(); sent_data.extend_from_slice(tc_raw); - self.sent_cmds.push_back(sent_data); + let mut queue = self.sent_cmds.borrow_mut(); + queue.push_back(sent_data); Ok(()) } } @@ -175,7 +163,7 @@ mod tests { let dest_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 7777); let ping_receiver = PingReceiver::default(); is_send(&ping_receiver); - let mut udp_tc_server = UdpTcServer::new(dest_addr, 2048, Box::new(ping_receiver)) + let mut udp_tc_server = UdpTcServer::new(UDP_SERVER_ID, dest_addr, 2048, ping_receiver) .expect("Creating UDP TMTC server failed"); is_send(&udp_tc_server); let sph = SpHeader::new_from_apid(0x02); @@ -195,9 +183,10 @@ mod tests { udp_tc_server.last_sender().expect("No sender set"), local_addr ); - let ping_receiver: &mut PingReceiver = udp_tc_server.tc_receiver.downcast_mut().unwrap(); - assert_eq!(ping_receiver.sent_cmds.len(), 1); - let sent_cmd = ping_receiver.sent_cmds.pop_front().unwrap(); + let ping_receiver = &mut udp_tc_server.tc_sender; + let mut queue = ping_receiver.sent_cmds.borrow_mut(); + assert_eq!(queue.len(), 1); + let sent_cmd = queue.pop_front().unwrap(); assert_eq!(sent_cmd, buf[0..len]); } @@ -205,11 +194,11 @@ mod tests { fn test_nothing_received() { let dest_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 7779); let ping_receiver = PingReceiver::default(); - let mut udp_tc_server = UdpTcServer::new(dest_addr, 2048, Box::new(ping_receiver)) + let mut udp_tc_server = UdpTcServer::new(UDP_SERVER_ID, dest_addr, 2048, ping_receiver) .expect("Creating UDP TMTC server failed"); let res = udp_tc_server.try_recv_tc(); assert!(res.is_err()); let err = res.unwrap_err(); - assert_eq!(err, ReceiveResult::NothingReceived); + matches!(err, ReceiveResult::NothingReceived); } } diff --git a/satrs/src/lib.rs b/satrs/src/lib.rs index f6b43e6..ad7777d 100644 --- a/satrs/src/lib.rs +++ b/satrs/src/lib.rs @@ -72,6 +72,18 @@ impl ValidatorU16Id for hashbrown::HashSet { } } +impl ValidatorU16Id for u16 { + fn validate(&self, id: u16) -> bool { + id == *self + } +} + +impl ValidatorU16Id for &u16 { + fn validate(&self, id: u16) -> bool { + id == **self + } +} + impl ValidatorU16Id for [u16] { fn validate(&self, id: u16) -> bool { self.binary_search(&id).is_ok() diff --git a/satrs/src/mode.rs b/satrs/src/mode.rs index 65519a5..01beac9 100644 --- a/satrs/src/mode.rs +++ b/satrs/src/mode.rs @@ -269,14 +269,8 @@ pub trait ModeReplySender { #[cfg(feature = "alloc")] pub mod alloc_mod { - use crate::{ - mode::ModeRequest, - queue::GenericTargetedMessagingError, - request::{ - MessageMetadata, MessageSender, MessageSenderAndReceiver, MessageSenderMap, - RequestAndReplySenderAndReceiver, RequestId, - }, - ComponentId, + use crate::request::{ + MessageSender, MessageSenderAndReceiver, MessageSenderMap, 
RequestAndReplySenderAndReceiver, }; use super::*; @@ -558,8 +552,6 @@ pub mod alloc_mod { pub mod std_mod { use std::sync::mpsc; - use crate::request::GenericMessage; - use super::*; pub type ModeRequestHandlerMpsc = ModeRequestHandlerInterface< diff --git a/satrs/src/params.rs b/satrs/src/params.rs index 10fb41c..da770eb 100644 --- a/satrs/src/params.rs +++ b/satrs/src/params.rs @@ -43,7 +43,7 @@ //! This includes the [ParamsHeapless] enumeration for contained values which do not require heap //! allocation, and the [Params] which enumerates [ParamsHeapless] and some additional types which //! require [alloc] support but allow for more flexbility. -use crate::pool::StoreAddr; +use crate::pool::PoolAddr; use core::fmt::Debug; use core::mem::size_of; use paste::paste; @@ -588,15 +588,15 @@ from_conversions_for_raw!( #[non_exhaustive] pub enum Params { Heapless(ParamsHeapless), - Store(StoreAddr), + Store(PoolAddr), #[cfg(feature = "alloc")] Vec(Vec), #[cfg(feature = "alloc")] String(String), } -impl From for Params { - fn from(x: StoreAddr) -> Self { +impl From for Params { + fn from(x: PoolAddr) -> Self { Self::Store(x) } } diff --git a/satrs/src/pool.rs b/satrs/src/pool.rs index 1c3b8a4..77ca66a 100644 --- a/satrs/src/pool.rs +++ b/satrs/src/pool.rs @@ -82,7 +82,7 @@ use spacepackets::ByteConversionError; use std::error::Error; type NumBlocks = u16; -pub type StoreAddr = u64; +pub type PoolAddr = u64; /// Simple address type used for transactions with the local pool. #[derive(Debug, Copy, Clone, PartialEq, Eq)] @@ -100,14 +100,14 @@ impl StaticPoolAddr { } } -impl From for StoreAddr { +impl From for PoolAddr { fn from(value: StaticPoolAddr) -> Self { ((value.pool_idx as u64) << 16) | value.packet_idx as u64 } } -impl From for StaticPoolAddr { - fn from(value: StoreAddr) -> Self { +impl From for StaticPoolAddr { + fn from(value: PoolAddr) -> Self { Self { pool_idx: ((value >> 16) & 0xff) as u16, packet_idx: (value & 0xff) as u16, @@ -150,59 +150,59 @@ impl Error for StoreIdError {} #[derive(Debug, Clone, PartialEq, Eq)] #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] -pub enum StoreError { +pub enum PoolError { /// Requested data block is too large DataTooLarge(usize), /// The store is full. Contains the index of the full subpool StoreFull(u16), /// Store ID is invalid. This also includes partial errors where only the subpool is invalid - InvalidStoreId(StoreIdError, Option), + InvalidStoreId(StoreIdError, Option), /// Valid subpool and packet index, but no data is stored at the given address - DataDoesNotExist(StoreAddr), + DataDoesNotExist(PoolAddr), ByteConversionError(spacepackets::ByteConversionError), LockError, /// Internal or configuration errors InternalError(u32), } -impl Display for StoreError { +impl Display for PoolError { fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result { match self { - StoreError::DataTooLarge(size) => { + PoolError::DataTooLarge(size) => { write!(f, "data to store with size {size} is too large") } - StoreError::StoreFull(u16) => { + PoolError::StoreFull(u16) => { write!(f, "store is too full. 
index for full subpool: {u16}") } - StoreError::InvalidStoreId(id_e, addr) => { + PoolError::InvalidStoreId(id_e, addr) => { write!(f, "invalid store ID: {id_e}, address: {addr:?}") } - StoreError::DataDoesNotExist(addr) => { + PoolError::DataDoesNotExist(addr) => { write!(f, "no data exists at address {addr:?}") } - StoreError::InternalError(e) => { + PoolError::InternalError(e) => { write!(f, "internal error: {e}") } - StoreError::ByteConversionError(e) => { + PoolError::ByteConversionError(e) => { write!(f, "store error: {e}") } - StoreError::LockError => { + PoolError::LockError => { write!(f, "lock error") } } } } -impl From for StoreError { +impl From for PoolError { fn from(value: ByteConversionError) -> Self { Self::ByteConversionError(value) } } #[cfg(feature = "std")] -impl Error for StoreError { +impl Error for PoolError { fn source(&self) -> Option<&(dyn Error + 'static)> { - if let StoreError::InvalidStoreId(e, _) = self { + if let PoolError::InvalidStoreId(e, _) = self { return Some(e); } None @@ -217,44 +217,41 @@ impl Error for StoreError { /// pool structure being wrapped inside a lock. pub trait PoolProvider { /// Add new data to the pool. The provider should attempt to reserve a memory block with the - /// appropriate size and then copy the given data to the block. Yields a [StoreAddr] which can + /// appropriate size and then copy the given data to the block. Yields a [PoolAddr] which can /// be used to access the data stored in the pool - fn add(&mut self, data: &[u8]) -> Result; + fn add(&mut self, data: &[u8]) -> Result; /// The provider should attempt to reserve a free memory block with the appropriate size first. /// It then executes a user-provided closure and passes a mutable reference to that memory /// block to the closure. This allows the user to write data to the memory block. - /// The function should yield a [StoreAddr] which can be used to access the data stored in the + /// The function should yield a [PoolAddr] which can be used to access the data stored in the /// pool. fn free_element( &mut self, len: usize, writer: W, - ) -> Result; + ) -> Result; - /// Modify data added previously using a given [StoreAddr]. The provider should use the store + /// Modify data added previously using a given [PoolAddr]. The provider should use the store /// address to determine if a memory block exists for that address. If it does, it should /// call the user-provided closure and pass a mutable reference to the memory block /// to the closure. This allows the user to modify the memory block. - fn modify( - &mut self, - addr: &StoreAddr, - updater: U, - ) -> Result<(), StoreError>; + fn modify(&mut self, addr: &PoolAddr, updater: U) + -> Result<(), PoolError>; /// The provider should copy the data from the memory block to the user-provided buffer if /// it exists. - fn read(&self, addr: &StoreAddr, buf: &mut [u8]) -> Result; + fn read(&self, addr: &PoolAddr, buf: &mut [u8]) -> Result; - /// Delete data inside the pool given a [StoreAddr]. - fn delete(&mut self, addr: StoreAddr) -> Result<(), StoreError>; - fn has_element_at(&self, addr: &StoreAddr) -> Result; + /// Delete data inside the pool given a [PoolAddr]. + fn delete(&mut self, addr: PoolAddr) -> Result<(), PoolError>; + fn has_element_at(&self, addr: &PoolAddr) -> Result; /// Retrieve the length of the data at the given store address. 
- fn len_of_data(&self, addr: &StoreAddr) -> Result; + fn len_of_data(&self, addr: &PoolAddr) -> Result; #[cfg(feature = "alloc")] - fn read_as_vec(&self, addr: &StoreAddr) -> Result, StoreError> { + fn read_as_vec(&self, addr: &PoolAddr) -> Result, PoolError> { let mut vec = alloc::vec![0; self.len_of_data(addr)?]; self.read(addr, &mut vec)?; Ok(vec) @@ -271,7 +268,7 @@ pub trait PoolProviderWithGuards: PoolProvider { /// This can prevent memory leaks. Users can read the data and release the guard /// if the data in the store is valid for further processing. If the data is faulty, no /// manual deletion is necessary when returning from a processing function prematurely. - fn read_with_guard(&mut self, addr: StoreAddr) -> PoolGuard; + fn read_with_guard(&mut self, addr: PoolAddr) -> PoolGuard; /// This function behaves like [PoolProvider::modify], but consumes the provided /// address and returns a RAII conformant guard object. @@ -281,20 +278,20 @@ pub trait PoolProviderWithGuards: PoolProvider { /// This can prevent memory leaks. Users can read (and modify) the data and release the guard /// if the data in the store is valid for further processing. If the data is faulty, no /// manual deletion is necessary when returning from a processing function prematurely. - fn modify_with_guard(&mut self, addr: StoreAddr) -> PoolRwGuard; + fn modify_with_guard(&mut self, addr: PoolAddr) -> PoolRwGuard; } pub struct PoolGuard<'a, MemProvider: PoolProvider + ?Sized> { pool: &'a mut MemProvider, - pub addr: StoreAddr, + pub addr: PoolAddr, no_deletion: bool, - deletion_failed_error: Option, + deletion_failed_error: Option, } /// This helper object can be used to safely access pool data without worrying about memory /// leaks. impl<'a, MemProvider: PoolProvider> PoolGuard<'a, MemProvider> { - pub fn new(pool: &'a mut MemProvider, addr: StoreAddr) -> Self { + pub fn new(pool: &'a mut MemProvider, addr: PoolAddr) -> Self { Self { pool, addr, @@ -303,12 +300,12 @@ impl<'a, MemProvider: PoolProvider> PoolGuard<'a, MemProvider> { } } - pub fn read(&self, buf: &mut [u8]) -> Result { + pub fn read(&self, buf: &mut [u8]) -> Result { self.pool.read(&self.addr, buf) } #[cfg(feature = "alloc")] - pub fn read_as_vec(&self) -> Result, StoreError> { + pub fn read_as_vec(&self) -> Result, PoolError> { self.pool.read_as_vec(&self.addr) } @@ -334,19 +331,19 @@ pub struct PoolRwGuard<'a, MemProvider: PoolProvider + ?Sized> { } impl<'a, MemProvider: PoolProvider> PoolRwGuard<'a, MemProvider> { - pub fn new(pool: &'a mut MemProvider, addr: StoreAddr) -> Self { + pub fn new(pool: &'a mut MemProvider, addr: PoolAddr) -> Self { Self { guard: PoolGuard::new(pool, addr), } } - pub fn update(&mut self, updater: &mut U) -> Result<(), StoreError> { + pub fn update(&mut self, updater: &mut U) -> Result<(), PoolError> { self.guard.pool.modify(&self.guard.addr, updater) } delegate!( to self.guard { - pub fn read(&self, buf: &mut [u8]) -> Result; + pub fn read(&self, buf: &mut [u8]) -> Result; /// Releasing the pool guard will disable the automatic deletion of the data when the guard /// is dropped. 
pub fn release(&mut self); @@ -357,7 +354,7 @@ impl<'a, MemProvider: PoolProvider> PoolRwGuard<'a, MemProvider> { #[cfg(feature = "alloc")] mod alloc_mod { use super::{PoolGuard, PoolProvider, PoolProviderWithGuards, PoolRwGuard, StaticPoolAddr}; - use crate::pool::{NumBlocks, StoreAddr, StoreError, StoreIdError}; + use crate::pool::{NumBlocks, PoolAddr, PoolError, StoreIdError}; use alloc::vec; use alloc::vec::Vec; use spacepackets::ByteConversionError; @@ -422,7 +419,7 @@ mod alloc_mod { /// fitting subpool is full. This might be added in the future. /// /// Transactions with the [pool][StaticMemoryPool] are done using a generic - /// [address][StoreAddr] type. Adding any data to the pool will yield a store address. + /// [address][PoolAddr] type. Adding any data to the pool will yield a store address. /// Modification and read operations are done using a reference to a store address. Deletion /// will consume the store address. pub struct StaticMemoryPool { @@ -452,41 +449,41 @@ mod alloc_mod { local_pool } - fn addr_check(&self, addr: &StaticPoolAddr) -> Result { + fn addr_check(&self, addr: &StaticPoolAddr) -> Result { self.validate_addr(addr)?; let pool_idx = addr.pool_idx as usize; let size_list = self.sizes_lists.get(pool_idx).unwrap(); let curr_size = size_list[addr.packet_idx as usize]; if curr_size == STORE_FREE { - return Err(StoreError::DataDoesNotExist(StoreAddr::from(*addr))); + return Err(PoolError::DataDoesNotExist(PoolAddr::from(*addr))); } Ok(curr_size) } - fn validate_addr(&self, addr: &StaticPoolAddr) -> Result<(), StoreError> { + fn validate_addr(&self, addr: &StaticPoolAddr) -> Result<(), PoolError> { let pool_idx = addr.pool_idx as usize; if pool_idx >= self.pool_cfg.cfg.len() { - return Err(StoreError::InvalidStoreId( + return Err(PoolError::InvalidStoreId( StoreIdError::InvalidSubpool(addr.pool_idx), - Some(StoreAddr::from(*addr)), + Some(PoolAddr::from(*addr)), )); } if addr.packet_idx >= self.pool_cfg.cfg[addr.pool_idx as usize].0 { - return Err(StoreError::InvalidStoreId( + return Err(PoolError::InvalidStoreId( StoreIdError::InvalidPacketIdx(addr.packet_idx), - Some(StoreAddr::from(*addr)), + Some(PoolAddr::from(*addr)), )); } Ok(()) } - fn reserve(&mut self, data_len: usize) -> Result { + fn reserve(&mut self, data_len: usize) -> Result { let mut subpool_idx = self.find_subpool(data_len, 0)?; if self.pool_cfg.spill_to_higher_subpools { - while let Err(StoreError::StoreFull(_)) = self.find_empty(subpool_idx) { + while let Err(PoolError::StoreFull(_)) = self.find_empty(subpool_idx) { if (subpool_idx + 1) as usize == self.sizes_lists.len() { - return Err(StoreError::StoreFull(subpool_idx)); + return Err(PoolError::StoreFull(subpool_idx)); } subpool_idx += 1; } @@ -500,7 +497,7 @@ mod alloc_mod { }) } - fn find_subpool(&self, req_size: usize, start_at_subpool: u16) -> Result { + fn find_subpool(&self, req_size: usize, start_at_subpool: u16) -> Result { for (i, &(_, elem_size)) in self.pool_cfg.cfg.iter().enumerate() { if i < start_at_subpool as usize { continue; @@ -509,21 +506,21 @@ mod alloc_mod { return Ok(i as u16); } } - Err(StoreError::DataTooLarge(req_size)) + Err(PoolError::DataTooLarge(req_size)) } - fn write(&mut self, addr: &StaticPoolAddr, data: &[u8]) -> Result<(), StoreError> { - let packet_pos = self.raw_pos(addr).ok_or(StoreError::InternalError(0))?; + fn write(&mut self, addr: &StaticPoolAddr, data: &[u8]) -> Result<(), PoolError> { + let packet_pos = self.raw_pos(addr).ok_or(PoolError::InternalError(0))?; let subpool = self .pool 
.get_mut(addr.pool_idx as usize) - .ok_or(StoreError::InternalError(1))?; + .ok_or(PoolError::InternalError(1))?; let pool_slice = &mut subpool[packet_pos..packet_pos + data.len()]; pool_slice.copy_from_slice(data); Ok(()) } - fn find_empty(&mut self, subpool: u16) -> Result<(u16, &mut usize), StoreError> { + fn find_empty(&mut self, subpool: u16) -> Result<(u16, &mut usize), PoolError> { if let Some(size_list) = self.sizes_lists.get_mut(subpool as usize) { for (i, elem_size) in size_list.iter_mut().enumerate() { if *elem_size == STORE_FREE { @@ -531,12 +528,12 @@ mod alloc_mod { } } } else { - return Err(StoreError::InvalidStoreId( + return Err(PoolError::InvalidStoreId( StoreIdError::InvalidSubpool(subpool), None, )); } - Err(StoreError::StoreFull(subpool)) + Err(PoolError::StoreFull(subpool)) } fn raw_pos(&self, addr: &StaticPoolAddr) -> Option { @@ -546,10 +543,10 @@ mod alloc_mod { } impl PoolProvider for StaticMemoryPool { - fn add(&mut self, data: &[u8]) -> Result { + fn add(&mut self, data: &[u8]) -> Result { let data_len = data.len(); if data_len > POOL_MAX_SIZE { - return Err(StoreError::DataTooLarge(data_len)); + return Err(PoolError::DataTooLarge(data_len)); } let addr = self.reserve(data_len)?; self.write(&addr, data)?; @@ -560,9 +557,9 @@ mod alloc_mod { &mut self, len: usize, mut writer: W, - ) -> Result { + ) -> Result { if len > POOL_MAX_SIZE { - return Err(StoreError::DataTooLarge(len)); + return Err(PoolError::DataTooLarge(len)); } let addr = self.reserve(len)?; let raw_pos = self.raw_pos(&addr).unwrap(); @@ -574,9 +571,9 @@ mod alloc_mod { fn modify( &mut self, - addr: &StoreAddr, + addr: &PoolAddr, mut updater: U, - ) -> Result<(), StoreError> { + ) -> Result<(), PoolError> { let addr = StaticPoolAddr::from(*addr); let curr_size = self.addr_check(&addr)?; let raw_pos = self.raw_pos(&addr).unwrap(); @@ -586,7 +583,7 @@ mod alloc_mod { Ok(()) } - fn read(&self, addr: &StoreAddr, buf: &mut [u8]) -> Result { + fn read(&self, addr: &PoolAddr, buf: &mut [u8]) -> Result { let addr = StaticPoolAddr::from(*addr); let curr_size = self.addr_check(&addr)?; if buf.len() < curr_size { @@ -604,7 +601,7 @@ mod alloc_mod { Ok(curr_size) } - fn delete(&mut self, addr: StoreAddr) -> Result<(), StoreError> { + fn delete(&mut self, addr: PoolAddr) -> Result<(), PoolError> { let addr = StaticPoolAddr::from(addr); self.addr_check(&addr)?; let block_size = self.pool_cfg.cfg.get(addr.pool_idx as usize).unwrap().1; @@ -617,7 +614,7 @@ mod alloc_mod { Ok(()) } - fn has_element_at(&self, addr: &StoreAddr) -> Result { + fn has_element_at(&self, addr: &PoolAddr) -> Result { let addr = StaticPoolAddr::from(*addr); self.validate_addr(&addr)?; let pool_idx = addr.pool_idx as usize; @@ -629,7 +626,7 @@ mod alloc_mod { Ok(true) } - fn len_of_data(&self, addr: &StoreAddr) -> Result { + fn len_of_data(&self, addr: &PoolAddr) -> Result { let addr = StaticPoolAddr::from(*addr); self.validate_addr(&addr)?; let pool_idx = addr.pool_idx as usize; @@ -643,11 +640,11 @@ mod alloc_mod { } impl PoolProviderWithGuards for StaticMemoryPool { - fn modify_with_guard(&mut self, addr: StoreAddr) -> PoolRwGuard { + fn modify_with_guard(&mut self, addr: PoolAddr) -> PoolRwGuard { PoolRwGuard::new(self, addr) } - fn read_with_guard(&mut self, addr: StoreAddr) -> PoolGuard { + fn read_with_guard(&mut self, addr: PoolAddr) -> PoolGuard { PoolGuard::new(self, addr) } } @@ -656,8 +653,8 @@ mod alloc_mod { #[cfg(test)] mod tests { use crate::pool::{ - PoolGuard, PoolProvider, PoolProviderWithGuards, PoolRwGuard, 
StaticMemoryPool, - StaticPoolAddr, StaticPoolConfig, StoreError, StoreIdError, POOL_MAX_SIZE, + PoolError, PoolGuard, PoolProvider, PoolProviderWithGuards, PoolRwGuard, StaticMemoryPool, + StaticPoolAddr, StaticPoolConfig, StoreIdError, POOL_MAX_SIZE, }; use std::vec; @@ -781,7 +778,7 @@ mod tests { let res = local_pool.free_element(8, |_| {}); assert!(res.is_err()); let err = res.unwrap_err(); - assert_eq!(err, StoreError::StoreFull(1)); + assert_eq!(err, PoolError::StoreFull(1)); // Verify that the two deletions are successful assert!(local_pool.delete(addr0).is_ok()); @@ -803,7 +800,7 @@ mod tests { assert!(res.is_err()); assert!(matches!( res.unwrap_err(), - StoreError::DataDoesNotExist { .. } + PoolError::DataDoesNotExist { .. } )); } @@ -816,8 +813,8 @@ mod tests { let res = local_pool.add(&test_buf); assert!(res.is_err()); let err = res.unwrap_err(); - assert!(matches!(err, StoreError::StoreFull { .. })); - if let StoreError::StoreFull(subpool) = err { + assert!(matches!(err, PoolError::StoreFull { .. })); + if let PoolError::StoreFull(subpool) = err { assert_eq!(subpool, 2); } } @@ -835,7 +832,7 @@ mod tests { let err = res.unwrap_err(); assert!(matches!( err, - StoreError::InvalidStoreId(StoreIdError::InvalidSubpool(3), Some(_)) + PoolError::InvalidStoreId(StoreIdError::InvalidSubpool(3), Some(_)) )); } @@ -852,7 +849,7 @@ mod tests { let err = res.unwrap_err(); assert!(matches!( err, - StoreError::InvalidStoreId(StoreIdError::InvalidPacketIdx(1), Some(_)) + PoolError::InvalidStoreId(StoreIdError::InvalidPacketIdx(1), Some(_)) )); } @@ -863,7 +860,7 @@ mod tests { let res = local_pool.add(&data_too_large); assert!(res.is_err()); let err = res.unwrap_err(); - assert_eq!(err, StoreError::DataTooLarge(20)); + assert_eq!(err, PoolError::DataTooLarge(20)); } #[test] @@ -871,10 +868,7 @@ mod tests { let mut local_pool = basic_small_pool(); let res = local_pool.free_element(POOL_MAX_SIZE + 1, |_| {}); assert!(res.is_err()); - assert_eq!( - res.unwrap_err(), - StoreError::DataTooLarge(POOL_MAX_SIZE + 1) - ); + assert_eq!(res.unwrap_err(), PoolError::DataTooLarge(POOL_MAX_SIZE + 1)); } #[test] @@ -883,7 +877,7 @@ mod tests { // Try to request a slot which is too large let res = local_pool.free_element(20, |_| {}); assert!(res.is_err()); - assert_eq!(res.unwrap_err(), StoreError::DataTooLarge(20)); + assert_eq!(res.unwrap_err(), PoolError::DataTooLarge(20)); } #[test] @@ -1003,7 +997,7 @@ mod tests { let should_fail = local_pool.free_element(8, |_| {}); assert!(should_fail.is_err()); if let Err(err) = should_fail { - assert_eq!(err, StoreError::StoreFull(1)); + assert_eq!(err, PoolError::StoreFull(1)); } else { panic!("unexpected store address"); } @@ -1034,7 +1028,7 @@ mod tests { let should_fail = local_pool.free_element(8, |_| {}); assert!(should_fail.is_err()); if let Err(err) = should_fail { - assert_eq!(err, StoreError::StoreFull(2)); + assert_eq!(err, PoolError::StoreFull(2)); } else { panic!("unexpected store address"); } diff --git a/satrs/src/pus/action.rs b/satrs/src/pus/action.rs index 875621f..75d5962 100644 --- a/satrs/src/pus/action.rs +++ b/satrs/src/pus/action.rs @@ -7,11 +7,9 @@ use crate::{ use satrs_shared::res_code::ResultU16; #[cfg(feature = "std")] -#[cfg_attr(doc_cfg, doc(cfg(feature = "std")))] pub use std_mod::*; #[cfg(feature = "alloc")] -#[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))] #[allow(unused_imports)] pub use alloc_mod::*; @@ -41,18 +39,18 @@ pub enum ActionReplyVariant { } #[derive(Debug, PartialEq, Clone)] -pub struct PusActionReply { +pub struct 
ActionReplyPus { pub action_id: ActionId, pub variant: ActionReplyVariant, } -impl PusActionReply { +impl ActionReplyPus { pub fn new(action_id: ActionId, variant: ActionReplyVariant) -> Self { Self { action_id, variant } } } -pub type GenericActionReplyPus = GenericMessage; +pub type GenericActionReplyPus = GenericMessage; impl GenericActionReplyPus { pub fn new_action_reply( @@ -60,12 +58,11 @@ impl GenericActionReplyPus { action_id: ActionId, reply: ActionReplyVariant, ) -> Self { - Self::new(requestor_info, PusActionReply::new(action_id, reply)) + Self::new(requestor_info, ActionReplyPus::new(action_id, reply)) } } #[cfg(feature = "alloc")] -#[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))] pub mod alloc_mod { use crate::{ action::ActionRequest, @@ -76,13 +73,13 @@ pub mod alloc_mod { ComponentId, }; - use super::PusActionReply; + use super::ActionReplyPus; /// Helper type definition for a mode handler which can handle mode requests. pub type ActionRequestHandlerInterface = - MessageSenderAndReceiver; + MessageSenderAndReceiver; - impl, R: MessageReceiver> + impl, R: MessageReceiver> ActionRequestHandlerInterface { pub fn try_recv_action_request( @@ -95,7 +92,7 @@ pub mod alloc_mod { &self, request_id: RequestId, target_id: ComponentId, - reply: PusActionReply, + reply: ActionReplyPus, ) -> Result<(), GenericTargetedMessagingError> { self.send_message(request_id, target_id, reply) } @@ -104,14 +101,14 @@ pub mod alloc_mod { /// Helper type defintion for a mode handler object which can send mode requests and receive /// mode replies. pub type ActionRequestorInterface = - MessageSenderAndReceiver; + MessageSenderAndReceiver; - impl, R: MessageReceiver> + impl, R: MessageReceiver> ActionRequestorInterface { pub fn try_recv_action_reply( &self, - ) -> Result>, GenericTargetedMessagingError> { + ) -> Result>, GenericTargetedMessagingError> { self.try_recv_message() } @@ -127,7 +124,6 @@ pub mod alloc_mod { } #[cfg(feature = "std")] -#[cfg_attr(doc_cfg, doc(cfg(feature = "std")))] pub mod std_mod { use std::sync::mpsc; @@ -179,626 +175,23 @@ pub mod std_mod { pub type DefaultActiveActionRequestMap = DefaultActiveRequestMap; pub type ActionRequestHandlerMpsc = ActionRequestHandlerInterface< - mpsc::Sender>, + mpsc::Sender>, mpsc::Receiver>, >; pub type ActionRequestHandlerMpscBounded = ActionRequestHandlerInterface< - mpsc::SyncSender>, + mpsc::SyncSender>, mpsc::Receiver>, >; pub type ActionRequestorMpsc = ActionRequestorInterface< mpsc::Sender>, - mpsc::Receiver>, + mpsc::Receiver>, >; pub type ActionRequestorBoundedMpsc = ActionRequestorInterface< mpsc::SyncSender>, - mpsc::Receiver>, + mpsc::Receiver>, >; - - /* - pub type ModeRequestorAndHandlerMpsc = ModeInterface< - mpsc::Sender>, - mpsc::Receiver>, - mpsc::Sender>, - mpsc::Receiver>, - >; - pub type ModeRequestorAndHandlerMpscBounded = ModeInterface< - mpsc::SyncSender>, - mpsc::Receiver>, - mpsc::SyncSender>, - mpsc::Receiver>, - >; - */ } #[cfg(test)] -mod tests { - /* - use core::{cell::RefCell, time::Duration}; - use std::{sync::mpsc, time::SystemTimeError}; - - use alloc::{collections::VecDeque, vec::Vec}; - use delegate::delegate; - - use spacepackets::{ - ecss::{ - tc::{PusTcCreator, PusTcReader}, - tm::PusTmReader, - PusPacket, - }, - time::{cds, TimeWriter}, - CcsdsPacket, - }; - - use crate::{ - action::ActionRequestVariant, - params::{self, ParamsRaw, WritableToBeBytes}, - pus::{ - tests::{ - PusServiceHandlerWithVecCommon, PusTestHarness, SimplePusPacketHandler, - TestConverter, TestRouter, APP_DATA_TOO_SHORT, - }, - 
verification::{ - self, - tests::{SharedVerificationMap, TestVerificationReporter, VerificationStatus}, - FailParams, TcStateAccepted, TcStateNone, TcStateStarted, - VerificationReportingProvider, - }, - EcssTcInMemConverter, EcssTcInVecConverter, EcssTmtcError, GenericRoutingError, - MpscTcReceiver, PusPacketHandlerResult, PusPacketHandlingError, PusRequestRouter, - PusServiceHelper, PusTcToRequestConverter, TmAsVecSenderWithMpsc, - }, - }; - - use super::*; - - impl PusRequestRouter for TestRouter { - type Error = GenericRoutingError; - - fn route( - &self, - target_id: TargetId, - request: Request, - _token: VerificationToken, - ) -> Result<(), Self::Error> { - self.routing_requests - .borrow_mut() - .push_back((target_id, request)); - self.check_for_injected_error() - } - - fn handle_error( - &self, - target_id: TargetId, - token: VerificationToken, - tc: &PusTcReader, - error: Self::Error, - time_stamp: &[u8], - verif_reporter: &impl VerificationReportingProvider, - ) { - self.routing_errors - .borrow_mut() - .push_back((target_id, error)); - } - } - - impl PusTcToRequestConverter for TestConverter<8> { - type Error = PusPacketHandlingError; - fn convert( - &mut self, - token: VerificationToken, - tc: &PusTcReader, - time_stamp: &[u8], - verif_reporter: &impl VerificationReportingProvider, - ) -> Result<(TargetId, ActionRequest), Self::Error> { - self.conversion_request.push_back(tc.raw_data().to_vec()); - self.check_service(tc)?; - let target_id = tc.apid(); - if tc.user_data().len() < 4 { - verif_reporter - .start_failure( - token, - FailParams::new( - time_stamp, - &APP_DATA_TOO_SHORT, - (tc.user_data().len() as u32).to_be_bytes().as_ref(), - ), - ) - .expect("start success failure"); - return Err(PusPacketHandlingError::NotEnoughAppData { - expected: 4, - found: tc.user_data().len(), - }); - } - if tc.subservice() == 1 { - verif_reporter - .start_success(token, time_stamp) - .expect("start success failure"); - return Ok(( - target_id.into(), - ActionRequest { - action_id: u32::from_be_bytes(tc.user_data()[0..4].try_into().unwrap()), - variant: ActionRequestVariant::VecData(tc.user_data()[4..].to_vec()), - }, - )); - } - Err(PusPacketHandlingError::InvalidAppData( - "unexpected app data".into(), - )) - } - } - - pub struct PusDynRequestHandler { - srv_helper: PusServiceHelper< - MpscTcReceiver, - TmAsVecSenderWithMpsc, - EcssTcInVecConverter, - TestVerificationReporter, - >, - request_converter: TestConverter, - request_router: TestRouter, - } - - struct Pus8RequestTestbenchWithVec { - common: PusServiceHandlerWithVecCommon, - handler: PusDynRequestHandler<8, ActionRequest>, - } - - impl Pus8RequestTestbenchWithVec { - pub fn new() -> Self { - let (common, srv_helper) = PusServiceHandlerWithVecCommon::new_with_test_verif_sender(); - Self { - common, - handler: PusDynRequestHandler { - srv_helper, - request_converter: TestConverter::default(), - request_router: TestRouter::default(), - }, - } - } - - delegate! { - to self.handler.request_converter { - pub fn check_next_conversion(&mut self, tc: &PusTcCreator); - } - } - delegate! { - to self.handler.request_router { - pub fn retrieve_next_request(&mut self) -> (TargetId, ActionRequest); - } - } - delegate! { - to self.handler.request_router { - pub fn retrieve_next_routing_error(&mut self) -> (TargetId, GenericRoutingError); - } - } - } - - impl PusTestHarness for Pus8RequestTestbenchWithVec { - delegate! 
{ - to self.common { - fn send_tc(&mut self, tc: &PusTcCreator) -> VerificationToken; - fn read_next_tm(&mut self) -> PusTmReader<'_>; - fn check_no_tm_available(&self) -> bool; - fn check_next_verification_tm( - &self, - subservice: u8, - expected_request_id: verification::RequestId, - ); - } - } - } - impl SimplePusPacketHandler for Pus8RequestTestbenchWithVec { - fn handle_one_tc(&mut self) -> Result { - let possible_packet = self.handler.srv_helper.retrieve_and_accept_next_packet()?; - if possible_packet.is_none() { - return Ok(PusPacketHandlerResult::Empty); - } - let ecss_tc_and_token = possible_packet.unwrap(); - let tc = self - .handler - .srv_helper - .tc_in_mem_converter - .convert_ecss_tc_in_memory_to_reader(&ecss_tc_and_token.tc_in_memory)?; - let time_stamp = cds::TimeProvider::from_now_with_u16_days() - .expect("timestamp generation failed") - .to_vec() - .unwrap(); - let (target_id, action_request) = self.handler.request_converter.convert( - ecss_tc_and_token.token, - &tc, - &time_stamp, - &self.handler.srv_helper.common.verification_handler, - )?; - if let Err(e) = self.handler.request_router.route( - target_id, - action_request, - ecss_tc_and_token.token, - ) { - self.handler.request_router.handle_error( - target_id, - ecss_tc_and_token.token, - &tc, - e.clone(), - &time_stamp, - &self.handler.srv_helper.common.verification_handler, - ); - return Err(e.into()); - } - Ok(PusPacketHandlerResult::RequestHandled) - } - } - - const TIMEOUT_ERROR_CODE: ResultU16 = ResultU16::new(1, 2); - const COMPLETION_ERROR_CODE: ResultU16 = ResultU16::new(2, 0); - const COMPLETION_ERROR_CODE_STEP: ResultU16 = ResultU16::new(2, 1); - - #[derive(Default)] - pub struct TestReplyHandlerHook { - pub unexpected_replies: VecDeque, - pub timeouts: RefCell>, - } - - impl ReplyHandlerHook for TestReplyHandlerHook { - fn handle_unexpected_reply(&mut self, reply: &GenericActionReplyPus) { - self.unexpected_replies.push_back(reply.clone()); - } - - fn timeout_callback(&self, active_request: &ActivePusActionRequest) { - self.timeouts.borrow_mut().push_back(active_request.clone()); - } - - fn timeout_error_code(&self) -> ResultU16 { - TIMEOUT_ERROR_CODE - } - } - - pub struct Pus8ReplyTestbench { - verif_reporter: TestVerificationReporter, - #[allow(dead_code)] - ecss_tm_receiver: mpsc::Receiver>, - handler: PusService8ReplyHandler< - TestVerificationReporter, - DefaultActiveActionRequestMap, - TestReplyHandlerHook, - mpsc::Sender>, - >, - } - - impl Pus8ReplyTestbench { - pub fn new(normal_ctor: bool) -> Self { - let reply_handler_hook = TestReplyHandlerHook::default(); - let shared_verif_map = SharedVerificationMap::default(); - let test_verif_reporter = TestVerificationReporter::new(shared_verif_map.clone()); - let (ecss_tm_sender, ecss_tm_receiver) = mpsc::channel(); - let reply_handler = if normal_ctor { - PusService8ReplyHandler::new_from_now_with_default_map( - test_verif_reporter.clone(), - 128, - reply_handler_hook, - ecss_tm_sender, - ) - .expect("creating reply handler failed") - } else { - PusService8ReplyHandler::new_from_now( - test_verif_reporter.clone(), - DefaultActiveActionRequestMap::default(), - 128, - reply_handler_hook, - ecss_tm_sender, - ) - .expect("creating reply handler failed") - }; - Self { - verif_reporter: test_verif_reporter, - ecss_tm_receiver, - handler: reply_handler, - } - } - - pub fn init_handling_for_request( - &mut self, - request_id: RequestId, - _action_id: ActionId, - ) -> VerificationToken { - assert!(!self.handler.request_active(request_id)); - // let action_req 
= ActionRequest::new(action_id, ActionRequestVariant::NoData); - let token = self.add_tc_with_req_id(request_id.into()); - let token = self - .verif_reporter - .acceptance_success(token, &[]) - .expect("acceptance success failure"); - let token = self - .verif_reporter - .start_success(token, &[]) - .expect("start success failure"); - let verif_info = self - .verif_reporter - .verification_info(&verification::RequestId::from(request_id)) - .expect("no verification info found"); - assert!(verif_info.started.expect("request was not started")); - assert!(verif_info.accepted.expect("request was not accepted")); - token - } - - pub fn next_unrequested_reply(&self) -> Option { - self.handler.user_hook.unexpected_replies.front().cloned() - } - - pub fn assert_request_completion_success(&self, step: Option, request_id: RequestId) { - let verif_info = self - .verif_reporter - .verification_info(&verification::RequestId::from(request_id)) - .expect("no verification info found"); - self.assert_request_completion_common(request_id, &verif_info, step, true); - } - - pub fn assert_request_completion_failure( - &self, - step: Option, - request_id: RequestId, - fail_enum: ResultU16, - fail_data: &[u8], - ) { - let verif_info = self - .verif_reporter - .verification_info(&verification::RequestId::from(request_id)) - .expect("no verification info found"); - self.assert_request_completion_common(request_id, &verif_info, step, false); - assert_eq!(verif_info.fail_enum.unwrap(), fail_enum.raw() as u64); - assert_eq!(verif_info.failure_data.unwrap(), fail_data); - } - - pub fn assert_request_completion_common( - &self, - request_id: RequestId, - verif_info: &VerificationStatus, - step: Option, - completion_success: bool, - ) { - if let Some(step) = step { - assert!(verif_info.step_status.is_some()); - assert!(verif_info.step_status.unwrap()); - assert_eq!(step, verif_info.step); - } - assert_eq!( - verif_info.completed.expect("request is not completed"), - completion_success - ); - assert!(!self.handler.request_active(request_id)); - } - - pub fn assert_request_step_failure(&self, step: u16, request_id: RequestId) { - let verif_info = self - .verif_reporter - .verification_info(&verification::RequestId::from(request_id)) - .expect("no verification info found"); - assert!(verif_info.step_status.is_some()); - assert!(!verif_info.step_status.unwrap()); - assert_eq!(step, verif_info.step); - } - pub fn add_routed_request( - &mut self, - request_id: verification::RequestId, - target_id: TargetId, - action_id: ActionId, - token: VerificationToken, - timeout: Duration, - ) { - if self.handler.request_active(request_id.into()) { - panic!("request already present"); - } - self.handler - .add_routed_action_request(request_id, target_id, action_id, token, timeout); - if !self.handler.request_active(request_id.into()) { - panic!("request should be active now"); - } - } - - delegate! 
{ - to self.handler { - pub fn request_active(&self, request_id: RequestId) -> bool; - - pub fn handle_action_reply( - &mut self, - action_reply_with_ids: GenericMessage, - time_stamp: &[u8] - ) -> Result<(), EcssTmtcError>; - - pub fn update_time_from_now(&mut self) -> Result<(), SystemTimeError>; - - pub fn check_for_timeouts(&mut self, time_stamp: &[u8]) -> Result<(), EcssTmtcError>; - } - to self.verif_reporter { - fn add_tc_with_req_id(&mut self, req_id: verification::RequestId) -> VerificationToken; - } - } - } - - #[test] - fn test_reply_handler_completion_success() { - let mut reply_testbench = Pus8ReplyTestbench::new(true); - let sender_id = 0x06; - let request_id = 0x02; - let target_id = 0x05; - let action_id = 0x03; - let token = reply_testbench.init_handling_for_request(request_id, action_id); - reply_testbench.add_routed_request( - request_id.into(), - target_id, - action_id, - token, - Duration::from_millis(1), - ); - assert!(reply_testbench.request_active(request_id)); - let action_reply = GenericMessage::new( - request_id, - sender_id, - ActionReplyPusWithActionId { - action_id, - variant: ActionReplyPus::Completed, - }, - ); - reply_testbench - .handle_action_reply(action_reply, &[]) - .expect("reply handling failure"); - reply_testbench.assert_request_completion_success(None, request_id); - } - - #[test] - fn test_reply_handler_step_success() { - let mut reply_testbench = Pus8ReplyTestbench::new(false); - let request_id = 0x02; - let target_id = 0x05; - let action_id = 0x03; - let token = reply_testbench.init_handling_for_request(request_id, action_id); - reply_testbench.add_routed_request( - request_id.into(), - target_id, - action_id, - token, - Duration::from_millis(1), - ); - let action_reply = GenericActionReplyPus::new_action_reply( - request_id, - action_id, - action_id, - ActionReplyPus::StepSuccess { step: 1 }, - ); - reply_testbench - .handle_action_reply(action_reply, &[]) - .expect("reply handling failure"); - let action_reply = GenericActionReplyPus::new_action_reply( - request_id, - action_id, - action_id, - ActionReplyPus::Completed, - ); - reply_testbench - .handle_action_reply(action_reply, &[]) - .expect("reply handling failure"); - reply_testbench.assert_request_completion_success(Some(1), request_id); - } - - #[test] - fn test_reply_handler_completion_failure() { - let mut reply_testbench = Pus8ReplyTestbench::new(true); - let sender_id = 0x01; - let request_id = 0x02; - let target_id = 0x05; - let action_id = 0x03; - let token = reply_testbench.init_handling_for_request(request_id, action_id); - reply_testbench.add_routed_request( - request_id.into(), - target_id, - action_id, - token, - Duration::from_millis(1), - ); - let params_raw = ParamsRaw::U32(params::U32(5)); - let action_reply = GenericActionReplyPus::new_action_reply( - request_id, - sender_id, - action_id, - ActionReplyPus::CompletionFailed { - error_code: COMPLETION_ERROR_CODE, - params: params_raw.into(), - }, - ); - reply_testbench - .handle_action_reply(action_reply, &[]) - .expect("reply handling failure"); - reply_testbench.assert_request_completion_failure( - None, - request_id, - COMPLETION_ERROR_CODE, - ¶ms_raw.to_vec().unwrap(), - ); - } - - #[test] - fn test_reply_handler_step_failure() { - let mut reply_testbench = Pus8ReplyTestbench::new(false); - let sender_id = 0x01; - let request_id = 0x02; - let target_id = 0x05; - let action_id = 0x03; - let token = reply_testbench.init_handling_for_request(request_id, action_id); - reply_testbench.add_routed_request( - 
request_id.into(), - target_id, - action_id, - token, - Duration::from_millis(1), - ); - let action_reply = GenericActionReplyPus::new_action_reply( - request_id, - sender_id, - action_id, - ActionReplyPus::StepFailed { - error_code: COMPLETION_ERROR_CODE_STEP, - step: 2, - params: ParamsRaw::U32(crate::params::U32(5)).into(), - }, - ); - reply_testbench - .handle_action_reply(action_reply, &[]) - .expect("reply handling failure"); - reply_testbench.assert_request_step_failure(2, request_id); - } - - #[test] - fn test_reply_handler_timeout_handling() { - let mut reply_testbench = Pus8ReplyTestbench::new(true); - let request_id = 0x02; - let target_id = 0x06; - let action_id = 0x03; - let token = reply_testbench.init_handling_for_request(request_id, action_id); - reply_testbench.add_routed_request( - request_id.into(), - target_id, - action_id, - token, - Duration::from_millis(1), - ); - let timeout_param = Duration::from_millis(1).as_millis() as u64; - let timeout_param_raw = timeout_param.to_be_bytes(); - std::thread::sleep(Duration::from_millis(2)); - reply_testbench - .update_time_from_now() - .expect("time update failure"); - reply_testbench.check_for_timeouts(&[]).unwrap(); - reply_testbench.assert_request_completion_failure( - None, - request_id, - TIMEOUT_ERROR_CODE, - &timeout_param_raw, - ); - } - - #[test] - fn test_unrequested_reply() { - let mut reply_testbench = Pus8ReplyTestbench::new(true); - let sender_id = 0x01; - let request_id = 0x02; - let action_id = 0x03; - - let action_reply = GenericActionReplyPus::new_action_reply( - request_id, - sender_id, - action_id, - ActionReplyPus::Completed, - ); - reply_testbench - .handle_action_reply(action_reply, &[]) - .expect("reply handling failure"); - let reply = reply_testbench.next_unrequested_reply(); - assert!(reply.is_some()); - let reply = reply.unwrap(); - assert_eq!(reply.message.action_id, action_id); - assert_eq!(reply.request_id, request_id); - assert_eq!(reply.message.variant, ActionReplyPus::Completed); - } - */ -} +mod tests {} diff --git a/satrs/src/pus/event.rs b/satrs/src/pus/event.rs index 3ac928f..c4363bd 100644 --- a/satrs/src/pus/event.rs +++ b/satrs/src/pus/event.rs @@ -132,7 +132,7 @@ impl EventReportCreator { #[cfg(feature = "alloc")] mod alloc_mod { use super::*; - use crate::pus::{EcssTmSenderCore, EcssTmtcError}; + use crate::pus::{EcssTmSender, EcssTmtcError}; use crate::ComponentId; use alloc::vec; use alloc::vec::Vec; @@ -194,7 +194,7 @@ mod alloc_mod { pub fn event_info( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), time_stamp: &[u8], event_id: impl EcssEnumeration, params: Option<&[u8]>, @@ -211,7 +211,7 @@ mod alloc_mod { pub fn event_low_severity( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), time_stamp: &[u8], event_id: impl EcssEnumeration, params: Option<&[u8]>, @@ -228,7 +228,7 @@ mod alloc_mod { pub fn event_medium_severity( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), time_stamp: &[u8], event_id: impl EcssEnumeration, params: Option<&[u8]>, @@ -245,7 +245,7 @@ mod alloc_mod { pub fn event_high_severity( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), time_stamp: &[u8], event_id: impl EcssEnumeration, params: Option<&[u8]>, @@ -268,7 +268,7 @@ mod tests { use crate::events::{EventU32, Severity}; use crate::pus::test_util::TEST_COMPONENT_ID_0; use crate::pus::tests::CommonTmInfo; - use 
crate::pus::{ChannelWithId, EcssTmSenderCore, EcssTmtcError, PusTmVariant}; + use crate::pus::{ChannelWithId, EcssTmSender, EcssTmtcError, PusTmVariant}; use crate::ComponentId; use spacepackets::ecss::PusError; use spacepackets::ByteConversionError; @@ -301,7 +301,7 @@ mod tests { } } - impl EcssTmSenderCore for TestSender { + impl EcssTmSender for TestSender { fn send_tm(&self, sender_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> { match tm { PusTmVariant::InStore(_) => { diff --git a/satrs/src/pus/event_man.rs b/satrs/src/pus/event_man.rs index eecb375..b8ddb6b 100644 --- a/satrs/src/pus/event_man.rs +++ b/satrs/src/pus/event_man.rs @@ -10,13 +10,11 @@ use hashbrown::HashSet; pub use crate::pus::event::EventReporter; use crate::pus::verification::TcStateToken; #[cfg(feature = "alloc")] -use crate::pus::EcssTmSenderCore; +use crate::pus::EcssTmSender; use crate::pus::EcssTmtcError; #[cfg(feature = "alloc")] -#[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))] pub use alloc_mod::*; #[cfg(feature = "heapless")] -#[cfg_attr(doc_cfg, doc(cfg(feature = "heapless")))] pub use heapless_mod::*; /// This trait allows the PUS event manager implementation to stay generic over various types @@ -44,7 +42,6 @@ pub mod heapless_mod { use crate::events::LargestEventRaw; use core::marker::PhantomData; - #[cfg_attr(doc_cfg, doc(cfg(feature = "heapless")))] // TODO: After a new version of heapless is released which uses hash32 version 0.3, try using // regular Event type again. #[derive(Default)] @@ -178,7 +175,7 @@ pub mod alloc_mod { pub fn generate_pus_event_tm_generic( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), time_stamp: &[u8], event: Event, params: Option<&[u8]>, @@ -240,7 +237,7 @@ pub mod alloc_mod { pub fn generate_pus_event_tm( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), time_stamp: &[u8], event: EventU32TypedSev, aux_data: Option<&[u8]>, @@ -257,9 +254,8 @@ pub mod alloc_mod { #[cfg(test)] mod tests { use super::*; - use crate::events::SeverityInfo; - use crate::pus::PusTmAsVec; use crate::request::UniqueApidTargetId; + use crate::{events::SeverityInfo, tmtc::PacketAsVec}; use std::sync::mpsc::{self, TryRecvError}; const INFO_EVENT: EventU32TypedSev = @@ -284,7 +280,7 @@ mod tests { #[test] fn test_basic() { let event_man = create_basic_man_1(); - let (event_tx, event_rx) = mpsc::channel::(); + let (event_tx, event_rx) = mpsc::channel::(); let event_sent = event_man .generate_pus_event_tm(&event_tx, &EMPTY_STAMP, INFO_EVENT, None) .expect("Sending info event failed"); @@ -297,7 +293,7 @@ mod tests { #[test] fn test_disable_event() { let mut event_man = create_basic_man_2(); - let (event_tx, event_rx) = mpsc::channel::(); + let (event_tx, event_rx) = mpsc::channel::(); // let mut sender = TmAsVecSenderWithMpsc::new(0, "test", event_tx); let res = event_man.disable_tm_for_event(&LOW_SEV_EVENT); assert!(res.is_ok()); @@ -320,7 +316,7 @@ mod tests { #[test] fn test_reenable_event() { let mut event_man = create_basic_man_1(); - let (event_tx, event_rx) = mpsc::channel::(); + let (event_tx, event_rx) = mpsc::channel::(); let mut res = event_man.disable_tm_for_event_with_sev(&INFO_EVENT); assert!(res.is_ok()); assert!(res.unwrap()); diff --git a/satrs/src/pus/event_srv.rs b/satrs/src/pus/event_srv.rs index bb08f58..c782b3a 100644 --- a/satrs/src/pus/event_srv.rs +++ b/satrs/src/pus/event_srv.rs @@ -9,13 +9,13 @@ use std::sync::mpsc::Sender; use 
super::verification::VerificationReportingProvider; use super::{ - EcssTcInMemConverter, EcssTcReceiverCore, EcssTmSenderCore, GenericConversionError, + EcssTcInMemConverter, EcssTcReceiver, EcssTmSender, GenericConversionError, GenericRoutingError, PusServiceHelper, }; pub struct PusEventServiceHandler< - TcReceiver: EcssTcReceiverCore, - TmSender: EcssTmSenderCore, + TcReceiver: EcssTcReceiver, + TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter, VerificationReporter: VerificationReportingProvider, > { @@ -25,8 +25,8 @@ pub struct PusEventServiceHandler< } impl< - TcReceiver: EcssTcReceiverCore, - TmSender: EcssTmSenderCore, + TcReceiver: EcssTcReceiver, + TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter, VerificationReporter: VerificationReportingProvider, > PusEventServiceHandler @@ -167,7 +167,8 @@ mod tests { use crate::pus::verification::{ RequestId, VerificationReporter, VerificationReportingProvider, }; - use crate::pus::{GenericConversionError, MpscTcReceiver, MpscTmInSharedPoolSenderBounded}; + use crate::pus::{GenericConversionError, MpscTcReceiver}; + use crate::tmtc::PacketSenderWithSharedPool; use crate::{ events::EventU32, pus::{ @@ -186,7 +187,7 @@ mod tests { common: PusServiceHandlerWithSharedStoreCommon, handler: PusEventServiceHandler< MpscTcReceiver, - MpscTmInSharedPoolSenderBounded, + PacketSenderWithSharedPool, EcssTcInSharedStoreConverter, VerificationReporter, >, @@ -212,9 +213,13 @@ mod tests { .expect("acceptance success failure") } + fn send_tc(&self, token: &VerificationToken, tc: &PusTcCreator) { + self.common + .send_tc(self.handler.service_helper.id(), token, tc); + } + delegate! { to self.common { - fn send_tc(&self, token: &VerificationToken, tc: &PusTcCreator); fn read_next_tm(&mut self) -> PusTmReader<'_>; fn check_no_tm_available(&self) -> bool; fn check_next_verification_tm(&self, subservice: u8, expected_request_id: RequestId); diff --git a/satrs/src/pus/mod.rs b/satrs/src/pus/mod.rs index 64187e4..78caa4d 100644 --- a/satrs/src/pus/mod.rs +++ b/satrs/src/pus/mod.rs @@ -2,10 +2,13 @@ //! //! This module contains structures to make working with the PUS C standard easier. //! The satrs-example application contains various usage examples of these components. 
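For orientation, a minimal sketch of the renamed TM sender path as it looks after this patch. It assumes the `EcssTmSender` impl for `mpsc::Sender<PacketAsVec>` and the `satrs::pus`/`satrs::tmtc` re-export paths used below; the helper function and its parameters are illustrative only.

use std::sync::mpsc;
use satrs::pus::{EcssTmSender, PusTmVariant};
use satrs::tmtc::PacketAsVec;
use satrs::ComponentId;
use spacepackets::ecss::tm::PusTmCreator;

// Illustrative helper: send one telemetry packet through a plain mpsc channel.
fn send_one_tm(sender_id: ComponentId, tm: PusTmCreator<'_, '_>) {
    // mpsc::Sender<PacketAsVec> implements the (renamed) EcssTmSender trait.
    let (tm_tx, _tm_rx) = mpsc::channel::<PacketAsVec>();
    tm_tx
        .send_tm(sender_id, PusTmVariant::Direct(tm))
        .expect("sending TM failed");
}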
-use crate::pool::{StoreAddr, StoreError}; +use crate::pool::{PoolAddr, PoolError}; use crate::pus::verification::{TcStateAccepted, TcStateToken, VerificationToken}; use crate::queue::{GenericReceiveError, GenericSendError}; use crate::request::{GenericMessage, MessageMetadata, RequestId}; +#[cfg(feature = "alloc")] +use crate::tmtc::PacketAsVec; +use crate::tmtc::PacketInPool; use crate::ComponentId; use core::fmt::{Display, Formatter}; use core::time::Duration; @@ -44,12 +47,12 @@ use self::verification::VerificationReportingProvider; #[derive(Debug, PartialEq, Eq, Clone)] pub enum PusTmVariant<'time, 'src_data> { - InStore(StoreAddr), + InStore(PoolAddr), Direct(PusTmCreator<'time, 'src_data>), } -impl From for PusTmVariant<'_, '_> { - fn from(value: StoreAddr) -> Self { +impl From for PusTmVariant<'_, '_> { + fn from(value: PoolAddr) -> Self { Self::InStore(value) } } @@ -62,10 +65,10 @@ impl<'time, 'src_data> From> for PusTmVariant<'ti #[derive(Debug, Clone, PartialEq, Eq)] pub enum EcssTmtcError { - Store(StoreError), + Store(PoolError), ByteConversion(ByteConversionError), Pus(PusError), - CantSendAddr(StoreAddr), + CantSendAddr(PoolAddr), CantSendDirectTm, Send(GenericSendError), Receive(GenericReceiveError), @@ -99,8 +102,8 @@ impl Display for EcssTmtcError { } } -impl From for EcssTmtcError { - fn from(value: StoreError) -> Self { +impl From for EcssTmtcError { + fn from(value: PoolError) -> Self { Self::Store(value) } } @@ -153,15 +156,15 @@ pub trait ChannelWithId: Send { /// Generic trait for a user supplied sender object. /// /// This sender object is responsible for sending PUS telemetry to a TM sink. -pub trait EcssTmSenderCore: Send { - fn send_tm(&self, source_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError>; +pub trait EcssTmSender: Send { + fn send_tm(&self, sender_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError>; } /// Generic trait for a user supplied sender object. /// /// This sender object is responsible for sending PUS telecommands to a TC recipient. Each /// telecommand can optionally have a token which contains its verification state. -pub trait EcssTcSenderCore { +pub trait EcssTcSender { fn send_tc(&self, tc: PusTcCreator, token: Option) -> Result<(), EcssTmtcError>; } @@ -169,32 +172,32 @@ pub trait EcssTcSenderCore { #[derive(Default)] pub struct EcssTmDummySender {} -impl EcssTmSenderCore for EcssTmDummySender { +impl EcssTmSender for EcssTmDummySender { fn send_tm(&self, _source_id: ComponentId, _tm: PusTmVariant) -> Result<(), EcssTmtcError> { Ok(()) } } -/// A PUS telecommand packet can be stored in memory using different methods. Right now, +/// A PUS telecommand packet can be stored in memory and sent using different methods. Right now, /// storage inside a pool structure like [crate::pool::StaticMemoryPool], and storage inside a /// `Vec` are supported. 
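The two storage flavours described above can be constructed through the `From` conversions defined for the enum below. A small sketch, assuming the `PacketInPool::new`/`PacketAsVec::new` constructors and re-export paths from this patch; the address and raw bytes are caller-supplied placeholders.

use satrs::pool::PoolAddr;
use satrs::pus::TcInMemory;
use satrs::tmtc::{PacketAsVec, PacketInPool};
use satrs::ComponentId;

fn tc_in_memory_variants(sender_id: ComponentId, addr: PoolAddr, raw_tc: Vec<u8>) {
    // Telecommand stored in a shared pool: only the store address travels.
    let in_pool: TcInMemory = PacketInPool::new(sender_id, addr).into();
    // Telecommand stored on the heap: the full packet bytes travel.
    let in_vec: TcInMemory = PacketAsVec::new(sender_id, raw_tc).into();
    assert!(matches!(in_pool, TcInMemory::Pool(_)));
    assert!(matches!(in_vec, TcInMemory::Vec(_)));
}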
#[non_exhaustive] #[derive(Debug, Clone, PartialEq, Eq)] pub enum TcInMemory { - StoreAddr(StoreAddr), + Pool(PacketInPool), #[cfg(feature = "alloc")] - Vec(alloc::vec::Vec), + Vec(PacketAsVec), } -impl From for TcInMemory { - fn from(value: StoreAddr) -> Self { - Self::StoreAddr(value) +impl From for TcInMemory { + fn from(value: PacketInPool) -> Self { + Self::Pool(value) } } #[cfg(feature = "alloc")] -impl From> for TcInMemory { - fn from(value: alloc::vec::Vec) -> Self { +impl From for TcInMemory { + fn from(value: PacketAsVec) -> Self { Self::Vec(value) } } @@ -262,25 +265,26 @@ impl From for TryRecvTmtcError { } } -impl From for TryRecvTmtcError { - fn from(value: StoreError) -> Self { +impl From for TryRecvTmtcError { + fn from(value: PoolError) -> Self { Self::Tmtc(value.into()) } } /// Generic trait for a user supplied receiver object. -pub trait EcssTcReceiverCore { +pub trait EcssTcReceiver { fn recv_tc(&self) -> Result; } -/// Generic trait for objects which can receive ECSS PUS telecommands. This trait is -/// implemented by the [crate::tmtc::pus_distrib::PusDistributor] objects to allow passing PUS TC -/// packets into it. It is generally assumed that the telecommand is stored in some pool structure, -/// and the store address is passed as well. This allows efficient zero-copy forwarding of -/// telecommands. -pub trait ReceivesEcssPusTc { +/// Generic trait for objects which can send ECSS PUS telecommands. +pub trait PacketSenderPusTc: Send { type Error; - fn pass_pus_tc(&mut self, header: &SpHeader, pus_tc: &PusTcReader) -> Result<(), Self::Error>; + fn send_pus_tc( + &self, + sender_id: ComponentId, + header: &SpHeader, + pus_tc: &PusTcReader, + ) -> Result<(), Self::Error>; } pub trait ActiveRequestMapProvider: Sized { @@ -326,7 +330,7 @@ pub trait PusReplyHandler { &mut self, reply: &GenericMessage, active_request: &ActiveRequestInfo, - tm_sender: &impl EcssTmSenderCore, + tm_sender: &impl EcssTmSender, verification_handler: &impl VerificationReportingProvider, time_stamp: &[u8], ) -> Result; @@ -334,14 +338,14 @@ pub trait PusReplyHandler { fn handle_unrequested_reply( &mut self, reply: &GenericMessage, - tm_sender: &impl EcssTmSenderCore, + tm_sender: &impl EcssTmSender, ) -> Result<(), Self::Error>; /// Handle the timeout of an active request. fn handle_request_timeout( &mut self, active_request: &ActiveRequestInfo, - tm_sender: &impl EcssTmSenderCore, + tm_sender: &impl EcssTmSender, verification_handler: &impl VerificationReportingProvider, time_stamp: &[u8], ) -> Result<(), Self::Error>; @@ -353,9 +357,7 @@ pub mod alloc_mod { use super::*; - use crate::pus::verification::VerificationReportingProvider; - - /// Extension trait for [EcssTmSenderCore]. + /// Extension trait for [EcssTmSender]. /// /// It provides additional functionality, for example by implementing the [Downcast] trait /// and the [DynClone] trait. @@ -366,37 +368,36 @@ pub mod alloc_mod { /// [DynClone] allows cloning the trait object as long as the boxed object implements /// [Clone]. #[cfg(feature = "alloc")] - #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))] - pub trait EcssTmSender: EcssTmSenderCore + Downcast + DynClone { + pub trait EcssTmSenderExt: EcssTmSender + Downcast + DynClone { // Remove this once trait upcasting coercion has been implemented. // Tracking issue: https://github.com/rust-lang/rust/issues/65991 - fn upcast(&self) -> &dyn EcssTmSenderCore; + fn upcast(&self) -> &dyn EcssTmSender; // Remove this once trait upcasting coercion has been implemented. 
// Tracking issue: https://github.com/rust-lang/rust/issues/65991 - fn upcast_mut(&mut self) -> &mut dyn EcssTmSenderCore; + fn upcast_mut(&mut self) -> &mut dyn EcssTmSender; } - /// Blanket implementation for all types which implement [EcssTmSenderCore] and are clonable. - impl EcssTmSender for T + /// Blanket implementation for all types which implement [EcssTmSender] and are clonable. + impl EcssTmSenderExt for T where - T: EcssTmSenderCore + Clone + 'static, + T: EcssTmSender + Clone + 'static, { // Remove this once trait upcasting coercion has been implemented. // Tracking issue: https://github.com/rust-lang/rust/issues/65991 - fn upcast(&self) -> &dyn EcssTmSenderCore { + fn upcast(&self) -> &dyn EcssTmSender { self } // Remove this once trait upcasting coercion has been implemented. // Tracking issue: https://github.com/rust-lang/rust/issues/65991 - fn upcast_mut(&mut self) -> &mut dyn EcssTmSenderCore { + fn upcast_mut(&mut self) -> &mut dyn EcssTmSender { self } } - dyn_clone::clone_trait_object!(EcssTmSender); - impl_downcast!(EcssTmSender); + dyn_clone::clone_trait_object!(EcssTmSenderExt); + impl_downcast!(EcssTmSenderExt); - /// Extension trait for [EcssTcSenderCore]. + /// Extension trait for [EcssTcSender]. /// /// It provides additional functionality, for example by implementing the [Downcast] trait /// and the [DynClone] trait. @@ -407,16 +408,15 @@ pub mod alloc_mod { /// [DynClone] allows cloning the trait object as long as the boxed object implements /// [Clone]. #[cfg(feature = "alloc")] - #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))] - pub trait EcssTcSender: EcssTcSenderCore + Downcast + DynClone {} + pub trait EcssTcSenderExt: EcssTcSender + Downcast + DynClone {} - /// Blanket implementation for all types which implement [EcssTcSenderCore] and are clonable. - impl EcssTcSender for T where T: EcssTcSenderCore + Clone + 'static {} + /// Blanket implementation for all types which implement [EcssTcSender] and are clonable. + impl EcssTcSenderExt for T where T: EcssTcSender + Clone + 'static {} - dyn_clone::clone_trait_object!(EcssTcSender); - impl_downcast!(EcssTcSender); + dyn_clone::clone_trait_object!(EcssTcSenderExt); + impl_downcast!(EcssTcSenderExt); - /// Extension trait for [EcssTcReceiverCore]. + /// Extension trait for [EcssTcReceiver]. /// /// It provides additional functionality, for example by implementing the [Downcast] trait /// and the [DynClone] trait. @@ -427,13 +427,12 @@ pub mod alloc_mod { /// [DynClone] allows cloning the trait object as long as the boxed object implements /// [Clone]. #[cfg(feature = "alloc")] - #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))] - pub trait EcssTcReceiver: EcssTcReceiverCore + Downcast {} + pub trait EcssTcReceiverExt: EcssTcReceiver + Downcast {} - /// Blanket implementation for all types which implement [EcssTcReceiverCore] and are clonable. - impl EcssTcReceiver for T where T: EcssTcReceiverCore + 'static {} + /// Blanket implementation for all types which implement [EcssTcReceiver] and are clonable. + impl EcssTcReceiverExt for T where T: EcssTcReceiver + 'static {} - impl_downcast!(EcssTcReceiver); + impl_downcast!(EcssTcReceiverExt); /// This trait is an abstraction for the conversion of a PUS telecommand into a generic request /// type. 
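To make the purpose of these `*Ext` extension traits concrete: the sketch below (assuming the blanket impl for clonable senders and the `satrs::pus` re-exports from this patch) boxes an mpsc-based TM sender, clones the trait object via `DynClone`, and upcasts it back to the plain `EcssTmSender` trait.

use std::sync::mpsc;
use satrs::pus::{EcssTmSender, EcssTmSenderExt};
use satrs::tmtc::PacketAsVec;

fn boxed_sender_round_trip() {
    // mpsc::Sender is Clone, so the blanket impl makes it an EcssTmSenderExt.
    let (tm_tx, _tm_rx) = mpsc::channel::<PacketAsVec>();
    let boxed: Box<dyn EcssTmSenderExt> = Box::new(tm_tx);
    // Cloning the boxed trait object works through the dyn_clone machinery.
    let cloned = boxed.clone();
    // Manual upcast helper until trait upcasting coercion is stabilized.
    let _plain: &dyn EcssTmSender = cloned.upcast();
}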
@@ -457,7 +456,7 @@ pub mod alloc_mod { &mut self, token: VerificationToken, tc: &PusTcReader, - tm_sender: &(impl EcssTmSenderCore + ?Sized), + tm_sender: &(impl EcssTmSender + ?Sized), verif_reporter: &impl VerificationReportingProvider, time_stamp: &[u8], ) -> Result<(ActiveRequestInfo, Request), Self::Error>; @@ -549,7 +548,6 @@ pub mod alloc_mod { > { #[cfg(feature = "std")] - #[cfg_attr(doc_cfg, doc(cfg(feature = "std")))] pub fn new_from_now( active_request_map: ActiveRequestMap, fail_data_buf_size: usize, @@ -636,7 +634,6 @@ pub mod alloc_mod { /// Update the current time used for timeout checks based on the current OS time. #[cfg(feature = "std")] - #[cfg_attr(doc_cfg, doc(cfg(feature = "std")))] pub fn update_time_from_now(&mut self) -> Result<(), std::time::SystemTimeError> { self.current_time = UnixTimestamp::from_now()?; Ok(()) @@ -651,22 +648,20 @@ pub mod alloc_mod { } #[cfg(feature = "std")] -#[cfg_attr(doc_cfg, doc(cfg(feature = "std")))] pub mod std_mod { use crate::pool::{ - PoolProvider, PoolProviderWithGuards, SharedStaticMemoryPool, StoreAddr, StoreError, + PoolAddr, PoolError, PoolProvider, PoolProviderWithGuards, SharedStaticMemoryPool, }; use crate::pus::verification::{TcStateAccepted, VerificationToken}; use crate::pus::{ - EcssTcAndToken, EcssTcReceiverCore, EcssTmSenderCore, EcssTmtcError, GenericReceiveError, + EcssTcAndToken, EcssTcReceiver, EcssTmSender, EcssTmtcError, GenericReceiveError, GenericSendError, PusTmVariant, TryRecvTmtcError, }; - use crate::tmtc::tm_helper::SharedTmPool; + use crate::tmtc::{PacketAsVec, PacketSenderWithSharedPool}; use crate::ComponentId; use alloc::vec::Vec; use core::time::Duration; use spacepackets::ecss::tc::PusTcReader; - use spacepackets::ecss::tm::PusTmCreator; use spacepackets::ecss::WritablePusPacket; use spacepackets::time::StdTimestampError; use spacepackets::ByteConversionError; @@ -680,25 +675,20 @@ pub mod std_mod { use super::verification::{TcStateToken, VerificationReportingProvider}; use super::{AcceptedEcssTcAndToken, ActiveRequestProvider, TcInMemory}; + use crate::tmtc::PacketInPool; - #[derive(Debug)] - pub struct PusTmInPool { - pub source_id: ComponentId, - pub store_addr: StoreAddr, - } - - impl From> for EcssTmtcError { - fn from(_: mpsc::SendError) -> Self { + impl From> for EcssTmtcError { + fn from(_: mpsc::SendError) -> Self { Self::Send(GenericSendError::RxDisconnected) } } - impl EcssTmSenderCore for mpsc::Sender { + impl EcssTmSender for mpsc::Sender { fn send_tm(&self, source_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> { match tm { PusTmVariant::InStore(store_addr) => self - .send(PusTmInPool { - source_id, + .send(PacketInPool { + sender_id: source_id, store_addr, }) .map_err(|_| GenericSendError::RxDisconnected)?, @@ -708,12 +698,12 @@ pub mod std_mod { } } - impl EcssTmSenderCore for mpsc::SyncSender { + impl EcssTmSender for mpsc::SyncSender { fn send_tm(&self, source_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> { match tm { PusTmVariant::InStore(store_addr) => self - .try_send(PusTmInPool { - source_id, + .try_send(PacketInPool { + sender_id: source_id, store_addr, }) .map_err(|e| EcssTmtcError::Send(e.into()))?, @@ -723,21 +713,15 @@ pub mod std_mod { } } - #[derive(Debug)] - pub struct PusTmAsVec { - pub source_id: ComponentId, - pub packet: Vec, - } + pub type MpscTmAsVecSender = mpsc::Sender; - pub type MpscTmAsVecSender = mpsc::Sender; - - impl EcssTmSenderCore for MpscTmAsVecSender { + impl EcssTmSender for MpscTmAsVecSender { fn send_tm(&self, 
source_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> { match tm { PusTmVariant::InStore(addr) => return Err(EcssTmtcError::CantSendAddr(addr)), PusTmVariant::Direct(tm) => self - .send(PusTmAsVec { - source_id, + .send(PacketAsVec { + sender_id: source_id, packet: tm.to_vec()?, }) .map_err(|e| EcssTmtcError::Send(e.into()))?, @@ -746,15 +730,15 @@ pub mod std_mod { } } - pub type MpscTmAsVecSenderBounded = mpsc::SyncSender; + pub type MpscTmAsVecSenderBounded = mpsc::SyncSender; - impl EcssTmSenderCore for MpscTmAsVecSenderBounded { + impl EcssTmSender for MpscTmAsVecSenderBounded { fn send_tm(&self, source_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> { match tm { PusTmVariant::InStore(addr) => return Err(EcssTmtcError::CantSendAddr(addr)), PusTmVariant::Direct(tm) => self - .send(PusTmAsVec { - source_id, + .send(PacketAsVec { + sender_id: source_id, packet: tm.to_vec()?, }) .map_err(|e| EcssTmtcError::Send(e.into()))?, @@ -763,47 +747,9 @@ pub mod std_mod { } } - #[derive(Clone)] - pub struct TmInSharedPoolSender { - shared_tm_store: SharedTmPool, - sender: Sender, - } - - impl TmInSharedPoolSender { - pub fn send_direct_tm( - &self, - source_id: ComponentId, - tm: PusTmCreator, - ) -> Result<(), EcssTmtcError> { - let addr = self.shared_tm_store.add_pus_tm(&tm)?; - self.sender.send_tm(source_id, PusTmVariant::InStore(addr)) - } - } - - impl EcssTmSenderCore for TmInSharedPoolSender { - fn send_tm(&self, source_id: ComponentId, tm: PusTmVariant) -> Result<(), EcssTmtcError> { - if let PusTmVariant::Direct(tm) = tm { - return self.send_direct_tm(source_id, tm); - } - self.sender.send_tm(source_id, tm) - } - } - - impl TmInSharedPoolSender { - pub fn new(shared_tm_store: SharedTmPool, sender: Sender) -> Self { - Self { - shared_tm_store, - sender, - } - } - } - - pub type MpscTmInSharedPoolSender = TmInSharedPoolSender>; - pub type MpscTmInSharedPoolSenderBounded = TmInSharedPoolSender>; - pub type MpscTcReceiver = mpsc::Receiver; - impl EcssTcReceiverCore for MpscTcReceiver { + impl EcssTcReceiver for MpscTcReceiver { fn recv_tc(&self) -> Result { self.try_recv().map_err(|e| match e { TryRecvError::Empty => TryRecvTmtcError::Empty, @@ -819,16 +765,14 @@ pub mod std_mod { use super::*; use crossbeam_channel as cb; - pub type TmInSharedPoolSenderWithCrossbeam = TmInSharedPoolSender>; - - impl From> for EcssTmtcError { - fn from(_: cb::SendError) -> Self { + impl From> for EcssTmtcError { + fn from(_: cb::SendError) -> Self { Self::Send(GenericSendError::RxDisconnected) } } - impl From> for EcssTmtcError { - fn from(value: cb::TrySendError) -> Self { + impl From> for EcssTmtcError { + fn from(value: cb::TrySendError) -> Self { match value { cb::TrySendError::Full(_) => Self::Send(GenericSendError::QueueFull(None)), cb::TrySendError::Disconnected(_) => { @@ -838,37 +782,31 @@ pub mod std_mod { } } - impl EcssTmSenderCore for cb::Sender { + impl EcssTmSender for cb::Sender { fn send_tm( &self, - source_id: ComponentId, + sender_id: ComponentId, tm: PusTmVariant, ) -> Result<(), EcssTmtcError> { match tm { PusTmVariant::InStore(addr) => self - .try_send(PusTmInPool { - source_id, - store_addr: addr, - }) + .try_send(PacketInPool::new(sender_id, addr)) .map_err(|e| EcssTmtcError::Send(e.into()))?, PusTmVariant::Direct(_) => return Err(EcssTmtcError::CantSendDirectTm), }; Ok(()) } } - impl EcssTmSenderCore for cb::Sender { + impl EcssTmSender for cb::Sender { fn send_tm( &self, - source_id: ComponentId, + sender_id: ComponentId, tm: PusTmVariant, ) -> 
Result<(), EcssTmtcError> { match tm { PusTmVariant::InStore(addr) => return Err(EcssTmtcError::CantSendAddr(addr)), PusTmVariant::Direct(tm) => self - .send(PusTmAsVec { - source_id, - packet: tm.to_vec()?, - }) + .send(PacketAsVec::new(sender_id, tm.to_vec()?)) .map_err(|e| EcssTmtcError::Send(e.into()))?, }; Ok(()) @@ -1010,6 +948,8 @@ pub mod std_mod { fn tc_slice_raw(&self) -> &[u8]; + fn sender_id(&self) -> Option; + fn cache_and_convert( &mut self, possible_packet: &TcInMemory, @@ -1032,6 +972,7 @@ pub mod std_mod { /// [SharedStaticMemoryPool]. #[derive(Default, Clone)] pub struct EcssTcInVecConverter { + sender_id: Option, pub pus_tc_raw: Option>, } @@ -1039,16 +980,21 @@ pub mod std_mod { fn cache(&mut self, tc_in_memory: &TcInMemory) -> Result<(), PusTcFromMemError> { self.pus_tc_raw = None; match tc_in_memory { - super::TcInMemory::StoreAddr(_) => { + super::TcInMemory::Pool(_packet_in_pool) => { return Err(PusTcFromMemError::InvalidFormat(tc_in_memory.clone())); } - super::TcInMemory::Vec(vec) => { - self.pus_tc_raw = Some(vec.clone()); + super::TcInMemory::Vec(packet_with_sender) => { + self.pus_tc_raw = Some(packet_with_sender.packet.clone()); + self.sender_id = Some(packet_with_sender.sender_id); } }; Ok(()) } + fn sender_id(&self) -> Option { + self.sender_id + } + fn tc_slice_raw(&self) -> &[u8] { if self.pus_tc_raw.is_none() { return &[]; @@ -1062,6 +1008,7 @@ pub mod std_mod { /// packets should be avoided. Please note that this structure is not able to convert TCs which /// are stored as a `Vec`. pub struct EcssTcInSharedStoreConverter { + sender_id: Option, shared_tc_store: SharedStaticMemoryPool, pus_buf: Vec, } @@ -1069,15 +1016,16 @@ pub mod std_mod { impl EcssTcInSharedStoreConverter { pub fn new(shared_tc_store: SharedStaticMemoryPool, max_expected_tc_size: usize) -> Self { Self { + sender_id: None, shared_tc_store, pus_buf: alloc::vec![0; max_expected_tc_size], } } - pub fn copy_tc_to_buf(&mut self, addr: StoreAddr) -> Result<(), PusTcFromMemError> { + pub fn copy_tc_to_buf(&mut self, addr: PoolAddr) -> Result<(), PusTcFromMemError> { // Keep locked section as short as possible. let mut tc_pool = self.shared_tc_store.write().map_err(|_| { - PusTcFromMemError::EcssTmtc(EcssTmtcError::Store(StoreError::LockError)) + PusTcFromMemError::EcssTmtc(EcssTmtcError::Store(PoolError::LockError)) })?; let tc_size = tc_pool.len_of_data(&addr).map_err(EcssTmtcError::Store)?; if tc_size > self.pus_buf.len() { @@ -1099,8 +1047,9 @@ pub mod std_mod { impl EcssTcInMemConverter for EcssTcInSharedStoreConverter { fn cache(&mut self, tc_in_memory: &TcInMemory) -> Result<(), PusTcFromMemError> { match tc_in_memory { - super::TcInMemory::StoreAddr(addr) => { - self.copy_tc_to_buf(*addr)?; + super::TcInMemory::Pool(packet_in_pool) => { + self.copy_tc_to_buf(packet_in_pool.store_addr)?; + self.sender_id = Some(packet_in_pool.sender_id); } super::TcInMemory::Vec(_) => { return Err(PusTcFromMemError::InvalidFormat(tc_in_memory.clone())); @@ -1112,11 +1061,15 @@ pub mod std_mod { fn tc_slice_raw(&self) -> &[u8] { self.pus_buf.as_ref() } + + fn sender_id(&self) -> Option { + self.sender_id + } } pub struct PusServiceBase< - TcReceiver: EcssTcReceiverCore, - TmSender: EcssTmSenderCore, + TcReceiver: EcssTcReceiver, + TmSender: EcssTmSender, VerificationReporter: VerificationReportingProvider, > { pub id: ComponentId, @@ -1135,8 +1088,8 @@ pub mod std_mod { /// by using the [EcssTcInMemConverter] abstraction. 
This object provides some convenience /// methods to make the generic parts of TC handling easier. pub struct PusServiceHelper< - TcReceiver: EcssTcReceiverCore, - TmSender: EcssTmSenderCore, + TcReceiver: EcssTcReceiver, + TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter, VerificationReporter: VerificationReportingProvider, > { @@ -1145,8 +1098,8 @@ pub mod std_mod { } impl< - TcReceiver: EcssTcReceiverCore, - TmSender: EcssTmSenderCore, + TcReceiver: EcssTcReceiver, + TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter, VerificationReporter: VerificationReportingProvider, > PusServiceHelper @@ -1177,7 +1130,7 @@ pub mod std_mod { &self.common.tm_sender } - /// This function can be used to poll the internal [EcssTcReceiverCore] object for the next + /// This function can be used to poll the internal [EcssTcReceiver] object for the next /// telecommand packet. It will return `Ok(None)` if there are not packets available. /// In any other case, it will perform the acceptance of the ECSS TC packet using the /// internal [VerificationReportingProvider] object. It will then return the telecommand @@ -1236,14 +1189,14 @@ pub mod std_mod { pub type PusServiceHelperStaticWithMpsc = PusServiceHelper< MpscTcReceiver, - MpscTmInSharedPoolSender, + PacketSenderWithSharedPool, TcInMemConverter, VerificationReporter, >; pub type PusServiceHelperStaticWithBoundedMpsc = PusServiceHelper< MpscTcReceiver, - MpscTmInSharedPoolSenderBounded, + PacketSenderWithSharedPool, TcInMemConverter, VerificationReporter, >; @@ -1313,7 +1266,7 @@ pub mod tests { use crate::pool::{PoolProvider, SharedStaticMemoryPool, StaticMemoryPool, StaticPoolConfig}; use crate::pus::verification::{RequestId, VerificationReporter}; - use crate::tmtc::tm_helper::SharedTmPool; + use crate::tmtc::{PacketAsVec, PacketInPool, PacketSenderWithSharedPool, SharedPacketPool}; use crate::ComponentId; use super::test_util::{TEST_APID, TEST_COMPONENT_ID_0}; @@ -1370,14 +1323,14 @@ pub mod tests { pus_buf: RefCell<[u8; 2048]>, tm_buf: [u8; 2048], tc_pool: SharedStaticMemoryPool, - tm_pool: SharedTmPool, + tm_pool: SharedPacketPool, tc_sender: mpsc::SyncSender, - tm_receiver: mpsc::Receiver, + tm_receiver: mpsc::Receiver, } pub type PusServiceHelperStatic = PusServiceHelper< MpscTcReceiver, - MpscTmInSharedPoolSenderBounded, + PacketSenderWithSharedPool, EcssTcInSharedStoreConverter, VerificationReporter, >; @@ -1392,14 +1345,16 @@ pub mod tests { let tc_pool = StaticMemoryPool::new(pool_cfg.clone()); let tm_pool = StaticMemoryPool::new(pool_cfg); let shared_tc_pool = SharedStaticMemoryPool::new(RwLock::new(tc_pool)); - let shared_tm_pool = SharedTmPool::new(tm_pool); + let shared_tm_pool = SharedStaticMemoryPool::new(RwLock::new(tm_pool)); + let shared_tm_pool_wrapper = SharedPacketPool::new(&shared_tm_pool); let (test_srv_tc_tx, test_srv_tc_rx) = mpsc::sync_channel(10); let (tm_tx, tm_rx) = mpsc::sync_channel(10); let verif_cfg = VerificationReporterCfg::new(TEST_APID, 1, 2, 8).unwrap(); let verification_handler = VerificationReporter::new(TEST_COMPONENT_ID_0.id(), &verif_cfg); - let test_srv_tm_sender = TmInSharedPoolSender::new(shared_tm_pool.clone(), tm_tx); + let test_srv_tm_sender = + PacketSenderWithSharedPool::new(tm_tx, shared_tm_pool_wrapper.clone()); let in_store_converter = EcssTcInSharedStoreConverter::new(shared_tc_pool.clone(), 2048); ( @@ -1407,7 +1362,7 @@ pub mod tests { pus_buf: RefCell::new([0; 2048]), tm_buf: [0; 2048], tc_pool: shared_tc_pool, - tm_pool: shared_tm_pool, + tm_pool: 
shared_tm_pool_wrapper, tc_sender: test_srv_tc_tx, tm_receiver: tm_rx, }, @@ -1420,7 +1375,12 @@ pub mod tests { ), ) } - pub fn send_tc(&self, token: &VerificationToken, tc: &PusTcCreator) { + pub fn send_tc( + &self, + sender_id: ComponentId, + token: &VerificationToken, + tc: &PusTcCreator, + ) { let mut mut_buf = self.pus_buf.borrow_mut(); let tc_size = tc.write_to_bytes(mut_buf.as_mut_slice()).unwrap(); let mut tc_pool = self.tc_pool.write().unwrap(); @@ -1428,7 +1388,10 @@ pub mod tests { drop(tc_pool); // Send accepted TC to test service handler. self.tc_sender - .send(EcssTcAndToken::new(addr, *token)) + .send(EcssTcAndToken::new( + PacketInPool::new(sender_id, addr), + *token, + )) .expect("sending tc failed"); } @@ -1469,7 +1432,7 @@ pub mod tests { pub struct PusServiceHandlerWithVecCommon { current_tm: Option>, tc_sender: mpsc::Sender, - tm_receiver: mpsc::Receiver, + tm_receiver: mpsc::Receiver, } pub type PusServiceHelperDynamic = PusServiceHelper< MpscTcReceiver, @@ -1542,11 +1505,19 @@ pub mod tests { } impl PusServiceHandlerWithVecCommon { - pub fn send_tc(&self, token: &VerificationToken, tc: &PusTcCreator) { + pub fn send_tc( + &self, + sender_id: ComponentId, + token: &VerificationToken, + tc: &PusTcCreator, + ) { // Send accepted TC to test service handler. self.tc_sender .send(EcssTcAndToken::new( - TcInMemory::Vec(tc.to_vec().expect("pus tc conversion to vec failed")), + TcInMemory::Vec(PacketAsVec::new( + sender_id, + tc.to_vec().expect("pus tc conversion to vec failed"), + )), *token, )) .expect("sending tc failed"); diff --git a/satrs/src/pus/mode.rs b/satrs/src/pus/mode.rs index abb6b99..5ff78bf 100644 --- a/satrs/src/pus/mode.rs +++ b/satrs/src/pus/mode.rs @@ -26,11 +26,9 @@ pub enum Subservice { } #[cfg(feature = "alloc")] -#[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))] pub mod alloc_mod {} #[cfg(feature = "alloc")] -#[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))] pub mod std_mod {} #[cfg(test)] diff --git a/satrs/src/pus/scheduler.rs b/satrs/src/pus/scheduler.rs index b3b9ef2..1088142 100644 --- a/satrs/src/pus/scheduler.rs +++ b/satrs/src/pus/scheduler.rs @@ -14,7 +14,7 @@ use spacepackets::{ByteConversionError, CcsdsPacket}; #[cfg(feature = "std")] use std::error::Error; -use crate::pool::{PoolProvider, StoreError}; +use crate::pool::{PoolError, PoolProvider}; #[cfg(feature = "alloc")] pub use alloc_mod::*; @@ -151,7 +151,7 @@ pub enum ScheduleError { }, /// Nested time-tagged commands are not allowed. 
NestedScheduledTc, - StoreError(StoreError), + StoreError(PoolError), TcDataEmpty, TimestampError(TimestampError), WrongSubservice(u8), @@ -206,8 +206,8 @@ impl From for ScheduleError { } } -impl From for ScheduleError { - fn from(e: StoreError) -> Self { +impl From for ScheduleError { + fn from(e: PoolError) -> Self { Self::StoreError(e) } } @@ -240,7 +240,7 @@ impl Error for ScheduleError { pub trait PusSchedulerProvider { type TimeProvider: CcsdsTimeProvider + TimeReader; - fn reset(&mut self, store: &mut (impl PoolProvider + ?Sized)) -> Result<(), StoreError>; + fn reset(&mut self, store: &mut (impl PoolProvider + ?Sized)) -> Result<(), PoolError>; fn is_enabled(&self) -> bool; @@ -345,12 +345,9 @@ pub mod alloc_mod { }, vec::Vec, }; - use spacepackets::time::{ - cds::{self, DaysLen24Bits}, - UnixTime, - }; + use spacepackets::time::cds::{self, DaysLen24Bits}; - use crate::pool::StoreAddr; + use crate::pool::PoolAddr; use super::*; @@ -371,8 +368,8 @@ pub mod alloc_mod { } enum DeletionResult { - WithoutStoreDeletion(Option), - WithStoreDeletion(Result), + WithoutStoreDeletion(Option), + WithStoreDeletion(Result), } /// This is the core data structure for scheduling PUS telecommands with [alloc] support. @@ -426,7 +423,6 @@ pub mod alloc_mod { /// Like [Self::new], but sets the `init_current_time` parameter to the current system time. #[cfg(feature = "std")] - #[cfg_attr(doc_cfg, doc(cfg(feature = "std")))] pub fn new_with_current_init_time(time_margin: Duration) -> Result { Ok(Self::new(UnixTime::now()?, time_margin)) } @@ -528,7 +524,7 @@ pub mod alloc_mod { &mut self, time_window: TimeWindow, pool: &mut (impl PoolProvider + ?Sized), - ) -> Result { + ) -> Result { let range = self.retrieve_by_time_filter(time_window); let mut del_packets = 0; let mut res_if_fails = None; @@ -558,7 +554,7 @@ pub mod alloc_mod { pub fn delete_all( &mut self, pool: &mut (impl PoolProvider + ?Sized), - ) -> Result { + ) -> Result { self.delete_by_time_filter(TimeWindow::::new_select_all(), pool) } @@ -604,7 +600,7 @@ pub mod alloc_mod { /// Please note that this function will stop on the first telecommand with a request ID match. /// In case of duplicate IDs (which should generally not happen), this function needs to be /// called repeatedly. 
- pub fn delete_by_request_id(&mut self, req_id: &RequestId) -> Option { + pub fn delete_by_request_id(&mut self, req_id: &RequestId) -> Option { if let DeletionResult::WithoutStoreDeletion(v) = self.delete_by_request_id_internal_without_store_deletion(req_id) { @@ -618,7 +614,7 @@ pub mod alloc_mod { &mut self, req_id: &RequestId, pool: &mut (impl PoolProvider + ?Sized), - ) -> Result { + ) -> Result { if let DeletionResult::WithStoreDeletion(v) = self.delete_by_request_id_internal_with_store_deletion(req_id, pool) { @@ -670,7 +666,6 @@ pub mod alloc_mod { } #[cfg(feature = "std")] - #[cfg_attr(doc_cfg, doc(cfg(feature = "std")))] pub fn update_time_from_now(&mut self) -> Result<(), SystemTimeError> { self.current_time = UnixTime::now()?; Ok(()) @@ -696,7 +691,7 @@ pub mod alloc_mod { releaser: R, tc_store: &mut (impl PoolProvider + ?Sized), tc_buf: &mut [u8], - ) -> Result { + ) -> Result { self.release_telecommands_internal(releaser, tc_store, Some(tc_buf)) } @@ -710,7 +705,7 @@ pub mod alloc_mod { &mut self, releaser: R, tc_store: &mut (impl PoolProvider + ?Sized), - ) -> Result { + ) -> Result { self.release_telecommands_internal(releaser, tc_store, None) } @@ -719,7 +714,7 @@ pub mod alloc_mod { mut releaser: R, tc_store: &mut (impl PoolProvider + ?Sized), mut tc_buf: Option<&mut [u8]>, - ) -> Result { + ) -> Result { let tcs_to_release = self.telecommands_to_release(); let mut released_tcs = 0; let mut store_error = Ok(()); @@ -765,7 +760,7 @@ pub mod alloc_mod { mut releaser: R, tc_store: &(impl PoolProvider + ?Sized), tc_buf: &mut [u8], - ) -> Result, (alloc::vec::Vec, StoreError)> { + ) -> Result, (alloc::vec::Vec, PoolError)> { let tcs_to_release = self.telecommands_to_release(); let mut released_tcs = alloc::vec::Vec::new(); for tc in tcs_to_release { @@ -796,7 +791,7 @@ pub mod alloc_mod { /// The holding store for the telecommands needs to be passed so all the stored telecommands /// can be deleted to avoid a memory leak. If at last one deletion operation fails, the error /// will be returned but the method will still try to delete all the commands in the schedule. 
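Since `StoreError` is renamed to `PoolError` throughout the scheduler API above, downstream error handling has to switch to the new name as well. A small sketch under that assumption, using only the variants visible in this diff (the `classify` helper itself is illustrative):

```rust
use satrs::pool::PoolError;
use satrs::pus::scheduler::ScheduleError;

/// Illustrative helper translating scheduler errors into log-friendly strings.
pub fn classify(err: &ScheduleError) -> &'static str {
    match err {
        // PoolError (formerly StoreError) is wrapped transparently via its From impl.
        ScheduleError::StoreError(PoolError::StoreFull(_)) => "TC pool is full",
        ScheduleError::StoreError(PoolError::DataDoesNotExist(_)) => "stale pool address",
        ScheduleError::StoreError(_) => "other pool error",
        ScheduleError::NestedScheduledTc => "nested time-tagged TC rejected",
        _ => "other scheduling error",
    }
}
```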
- fn reset(&mut self, store: &mut (impl PoolProvider + ?Sized)) -> Result<(), StoreError> { + fn reset(&mut self, store: &mut (impl PoolProvider + ?Sized)) -> Result<(), PoolError> { self.enabled = false; let mut deletion_ok = Ok(()); for tc_lists in &mut self.tc_map { @@ -854,7 +849,7 @@ pub mod alloc_mod { mod tests { use super::*; use crate::pool::{ - PoolProvider, StaticMemoryPool, StaticPoolAddr, StaticPoolConfig, StoreAddr, StoreError, + PoolAddr, PoolError, PoolProvider, StaticMemoryPool, StaticPoolAddr, StaticPoolConfig, }; use alloc::collections::btree_map::Range; use spacepackets::ecss::tc::{PusTcCreator, PusTcReader, PusTcSecondaryHeader}; @@ -993,7 +988,7 @@ mod tests { .insert_unwrapped_and_stored_tc( UnixTime::new_only_secs(100), TcInfo::new( - StoreAddr::from(StaticPoolAddr { + PoolAddr::from(StaticPoolAddr { pool_idx: 0, packet_idx: 1, }), @@ -1010,7 +1005,7 @@ mod tests { .insert_unwrapped_and_stored_tc( UnixTime::new_only_secs(100), TcInfo::new( - StoreAddr::from(StaticPoolAddr { + PoolAddr::from(StaticPoolAddr { pool_idx: 0, packet_idx: 2, }), @@ -1054,8 +1049,8 @@ mod tests { fn common_check( enabled: bool, - store_addr: &StoreAddr, - expected_store_addrs: Vec, + store_addr: &PoolAddr, + expected_store_addrs: Vec, counter: &mut usize, ) { assert!(enabled); @@ -1064,8 +1059,8 @@ mod tests { } fn common_check_disabled( enabled: bool, - store_addr: &StoreAddr, - expected_store_addrs: Vec, + store_addr: &PoolAddr, + expected_store_addrs: Vec, counter: &mut usize, ) { assert!(!enabled); @@ -1519,7 +1514,7 @@ mod tests { // TC could not even be read.. assert_eq!(err.0, 0); match err.1 { - StoreError::DataDoesNotExist(addr) => { + PoolError::DataDoesNotExist(addr) => { assert_eq!(tc_info_0.addr(), addr); } _ => panic!("unexpected error {}", err.1), @@ -1542,7 +1537,7 @@ mod tests { assert!(reset_res.is_err()); let err = reset_res.unwrap_err(); match err { - StoreError::DataDoesNotExist(addr) => { + PoolError::DataDoesNotExist(addr) => { assert_eq!(addr, tc_info_0.addr()); } _ => panic!("unexpected error {err}"), @@ -1644,7 +1639,7 @@ mod tests { let err = insert_res.unwrap_err(); match err { ScheduleError::StoreError(e) => match e { - StoreError::StoreFull(_) => {} + PoolError::StoreFull(_) => {} _ => panic!("unexpected store error {e}"), }, _ => panic!("unexpected error {err}"), diff --git a/satrs/src/pus/scheduler_srv.rs b/satrs/src/pus/scheduler_srv.rs index 6812770..4d538b8 100644 --- a/satrs/src/pus/scheduler_srv.rs +++ b/satrs/src/pus/scheduler_srv.rs @@ -1,12 +1,12 @@ use super::scheduler::PusSchedulerProvider; use super::verification::{VerificationReporter, VerificationReportingProvider}; use super::{ - EcssTcInMemConverter, EcssTcInSharedStoreConverter, EcssTcInVecConverter, EcssTcReceiverCore, - EcssTmSenderCore, MpscTcReceiver, MpscTmInSharedPoolSender, MpscTmInSharedPoolSenderBounded, - PusServiceHelper, PusTmAsVec, + EcssTcInMemConverter, EcssTcInSharedStoreConverter, EcssTcInVecConverter, EcssTcReceiver, + EcssTmSender, MpscTcReceiver, PusServiceHelper, }; use crate::pool::PoolProvider; use crate::pus::{PusPacketHandlerResult, PusPacketHandlingError}; +use crate::tmtc::{PacketAsVec, PacketSenderWithSharedPool}; use alloc::string::ToString; use spacepackets::ecss::{scheduling, PusPacket}; use spacepackets::time::cds::CdsTime; @@ -21,8 +21,8 @@ use std::sync::mpsc; /// [Self::scheduler] and [Self::scheduler_mut] function and then use the scheduler API to release /// telecommands when applicable. 
pub struct PusSchedServiceHandler< - TcReceiver: EcssTcReceiverCore, - TmSender: EcssTmSenderCore, + TcReceiver: EcssTcReceiver, + TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter, VerificationReporter: VerificationReportingProvider, PusScheduler: PusSchedulerProvider, @@ -33,8 +33,8 @@ pub struct PusSchedServiceHandler< } impl< - TcReceiver: EcssTcReceiverCore, - TmSender: EcssTmSenderCore, + TcReceiver: EcssTcReceiver, + TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter, VerificationReporter: VerificationReportingProvider, Scheduler: PusSchedulerProvider, @@ -212,7 +212,7 @@ impl< /// mpsc queues. pub type PusService11SchedHandlerDynWithMpsc = PusSchedServiceHandler< MpscTcReceiver, - mpsc::Sender, + mpsc::Sender, EcssTcInVecConverter, VerificationReporter, PusScheduler, @@ -221,7 +221,7 @@ pub type PusService11SchedHandlerDynWithMpsc = PusSchedServiceHand /// queues. pub type PusService11SchedHandlerDynWithBoundedMpsc = PusSchedServiceHandler< MpscTcReceiver, - mpsc::SyncSender, + mpsc::SyncSender, EcssTcInVecConverter, VerificationReporter, PusScheduler, @@ -230,7 +230,7 @@ pub type PusService11SchedHandlerDynWithBoundedMpsc = PusSchedServ /// mpsc queues. pub type PusService11SchedHandlerStaticWithMpsc = PusSchedServiceHandler< MpscTcReceiver, - MpscTmInSharedPoolSender, + PacketSenderWithSharedPool, EcssTcInSharedStoreConverter, VerificationReporter, PusScheduler, @@ -239,7 +239,7 @@ pub type PusService11SchedHandlerStaticWithMpsc = PusSchedServiceH /// mpsc queues. pub type PusService11SchedHandlerStaticWithBoundedMpsc = PusSchedServiceHandler< MpscTcReceiver, - MpscTmInSharedPoolSenderBounded, + PacketSenderWithSharedPool, EcssTcInSharedStoreConverter, VerificationReporter, PusScheduler, @@ -257,10 +257,8 @@ mod tests { verification::{RequestId, TcStateAccepted, VerificationToken}, EcssTcInSharedStoreConverter, }; - use crate::pus::{ - MpscTcReceiver, MpscTmInSharedPoolSenderBounded, PusPacketHandlerResult, - PusPacketHandlingError, - }; + use crate::pus::{MpscTcReceiver, PusPacketHandlerResult, PusPacketHandlingError}; + use crate::tmtc::PacketSenderWithSharedPool; use alloc::collections::VecDeque; use delegate::delegate; use spacepackets::ecss::scheduling::Subservice; @@ -279,7 +277,7 @@ mod tests { common: PusServiceHandlerWithSharedStoreCommon, handler: PusSchedServiceHandler< MpscTcReceiver, - MpscTmInSharedPoolSenderBounded, + PacketSenderWithSharedPool, EcssTcInSharedStoreConverter, VerificationReporter, TestScheduler, @@ -317,9 +315,13 @@ mod tests { .expect("acceptance success failure") } + fn send_tc(&self, token: &VerificationToken, tc: &PusTcCreator) { + self.common + .send_tc(self.handler.service_helper.id(), token, tc); + } + delegate! 
{ to self.common { - fn send_tc(&self, token: &VerificationToken, tc: &PusTcCreator); fn read_next_tm(&mut self) -> PusTmReader<'_>; fn check_no_tm_available(&self) -> bool; fn check_next_verification_tm(&self, subservice: u8, expected_request_id: RequestId); @@ -342,7 +344,7 @@ mod tests { fn reset( &mut self, _store: &mut (impl crate::pool::PoolProvider + ?Sized), - ) -> Result<(), crate::pool::StoreError> { + ) -> Result<(), crate::pool::PoolError> { self.reset_count += 1; Ok(()) } diff --git a/satrs/src/pus/test.rs b/satrs/src/pus/test.rs index 58abb0f..a1ca93e 100644 --- a/satrs/src/pus/test.rs +++ b/satrs/src/pus/test.rs @@ -1,7 +1,7 @@ use crate::pus::{ - PartialPusHandlingError, PusPacketHandlerResult, PusPacketHandlingError, PusTmAsVec, - PusTmInPool, PusTmVariant, + PartialPusHandlingError, PusPacketHandlerResult, PusPacketHandlingError, PusTmVariant, }; +use crate::tmtc::{PacketAsVec, PacketSenderWithSharedPool}; use spacepackets::ecss::tm::{PusTmCreator, PusTmSecondaryHeader}; use spacepackets::ecss::PusPacket; use spacepackets::SpHeader; @@ -9,16 +9,15 @@ use std::sync::mpsc; use super::verification::{VerificationReporter, VerificationReportingProvider}; use super::{ - EcssTcInMemConverter, EcssTcInSharedStoreConverter, EcssTcInVecConverter, EcssTcReceiverCore, - EcssTmSenderCore, GenericConversionError, MpscTcReceiver, MpscTmInSharedPoolSender, - MpscTmInSharedPoolSenderBounded, PusServiceHelper, + EcssTcInMemConverter, EcssTcInSharedStoreConverter, EcssTcInVecConverter, EcssTcReceiver, + EcssTmSender, GenericConversionError, MpscTcReceiver, PusServiceHelper, }; /// This is a helper class for [std] environments to handle generic PUS 17 (test service) packets. /// This handler only processes ping requests and generates a ping reply for them accordingly. pub struct PusService17TestHandler< - TcReceiver: EcssTcReceiverCore, - TmSender: EcssTmSenderCore, + TcReceiver: EcssTcReceiver, + TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter, VerificationReporter: VerificationReportingProvider, > { @@ -27,8 +26,8 @@ pub struct PusService17TestHandler< } impl< - TcReceiver: EcssTcReceiverCore, - TmSender: EcssTmSenderCore, + TcReceiver: EcssTcReceiver, + TmSender: EcssTmSender, TcInMemConverter: EcssTcInMemConverter, VerificationReporter: VerificationReportingProvider, > PusService17TestHandler @@ -127,7 +126,7 @@ impl< /// mpsc queues. pub type PusService17TestHandlerDynWithMpsc = PusService17TestHandler< MpscTcReceiver, - mpsc::Sender, + mpsc::Sender, EcssTcInVecConverter, VerificationReporter, >; @@ -135,23 +134,15 @@ pub type PusService17TestHandlerDynWithMpsc = PusService17TestHandler< /// queues. pub type PusService17TestHandlerDynWithBoundedMpsc = PusService17TestHandler< MpscTcReceiver, - mpsc::SyncSender, + mpsc::SyncSender, EcssTcInVecConverter, VerificationReporter, >; -/// Helper type definition for a PUS 17 handler with a shared store TMTC memory backend and regular -/// mpsc queues. -pub type PusService17TestHandlerStaticWithMpsc = PusService17TestHandler< - MpscTcReceiver, - MpscTmInSharedPoolSender, - EcssTcInSharedStoreConverter, - VerificationReporter, ->; /// Helper type definition for a PUS 17 handler with a shared store TMTC memory backend and bounded /// mpsc queues. 
pub type PusService17TestHandlerStaticWithBoundedMpsc = PusService17TestHandler< MpscTcReceiver, - MpscTmInSharedPoolSenderBounded, + PacketSenderWithSharedPool, EcssTcInSharedStoreConverter, VerificationReporter, >; @@ -168,9 +159,9 @@ mod tests { use crate::pus::verification::{TcStateAccepted, VerificationToken}; use crate::pus::{ EcssTcInSharedStoreConverter, EcssTcInVecConverter, GenericConversionError, MpscTcReceiver, - MpscTmAsVecSender, MpscTmInSharedPoolSenderBounded, PusPacketHandlerResult, - PusPacketHandlingError, + MpscTmAsVecSender, PusPacketHandlerResult, PusPacketHandlingError, }; + use crate::tmtc::PacketSenderWithSharedPool; use crate::ComponentId; use delegate::delegate; use spacepackets::ecss::tc::{PusTcCreator, PusTcSecondaryHeader}; @@ -185,7 +176,7 @@ mod tests { common: PusServiceHandlerWithSharedStoreCommon, handler: PusService17TestHandler< MpscTcReceiver, - MpscTmInSharedPoolSenderBounded, + PacketSenderWithSharedPool, EcssTcInSharedStoreConverter, VerificationReporter, >, @@ -212,10 +203,14 @@ mod tests { .expect("acceptance success failure") } + fn send_tc(&self, token: &VerificationToken, tc: &PusTcCreator) { + self.common + .send_tc(self.handler.service_helper.id(), token, tc); + } + delegate! { to self.common { fn read_next_tm(&mut self) -> PusTmReader<'_>; - fn send_tc(&self, token: &VerificationToken, tc: &PusTcCreator); fn check_no_tm_available(&self) -> bool; fn check_next_verification_tm( &self, @@ -263,9 +258,13 @@ mod tests { .expect("acceptance success failure") } + fn send_tc(&self, token: &VerificationToken, tc: &PusTcCreator) { + self.common + .send_tc(self.handler.service_helper.id(), token, tc); + } + delegate! { to self.common { - fn send_tc(&self, token: &VerificationToken, tc: &PusTcCreator); fn read_next_tm(&mut self) -> PusTmReader<'_>; fn check_no_tm_available(&self) -> bool; fn check_next_verification_tm( diff --git a/satrs/src/pus/verification.rs b/satrs/src/pus/verification.rs index 2cab962..2f81e41 100644 --- a/satrs/src/pus/verification.rs +++ b/satrs/src/pus/verification.rs @@ -19,10 +19,9 @@ //! use satrs::pus::verification::{ //! VerificationReportingProvider, VerificationReporterCfg, VerificationReporter //! }; +//! use satrs::tmtc::{SharedStaticMemoryPool, PacketSenderWithSharedPool}; //! use satrs::seq_count::SeqCountProviderSimple; //! use satrs::request::UniqueApidTargetId; -//! use satrs::pus::MpscTmInSharedPoolSender; -//! use satrs::tmtc::tm_helper::SharedTmPool; //! use spacepackets::ecss::PusPacket; //! use spacepackets::SpHeader; //! use spacepackets::ecss::tc::{PusTcCreator, PusTcSecondaryHeader}; @@ -34,10 +33,9 @@ //! //! let pool_cfg = StaticPoolConfig::new(vec![(10, 32), (10, 64), (10, 128), (10, 1024)], false); //! let tm_pool = StaticMemoryPool::new(pool_cfg.clone()); -//! let shared_tm_store = SharedTmPool::new(tm_pool); -//! let tm_store = shared_tm_store.clone_backing_pool(); -//! let (verif_tx, verif_rx) = mpsc::channel(); -//! let sender = MpscTmInSharedPoolSender::new(shared_tm_store, verif_tx); +//! let shared_tm_pool = SharedStaticMemoryPool::new(RwLock::new(tm_pool)); +//! let (verif_tx, verif_rx) = mpsc::sync_channel(10); +//! let sender = PacketSenderWithSharedPool::new_with_shared_packet_pool(verif_tx, &shared_tm_pool); //! let cfg = VerificationReporterCfg::new(TEST_APID, 1, 2, 8).unwrap(); //! let mut reporter = VerificationReporter::new(TEST_COMPONENT_ID.id(), &cfg); //! @@ -61,7 +59,7 @@ //! let tm_in_store = verif_rx.recv_timeout(Duration::from_millis(10)).unwrap(); //! let tm_len; //! { -//! 
let mut rg = tm_store.write().expect("Error locking shared pool"); +//! let mut rg = shared_tm_pool.write().expect("Error locking shared pool"); //! let store_guard = rg.read_with_guard(tm_in_store.store_addr); //! tm_len = store_guard.read(&mut tm_buf).expect("Error reading TM slice"); //! } @@ -81,7 +79,7 @@ //! The [integration test](https://egit.irs.uni-stuttgart.de/rust/fsrc-launchpad/src/branch/main/fsrc-core/tests/verification_test.rs) //! for the verification module contains examples how this module could be used in a more complex //! context involving multiple threads -use crate::pus::{source_buffer_large_enough, EcssTmSenderCore, EcssTmtcError}; +use crate::pus::{source_buffer_large_enough, EcssTmSender, EcssTmtcError}; use core::fmt::{Debug, Display, Formatter}; use core::hash::{Hash, Hasher}; use core::marker::PhantomData; @@ -100,18 +98,11 @@ pub use crate::seq_count::SeqCountProviderSimple; pub use spacepackets::ecss::verification::*; #[cfg(feature = "alloc")] -#[cfg_attr(feature = "doc_cfg", doc(cfg(feature = "alloc")))] pub use alloc_mod::*; use crate::request::Apid; use crate::ComponentId; -/* -#[cfg(feature = "std")] -#[cfg_attr(feature = "doc_cfg", doc(cfg(feature = "std")))] -pub use std_mod::*; - */ - /// This is a request identifier as specified in 5.4.11.2 c. of the PUS standard. /// /// This field equivalent to the first two bytes of the CCSDS space packet header. @@ -425,35 +416,35 @@ pub trait VerificationReportingProvider { fn acceptance_success( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, time_stamp: &[u8], ) -> Result, EcssTmtcError>; fn acceptance_failure( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, params: FailParams, ) -> Result<(), EcssTmtcError>; fn start_success( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, time_stamp: &[u8], ) -> Result, EcssTmtcError>; fn start_failure( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, params: FailParams, ) -> Result<(), EcssTmtcError>; fn step_success( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), token: &VerificationToken, time_stamp: &[u8], step: impl EcssEnumeration, @@ -461,21 +452,21 @@ pub trait VerificationReportingProvider { fn step_failure( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, params: FailParamsWithStep, ) -> Result<(), EcssTmtcError>; fn completion_success( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, time_stamp: &[u8], ) -> Result<(), EcssTmtcError>; fn completion_failure( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, params: FailParams, ) -> Result<(), EcssTmtcError>; @@ -886,7 +877,7 @@ pub mod alloc_mod { use spacepackets::ecss::PusError; use super::*; - use crate::{pus::PusTmVariant, ComponentId}; + use crate::pus::PusTmVariant; use core::cell::RefCell; #[derive(Clone)] @@ -1027,7 +1018,7 @@ pub mod alloc_mod { /// Package and send a PUS TM\[1, 1\] packet, see 8.1.2.1 of the PUS standard fn acceptance_success( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, time_stamp: &[u8], 
) -> Result, EcssTmtcError> { @@ -1044,7 +1035,7 @@ pub mod alloc_mod { /// Package and send a PUS TM\[1, 2\] packet, see 8.1.2.2 of the PUS standard fn acceptance_failure( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, params: FailParams, ) -> Result<(), EcssTmtcError> { @@ -1063,7 +1054,7 @@ pub mod alloc_mod { /// Requires a token previously acquired by calling [Self::acceptance_success]. fn start_success( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, time_stamp: &[u8], ) -> Result, EcssTmtcError> { @@ -1083,7 +1074,7 @@ pub mod alloc_mod { /// the token because verification handling is done. fn start_failure( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, params: FailParams, ) -> Result<(), EcssTmtcError> { @@ -1102,7 +1093,7 @@ pub mod alloc_mod { /// Requires a token previously acquired by calling [Self::start_success]. fn step_success( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), token: &VerificationToken, time_stamp: &[u8], step: impl EcssEnumeration, @@ -1123,7 +1114,7 @@ pub mod alloc_mod { /// token because verification handling is done. fn step_failure( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, params: FailParamsWithStep, ) -> Result<(), EcssTmtcError> { @@ -1144,7 +1135,7 @@ pub mod alloc_mod { fn completion_success( &self, // sender_id: ComponentId, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, time_stamp: &[u8], ) -> Result<(), EcssTmtcError> { @@ -1164,7 +1155,7 @@ pub mod alloc_mod { /// token because verification handling is done. 
fn completion_failure( &self, - sender: &(impl EcssTmSenderCore + ?Sized), + sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, params: FailParams, ) -> Result<(), EcssTmtcError> { @@ -1269,7 +1260,7 @@ pub mod test_util { fn acceptance_success( &self, - _sender: &(impl EcssTmSenderCore + ?Sized), + _sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, time_stamp: &[u8], ) -> Result, EcssTmtcError> { @@ -1288,7 +1279,7 @@ pub mod test_util { fn acceptance_failure( &self, - _sender: &(impl EcssTmSenderCore + ?Sized), + _sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, params: FailParams, ) -> Result<(), EcssTmtcError> { @@ -1306,7 +1297,7 @@ pub mod test_util { fn start_success( &self, - _sender: &(impl EcssTmSenderCore + ?Sized), + _sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, time_stamp: &[u8], ) -> Result, EcssTmtcError> { @@ -1325,7 +1316,7 @@ pub mod test_util { fn start_failure( &self, - _sender: &(impl EcssTmSenderCore + ?Sized), + _sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, params: FailParams, ) -> Result<(), EcssTmtcError> { @@ -1343,7 +1334,7 @@ pub mod test_util { fn step_success( &self, - _sender: &(impl EcssTmSenderCore + ?Sized), + _sender: &(impl EcssTmSender + ?Sized), token: &VerificationToken, time_stamp: &[u8], step: impl EcssEnumeration, @@ -1363,7 +1354,7 @@ pub mod test_util { fn step_failure( &self, - _sender: &(impl EcssTmSenderCore + ?Sized), + _sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, params: FailParamsWithStep, ) -> Result<(), EcssTmtcError> { @@ -1381,7 +1372,7 @@ pub mod test_util { fn completion_success( &self, - _sender: &(impl EcssTmSenderCore + ?Sized), + _sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, time_stamp: &[u8], ) -> Result<(), EcssTmtcError> { @@ -1397,7 +1388,7 @@ pub mod test_util { fn completion_failure( &self, - _sender: &(impl EcssTmSenderCore + ?Sized), + _sender: &(impl EcssTmSender + ?Sized), token: VerificationToken, params: FailParams, ) -> Result<(), EcssTmtcError> { @@ -1636,17 +1627,17 @@ pub mod test_util { #[cfg(test)] pub mod tests { - use crate::pool::{StaticMemoryPool, StaticPoolConfig}; + use crate::pool::{SharedStaticMemoryPool, StaticMemoryPool, StaticPoolConfig}; use crate::pus::test_util::{TEST_APID, TEST_COMPONENT_ID_0}; use crate::pus::tests::CommonTmInfo; use crate::pus::verification::{ - EcssTmSenderCore, EcssTmtcError, FailParams, FailParamsWithStep, RequestId, TcStateNone, + EcssTmSender, EcssTmtcError, FailParams, FailParamsWithStep, RequestId, TcStateNone, VerificationReporter, VerificationReporterCfg, VerificationToken, }; - use crate::pus::{ChannelWithId, MpscTmInSharedPoolSender, PusTmVariant}; + use crate::pus::{ChannelWithId, PusTmVariant}; use crate::request::MessageMetadata; use crate::seq_count::{CcsdsSimpleSeqCountProvider, SequenceCountProviderCore}; - use crate::tmtc::tm_helper::SharedTmPool; + use crate::tmtc::{PacketSenderWithSharedPool, SharedPacketPool}; use crate::ComponentId; use alloc::format; use spacepackets::ecss::tc::{PusTcCreator, PusTcReader, PusTcSecondaryHeader}; @@ -1658,7 +1649,7 @@ pub mod tests { use spacepackets::{ByteConversionError, SpHeader}; use std::cell::RefCell; use std::collections::VecDeque; - use std::sync::mpsc; + use std::sync::{mpsc, RwLock}; use std::vec; use std::vec::Vec; @@ -1694,7 +1685,7 @@ pub mod tests { } } - impl EcssTmSenderCore for TestSender { + impl EcssTmSender for TestSender { fn send_tm(&self, sender_id: ComponentId, tm: PusTmVariant) 
-> Result<(), EcssTmtcError> { match tm { PusTmVariant::InStore(_) => { @@ -2128,9 +2119,10 @@ pub mod tests { #[test] fn test_mpsc_verif_send() { let pool = StaticMemoryPool::new(StaticPoolConfig::new(vec![(8, 8)], false)); - let shared_tm_store = SharedTmPool::new(pool); - let (tx, _) = mpsc::channel(); - let mpsc_verif_sender = MpscTmInSharedPoolSender::new(shared_tm_store, tx); + let shared_tm_store = + SharedPacketPool::new(&SharedStaticMemoryPool::new(RwLock::new(pool))); + let (tx, _) = mpsc::sync_channel(10); + let mpsc_verif_sender = PacketSenderWithSharedPool::new(tx, shared_tm_store); is_send(&mpsc_verif_sender); } diff --git a/satrs/src/request.rs b/satrs/src/request.rs index f2104ed..3999278 100644 --- a/satrs/src/request.rs +++ b/satrs/src/request.rs @@ -193,8 +193,6 @@ impl> MessageReceiverWithId { #[cfg(feature = "alloc")] pub mod alloc_mod { - use core::marker::PhantomData; - use crate::queue::GenericSendError; use super::*; @@ -333,7 +331,7 @@ pub mod std_mod { use super::*; use std::sync::mpsc; - use crate::queue::{GenericReceiveError, GenericSendError, GenericTargetedMessagingError}; + use crate::queue::{GenericReceiveError, GenericSendError}; impl MessageSender for mpsc::Sender> { fn send(&self, message: GenericMessage) -> Result<(), GenericTargetedMessagingError> { diff --git a/satrs/src/tmtc/ccsds_distrib.rs b/satrs/src/tmtc/ccsds_distrib.rs deleted file mode 100644 index 607b461..0000000 --- a/satrs/src/tmtc/ccsds_distrib.rs +++ /dev/null @@ -1,403 +0,0 @@ -//! CCSDS packet routing components. -//! -//! The routing components consist of two core components: -//! 1. [CcsdsDistributor] component which dispatches received packets to a user-provided handler -//! 2. [CcsdsPacketHandler] trait which should be implemented by the user-provided packet handler. -//! -//! The [CcsdsDistributor] implements the [ReceivesCcsdsTc] and [ReceivesTcCore] trait which allows to -//! pass raw or CCSDS packets to it. Upon receiving a packet, it performs the following steps: -//! -//! 1. It tries to identify the target Application Process Identifier (APID) based on the -//! respective CCSDS space packet header field. If that process fails, a [ByteConversionError] is -//! returned to the user -//! 2. If a valid APID is found and matches one of the APIDs provided by -//! [CcsdsPacketHandler::valid_apids], it will pass the packet to the user provided -//! [CcsdsPacketHandler::handle_known_apid] function. If no valid APID is found, the packet -//! will be passed to the [CcsdsPacketHandler::handle_unknown_apid] function. -//! -//! # Example -//! -//! ```rust -//! use satrs::ValidatorU16Id; -//! use satrs::tmtc::ccsds_distrib::{CcsdsPacketHandler, CcsdsDistributor}; -//! use satrs::tmtc::{ReceivesTc, ReceivesTcCore}; -//! use spacepackets::{CcsdsPacket, SpHeader}; -//! use spacepackets::ecss::WritablePusPacket; -//! use spacepackets::ecss::tc::PusTcCreator; -//! -//! #[derive (Default)] -//! struct ConcreteApidHandler { -//! known_call_count: u32, -//! unknown_call_count: u32 -//! } -//! -//! impl ConcreteApidHandler { -//! fn mutable_foo(&mut self) {} -//! } -//! -//! impl ValidatorU16Id for ConcreteApidHandler { -//! fn validate(&self, apid: u16) -> bool { apid == 0x0002 } -//! } -//! -//! impl CcsdsPacketHandler for ConcreteApidHandler { -//! type Error = (); -//! fn handle_packet_with_valid_apid(&mut self, sp_header: &SpHeader, tc_raw: &[u8]) -> Result<(), Self::Error> { -//! assert_eq!(sp_header.apid(), 0x002); -//! assert_eq!(tc_raw.len(), 13); -//! self.known_call_count += 1; -//! 
Ok(()) -//! } -//! fn handle_packet_with_unknown_apid(&mut self, sp_header: &SpHeader, tc_raw: &[u8]) -> Result<(), Self::Error> { -//! assert_eq!(sp_header.apid(), 0x003); -//! assert_eq!(tc_raw.len(), 13); -//! self.unknown_call_count += 1; -//! Ok(()) -//! } -//! } -//! -//! let apid_handler = ConcreteApidHandler::default(); -//! let mut ccsds_distributor = CcsdsDistributor::new(apid_handler); -//! -//! // Create and pass PUS telecommand with a valid APID -//! let sp_header = SpHeader::new_for_unseg_tc(0x002, 0x34, 0); -//! let mut pus_tc = PusTcCreator::new_simple(sp_header, 17, 1, &[], true); -//! let mut test_buf: [u8; 32] = [0; 32]; -//! let mut size = pus_tc -//! .write_to_bytes(test_buf.as_mut_slice()) -//! .expect("Error writing TC to buffer"); -//! let tc_slice = &test_buf[0..size]; -//! ccsds_distributor.pass_tc(&tc_slice).expect("Passing TC slice failed"); -//! -//! // Now pass a packet with an unknown APID to the distributor -//! pus_tc.set_apid(0x003); -//! size = pus_tc -//! .write_to_bytes(test_buf.as_mut_slice()) -//! .expect("Error writing TC to buffer"); -//! let tc_slice = &test_buf[0..size]; -//! ccsds_distributor.pass_tc(&tc_slice).expect("Passing TC slice failed"); -//! -//! // Retrieve the APID handler. -//! let handler_ref = ccsds_distributor.packet_handler(); -//! assert_eq!(handler_ref.known_call_count, 1); -//! assert_eq!(handler_ref.unknown_call_count, 1); -//! -//! // Mutable access to the handler. -//! let mutable_handler_ref = ccsds_distributor.packet_handler_mut(); -//! mutable_handler_ref.mutable_foo(); -//! ``` -use crate::{ - tmtc::{ReceivesCcsdsTc, ReceivesTcCore}, - ValidatorU16Id, -}; -use core::fmt::{Display, Formatter}; -use spacepackets::{ByteConversionError, CcsdsPacket, SpHeader}; -#[cfg(feature = "std")] -use std::error::Error; - -/// Generic trait for a handler or dispatcher object handling CCSDS packets. -/// -/// Users should implement this trait on their custom CCSDS packet handler and then pass a boxed -/// instance of this handler to the [CcsdsDistributor]. The distributor will use the trait -/// interface to dispatch received packets to the user based on the Application Process Identifier -/// (APID) field of the CCSDS packet. The APID will be checked using the generic [ValidatorU16Id] -/// trait. -pub trait CcsdsPacketHandler: ValidatorU16Id { - type Error; - - fn handle_packet_with_valid_apid( - &mut self, - sp_header: &SpHeader, - tc_raw: &[u8], - ) -> Result<(), Self::Error>; - - fn handle_packet_with_unknown_apid( - &mut self, - sp_header: &SpHeader, - tc_raw: &[u8], - ) -> Result<(), Self::Error>; -} - -/// The CCSDS distributor dispatches received CCSDS packets to a user provided packet handler. -pub struct CcsdsDistributor, E> { - /// User provided APID handler stored as a generic trait object. - /// It can be cast back to the original concrete type using [Self::packet_handler] or - /// the [Self::packet_handler_mut] method. 
- packet_handler: PacketHandler, -} - -#[derive(Debug, Copy, Clone, PartialEq, Eq)] -pub enum CcsdsError { - CustomError(E), - ByteConversionError(ByteConversionError), -} - -impl Display for CcsdsError { - fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result { - match self { - Self::CustomError(e) => write!(f, "{e}"), - Self::ByteConversionError(e) => write!(f, "{e}"), - } - } -} - -#[cfg(feature = "std")] -impl Error for CcsdsError { - fn source(&self) -> Option<&(dyn Error + 'static)> { - match self { - Self::CustomError(e) => e.source(), - Self::ByteConversionError(e) => e.source(), - } - } -} - -impl, E: 'static> ReceivesCcsdsTc - for CcsdsDistributor -{ - type Error = CcsdsError; - - fn pass_ccsds(&mut self, header: &SpHeader, tc_raw: &[u8]) -> Result<(), Self::Error> { - self.dispatch_ccsds(header, tc_raw) - } -} - -impl, E: 'static> ReceivesTcCore - for CcsdsDistributor -{ - type Error = CcsdsError; - - fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error> { - if tc_raw.len() < 7 { - return Err(CcsdsError::ByteConversionError( - ByteConversionError::FromSliceTooSmall { - found: tc_raw.len(), - expected: 7, - }, - )); - } - let (sp_header, _) = - SpHeader::from_be_bytes(tc_raw).map_err(|e| CcsdsError::ByteConversionError(e))?; - self.dispatch_ccsds(&sp_header, tc_raw) - } -} - -impl, E: 'static> CcsdsDistributor { - pub fn new(packet_handler: PacketHandler) -> Self { - CcsdsDistributor { packet_handler } - } - - pub fn packet_handler(&self) -> &PacketHandler { - &self.packet_handler - } - - pub fn packet_handler_mut(&mut self) -> &mut PacketHandler { - &mut self.packet_handler - } - - fn dispatch_ccsds(&mut self, sp_header: &SpHeader, tc_raw: &[u8]) -> Result<(), CcsdsError> { - let valid_apid = self.packet_handler().validate(sp_header.apid()); - if valid_apid { - self.packet_handler - .handle_packet_with_valid_apid(sp_header, tc_raw) - .map_err(|e| CcsdsError::CustomError(e))?; - return Ok(()); - } - self.packet_handler - .handle_packet_with_unknown_apid(sp_header, tc_raw) - .map_err(|e| CcsdsError::CustomError(e)) - } -} - -#[cfg(test)] -pub(crate) mod tests { - use super::*; - use crate::tmtc::ccsds_distrib::{CcsdsDistributor, CcsdsPacketHandler}; - use spacepackets::ecss::tc::PusTcCreator; - use spacepackets::ecss::WritablePusPacket; - use spacepackets::CcsdsPacket; - use std::collections::VecDeque; - use std::sync::{Arc, Mutex}; - use std::vec::Vec; - - fn is_send(_: &T) {} - - pub fn generate_ping_tc(buf: &mut [u8]) -> &[u8] { - let sph = SpHeader::new_for_unseg_tc(0x002, 0x34, 0); - let pus_tc = PusTcCreator::new_simple(sph, 17, 1, &[], true); - let size = pus_tc - .write_to_bytes(buf) - .expect("Error writing TC to buffer"); - assert_eq!(size, 13); - &buf[0..size] - } - - pub fn generate_ping_tc_as_vec() -> Vec { - let sph = SpHeader::new_for_unseg_tc(0x002, 0x34, 0); - PusTcCreator::new_simple(sph, 17, 1, &[], true) - .to_vec() - .unwrap() - } - - type SharedPacketQueue = Arc)>>>; - pub struct BasicApidHandlerSharedQueue { - pub known_packet_queue: SharedPacketQueue, - pub unknown_packet_queue: SharedPacketQueue, - } - - #[derive(Default)] - pub struct BasicApidHandlerOwnedQueue { - pub known_packet_queue: VecDeque<(u16, Vec)>, - pub unknown_packet_queue: VecDeque<(u16, Vec)>, - } - - impl ValidatorU16Id for BasicApidHandlerSharedQueue { - fn validate(&self, packet_id: u16) -> bool { - [0x000, 0x002].contains(&packet_id) - } - } - - impl CcsdsPacketHandler for BasicApidHandlerSharedQueue { - type Error = (); - - fn handle_packet_with_valid_apid( - &mut self, 
- sp_header: &SpHeader, - tc_raw: &[u8], - ) -> Result<(), Self::Error> { - let mut vec = Vec::new(); - vec.extend_from_slice(tc_raw); - self.known_packet_queue - .lock() - .unwrap() - .push_back((sp_header.apid(), vec)); - Ok(()) - } - - fn handle_packet_with_unknown_apid( - &mut self, - sp_header: &SpHeader, - tc_raw: &[u8], - ) -> Result<(), Self::Error> { - let mut vec = Vec::new(); - vec.extend_from_slice(tc_raw); - self.unknown_packet_queue - .lock() - .unwrap() - .push_back((sp_header.apid(), vec)); - Ok(()) - } - } - - impl ValidatorU16Id for BasicApidHandlerOwnedQueue { - fn validate(&self, packet_id: u16) -> bool { - [0x000, 0x002].contains(&packet_id) - } - } - - impl CcsdsPacketHandler for BasicApidHandlerOwnedQueue { - type Error = (); - - fn handle_packet_with_valid_apid( - &mut self, - sp_header: &SpHeader, - tc_raw: &[u8], - ) -> Result<(), Self::Error> { - let mut vec = Vec::new(); - vec.extend_from_slice(tc_raw); - self.known_packet_queue.push_back((sp_header.apid(), vec)); - Ok(()) - } - - fn handle_packet_with_unknown_apid( - &mut self, - sp_header: &SpHeader, - tc_raw: &[u8], - ) -> Result<(), Self::Error> { - let mut vec = Vec::new(); - vec.extend_from_slice(tc_raw); - self.unknown_packet_queue.push_back((sp_header.apid(), vec)); - Ok(()) - } - } - - #[test] - fn test_distribs_known_apid() { - let known_packet_queue = Arc::new(Mutex::default()); - let unknown_packet_queue = Arc::new(Mutex::default()); - let apid_handler = BasicApidHandlerSharedQueue { - known_packet_queue: known_packet_queue.clone(), - unknown_packet_queue: unknown_packet_queue.clone(), - }; - let mut ccsds_distrib = CcsdsDistributor::new(apid_handler); - is_send(&ccsds_distrib); - let mut test_buf: [u8; 32] = [0; 32]; - let tc_slice = generate_ping_tc(test_buf.as_mut_slice()); - - ccsds_distrib.pass_tc(tc_slice).expect("Passing TC failed"); - let recvd = known_packet_queue.lock().unwrap().pop_front(); - assert!(unknown_packet_queue.lock().unwrap().is_empty()); - assert!(recvd.is_some()); - let (apid, packet) = recvd.unwrap(); - assert_eq!(apid, 0x002); - assert_eq!(packet, tc_slice); - } - - #[test] - fn test_unknown_apid_handling() { - let apid_handler = BasicApidHandlerOwnedQueue::default(); - let mut ccsds_distrib = CcsdsDistributor::new(apid_handler); - let sph = SpHeader::new_for_unseg_tc(0x004, 0x34, 0); - let pus_tc = PusTcCreator::new_simple(sph, 17, 1, &[], true); - let mut test_buf: [u8; 32] = [0; 32]; - pus_tc - .write_to_bytes(test_buf.as_mut_slice()) - .expect("Error writing TC to buffer"); - ccsds_distrib.pass_tc(&test_buf).expect("Passing TC failed"); - assert!(ccsds_distrib.packet_handler().known_packet_queue.is_empty()); - let apid_handler = ccsds_distrib.packet_handler_mut(); - let recvd = apid_handler.unknown_packet_queue.pop_front(); - assert!(recvd.is_some()); - let (apid, packet) = recvd.unwrap(); - assert_eq!(apid, 0x004); - assert_eq!(packet.as_slice(), test_buf); - } - - #[test] - fn test_ccsds_distribution() { - let mut ccsds_distrib = CcsdsDistributor::new(BasicApidHandlerOwnedQueue::default()); - let sph = SpHeader::new_for_unseg_tc(0x002, 0x34, 0); - let pus_tc = PusTcCreator::new_simple(sph, 17, 1, &[], true); - let tc_vec = pus_tc.to_vec().unwrap(); - ccsds_distrib - .pass_ccsds(&sph, &tc_vec) - .expect("passing CCSDS TC failed"); - let recvd = ccsds_distrib - .packet_handler_mut() - .known_packet_queue - .pop_front(); - assert!(recvd.is_some()); - let recvd = recvd.unwrap(); - assert_eq!(recvd.0, 0x002); - assert_eq!(recvd.1, tc_vec); - } - - #[test] - fn 
test_distribution_short_packet_fails() { - let mut ccsds_distrib = CcsdsDistributor::new(BasicApidHandlerOwnedQueue::default()); - let sph = SpHeader::new_for_unseg_tc(0x002, 0x34, 0); - let pus_tc = PusTcCreator::new_simple(sph, 17, 1, &[], true); - let tc_vec = pus_tc.to_vec().unwrap(); - let result = ccsds_distrib.pass_tc(&tc_vec[0..6]); - assert!(result.is_err()); - let error = result.unwrap_err(); - if let CcsdsError::ByteConversionError(ByteConversionError::FromSliceTooSmall { - found, - expected, - }) = error - { - assert_eq!(found, 6); - assert_eq!(expected, 7); - } else { - panic!("Unexpected error variant"); - } - } -} diff --git a/satrs/src/tmtc/mod.rs b/satrs/src/tmtc/mod.rs index d4f3333..f075c42 100644 --- a/satrs/src/tmtc/mod.rs +++ b/satrs/src/tmtc/mod.rs @@ -1,115 +1,651 @@ //! Telemetry and Telecommanding (TMTC) module. Contains packet routing components with special //! support for CCSDS and ECSS packets. //! -//! The distributor modules provided by this module use trait objects provided by the user to -//! directly dispatch received packets to packet listeners based on packet fields like the CCSDS -//! Application Process ID (APID) or the ECSS PUS service type. This allows for fast packet -//! routing without the overhead and complication of using message queues. However, it also requires +//! It is recommended to read the [sat-rs book chapter](https://absatsw.irs.uni-stuttgart.de/projects/sat-rs/book/communication.html) +//! about communication first. The TMTC abstractions provided by this framework are based on the +//! assumption that all telemetry is sent to a special handler object called the TM sink while +//! all received telecommands are sent to a special handler object called TC source. Using +//! a design like this makes it simpler to add new TC packet sources or new telemetry generators: +//! They only need to send the received and generated data to these objects. +use crate::queue::GenericSendError; +use crate::{ + pool::{PoolAddr, PoolError}, + ComponentId, +}; +#[cfg(feature = "std")] +pub use alloc_mod::*; #[cfg(feature = "alloc")] use downcast_rs::{impl_downcast, Downcast}; -use spacepackets::SpHeader; +use spacepackets::{ + ecss::{ + tc::PusTcReader, + tm::{PusTmCreator, PusTmReader}, + }, + SpHeader, +}; +#[cfg(feature = "std")] +use std::sync::mpsc; +#[cfg(feature = "std")] +pub use std_mod::*; -#[cfg(feature = "alloc")] -pub mod ccsds_distrib; -#[cfg(feature = "alloc")] -pub mod pus_distrib; pub mod tm_helper; -#[cfg(feature = "alloc")] -pub use ccsds_distrib::{CcsdsDistributor, CcsdsError, CcsdsPacketHandler}; -#[cfg(feature = "alloc")] -pub use pus_distrib::{PusDistributor, PusServiceDistributor}; +/// Simple type modelling packet stored inside a pool structure. This structure is intended to +/// be used when sending a packet via a message queue, so it also contains the sender ID. +#[derive(Debug, PartialEq, Eq, Clone)] +pub struct PacketInPool { + pub sender_id: ComponentId, + pub store_addr: PoolAddr, +} -/// Generic trait for object which can receive any telecommands in form of a raw bytestream, with +impl PacketInPool { + pub fn new(sender_id: ComponentId, store_addr: PoolAddr) -> Self { + Self { + sender_id, + store_addr, + } + } +} + +/// Generic trait for object which can send any packets in form of a raw bytestream, with /// no assumptions about the received protocol. 
-/// -/// This trait is implemented by both the [crate::tmtc::pus_distrib::PusDistributor] and the -/// [crate::tmtc::ccsds_distrib::CcsdsDistributor] which allows to pass the respective packets in -/// raw byte format into them. -pub trait ReceivesTcCore { +pub trait PacketSenderRaw: Send { type Error; - fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error>; + fn send_packet(&self, sender_id: ComponentId, packet: &[u8]) -> Result<(), Self::Error>; } -/// Extension trait of [ReceivesTcCore] which allows downcasting by implementing [Downcast] and -/// is also sendable. +/// Extension trait of [PacketSenderRaw] which allows downcasting by implementing [Downcast]. #[cfg(feature = "alloc")] -pub trait ReceivesTc: ReceivesTcCore + Downcast + Send { +pub trait PacketSenderRawExt: PacketSenderRaw + Downcast { // Remove this once trait upcasting coercion has been implemented. // Tracking issue: https://github.com/rust-lang/rust/issues/65991 - fn upcast(&self) -> &dyn ReceivesTcCore; + fn upcast(&self) -> &dyn PacketSenderRaw; // Remove this once trait upcasting coercion has been implemented. // Tracking issue: https://github.com/rust-lang/rust/issues/65991 - fn upcast_mut(&mut self) -> &mut dyn ReceivesTcCore; + fn upcast_mut(&mut self) -> &mut dyn PacketSenderRaw; } -/// Blanket implementation to automatically implement [ReceivesTc] when the [alloc] feature -/// is enabled. +/// Blanket implementation to automatically implement [PacketSenderRawExt] when the [alloc] +/// feature is enabled. #[cfg(feature = "alloc")] -impl ReceivesTc for T +impl PacketSenderRawExt for T where - T: ReceivesTcCore + Send + 'static, + T: PacketSenderRaw + Send + 'static, { // Remove this once trait upcasting coercion has been implemented. // Tracking issue: https://github.com/rust-lang/rust/issues/65991 - fn upcast(&self) -> &dyn ReceivesTcCore { + fn upcast(&self) -> &dyn PacketSenderRaw { self } // Remove this once trait upcasting coercion has been implemented. // Tracking issue: https://github.com/rust-lang/rust/issues/65991 - fn upcast_mut(&mut self) -> &mut dyn ReceivesTcCore { + fn upcast_mut(&mut self) -> &mut dyn PacketSenderRaw { self } } #[cfg(feature = "alloc")] -impl_downcast!(ReceivesTc assoc Error); +impl_downcast!(PacketSenderRawExt assoc Error); -/// Generic trait for object which can receive CCSDS space packets, for example ECSS PUS packets -/// for CCSDS File Delivery Protocol (CFDP) packets. -/// -/// This trait is implemented by both the [crate::tmtc::pus_distrib::PusDistributor] and the -/// [crate::tmtc::ccsds_distrib::CcsdsDistributor] which allows -/// to pass the respective packets in raw byte format or in CCSDS format into them. -pub trait ReceivesCcsdsTc { +/// Generic trait for object which can send CCSDS space packets, for example ECSS PUS packets +/// or CCSDS File Delivery Protocol (CFDP) packets wrapped in space packets. +pub trait PacketSenderCcsds: Send { type Error; - fn pass_ccsds(&mut self, header: &SpHeader, tc_raw: &[u8]) -> Result<(), Self::Error>; + fn send_ccsds( + &self, + sender_id: ComponentId, + header: &SpHeader, + tc_raw: &[u8], + ) -> Result<(), Self::Error>; } -/// Generic trait for a TM packet source, with no restrictions on the type of TM. 
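To complement the new raw packet sender abstraction defined above, here is a minimal usage sketch. It assumes the `std` feature, that `ComponentId` is a plain integer ID (as used by the tests at the end of this file), and that `mpsc::Sender<PacketAsVec>` implements `PacketSenderRaw` with `GenericSendError` as the error type; the `UDP_SERVER_ID` constant and the `forward_frame` helper are illustrative names.

```rust
use std::sync::mpsc;

use satrs::queue::GenericSendError;
use satrs::tmtc::{PacketAsVec, PacketSenderRaw};
use satrs::ComponentId;

// Illustrative component ID for the interface which received the raw frame.
const UDP_SERVER_ID: ComponentId = 0x01;

/// Forward a received raw TC frame to the generic TC source.
fn forward_frame(
    tc_source: &impl PacketSenderRaw<Error = GenericSendError>,
    raw_tc: &[u8],
) -> Result<(), GenericSendError> {
    // The mpsc implementations copy the slice into a PacketAsVec before sending it off.
    tc_source.send_packet(UDP_SERVER_ID, raw_tc)
}

fn main() {
    let (tc_tx, tc_rx) = mpsc::channel::<PacketAsVec>();
    forward_frame(&tc_tx, &[0x18, 0x00, 0xc0, 0x00, 0x00, 0x00, 0x00]).unwrap();
    let packet = tc_rx.try_recv().unwrap();
    assert_eq!(packet.sender_id, UDP_SERVER_ID);
}
```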
+#[cfg(feature = "std")] +impl PacketSenderCcsds for mpsc::Sender { + type Error = GenericSendError; + + fn send_ccsds( + &self, + sender_id: ComponentId, + _: &SpHeader, + tc_raw: &[u8], + ) -> Result<(), Self::Error> { + self.send(PacketAsVec::new(sender_id, tc_raw.to_vec())) + .map_err(|_| GenericSendError::RxDisconnected) + } +} + +#[cfg(feature = "std")] +impl PacketSenderCcsds for mpsc::SyncSender { + type Error = GenericSendError; + + fn send_ccsds( + &self, + sender_id: ComponentId, + _: &SpHeader, + packet_raw: &[u8], + ) -> Result<(), Self::Error> { + self.try_send(PacketAsVec::new(sender_id, packet_raw.to_vec())) + .map_err(|e| match e { + mpsc::TrySendError::Full(_) => GenericSendError::QueueFull(None), + mpsc::TrySendError::Disconnected(_) => GenericSendError::RxDisconnected, + }) + } +} + +/// Generic trait for a packet receiver, with no restrictions on the type of packet. /// Implementors write the telemetry into the provided buffer and return the size of the telemetry. -pub trait TmPacketSourceCore { +pub trait PacketSource: Send { type Error; fn retrieve_packet(&mut self, buffer: &mut [u8]) -> Result; } -/// Extension trait of [TmPacketSourceCore] which allows downcasting by implementing [Downcast] and -/// is also sendable. +/// Extension trait of [PacketSource] which allows downcasting by implementing [Downcast]. #[cfg(feature = "alloc")] -pub trait TmPacketSource: TmPacketSourceCore + Downcast + Send { +pub trait PacketSourceExt: PacketSource + Downcast { // Remove this once trait upcasting coercion has been implemented. // Tracking issue: https://github.com/rust-lang/rust/issues/65991 - fn upcast(&self) -> &dyn TmPacketSourceCore; + fn upcast(&self) -> &dyn PacketSource; // Remove this once trait upcasting coercion has been implemented. // Tracking issue: https://github.com/rust-lang/rust/issues/65991 - fn upcast_mut(&mut self) -> &mut dyn TmPacketSourceCore; + fn upcast_mut(&mut self) -> &mut dyn PacketSource; } -/// Blanket implementation to automatically implement [ReceivesTc] when the [alloc] feature +/// Blanket implementation to automatically implement [PacketSourceExt] when the [alloc] feature /// is enabled. #[cfg(feature = "alloc")] -impl TmPacketSource for T +impl PacketSourceExt for T where - T: TmPacketSourceCore + Send + 'static, + T: PacketSource + 'static, { // Remove this once trait upcasting coercion has been implemented. // Tracking issue: https://github.com/rust-lang/rust/issues/65991 - fn upcast(&self) -> &dyn TmPacketSourceCore { + fn upcast(&self) -> &dyn PacketSource { self } // Remove this once trait upcasting coercion has been implemented. // Tracking issue: https://github.com/rust-lang/rust/issues/65991 - fn upcast_mut(&mut self) -> &mut dyn TmPacketSourceCore { + fn upcast_mut(&mut self) -> &mut dyn PacketSource { self } } + +/// Helper trait for any generic (static) store which allows storing raw or CCSDS packets. +pub trait CcsdsPacketPool { + fn add_ccsds_tc(&mut self, _: &SpHeader, tc_raw: &[u8]) -> Result { + self.add_raw_tc(tc_raw) + } + + fn add_raw_tc(&mut self, tc_raw: &[u8]) -> Result; +} + +/// Helper trait for any generic (static) store which allows storing ECSS PUS Telecommand packets. +pub trait PusTcPool { + fn add_pus_tc(&mut self, pus_tc: &PusTcReader) -> Result; +} + +/// Helper trait for any generic (static) store which allows storing ECSS PUS Telemetry packets. 
+pub trait PusTmPool { + fn add_pus_tm_from_reader(&mut self, pus_tm: &PusTmReader) -> Result; + fn add_pus_tm_from_creator(&mut self, pus_tm: &PusTmCreator) -> Result; +} + +/// Generic trait for any sender component able to send packets stored inside a pool structure. +pub trait PacketInPoolSender: Send { + fn send_packet( + &self, + sender_id: ComponentId, + store_addr: PoolAddr, + ) -> Result<(), GenericSendError>; +} + +#[cfg(feature = "alloc")] +pub mod alloc_mod { + use alloc::vec::Vec; + + use super::*; + + /// Simple type modelling packet stored in the heap. This structure is intended to + /// be used when sending a packet via a message queue, so it also contains the sender ID. + #[derive(Debug, PartialEq, Eq, Clone)] + pub struct PacketAsVec { + pub sender_id: ComponentId, + pub packet: Vec, + } + + impl PacketAsVec { + pub fn new(sender_id: ComponentId, packet: Vec) -> Self { + Self { sender_id, packet } + } + } +} +#[cfg(feature = "std")] +pub mod std_mod { + + use core::cell::RefCell; + + #[cfg(feature = "crossbeam")] + use crossbeam_channel as cb; + use spacepackets::ecss::WritablePusPacket; + use thiserror::Error; + + use crate::pool::PoolProvider; + use crate::pus::{EcssTmSender, EcssTmtcError, PacketSenderPusTc}; + + use super::*; + + /// Newtype wrapper around the [SharedStaticMemoryPool] to enable extension helper traits on + /// top of the regular shared memory pool API. + #[derive(Clone)] + pub struct SharedPacketPool(pub SharedStaticMemoryPool); + + impl SharedPacketPool { + pub fn new(pool: &SharedStaticMemoryPool) -> Self { + Self(pool.clone()) + } + } + + impl PusTcPool for SharedPacketPool { + fn add_pus_tc(&mut self, pus_tc: &PusTcReader) -> Result { + let mut pg = self.0.write().map_err(|_| PoolError::LockError)?; + let addr = pg.free_element(pus_tc.len_packed(), |buf| { + buf[0..pus_tc.len_packed()].copy_from_slice(pus_tc.raw_data()); + })?; + Ok(addr) + } + } + + impl PusTmPool for SharedPacketPool { + fn add_pus_tm_from_reader(&mut self, pus_tm: &PusTmReader) -> Result { + let mut pg = self.0.write().map_err(|_| PoolError::LockError)?; + let addr = pg.free_element(pus_tm.len_packed(), |buf| { + buf[0..pus_tm.len_packed()].copy_from_slice(pus_tm.raw_data()); + })?; + Ok(addr) + } + + fn add_pus_tm_from_creator( + &mut self, + pus_tm: &PusTmCreator, + ) -> Result { + let mut pg = self.0.write().map_err(|_| PoolError::LockError)?; + let mut result = Ok(0); + let addr = pg.free_element(pus_tm.len_written(), |buf| { + result = pus_tm.write_to_bytes(buf); + })?; + result?; + Ok(addr) + } + } + + impl CcsdsPacketPool for SharedPacketPool { + fn add_raw_tc(&mut self, tc_raw: &[u8]) -> Result { + let mut pg = self.0.write().map_err(|_| PoolError::LockError)?; + let addr = pg.free_element(tc_raw.len(), |buf| { + buf[0..tc_raw.len()].copy_from_slice(tc_raw); + })?; + Ok(addr) + } + } + + #[cfg(feature = "std")] + impl PacketSenderRaw for mpsc::Sender { + type Error = GenericSendError; + + fn send_packet(&self, sender_id: ComponentId, packet: &[u8]) -> Result<(), Self::Error> { + self.send(PacketAsVec::new(sender_id, packet.to_vec())) + .map_err(|_| GenericSendError::RxDisconnected) + } + } + + #[cfg(feature = "std")] + impl PacketSenderRaw for mpsc::SyncSender { + type Error = GenericSendError; + + fn send_packet(&self, sender_id: ComponentId, tc_raw: &[u8]) -> Result<(), Self::Error> { + self.try_send(PacketAsVec::new(sender_id, tc_raw.to_vec())) + .map_err(|e| match e { + mpsc::TrySendError::Full(_) => GenericSendError::QueueFull(None), + 
+    #[cfg(feature = "std")]
+    impl PacketSenderRaw for mpsc::SyncSender<PacketAsVec> {
+        type Error = GenericSendError;
+
+        fn send_packet(&self, sender_id: ComponentId, tc_raw: &[u8]) -> Result<(), Self::Error> {
+            self.try_send(PacketAsVec::new(sender_id, tc_raw.to_vec()))
+                .map_err(|e| match e {
+                    mpsc::TrySendError::Full(_) => GenericSendError::QueueFull(None),
+                    mpsc::TrySendError::Disconnected(_) => GenericSendError::RxDisconnected,
+                })
+        }
+    }
+
+    #[derive(Debug, Clone, PartialEq, Eq, Error)]
+    pub enum StoreAndSendError {
+        #[error("Store error: {0}")]
+        Store(#[from] PoolError),
+        #[error("Generic send error: {0}")]
+        Send(#[from] GenericSendError),
+    }
+
+    pub use crate::pool::SharedStaticMemoryPool;
+
+    impl PacketInPoolSender for mpsc::Sender<PacketInPool> {
+        fn send_packet(
+            &self,
+            sender_id: ComponentId,
+            store_addr: PoolAddr,
+        ) -> Result<(), GenericSendError> {
+            self.send(PacketInPool::new(sender_id, store_addr))
+                .map_err(|_| GenericSendError::RxDisconnected)
+        }
+    }
+
+    impl PacketInPoolSender for mpsc::SyncSender<PacketInPool> {
+        fn send_packet(
+            &self,
+            sender_id: ComponentId,
+            store_addr: PoolAddr,
+        ) -> Result<(), GenericSendError> {
+            self.try_send(PacketInPool::new(sender_id, store_addr))
+                .map_err(|e| match e {
+                    mpsc::TrySendError::Full(_) => GenericSendError::QueueFull(None),
+                    mpsc::TrySendError::Disconnected(_) => GenericSendError::RxDisconnected,
+                })
+        }
+    }
+
+    #[cfg(feature = "crossbeam")]
+    impl PacketInPoolSender for cb::Sender<PacketInPool> {
+        fn send_packet(
+            &self,
+            sender_id: ComponentId,
+            store_addr: PoolAddr,
+        ) -> Result<(), GenericSendError> {
+            self.try_send(PacketInPool::new(sender_id, store_addr))
+                .map_err(|e| match e {
+                    cb::TrySendError::Full(_) => GenericSendError::QueueFull(None),
+                    cb::TrySendError::Disconnected(_) => GenericSendError::RxDisconnected,
+                })
+        }
+    }
+
+    /// This is the primary structure used to send packets stored in a dedicated memory pool
+    /// structure.
+    #[derive(Clone)]
+    pub struct PacketSenderWithSharedPool<
+        Sender: PacketInPoolSender = mpsc::SyncSender<PacketInPool>,
+        PacketPool: CcsdsPacketPool = SharedPacketPool,
+    > {
+        pub sender: Sender,
+        pub shared_pool: RefCell<PacketPool>,
+    }
+
+    impl<Sender: PacketInPoolSender> PacketSenderWithSharedPool<Sender, SharedPacketPool> {
+        pub fn new_with_shared_packet_pool(
+            packet_sender: Sender,
+            shared_pool: &SharedStaticMemoryPool,
+        ) -> Self {
+            Self {
+                sender: packet_sender,
+                shared_pool: RefCell::new(SharedPacketPool::new(shared_pool)),
+            }
+        }
+    }
+
+    impl<Sender: PacketInPoolSender, PacketStore: CcsdsPacketPool>
+        PacketSenderWithSharedPool<Sender, PacketStore>
+    {
+        pub fn new(packet_sender: Sender, shared_pool: PacketStore) -> Self {
+            Self {
+                sender: packet_sender,
+                shared_pool: RefCell::new(shared_pool),
+            }
+        }
+    }
+
+    impl<Sender: PacketInPoolSender, PacketStore: CcsdsPacketPool + Clone>
+        PacketSenderWithSharedPool<Sender, PacketStore>
+    {
+        pub fn shared_packet_store(&self) -> PacketStore {
+            let pool = self.shared_pool.borrow();
+            pool.clone()
+        }
+    }
+
+    impl<Sender: PacketInPoolSender, PacketStore: CcsdsPacketPool> PacketSenderRaw
+        for PacketSenderWithSharedPool<Sender, PacketStore>
+    {
+        type Error = StoreAndSendError;
+
+        fn send_packet(&self, sender_id: ComponentId, packet: &[u8]) -> Result<(), Self::Error> {
+            let mut shared_pool = self.shared_pool.borrow_mut();
+            let store_addr = shared_pool.add_raw_tc(packet)?;
+            drop(shared_pool);
+            self.sender
+                .send_packet(sender_id, store_addr)
+                .map_err(StoreAndSendError::Send)?;
+            Ok(())
+        }
+    }
+
+    impl<Sender: PacketInPoolSender, PacketStore: CcsdsPacketPool + PusTcPool>
+        PacketSenderPusTc for PacketSenderWithSharedPool<Sender, PacketStore>
+    {
+        type Error = StoreAndSendError;
+
+        fn send_pus_tc(
+            &self,
+            sender_id: ComponentId,
+            _: &SpHeader,
+            pus_tc: &PusTcReader,
+        ) -> Result<(), Self::Error> {
+            let mut shared_pool = self.shared_pool.borrow_mut();
+            let store_addr = shared_pool.add_raw_tc(pus_tc.raw_data())?;
+            drop(shared_pool);
+            self.sender
+                .send_packet(sender_id, store_addr)
+                .map_err(StoreAndSendError::Send)?;
+            Ok(())
+        }
+    }
+
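To show how the pieces introduced above fit together, here is a condensed usage sketch based on the unit tests of this module: the bounded channel only transports pool addresses, while the packet bytes themselves are stored in the shared static pool. The component ID and pool dimensions are arbitrary example values, and the sketch assumes that `SharedPacketPool` is re-exported from `satrs::tmtc` alongside `PacketSenderWithSharedPool` and `SharedStaticMemoryPool`:

```rust
use std::sync::{mpsc, RwLock};

use satrs::pool::{StaticMemoryPool, StaticPoolConfig};
use satrs::tmtc::{
    PacketSenderRaw, PacketSenderWithSharedPool, SharedPacketPool, SharedStaticMemoryPool,
};

fn main() {
    // Static pool with two 8 byte slots, as in the tests below.
    let pool_cfg = StaticPoolConfig::new(vec![(2, 8)], true);
    let shared_pool = SharedPacketPool::new(&SharedStaticMemoryPool::new(RwLock::new(
        StaticMemoryPool::new(pool_cfg),
    )));
    let (tc_tx, tc_rx) = mpsc::sync_channel(10);

    // The sender stores the raw bytes in the pool and only sends the pool address
    // (plus the sender ID) through the queue.
    let tc_sender = PacketSenderWithSharedPool::new(tc_tx, shared_pool.clone());
    tc_sender
        .send_packet(7, &[1, 2, 3, 4, 5])
        .expect("storing and sending TC failed");

    let packet_in_pool = tc_rx.recv().expect("no packet in pool received");
    assert_eq!(packet_in_pool.sender_id, 7);
    // `packet_in_pool.store_addr` can now be used to read the packet back from the
    // pool, for example with the guard based API shown in the tests below.
    let _addr = packet_in_pool.store_addr;
}
```

Keeping the payload in a static pool keeps the queue messages small and of fixed size, which is what makes this sender variant a good fit for the static memory pool setups used throughout the tests.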
+    impl<Sender: PacketInPoolSender, PacketStore: CcsdsPacketPool> PacketSenderCcsds
+        for PacketSenderWithSharedPool<Sender, PacketStore>
+    {
+        type Error = StoreAndSendError;
+
+        fn send_ccsds(
+            &self,
+            sender_id: ComponentId,
+            _sp_header: &SpHeader,
+            tc_raw: &[u8],
+        ) -> Result<(), Self::Error> {
+            self.send_packet(sender_id, tc_raw)
+        }
+    }
+
+    impl<Sender: PacketInPoolSender, PacketStore: CcsdsPacketPool + PusTmPool> EcssTmSender
+        for PacketSenderWithSharedPool<Sender, PacketStore>
+    {
+        fn send_tm(
+            &self,
+            sender_id: crate::ComponentId,
+            tm: crate::pus::PusTmVariant,
+        ) -> Result<(), crate::pus::EcssTmtcError> {
+            let send_addr = |store_addr: PoolAddr| {
+                self.sender
+                    .send_packet(sender_id, store_addr)
+                    .map_err(EcssTmtcError::Send)
+            };
+            match tm {
+                crate::pus::PusTmVariant::InStore(store_addr) => send_addr(store_addr),
+                crate::pus::PusTmVariant::Direct(tm_creator) => {
+                    let mut pool = self.shared_pool.borrow_mut();
+                    let store_addr = pool.add_pus_tm_from_creator(&tm_creator)?;
+                    send_addr(store_addr)
+                }
+            }
+        }
+    }
+}
+
+#[cfg(test)]
+pub(crate) mod tests {
+    use alloc::vec;
+
+    use std::sync::RwLock;
+
+    use crate::pool::{PoolProviderWithGuards, StaticMemoryPool, StaticPoolConfig};
+
+    use super::*;
+    use std::sync::mpsc;
+
+    pub(crate) fn send_with_sender<SendError>(
+        sender_id: ComponentId,
+        packet_sender: &(impl PacketSenderRaw<Error = SendError> + ?Sized),
+        packet: &[u8],
+    ) -> Result<(), SendError> {
+        packet_sender.send_packet(sender_id, packet)
+    }
+
+    #[test]
+    fn test_basic_mpsc_channel_sender_bounded() {
+        let (tx, rx) = mpsc::channel();
+        let some_packet = vec![1, 2, 3, 4, 5];
+        send_with_sender(1, &tx, &some_packet).expect("failed to send packet");
+        let rx_packet = rx.try_recv().unwrap();
+        assert_eq!(some_packet, rx_packet.packet);
+        assert_eq!(1, rx_packet.sender_id);
+    }
+
+    #[test]
+    fn test_basic_mpsc_channel_receiver_dropped() {
+        let (tx, rx) = mpsc::channel();
+        let some_packet = vec![1, 2, 3, 4, 5];
+        drop(rx);
+        let result = send_with_sender(2, &tx, &some_packet);
+        assert!(result.is_err());
+        matches!(result.unwrap_err(), GenericSendError::RxDisconnected);
+    }
+
+    #[test]
+    fn test_basic_mpsc_sync_sender() {
+        let (tx, rx) = mpsc::sync_channel(3);
+        let some_packet = vec![1, 2, 3, 4, 5];
+        send_with_sender(3, &tx, &some_packet).expect("failed to send packet");
+        let rx_packet = rx.try_recv().unwrap();
+        assert_eq!(some_packet, rx_packet.packet);
+        assert_eq!(3, rx_packet.sender_id);
+    }
+
+    #[test]
+    fn test_basic_mpsc_sync_sender_receiver_dropped() {
+        let (tx, rx) = mpsc::sync_channel(3);
+        let some_packet = vec![1, 2, 3, 4, 5];
+        drop(rx);
+        let result = send_with_sender(0, &tx, &some_packet);
+        assert!(result.is_err());
+        matches!(result.unwrap_err(), GenericSendError::RxDisconnected);
+    }
+
+    #[test]
+    fn test_basic_mpsc_sync_sender_queue_full() {
+        let (tx, rx) = mpsc::sync_channel(1);
+        let some_packet = vec![1, 2, 3, 4, 5];
+        send_with_sender(0, &tx, &some_packet).expect("failed to send packet");
+        let result = send_with_sender(1, &tx, &some_packet);
+        assert!(result.is_err());
+        matches!(result.unwrap_err(), GenericSendError::QueueFull(None));
+        let rx_packet = rx.try_recv().unwrap();
+        assert_eq!(some_packet, rx_packet.packet);
+    }
+
+    #[test]
+    fn test_basic_shared_store_sender_unbounded_sender() {
+        let (tc_tx, tc_rx) = mpsc::channel();
+        let pool_cfg = StaticPoolConfig::new(vec![(2, 8)], true);
+        let shared_pool = SharedPacketPool::new(&SharedStaticMemoryPool::new(RwLock::new(
+            StaticMemoryPool::new(pool_cfg),
+        )));
+        let some_packet = vec![1, 2, 3, 4, 5];
+        let tc_sender = PacketSenderWithSharedPool::new(tc_tx, shared_pool.clone());
+        send_with_sender(5, &tc_sender, &some_packet).expect("failed to send packet");
+        let packet_in_pool = tc_rx.try_recv().unwrap();
+        let mut pool = shared_pool.0.write().unwrap();
+        let read_guard = pool.read_with_guard(packet_in_pool.store_addr);
+        assert_eq!(read_guard.read_as_vec().unwrap(), some_packet);
+        assert_eq!(packet_in_pool.sender_id, 5)
+    }
+
+    #[test]
+    fn test_basic_shared_store_sender() {
+        let (tc_tx, tc_rx) = mpsc::sync_channel(10);
+        let pool_cfg = StaticPoolConfig::new(vec![(2, 8)], true);
+        let shared_pool = SharedPacketPool::new(&SharedStaticMemoryPool::new(RwLock::new(
+            StaticMemoryPool::new(pool_cfg),
+        )));
+        let some_packet = vec![1, 2, 3, 4, 5];
+        let tc_sender = PacketSenderWithSharedPool::new(tc_tx, shared_pool.clone());
+        send_with_sender(5, &tc_sender, &some_packet).expect("failed to send packet");
+        let packet_in_pool = tc_rx.try_recv().unwrap();
+        let mut pool = shared_pool.0.write().unwrap();
+        let read_guard = pool.read_with_guard(packet_in_pool.store_addr);
+        assert_eq!(read_guard.read_as_vec().unwrap(), some_packet);
+        assert_eq!(packet_in_pool.sender_id, 5)
+    }
+
+    #[test]
+    fn test_basic_shared_store_sender_rx_dropped() {
+        let (tc_tx, tc_rx) = mpsc::sync_channel(10);
+        let pool_cfg = StaticPoolConfig::new(vec![(2, 8)], true);
+        let shared_pool = SharedPacketPool::new(&SharedStaticMemoryPool::new(RwLock::new(
+            StaticMemoryPool::new(pool_cfg),
+        )));
+        let some_packet = vec![1, 2, 3, 4, 5];
+        drop(tc_rx);
+        let tc_sender = PacketSenderWithSharedPool::new(tc_tx, shared_pool.clone());
+        let result = send_with_sender(2, &tc_sender, &some_packet);
+        assert!(result.is_err());
+        matches!(
+            result.unwrap_err(),
+            StoreAndSendError::Send(GenericSendError::RxDisconnected)
+        );
+    }
+
+    #[test]
+    fn test_basic_shared_store_sender_queue_full() {
+        let (tc_tx, tc_rx) = mpsc::sync_channel(1);
+        let pool_cfg = StaticPoolConfig::new(vec![(2, 8)], true);
+        let shared_pool = SharedPacketPool::new(&SharedStaticMemoryPool::new(RwLock::new(
+            StaticMemoryPool::new(pool_cfg),
+        )));
+        let some_packet = vec![1, 2, 3, 4, 5];
+        let tc_sender = PacketSenderWithSharedPool::new(tc_tx, shared_pool.clone());
+        send_with_sender(3, &tc_sender, &some_packet).expect("failed to send packet");
+        let result = send_with_sender(3, &tc_sender, &some_packet);
+        assert!(result.is_err());
+        matches!(
+            result.unwrap_err(),
+            StoreAndSendError::Send(GenericSendError::RxDisconnected)
+        );
+        let packet_in_pool = tc_rx.try_recv().unwrap();
+        let mut pool = shared_pool.0.write().unwrap();
+        let read_guard = pool.read_with_guard(packet_in_pool.store_addr);
+        assert_eq!(read_guard.read_as_vec().unwrap(), some_packet);
+        assert_eq!(packet_in_pool.sender_id, 3);
+    }
+
+    #[test]
+    fn test_basic_shared_store_store_error() {
+        let (tc_tx, tc_rx) = mpsc::sync_channel(1);
+        let pool_cfg = StaticPoolConfig::new(vec![(1, 8)], true);
+        let shared_pool = SharedPacketPool::new(&SharedStaticMemoryPool::new(RwLock::new(
+            StaticMemoryPool::new(pool_cfg),
+        )));
+        let some_packet = vec![1, 2, 3, 4, 5];
+        let tc_sender = PacketSenderWithSharedPool::new(tc_tx, shared_pool.clone());
+        send_with_sender(4, &tc_sender, &some_packet).expect("failed to send packet");
+        let result = send_with_sender(4, &tc_sender, &some_packet);
+        assert!(result.is_err());
+        matches!(
+            result.unwrap_err(),
+            StoreAndSendError::Store(PoolError::StoreFull(..))
+        );
+        let packet_in_pool = tc_rx.try_recv().unwrap();
+        let mut pool = shared_pool.0.write().unwrap();
+        let read_guard = pool.read_with_guard(packet_in_pool.store_addr);
+        assert_eq!(read_guard.read_as_vec().unwrap(), some_packet);
+        assert_eq!(packet_in_pool.sender_id, 4);
+    }
+}
diff --git a/satrs/src/tmtc/pus_distrib.rs b/satrs/src/tmtc/pus_distrib.rs
deleted file mode 100644
index 53056bc..0000000
--- a/satrs/src/tmtc/pus_distrib.rs
+++ /dev/null
@@ -1,414 +0,0 @@
-//!
ECSS PUS packet routing components. -//! -//! The routing components consist of two core components: -//! 1. [PusDistributor] component which dispatches received packets to a user-provided handler. -//! 2. [PusServiceDistributor] trait which should be implemented by the user-provided PUS packet -//! handler. -//! -//! The [PusDistributor] implements the [ReceivesEcssPusTc], [ReceivesCcsdsTc] and the -//! [ReceivesTcCore] trait which allows to pass raw packets, CCSDS packets and PUS TC packets into -//! it. Upon receiving a packet, it performs the following steps: -//! -//! 1. It tries to extract the [SpHeader] and [spacepackets::ecss::tc::PusTcReader] objects from -//! the raw bytestream. If this process fails, a [PusDistribError::PusError] is returned to the -//! user. -//! 2. If it was possible to extract both components, the packet will be passed to the -//! [PusServiceDistributor::distribute_packet] method provided by the user. -//! -//! # Example -//! -//! ```rust -//! use spacepackets::ecss::WritablePusPacket; -//! use satrs::tmtc::pus_distrib::{PusDistributor, PusServiceDistributor}; -//! use satrs::tmtc::{ReceivesTc, ReceivesTcCore}; -//! use spacepackets::SpHeader; -//! use spacepackets::ecss::tc::{PusTcCreator, PusTcReader}; -//! -//! struct ConcretePusHandler { -//! handler_call_count: u32 -//! } -//! -//! // This is a very simple possible service provider. It increments an internal call count field, -//! // which is used to verify the handler was called -//! impl PusServiceDistributor for ConcretePusHandler { -//! type Error = (); -//! fn distribute_packet(&mut self, service: u8, header: &SpHeader, pus_tc: &PusTcReader) -> Result<(), Self::Error> { -//! assert_eq!(service, 17); -//! assert_eq!(pus_tc.len_packed(), 13); -//! self.handler_call_count += 1; -//! Ok(()) -//! } -//! } -//! -//! let service_handler = ConcretePusHandler { -//! handler_call_count: 0 -//! }; -//! let mut pus_distributor = PusDistributor::new(service_handler); -//! -//! // Create and pass PUS ping telecommand with a valid APID -//! let sp_header = SpHeader::new_for_unseg_tc(0x002, 0x34, 0); -//! let mut pus_tc = PusTcCreator::new_simple(sp_header, 17, 1, &[], true); -//! let mut test_buf: [u8; 32] = [0; 32]; -//! let mut size = pus_tc -//! .write_to_bytes(test_buf.as_mut_slice()) -//! .expect("Error writing TC to buffer"); -//! let tc_slice = &test_buf[0..size]; -//! -//! pus_distributor.pass_tc(tc_slice).expect("Passing PUS telecommand failed"); -//! -//! // User helper function to retrieve concrete class. We check the call count here to verify -//! // that the PUS ping telecommand was routed successfully. -//! let concrete_handler = pus_distributor.service_distributor(); -//! assert_eq!(concrete_handler.handler_call_count, 1); -//! ``` -use crate::pus::ReceivesEcssPusTc; -use crate::tmtc::{ReceivesCcsdsTc, ReceivesTcCore}; -use core::fmt::{Display, Formatter}; -use spacepackets::ecss::tc::PusTcReader; -use spacepackets::ecss::{PusError, PusPacket}; -use spacepackets::SpHeader; -#[cfg(feature = "std")] -use std::error::Error; - -/// Trait for a generic distributor object which can distribute PUS packets based on packet -/// properties like the PUS service, space packet header or any other content of the PUS packet. -pub trait PusServiceDistributor { - type Error; - fn distribute_packet( - &mut self, - service: u8, - header: &SpHeader, - pus_tc: &PusTcReader, - ) -> Result<(), Self::Error>; -} - -/// Generic distributor object which dispatches received packets to a user provided handler. 
-pub struct PusDistributor, E> { - service_distributor: ServiceDistributor, -} - -impl, E> - PusDistributor -{ - pub fn new(service_provider: ServiceDistributor) -> Self { - PusDistributor { - service_distributor: service_provider, - } - } -} - -#[derive(Debug, Copy, Clone, PartialEq, Eq)] -pub enum PusDistribError { - CustomError(E), - PusError(PusError), -} - -impl Display for PusDistribError { - fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result { - match self { - PusDistribError::CustomError(e) => write!(f, "pus distribution error: {e}"), - PusDistribError::PusError(e) => write!(f, "pus distribution error: {e}"), - } - } -} - -#[cfg(feature = "std")] -impl Error for PusDistribError { - fn source(&self) -> Option<&(dyn Error + 'static)> { - match self { - Self::CustomError(e) => e.source(), - Self::PusError(e) => e.source(), - } - } -} - -impl, E: 'static> ReceivesTcCore - for PusDistributor -{ - type Error = PusDistribError; - fn pass_tc(&mut self, tm_raw: &[u8]) -> Result<(), Self::Error> { - // Convert to ccsds and call pass_ccsds - let (sp_header, _) = SpHeader::from_be_bytes(tm_raw) - .map_err(|e| PusDistribError::PusError(PusError::ByteConversion(e)))?; - self.pass_ccsds(&sp_header, tm_raw) - } -} - -impl, E: 'static> ReceivesCcsdsTc - for PusDistributor -{ - type Error = PusDistribError; - fn pass_ccsds(&mut self, header: &SpHeader, tm_raw: &[u8]) -> Result<(), Self::Error> { - let (tc, _) = PusTcReader::new(tm_raw).map_err(|e| PusDistribError::PusError(e))?; - self.pass_pus_tc(header, &tc) - } -} - -impl, E: 'static> ReceivesEcssPusTc - for PusDistributor -{ - type Error = PusDistribError; - fn pass_pus_tc(&mut self, header: &SpHeader, pus_tc: &PusTcReader) -> Result<(), Self::Error> { - self.service_distributor - .distribute_packet(pus_tc.service(), header, pus_tc) - .map_err(|e| PusDistribError::CustomError(e)) - } -} - -impl, E: 'static> - PusDistributor -{ - pub fn service_distributor(&self) -> &ServiceDistributor { - &self.service_distributor - } - - pub fn service_distributor_mut(&mut self) -> &mut ServiceDistributor { - &mut self.service_distributor - } -} - -#[cfg(test)] -mod tests { - use super::*; - use crate::queue::GenericSendError; - use crate::tmtc::ccsds_distrib::tests::{ - generate_ping_tc, generate_ping_tc_as_vec, BasicApidHandlerOwnedQueue, - BasicApidHandlerSharedQueue, - }; - use crate::tmtc::ccsds_distrib::{CcsdsDistributor, CcsdsPacketHandler}; - use crate::ValidatorU16Id; - use alloc::format; - use alloc::vec::Vec; - use spacepackets::ecss::PusError; - use spacepackets::CcsdsPacket; - #[cfg(feature = "std")] - use std::collections::VecDeque; - #[cfg(feature = "std")] - use std::sync::{Arc, Mutex}; - - fn is_send(_: &T) {} - - pub struct PacketInfo { - pub service: u8, - pub apid: u16, - pub packet: Vec, - } - - struct PusHandlerSharedQueue(Arc>>); - - #[derive(Default)] - struct PusHandlerOwnedQueue(VecDeque); - - impl PusServiceDistributor for PusHandlerSharedQueue { - type Error = PusError; - fn distribute_packet( - &mut self, - service: u8, - sp_header: &SpHeader, - pus_tc: &PusTcReader, - ) -> Result<(), Self::Error> { - let mut packet: Vec = Vec::new(); - packet.extend_from_slice(pus_tc.raw_data()); - self.0 - .lock() - .expect("Mutex lock failed") - .push_back(PacketInfo { - service, - apid: sp_header.apid(), - packet, - }); - Ok(()) - } - } - - impl PusServiceDistributor for PusHandlerOwnedQueue { - type Error = PusError; - fn distribute_packet( - &mut self, - service: u8, - sp_header: &SpHeader, - pus_tc: &PusTcReader, - ) -> Result<(), 
Self::Error> { - let mut packet: Vec = Vec::new(); - packet.extend_from_slice(pus_tc.raw_data()); - self.0.push_back(PacketInfo { - service, - apid: sp_header.apid(), - packet, - }); - Ok(()) - } - } - - struct ApidHandlerShared { - pub pus_distrib: PusDistributor, - pub handler_base: BasicApidHandlerSharedQueue, - } - - struct ApidHandlerOwned { - pub pus_distrib: PusDistributor, - handler_base: BasicApidHandlerOwnedQueue, - } - - macro_rules! apid_handler_impl { - () => { - type Error = PusError; - - fn handle_packet_with_valid_apid( - &mut self, - sp_header: &SpHeader, - tc_raw: &[u8], - ) -> Result<(), Self::Error> { - self.handler_base - .handle_packet_with_valid_apid(&sp_header, tc_raw) - .ok() - .expect("Unexpected error"); - match self.pus_distrib.pass_ccsds(&sp_header, tc_raw) { - Ok(_) => Ok(()), - Err(e) => match e { - PusDistribError::CustomError(_) => Ok(()), - PusDistribError::PusError(e) => Err(e), - }, - } - } - - fn handle_packet_with_unknown_apid( - &mut self, - sp_header: &SpHeader, - tc_raw: &[u8], - ) -> Result<(), Self::Error> { - self.handler_base - .handle_packet_with_unknown_apid(&sp_header, tc_raw) - .ok() - .expect("Unexpected error"); - Ok(()) - } - }; - } - - impl ValidatorU16Id for ApidHandlerOwned { - fn validate(&self, packet_id: u16) -> bool { - [0x000, 0x002].contains(&packet_id) - } - } - - impl ValidatorU16Id for ApidHandlerShared { - fn validate(&self, packet_id: u16) -> bool { - [0x000, 0x002].contains(&packet_id) - } - } - - impl CcsdsPacketHandler for ApidHandlerOwned { - apid_handler_impl!(); - } - - impl CcsdsPacketHandler for ApidHandlerShared { - apid_handler_impl!(); - } - - #[test] - fn test_pus_distribution_as_raw_packet() { - let mut pus_distrib = PusDistributor::new(PusHandlerOwnedQueue::default()); - let tc = generate_ping_tc_as_vec(); - let result = pus_distrib.pass_tc(&tc); - assert!(result.is_ok()); - assert_eq!(pus_distrib.service_distributor_mut().0.len(), 1); - let packet_info = pus_distrib.service_distributor_mut().0.pop_front().unwrap(); - assert_eq!(packet_info.service, 17); - assert_eq!(packet_info.apid, 0x002); - assert_eq!(packet_info.packet, tc); - } - - #[test] - fn test_pus_distribution_combined_handler() { - let known_packet_queue = Arc::new(Mutex::default()); - let unknown_packet_queue = Arc::new(Mutex::default()); - let pus_queue = Arc::new(Mutex::default()); - let pus_handler = PusHandlerSharedQueue(pus_queue.clone()); - let handler_base = BasicApidHandlerSharedQueue { - known_packet_queue: known_packet_queue.clone(), - unknown_packet_queue: unknown_packet_queue.clone(), - }; - - let pus_distrib = PusDistributor::new(pus_handler); - is_send(&pus_distrib); - let apid_handler = ApidHandlerShared { - pus_distrib, - handler_base, - }; - let mut ccsds_distrib = CcsdsDistributor::new(apid_handler); - let mut test_buf: [u8; 32] = [0; 32]; - let tc_slice = generate_ping_tc(test_buf.as_mut_slice()); - - // Pass packet to distributor - ccsds_distrib - .pass_tc(tc_slice) - .expect("Passing TC slice failed"); - let recvd_ccsds = known_packet_queue.lock().unwrap().pop_front(); - assert!(unknown_packet_queue.lock().unwrap().is_empty()); - assert!(recvd_ccsds.is_some()); - let (apid, packet) = recvd_ccsds.unwrap(); - assert_eq!(apid, 0x002); - assert_eq!(packet.as_slice(), tc_slice); - let recvd_pus = pus_queue.lock().unwrap().pop_front(); - assert!(recvd_pus.is_some()); - let packet_info = recvd_pus.unwrap(); - assert_eq!(packet_info.service, 17); - assert_eq!(packet_info.apid, 0x002); - assert_eq!(packet_info.packet, tc_slice); - } - 
- #[test] - fn test_accessing_combined_distributor() { - let pus_handler = PusHandlerOwnedQueue::default(); - let handler_base = BasicApidHandlerOwnedQueue::default(); - let pus_distrib = PusDistributor::new(pus_handler); - - let apid_handler = ApidHandlerOwned { - pus_distrib, - handler_base, - }; - let mut ccsds_distrib = CcsdsDistributor::new(apid_handler); - - let mut test_buf: [u8; 32] = [0; 32]; - let tc_slice = generate_ping_tc(test_buf.as_mut_slice()); - - ccsds_distrib - .pass_tc(tc_slice) - .expect("Passing TC slice failed"); - - let apid_handler_casted_back = ccsds_distrib.packet_handler_mut(); - assert!(!apid_handler_casted_back - .handler_base - .known_packet_queue - .is_empty()); - let handler_owned_queue = apid_handler_casted_back - .pus_distrib - .service_distributor_mut(); - assert!(!handler_owned_queue.0.is_empty()); - let packet_info = handler_owned_queue.0.pop_front().unwrap(); - assert_eq!(packet_info.service, 17); - assert_eq!(packet_info.apid, 0x002); - assert_eq!(packet_info.packet, tc_slice); - } - - #[test] - fn test_pus_distrib_error_custom_error() { - let error = PusDistribError::CustomError(GenericSendError::RxDisconnected); - let error_string = format!("{}", error); - assert_eq!( - error_string, - "pus distribution error: rx side has disconnected" - ); - } - - #[test] - fn test_pus_distrib_error_pus_error() { - let error = PusDistribError::::PusError(PusError::CrcCalculationMissing); - let error_string = format!("{}", error); - assert_eq!( - error_string, - "pus distribution error: crc16 was not calculated" - ); - } -} diff --git a/satrs/src/tmtc/tm_helper.rs b/satrs/src/tmtc/tm_helper.rs index a305472..c8c488e 100644 --- a/satrs/src/tmtc/tm_helper.rs +++ b/satrs/src/tmtc/tm_helper.rs @@ -3,50 +3,6 @@ use spacepackets::time::cds::CdsTime; use spacepackets::time::TimeWriter; use spacepackets::SpHeader; -#[cfg(feature = "std")] -pub use std_mod::*; - -#[cfg(feature = "std")] -pub mod std_mod { - use crate::pool::{ - PoolProvider, SharedStaticMemoryPool, StaticMemoryPool, StoreAddr, StoreError, - }; - use crate::pus::EcssTmtcError; - use spacepackets::ecss::tm::PusTmCreator; - use spacepackets::ecss::WritablePusPacket; - use std::sync::{Arc, RwLock}; - - #[derive(Clone)] - pub struct SharedTmPool(pub SharedStaticMemoryPool); - - impl SharedTmPool { - pub fn new(shared_pool: StaticMemoryPool) -> Self { - Self(Arc::new(RwLock::new(shared_pool))) - } - - pub fn clone_backing_pool(&self) -> SharedStaticMemoryPool { - self.0.clone() - } - pub fn shared_pool(&self) -> &SharedStaticMemoryPool { - &self.0 - } - - pub fn shared_pool_mut(&mut self) -> &mut SharedStaticMemoryPool { - &mut self.0 - } - - pub fn add_pus_tm(&self, pus_tm: &PusTmCreator) -> Result { - let mut pg = self.0.write().map_err(|_| StoreError::LockError)?; - let addr = pg.free_element(pus_tm.len_written(), |buf| { - pus_tm - .write_to_bytes(buf) - .expect("writing PUS TM to store failed"); - })?; - Ok(addr) - } - } -} - pub struct PusTmWithCdsShortHelper { apid: u16, cds_short_buf: [u8; 7], diff --git a/satrs/tests/pools.rs b/satrs/tests/pools.rs index f2baece..9e61097 100644 --- a/satrs/tests/pools.rs +++ b/satrs/tests/pools.rs @@ -1,4 +1,4 @@ -use satrs::pool::{PoolGuard, PoolProvider, StaticMemoryPool, StaticPoolConfig, StoreAddr}; +use satrs::pool::{PoolAddr, PoolGuard, PoolProvider, StaticMemoryPool, StaticPoolConfig}; use std::ops::DerefMut; use std::sync::mpsc; use std::sync::mpsc::{Receiver, Sender}; @@ -12,7 +12,7 @@ fn threaded_usage() { let pool_cfg = StaticPoolConfig::new(vec![(16, 6), (32, 
3), (8, 12)], false); let shared_pool = Arc::new(RwLock::new(StaticMemoryPool::new(pool_cfg))); let shared_clone = shared_pool.clone(); - let (tx, rx): (Sender, Receiver) = mpsc::channel(); + let (tx, rx): (Sender, Receiver) = mpsc::channel(); let jh0 = thread::spawn(move || { let mut dummy = shared_pool.write().unwrap(); let addr = dummy.add(&DUMMY_DATA).expect("Writing data failed"); diff --git a/satrs/tests/pus_events.rs b/satrs/tests/pus_events.rs index 6fc518f..a5c3061 100644 --- a/satrs/tests/pus_events.rs +++ b/satrs/tests/pus_events.rs @@ -7,8 +7,8 @@ use satrs::params::U32Pair; use satrs::params::{Params, ParamsHeapless, WritableToBeBytes}; use satrs::pus::event_man::{DefaultPusEventMgmtBackend, EventReporter, PusEventDispatcher}; use satrs::pus::test_util::TEST_COMPONENT_ID_0; -use satrs::pus::PusTmAsVec; use satrs::request::UniqueApidTargetId; +use satrs::tmtc::PacketAsVec; use spacepackets::ecss::tm::PusTmReader; use spacepackets::ecss::{PusError, PusPacket}; use std::sync::mpsc::{self, SendError, TryRecvError}; @@ -37,7 +37,7 @@ fn test_threaded_usage() { let pus_event_man_send_provider = EventU32SenderMpsc::new(1, pus_event_man_tx); event_man.subscribe_all(pus_event_man_send_provider.target_id()); event_man.add_sender(pus_event_man_send_provider); - let (event_tx, event_rx) = mpsc::channel::(); + let (event_tx, event_rx) = mpsc::channel::(); let reporter = EventReporter::new(TEST_ID.raw(), 0x02, 0, 128).expect("Creating event reporter failed"); let pus_event_man = PusEventDispatcher::new(reporter, DefaultPusEventMgmtBackend::default()); diff --git a/satrs/tests/pus_verification.rs b/satrs/tests/pus_verification.rs index 743535f..4451909 100644 --- a/satrs/tests/pus_verification.rs +++ b/satrs/tests/pus_verification.rs @@ -1,4 +1,4 @@ -// #[cfg(feature = "crossbeam")] +#[cfg(feature = "crossbeam")] pub mod crossbeam_test { use hashbrown::HashMap; use satrs::pool::{PoolProvider, PoolProviderWithGuards, StaticMemoryPool, StaticPoolConfig}; @@ -7,13 +7,12 @@ pub mod crossbeam_test { FailParams, RequestId, VerificationReporter, VerificationReporterCfg, VerificationReportingProvider, }; - use satrs::pus::TmInSharedPoolSenderWithCrossbeam; - use satrs::tmtc::tm_helper::SharedTmPool; + use satrs::tmtc::{PacketSenderWithSharedPool, SharedStaticMemoryPool}; use spacepackets::ecss::tc::{PusTcCreator, PusTcReader, PusTcSecondaryHeader}; use spacepackets::ecss::tm::PusTmReader; use spacepackets::ecss::{EcssEnumU16, EcssEnumU8, PusPacket, WritablePusPacket}; use spacepackets::SpHeader; - use std::sync::{Arc, RwLock}; + use std::sync::RwLock; use std::thread; use std::time::Duration; @@ -36,12 +35,15 @@ pub mod crossbeam_test { // Shared pool object to store the verification PUS telemetry let pool_cfg = StaticPoolConfig::new(vec![(10, 32), (10, 64), (10, 128), (10, 1024)], false); - let shared_tm_pool = SharedTmPool::new(StaticMemoryPool::new(pool_cfg.clone())); - let shared_tc_pool_0 = Arc::new(RwLock::new(StaticMemoryPool::new(pool_cfg))); - let shared_tc_pool_1 = shared_tc_pool_0.clone(); + let shared_tm_pool = + SharedStaticMemoryPool::new(RwLock::new(StaticMemoryPool::new(pool_cfg.clone()))); + let shared_tc_pool = + SharedStaticMemoryPool::new(RwLock::new(StaticMemoryPool::new(pool_cfg))); + let shared_tc_pool_1 = shared_tc_pool.clone(); let (tx, rx) = crossbeam_channel::bounded(10); - let sender_0 = TmInSharedPoolSenderWithCrossbeam::new(shared_tm_pool.clone(), tx.clone()); - let sender_1 = sender_0.clone(); + let sender = + 
PacketSenderWithSharedPool::new_with_shared_packet_pool(tx.clone(), &shared_tm_pool); + let sender_1 = sender.clone(); let mut reporter_with_sender_0 = VerificationReporter::new(TEST_COMPONENT_ID_0.id(), &cfg); let mut reporter_with_sender_1 = reporter_with_sender_0.clone(); // For test purposes, we retrieve the request ID from the TCs and pass them to the receiver @@ -52,7 +54,7 @@ pub mod crossbeam_test { let (tx_tc_0, rx_tc_0) = crossbeam_channel::bounded(3); let (tx_tc_1, rx_tc_1) = crossbeam_channel::bounded(3); { - let mut tc_guard = shared_tc_pool_0.write().unwrap(); + let mut tc_guard = shared_tc_pool.write().unwrap(); let sph = SpHeader::new_for_unseg_tc(TEST_APID, 0, 0); let tc_header = PusTcSecondaryHeader::new_simple(17, 1); let pus_tc_0 = PusTcCreator::new_no_app_data(sph, tc_header, true); @@ -81,7 +83,7 @@ pub mod crossbeam_test { .expect("Receive timeout"); let tc_len; { - let mut tc_guard = shared_tc_pool_0.write().unwrap(); + let mut tc_guard = shared_tc_pool.write().unwrap(); let pg = tc_guard.read_with_guard(tc_addr); tc_len = pg.read(&mut tc_buf).unwrap(); } @@ -89,24 +91,24 @@ pub mod crossbeam_test { let token = reporter_with_sender_0.add_tc_with_req_id(req_id_0); let accepted_token = reporter_with_sender_0 - .acceptance_success(&sender_0, token, &FIXED_STAMP) + .acceptance_success(&sender, token, &FIXED_STAMP) .expect("Acceptance success failed"); // Do some start handling here let started_token = reporter_with_sender_0 - .start_success(&sender_0, accepted_token, &FIXED_STAMP) + .start_success(&sender, accepted_token, &FIXED_STAMP) .expect("Start success failed"); // Do some step handling here reporter_with_sender_0 - .step_success(&sender_0, &started_token, &FIXED_STAMP, EcssEnumU8::new(0)) + .step_success(&sender, &started_token, &FIXED_STAMP, EcssEnumU8::new(0)) .expect("Start success failed"); // Finish up reporter_with_sender_0 - .step_success(&sender_0, &started_token, &FIXED_STAMP, EcssEnumU8::new(1)) + .step_success(&sender, &started_token, &FIXED_STAMP, EcssEnumU8::new(1)) .expect("Start success failed"); reporter_with_sender_0 - .completion_success(&sender_0, started_token, &FIXED_STAMP) + .completion_success(&sender, started_token, &FIXED_STAMP) .expect("Completion success failed"); }); @@ -145,9 +147,8 @@ pub mod crossbeam_test { .recv_timeout(Duration::from_millis(50)) .expect("Packet reception timeout"); let tm_len; - let shared_tm_store = shared_tm_pool.clone_backing_pool(); { - let mut rg = shared_tm_store.write().expect("Error locking shared pool"); + let mut rg = shared_tm_pool.write().expect("Error locking shared pool"); let store_guard = rg.read_with_guard(tm_in_pool.store_addr); tm_len = store_guard .read(&mut tm_buf) diff --git a/satrs/tests/tcp_servers.rs b/satrs/tests/tcp_servers.rs index ff3fe78..602913e 100644 --- a/satrs/tests/tcp_servers.rs +++ b/satrs/tests/tcp_servers.rs @@ -17,34 +17,52 @@ use core::{ use std::{ io::{Read, Write}, net::{IpAddr, Ipv4Addr, SocketAddr, TcpStream}, - sync::Mutex, + sync::{mpsc, Mutex}, thread, }; use hashbrown::HashSet; use satrs::{ - encoding::cobs::encode_packet_with_cobs, - hal::std::tcp_server::{ServerConfig, TcpSpacepacketsServer, TcpTmtcInCobsServer}, - tmtc::{ReceivesTcCore, TmPacketSourceCore}, + encoding::{ + ccsds::{SpValidity, SpacePacketValidator}, + cobs::encode_packet_with_cobs, + }, + hal::std::tcp_server::{ + ConnectionResult, HandledConnectionHandler, HandledConnectionInfo, ServerConfig, + TcpSpacepacketsServer, TcpTmtcInCobsServer, + }, + tmtc::PacketSource, + ComponentId, }; use 
spacepackets::{ ecss::{tc::PusTcCreator, WritablePusPacket}, - PacketId, SpHeader, + CcsdsPacket, PacketId, SpHeader, }; use std::{collections::VecDeque, sync::Arc, vec::Vec}; -#[derive(Default, Clone)] -struct SyncTcCacher { - tc_queue: Arc>>>, +#[derive(Default)] +pub struct ConnectionFinishedHandler { + connection_info: VecDeque, } -impl ReceivesTcCore for SyncTcCacher { - type Error = (); - fn pass_tc(&mut self, tc_raw: &[u8]) -> Result<(), Self::Error> { - let mut tc_queue = self.tc_queue.lock().expect("tc forwarder failed"); - println!("Received TC: {:x?}", tc_raw); - tc_queue.push_back(tc_raw.to_vec()); - Ok(()) +impl HandledConnectionHandler for ConnectionFinishedHandler { + fn handled_connection(&mut self, info: HandledConnectionInfo) { + self.connection_info.push_back(info); + } +} + +impl ConnectionFinishedHandler { + pub fn check_last_connection(&mut self, num_tms: u32, num_tcs: u32) { + let last_conn_result = self + .connection_info + .pop_back() + .expect("no connection info available"); + assert_eq!(last_conn_result.num_received_tcs, num_tcs); + assert_eq!(last_conn_result.num_sent_tms, num_tms); + } + + pub fn check_no_connections_left(&self) { + assert!(self.connection_info.is_empty()); } } @@ -60,7 +78,7 @@ impl SyncTmSource { } } -impl TmPacketSourceCore for SyncTmSource { +impl PacketSource for SyncTmSource { type Error = (); fn retrieve_packet(&mut self, buffer: &mut [u8]) -> Result { @@ -82,20 +100,29 @@ impl TmPacketSourceCore for SyncTmSource { } } +const TCP_SERVER_ID: ComponentId = 0x05; const SIMPLE_PACKET: [u8; 5] = [1, 2, 3, 4, 5]; const INVERTED_PACKET: [u8; 5] = [5, 4, 3, 4, 1]; const AUTO_PORT_ADDR: SocketAddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0); #[test] fn test_cobs_server() { - let tc_receiver = SyncTcCacher::default(); + let (tc_sender, tc_receiver) = mpsc::channel(); let mut tm_source = SyncTmSource::default(); // Insert a telemetry packet which will be read back by the client at a later stage. tm_source.add_tm(&INVERTED_PACKET); let mut tcp_server = TcpTmtcInCobsServer::new( - ServerConfig::new(AUTO_PORT_ADDR, Duration::from_millis(2), 1024, 1024), + ServerConfig::new( + TCP_SERVER_ID, + AUTO_PORT_ADDR, + Duration::from_millis(2), + 1024, + 1024, + ), tm_source, - tc_receiver.clone(), + tc_sender.clone(), + ConnectionFinishedHandler::default(), + None, ) .expect("TCP server generation failed"); let dest_addr = tcp_server @@ -106,13 +133,20 @@ fn test_cobs_server() { // Call the connection handler in separate thread, does block. thread::spawn(move || { - let result = tcp_server.handle_next_connection(); + let result = tcp_server.handle_all_connections(Some(Duration::from_millis(400))); if result.is_err() { panic!("handling connection failed: {:?}", result.unwrap_err()); } let conn_result = result.unwrap(); - assert_eq!(conn_result.num_received_tcs, 1, "No TC received"); - assert_eq!(conn_result.num_sent_tms, 1, "No TM received"); + assert_eq!(conn_result, ConnectionResult::HandledConnections(1)); + tcp_server + .generic_server + .finished_handler + .check_last_connection(1, 1); + tcp_server + .generic_server + .finished_handler + .check_no_connections_left(); // Signal the main thread we are done. set_if_done.store(true, Ordering::Relaxed); }); @@ -152,33 +186,56 @@ fn test_cobs_server() { panic!("connection was not handled properly"); } // Check that the packet was received and decoded successfully. 
- let mut tc_queue = tc_receiver - .tc_queue - .lock() - .expect("locking tc queue failed"); - assert_eq!(tc_queue.len(), 1); - assert_eq!(tc_queue.pop_front().unwrap(), &SIMPLE_PACKET); - drop(tc_queue); + let tc_with_sender = tc_receiver.try_recv().expect("no TC received"); + assert_eq!(tc_with_sender.packet, SIMPLE_PACKET); + assert_eq!(tc_with_sender.sender_id, TCP_SERVER_ID); + matches!(tc_receiver.try_recv(), Err(mpsc::TryRecvError::Empty)); } const TEST_APID_0: u16 = 0x02; const TEST_PACKET_ID_0: PacketId = PacketId::new_for_tc(true, TEST_APID_0); +#[derive(Default)] +pub struct SimpleVerificator { + pub valid_ids: HashSet, +} + +impl SpacePacketValidator for SimpleVerificator { + fn validate( + &self, + sp_header: &SpHeader, + _raw_buf: &[u8], + ) -> satrs::encoding::ccsds::SpValidity { + if self.valid_ids.contains(&sp_header.packet_id()) { + return SpValidity::Valid; + } + SpValidity::Skip + } +} + #[test] fn test_ccsds_server() { - let tc_receiver = SyncTcCacher::default(); + let (tc_sender, tc_receiver) = mpsc::channel(); let mut tm_source = SyncTmSource::default(); let sph = SpHeader::new_for_unseg_tc(TEST_APID_0, 0, 0); let verif_tm = PusTcCreator::new_simple(sph, 1, 1, &[], true); let tm_0 = verif_tm.to_vec().expect("tm generation failed"); tm_source.add_tm(&tm_0); - let mut packet_id_lookup = HashSet::new(); - packet_id_lookup.insert(TEST_PACKET_ID_0); + let mut packet_id_lookup = SimpleVerificator::default(); + packet_id_lookup.valid_ids.insert(TEST_PACKET_ID_0); let mut tcp_server = TcpSpacepacketsServer::new( - ServerConfig::new(AUTO_PORT_ADDR, Duration::from_millis(2), 1024, 1024), + ServerConfig::new( + TCP_SERVER_ID, + AUTO_PORT_ADDR, + Duration::from_millis(2), + 1024, + 1024, + ), tm_source, - tc_receiver.clone(), + tc_sender, packet_id_lookup, + ConnectionFinishedHandler::default(), + None, ) .expect("TCP server generation failed"); let dest_addr = tcp_server @@ -188,13 +245,20 @@ fn test_ccsds_server() { let set_if_done = conn_handled.clone(); // Call the connection handler in separate thread, does block. thread::spawn(move || { - let result = tcp_server.handle_next_connection(); + let result = tcp_server.handle_all_connections(Some(Duration::from_millis(500))); if result.is_err() { panic!("handling connection failed: {:?}", result.unwrap_err()); } let conn_result = result.unwrap(); - assert_eq!(conn_result.num_received_tcs, 1); - assert_eq!(conn_result.num_sent_tms, 1); + assert_eq!(conn_result, ConnectionResult::HandledConnections(1)); + tcp_server + .generic_server + .finished_handler + .check_last_connection(1, 1); + tcp_server + .generic_server + .finished_handler + .check_no_connections_left(); set_if_done.store(true, Ordering::Relaxed); }); let mut stream = TcpStream::connect(dest_addr).expect("connecting to TCP server failed"); @@ -235,7 +299,8 @@ fn test_ccsds_server() { panic!("connection was not handled properly"); } // Check that TC has arrived. - let mut tc_queue = tc_receiver.tc_queue.lock().unwrap(); - assert_eq!(tc_queue.len(), 1); - assert_eq!(tc_queue.pop_front().unwrap(), tc_0); + let tc_with_sender = tc_receiver.try_recv().expect("no TC received"); + assert_eq!(tc_with_sender.packet, tc_0); + assert_eq!(tc_with_sender.sender_id, TCP_SERVER_ID); + matches!(tc_receiver.try_recv(), Err(mpsc::TryRecvError::Empty)); }
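With these changes the TCP servers no longer push received telecommands into a shared `Mutex`-protected queue but forward them to whatever implements the packet sender traits, in these tests a plain `mpsc` channel carrying `PacketAsVec` values. A sketch of what the consuming side of such a channel could look like in an application; the `tc_consumer` and `handle_tc` helpers as well as the timeout value are made up for illustration:

```rust
use std::sync::mpsc;
use std::time::Duration;

use satrs::tmtc::PacketAsVec;

// Placeholder for application specific TC handling.
fn handle_tc(tc: &PacketAsVec) {
    println!(
        "received TC from component {}: {:x?}",
        tc.sender_id, tc.packet
    );
}

// Drains telecommands which server components like the TCP servers above forward
// as `PacketAsVec` messages through an mpsc channel.
fn tc_consumer(tc_receiver: mpsc::Receiver<PacketAsVec>) {
    loop {
        match tc_receiver.recv_timeout(Duration::from_millis(100)) {
            Ok(tc) => handle_tc(&tc),
            // No TC arrived within the polling interval, try again.
            Err(mpsc::RecvTimeoutError::Timeout) => continue,
            // All senders were dropped, nothing left to do.
            Err(mpsc::RecvTimeoutError::Disconnected) => break,
        }
    }
}
```

Because the channel message carries the sending component's ID, a consumer like this can also distinguish packets coming from different servers without any additional bookkeeping.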