Compare commits
17 Commits
Author | SHA1 | Date
--- | --- | ---
 | db07e92b01 | 
 | f094558f11 | 
 | 9debb7e3b5 | 
 | 5c2b4cf783 | 
 | ff19b0d4b9 | 
 | 5009d12780 | 
 | dd74ebecba | 
 | bdec426196 | 
 | 86f2004569 | 
 | 6f4ec5eafd | 
 | 7652b33e11 | 
 | 98fb9b8e36 | 
 | 217bdb5123 | 
 | e1b069808e | 
 | 1a045d7a84 | 
 | 28a3daabac | 
 | a468d96f14 | 
.clang-format: 8 lines changed
@@ -1,8 +0,0 @@
---
BasedOnStyle: Google
IndentWidth: 2
---
Language: Cpp
ColumnLimit: 100
ReflowComments: true
---
.gitignore (vendored): 10 lines changed
@@ -1,14 +1,4 @@
# PyCharm and CLion
.idea/*
!/.idea/runConfigurations
!/.idea/cmake.xml
# !/.idea/codeStyles

# Eclipse
.cproject
.project
.settings
.metadata

/build*
/cmake-build*
.idea/cmake.xml (generated): 8 lines changed
@@ -1,8 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="CMakeSharedSettings">
    <configurations>
      <configuration PROFILE_NAME="Debug Test" ENABLED="true" CONFIG_NAME="Debug" GENERATION_OPTIONS="-DFSFW_OSAL=host -DFSFW_BUILD_TESTS=ON" NO_GENERATOR="true" />
    </configurations>
  </component>
</project>
CHANGELOG.md: 821 lines changed
@@ -1,821 +0,0 @@
Change Log
=======

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](http://keepachangelog.com/)
and this project adheres to [Semantic Versioning](http://semver.org/).

# [unreleased]

## Fixes

- FreshDeviceHandlerBase did not initialize the fdirInstance
- The `PusTmCreator` API only accepted 255 bytes of source data. It can now accept source
  data with a size limited only by the size of `size_t`.
- Important bugfix in the CFDP PDU header format: The entity length field and the transaction sequence
  number fields stored the actual length of the field instead of the length minus 1 as specified
  in the CFDP standard.
- PUS Health Service: Size check for the set health command.
  Perform operation completion for the announce health command.
  https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/746
- Linux OSAL `getUptime` fix: Check validity of the `/proc/uptime` file before reading the uptime.
  https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/745
- Small tweak for the version getter
  https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/744

## Added

- FreeRTOS monotonic clock which is not subject to time jumps of the system clock
- Add CFDP subsystem ID
  https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/742
- `PusTmZcWriter` now exposes an API to set the message counter field.
- Relative timeshift in the PUS time service.
- CFDP support for PDU checksums

## Changed

- Send the HK one-parameter report back to the sender instead of the default HK queue
- Complete overhaul of the HK subsystem. Replaced the local data pool manager by a periodic HK
  helper. The shared pool and the periodic HK generation are now distinct concepts.
- The local HK manager was replaced by a periodic HK helper which has reduced responsibilities.
  It takes care of tracking the HK generation using a set specification provided by the user.
  However, it leaves serialization of the HK data completely to the developer. This removes a major
  constraint on the format of the HK data, which was previously constrained to implementors of a
  certain base class.
- The former set classes and pool objects are still available for HK set specification and
  generation. The API has changed, but the general usage and their architecture have not.
- A new set of set classes and helper objects to specify HK sets and data which does not need to be
  shared was added as well. The majority of datasets do not need to be shared anyway.
- The non-shared API retains the capability of appending a validity blob for each piece of set
  data at the end of the HK data. For both non-shared and shared data, this capability can be
  specified in the constructor, and defaults to true.
- Improved the file system abstraction to be more in line with normal filesystems.
- The CFDP implementation was improved, now has even fewer dependencies on other FSFW components
  and allows one inserted packet per state machine call.
- The PUS time service now dumps the time before setting a new time and after having set the
  time.
- HK generation is now countdown based.
- Bump ETL version to 20.35.14
  https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/748
- Renamed `PCDU_2` subsystem ID to `POWER_SWITCH_IF`.
  https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/743
- Add new `PowerSwitchIF::SWITCH_UNKNOWN` returnvalue.
  https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/743
- Assert that `FixedArrayList` is larger than 0 at compile time.
  https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/740
- Health functions are virtual now.
- PUS Service Base request queue depth and maximum number of handled packets per cycle are now
  configurable.
- Switched to vendored versions for both the Embedded Template Library (ETL) and the
  Catch2 unittesting library.

## Added

- `EventManager`: Add function to print all listeners.

## Changed

- `EventManager`: Queue depth is configurable now

# [v6.0.0] 2023-02-10

## Fixes

- Mode Service: Add allowed subservice
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/739
- `CService200ModeManagement`: Various bugfixes: previously, no execution complete was generated
  on mode announcements, a duplicate mode reply was generated on announce commands, and the mode
  read subservice did not work properly.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/736
- Memory leak fixes for the TCP/IP TMTC bridge.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/737
- `Service9TimeManagement`: Fix the time dump at the `SET_TIME` subservice: Include clock timeval
  seconds instead of uptime.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/726
- HAL MGM3100 Handler: Use axis specific gain/scaling factors. Previously,
  only the X scaling factor was used.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/724
- HAL MGM3100 Handler: Z value was previously calculated with bytes of the X value.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/733
- DHB `setNormalDatapoolEntriesInvalid`: The default implementation did not set the validity
  to false correctly because the `read` and `write` calls were missing.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/728
- PUS TMTC creator module: Sequence flags were set to continuation segment (0b00) instead
  of the correct unsegmented flags (0b11) as specified in the standard.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/727
- TC Scheduler Service 11: Add size and CRC check for the contained TC.
  Bug: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/issues/719
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/720
- Only delete the health table entry in the `HealthHelper` destructor if the
  health table was set.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/710
- I2C bugfixes: Do not keep the iterator as a member and fix some incorrect handling with the iterator.
  Also properly reset the reply size for successful and erroneous transfers.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/700
- Bugfix for the serial buffer stream: Setting `doActive` to false now
  actually fully disables printing.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/680
- `TcpTmTcServer.cpp`: The server was not able to handle
  CCSDS packets which were clumped together. This has been fixed now.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/673
- `CServiceHealthCommanding`: Add announce all health info implementation
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/725
- Various fixes related to Linux unittests and memory leaks
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/715
- Small fix to allow teardown handling
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/713
- Fix a compiler warning for the fixed array list copy ctor
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/704
- Add a missing include
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/703
- The default config did not build anymore
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/702
- Hotfix
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/699
- Small fix for a helper
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/698
- Missing retval conversion
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/697
- DHB countdown bug
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/693
- Documentation corrections
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/687
- Better error printout
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/686
- Include correction
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/683
- Better warning for missing include paths
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/676
- Service 11 regression
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/670

## Added

- `CServiceHealthCommanding`: Add announce all health info implementation
  PR: https://egit.irs.uni-stuttgart.de/eive/fsfw/pulls/122
- Empty constructor for `CdsShortTimeStamper` which does not do an object manager registration.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/730
- `Service9TimeManagement`: Add `DUMP_TIME` (129) subservice.
- `TcpTmTcServer`: Allow setting the `SO_REUSEADDR` and `SO_REUSEPORT`
  options on the TCP server. The ctor prototype has changed and expects an explicit
  TCP configuration struct to be passed.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/722
- `DleParser` helper class to parse DLE encoded packets from a byte stream.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/711
- `UioMapper` is able to resolve symlinks now.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/709
- Add new `UnsignedByteField` class
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/660
- Publish documentation for the development and master branches
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/681
- Add Linux HAL options
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/663
- Expand SerializeIF
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/656
- PUS Service 11: Additional safety check
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/666
- Improvements for the auto-formatter script
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/665
- Provide a weak print char implementation
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/674

## Removed

- Now that the documentation server is up, remove the markdown files
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/688
- Remove BSP specific code
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/679

## Changes

- `CService201HealthCommanding` renamed to `CServiceHealthCommanding`,
  the service ID is customizable now. `CServiceHealthCommanding` expects a configuration struct
  `HealthServiceCfg` now
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/725
- `AcceptsTelemetryIF`: `getReportReceptionQueue` is const now
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/712
- Moved some container returnvalues to a dedicated header and namespace
  so they can be used without template specification.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/707
- Remove the default secondary header argument for
  `uint16_t getTcSpacePacketIdFromApid(uint16_t apid, bool secondaryHeaderFlag)` and
  `uint16_t getTmSpacePacketIdFromApid(uint16_t apid, bool secondaryHeaderFlag)`
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/689
- Removed `HasReturnvaluesIF` class in favor of a `returnvalue` namespace with `OK` and `FAILED`
  constants.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/659
- Overhaul of the TMTC stack, including various changes and improvements
  for other modules
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/655, which also includes a migration guide
- Bump Catch2 dependency to regular version `v3.1.0`
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/678
- `SerialBufferAdapter`: Rename `setBuffer` to `setConstBuffer` and update the
  API to expect `const uint8_t*` accordingly.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/677
- Remove the following user includes from `fsfw/events/Event.h` and
  `fsfw/returnvalues/returnvalue.h`:
  - `#include "events/subsystemIdRanges.h"`
  - `#include "returnvalues/classIds.h"`
  The user has to include those themselves now (see the include sketch after this list)
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/675
- `DeviceHandlerBase`: Set the command sender before calling `buildCommandFromCommand`.
  This allows finishing action commands immediately inside the function.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/672
- `DeviceHandlerBase`: New signature of `handleDeviceTm` which expects
  a `const SerializeIF&`, and an additional helper variant which expects `const uint8_t*`
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/671
- Improvements for `AcceptsTelemetryIF` and `AcceptsTelecommandsIF`:
  - Make functions `const` where it makes sense
  - Add `const char* getName() const` abstract function
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/684
- Generic TMTC bridge update
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/734
- Comment tweak so that the event parser can read everything
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/732
- CMakeLists file updates
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/731
- Improve srv20 error messages
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/723
- I2C Linux: remove duplicate printout
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/718
- Printout handling improvements
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/717
- Vector getter, reset for content
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/716
- Updates for the source sequence counter
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/714
- SP reader `getPacketData` is const now
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/708
- Refactoring of the serial drivers for Linux
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/705
- Local pool update: remove the add data ignore fault argument
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/701
- Switch to new documentation server
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/694
- Windows tweaks
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/691
- Refactor the local pool API
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/667
- Group MGM data in local pool vectors
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/664

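The following sketch is editorial and not part of the original changelog. It only illustrates the include change quoted above: after PR 675, the two configuration headers are no longer pulled in transitively, so user code has to include them itself. The include paths are the ones named in the entry; everything else is generic.

```cpp
// Editorial sketch, not FSFW source code: user code that previously relied on
// Event.h or returnvalue.h pulling in the config headers transitively now adds them itself.
#include "fsfw/events/Event.h"
#include "fsfw/returnvalues/returnvalue.h"

// No longer included transitively, include them yourself now:
#include "events/subsystemIdRanges.h"
#include "returnvalues/classIds.h"
```
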
## CFDP

- Refactoring of the CFDP stack which was done during implementation of the CFDP source and destination
  handlers.
- New filesystem module, changes for the filesystem abstraction `HasFileSystemIF` to better
  fit the requirements of CFDP
- New `HostFilesystem` implementation of the `HasFileSystemIF`
- New `cfdp::UserBase` class which is the abstraction for the CFDP user in an OBSW context.
- MIB module for the CFDP stack
- PDU classes renamed from `...Serializer`/`...Deserializer` to `...Creator`/`...Reader`
  respectively
- Renamed `TcDistributor` to `TcDistributorBase` to prevent confusion
- Refactored `TcDistributorBase` to be more flexible and usable for CFDP distribution
- Renamed `CCSDSDistributor` to `CcsdsDistributor` and added a feature which allows it
  to remove the CCSDS header when routing a packet. This allows a CCSDS agnostic receiver
  implementation without an extra component
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/682

# [v5.0.0] 2022-07-25

## Changes

- Renamed the auto-formatting script to `auto-formatter.sh` and made it more robust.
  If `cmake-format` is installed, it will also auto-format the `CMakeLists.txt` files now.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/625
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/626
- Bump the required C++ version to C++17. Every project which uses the FSFW and every modern
  compiler supports it
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/622
- New CMake option `FSFW_HAL_LINUX_ADD_LIBGPIOD` to specifically exclude `gpiod` code.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/572
- HAL device handlers: Periodic printout is run-time configurable now
- `oneShotAction` flag in the `TestTask` class is not static anymore
- `SimpleRingBuffer::writeData` now checks if the amount is larger than the total size of the
  buffer and rejects such `writeData` calls with `returnvalue::FAILED`
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/586
- Major update for version handling, using `git describe` to fetch version information with git.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/601
  - Add helper functions provided by [`cmake-modules`](https://github.com/bilke/cmake-modules)
    manually now. Those should not change too often and only a small subset is needed
  - Separate folder for easier updates and for distinction
  - LICENSE file included
  - Use `int` for version numbers to allow an unset or uninitialized version
  - Initialize the Version object with numbers set to -1
  - Instead of hardcoding the git hash, it is now retrieved from git
  - `Version` now allows specifying additional version information like the git SHA1 hash and the
    versions since the last tag
  - Additional information is set to the last part of the git describe output for `FSFW_VERSION` now.
  - The version still needs to be hand-updated if the FSFW is not included as a submodule for now.
- IPC message queue handling: Allow passing an optional `MqArgs` argument into the MessageQueue
  creation call. It allows passing context information and an arbitrary user argument into
  the message queue. Also streamlined and simplified the `MessageQueue` implementation for all OSALs
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/583
- Internal API change: Moved `fsfw_hal` to the `src` folder and the integration and internal
  tests of `fsfw_tests` to `src`. Unittests are now in a dedicated folder called `unittests`
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/653

### Task Module Refactoring

PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/636

**Refactoring general task code**

- There was a lot of duplicate/boilerplate code inside the individual task IF OSAL implementations.
  Remove it by introducing the base classes `PeriodicTaskBase` and `FixedTimeslotTaskBase`.

**Refactor PeriodicTaskIF**

- Convert `virtual ReturnValue_t addComponent(object_id_t object)` to
  `virtual ReturnValue_t addComponent(object_id_t object, uint8_t opCode = 0)`, allowing the caller
  to pass the operation code forwarded to `performOperation`. Updated the API taking
  an `ExecutableObjectIF` accordingly (see the sketch after this section)

**Refactor FixedTimeslotTaskIF**

- Add an additional `addSlot` function which takes an `ExecutableObjectIF` pointer and its object ID

**Refactor FixedSequenceSlot**

- Introduce typedef `CustomCheckFunc` for `ReturnValue_t (*customCheckFunction)(const SlotList&)`.
- Convert `ReturnValue_t (*customCheckFunction)(const SlotList&)` to
  `ReturnValue_t (*customCheckFunction)(const SlotList&, void*)`, allowing arbitrary user arguments
  for the custom checker

**Linux Task Module**

- Use composition instead of inheritance for the `PeriodicPosixTask` and make the `PosixTask` a
  member of the class

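The sketch below is editorial and not part of the original changelog. It shows how the `opCode` parameter quoted above could be used so that one object distinguishes several scheduling slots. The class name, object ID and task pointer are hypothetical, and `returnvalue::OK` assumes a newer FSFW with the `returnvalue` namespace.

```cpp
#include "fsfw/returnvalues/returnvalue.h"
#include "fsfw/tasks/ExecutableObjectIF.h"

// Hypothetical executable object: the opCode passed to addComponent is forwarded
// to performOperation, so the same object can run different steps in one task.
class MyHandler : public ExecutableObjectIF {
 public:
  ReturnValue_t performOperation(uint8_t opCode) override {
    if (opCode == 0) {
      // e.g. polling step
    } else {
      // e.g. processing step
    }
    return returnvalue::OK;
  }
};

// Hypothetical task setup (periodicTask is an assumed PeriodicTaskIF*):
// periodicTask->addComponent(objects::MY_HANDLER, 0);
// periodicTask->addComponent(objects::MY_HANDLER, 1);
```
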
### HAL

- HAL Linux UART: Baudrate and bits per word are enums now, avoiding misconfigurations
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/585
- HAL Linux SPI: Set the clock default state when setting a new SPI speed
  and mode
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/573
- GPIO HAL: `Direction`, `GpioOperation` and `Levels` are enum classes now, which prevents
  name clashes with Windows defines.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/572

### Time

PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/584 and
https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/593

- `timeval` to `TimeOfDay_t`
- Added a mutex for gmtime calls (compare http://www.opengate.at/blog/2020/01/timeless/)
- Moved the statics used by Clock in ClockCommon.cpp to this file
- Better check for leap seconds
- Added unittests for Clock (only getter)

### Power

- `PowerSwitchIF`: Remove the `const` specifier from `sendSwitchCommand` and `sendFuseOnCommand` and
  also specify a `ReturnValue_t` return type
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/590
- Extend the `PowerSwitcher` module to optionally check the current state when calling `turnOn` or
  `turnOff`. This can be helpful to avoid commanding switches which do not need commanding
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/590

## Removed

- Removed the `HkSwitchHelper`. This module should not be needed anymore, now that the local
  datapools have been implemented.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/557

## Additions

- New constructor for PoolEntry which allows simply specifying the length of the pool entry.
  This is also the new default constructor for scalar values with 0 as the initial value
- Added an option for CI/CD builds: `FSFW_CICD_BUILD`. This allows the source code to know
  whether it is running in CI/CD
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/623
- Basic CLion support: Update `.gitignore` and add some basic run configurations
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/625
- LTO support: Allow using LTO/IPO by setting `FSFW_ENABLE_LTO=1`. CMake is able to detect whether
  the user compiler supports IPO/LTO. LTO is on by default now. Most modern compilers support it,
  can make good use of it and it usually makes the code faster and/or smaller.
  After some more research:
  Enabling LTO will actually cause the compiler to only produce thin LTO by adding
  `-flto -fno-fat-lto-objects` to the compiler options. I am not sure this is an ideal choice
  because if an application linking against the FSFW does not use LTO, there can be compile
  issues (e.g. observed when compiling the FSFW tests without LTO). This is a known issue as
  can be seen in the multiple CMake issues for it:
  - https://gitlab.kitware.com/cmake/cmake/-/issues/22913,
  - https://gitlab.kitware.com/cmake/cmake/-/issues/16808,
  - https://gitlab.kitware.com/cmake/cmake/-/issues/21696
  Easiest solution for now: Keep this option OFF by default.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/616
- Linux HAL: Add a wiretapping option for I2C. Enabled with `FSFW_HAL_I2C_WIRETAPPING` defined to 1
- Dedicated Version class and constant `fsfw::FSFW_VERSION` containing version information
  inside `fsfw/version.h`
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/559
- Added generic PUS TC Scheduler Service 11. It depends on the newly added Embedded Template Library
  (ETL) dependency.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/594
- Added the ETL dependency and improved library dependency management
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/592
- Add a `DummyPowerSwitcher` module which can be useful for test setups when no PCDU is available
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/590
- New typedef for the switcher type
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/590
- `Subsystem`: New API to add table and sequence entries

## Fixed

- TCP TMTC Server: `MutexGuard` was not created properly in the
  `TcpTmTcServer::handleTmSending(socket_t connSocket, bool& tmSent)` call.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/618
- Fix infinite recursion in `prepareHealthSetReply` of PUS Health Service 201.
  It is not used right now but might be used in the future
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/617
- Move some CMake directives further up top so they are not ignored
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/621
- Small bugfix in the STM32 HAL for SPI
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/599
- HAL GPIO: Improved error checking in `LinuxLibgpioIF::configureGpios(...)`. If a GPIO
  configuration fails, the function will exit prematurely with a dedicated error code
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/602

# [v4.0.0]

## Additions

- CFDP packet stack and related tests added. It also refactors the existing TMTC infrastructure to
  allow sending of CFDP packets to the CCSDS handlers.
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/528
- Added a virtual function to print datasets
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/544
- `doSendRead` hook
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/545
- Documentation for DHB
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/551

### HAL additions

- Linux Command Executor, which can execute shell commands in blocking and non-blocking mode
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/536
- UIO mapper
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/543

## Changes

- Applied the `clang-format` auto-formatter to all source code
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/534
- Updated Catch2 to v3.0.0-preview4
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/538
- Changed CI to use a prebuilt Docker image
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/549

## Bugfixes

- CMake fixes in PR https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/533. It was problematic
  if the uppermost user `CMakeLists.txt` did not have the include paths set up properly, which
  could lead to compile errors because `#include "fsfw/FSFW.h"` was not found.
- Fix for a build regression in Catch2 v3.0.0-preview4
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/548
- Fix in a unittest which failed on CI
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/552
- Fix in a helper script
  PR: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/553

## API Changes

- Aforementioned changes to the existing TMTC stack

## Known bugs

-

# [v3.0.1]

## API Changes

*

## Bugfixes

* Version number was not updated for v3.0.0 #542

## Enhancement

*

## Known bugs

*

# [v3.0.0]

## API Changes

### TCP Socket Changes

* Keep Open TCP Implementation #496
  * The socket will now be kept open after disconnect. This allows reconnecting.
  * Only one connection is allowed
  * No internal influence, but clients need to change their code.

### GPIO IF

* Add feature to open GPIO by line name #506

### Bitutil

* Unittests for Op Divider and Bitutility #510

### Filesystem IF changed

* Filesystem Base Interface: Use IF instead of void pointer #511

### STM32

* STM32 SPI Updates #518

## Bugfixes

* Small bugfix for LIS3 handler #504
* Spelling fixed for function names #509
* CMakeLists fixes #517
* Out of bound reads and writes in unittests #519
* Bug in TmPacketStoredPusC (#478)
* Windows ifdef fixed #529

## Enhancement

* FSFW.h.in: more default values #491
* Minor updates for PUS services #498
* HasReturnvaluesIF naming for parameter #499
* Tests can now be built as part of the FSFW and versioning moved to CMake #500
* Added integration test code #508
* More printouts for rejected TC packets #505
* Arrayprinter format improvements #514
* Adding code for CI with Docker and Jenkins #520
* Added new function in SerializeAdapter #513
  * Enables a simple `deSerialize` if you keep track of the buffer position yourself (see the
    sketch after this list)
  * ``static ReturnValue_t deSerialize(T *object, const uint8_t* buffer, size_t* deserSize, SerializeIF::Endianness streamEndianness)``
* The unittest helper script has a new parameter to open the coverage HTML in the web browser #525
  * ``'-o', '--open', Open coverage data in webbrowser``
* Documentation updated. The Sphinx documentation can now be built with a Python script #526

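The sketch below is editorial and not part of the original changelog. It shows how the `deSerialize` overload quoted above might be used when the caller tracks the buffer position itself. It assumes `deserSize` returns the number of bytes consumed, that `SerializeIF::Endianness::BIG` is a valid constant, and it uses the newer `returnvalue::OK` constant; `readTwoFields` is a made-up helper.

```cpp
#include <cstdint>

#include "fsfw/returnvalues/returnvalue.h"
#include "fsfw/serialize/SerializeAdapter.h"

// Made-up helper: deserialize two fields from a raw buffer while tracking the
// read position manually instead of passing a moving buffer pointer.
ReturnValue_t readTwoFields(const uint8_t* buffer, uint16_t& first, uint32_t& second) {
  size_t offset = 0;
  size_t deserSize = 0;
  ReturnValue_t result = SerializeAdapter::deSerialize(&first, buffer + offset, &deserSize,
                                                       SerializeIF::Endianness::BIG);
  if (result != returnvalue::OK) {
    return result;
  }
  offset += deserSize;  // the caller keeps track of the buffer position
  return SerializeAdapter::deSerialize(&second, buffer + offset, &deserSize,
                                       SerializeIF::Endianness::BIG);
}
```
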
## Known bugs

* Version number was not updated for v3.0.0 #542


All Pull Requests:

Milestone: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/milestone/19

# [v2.0.0]

## API Changes


### File structure changed to fit a more common structure

* See pull request (#445)
* HAL is now part of the main project
* **See instructions below:**

#### Instructions on how to update existing / user code

* Changes in `#include`s:
  * Rename `internalError` in includes to `internalerror`
  * Rename `fsfw/hal` to `fsfw_hal`
  * Rename `fsfw/tests` to `fsfw_tests`
  * Rename `osal/FreeRTOS` to `osal/freertos`

* Changes in `CMakeLists.txt`:
  * Rename `OS_FSFW` to `FSFW_OSAL`

* Changes in `DleEncoder.cpp`:
  * Create an instance of the `DleEncoder` first before calling the `encode` and `decode` functions

### Removed osal/linux/Timer (#486)

* Was redundant to timemanager/Countdown

#### Instructions on how to update existing / user code

* Use timemanager/Countdown instead

## Bugfixes

### TM Stack

* Increased TM stack robustness by introducing `nullptr` checks and more printouts (#483)

#### Host OSAL / FreeRTOS

* QueueMapManager bugfix (NO_QUEUE was used as MessageQueueId) (#444)

#### Events

* Event output is now consistent (#447)

#### DLE Encoder

* Fixed possible out of bounds access in the DLE Encoder (#492)

## Enhancement

* HAL as a major new feature, also includes three MEMS device handlers as part of #481
* Linux HAL updates (#456)
* FreeRTOS header cleaning update and CMake tweaks (#442)
* Printer updates (#453)
* New returnvalue for empty PST (#485)
* TMTC Bridge: Increase limit of packets stored (#484)

## Known bugs

* Bug in TmPacketStoredPusC (#478)


All Pull Requests:

Milestone: https://egit.irs.uni-stuttgart.de/fsfw/fsfw/milestone/5

# [v1.2.0]

## API Changes

### FSFW Architecture

- New src folder which contains all source files except the HAL, contributed code and test code
- External and internal API mostly stayed the same
- Folder names are now all lower case: internalError was renamed to internalerror and
  FreeRTOS was renamed to freertos
- Warning if optional headers are used but the module was not added to the source files to compile

### HAL

- HAL added back into the FSFW. It is tightly bound to the FSFW, and compiling it as a static library
  made using it more complicated than necessary

## Bugfixes

### FreeRTOS QueueMapManager

- Fixed a bug which caused the first generated queue ID to be invalid

## Enhancements

### FSFW Architecture

- See the API changes chapter. This change will keep the internal API consistent in the future

# [v1.1.0]

## API Changes

### PUS

- Added PUS C support
- SUBSYSTEM_IDs added for PUS services
- Added a new parameter which must be defined in the config: `fsfwconfig::FSFW_MAX_TM_PACKET_SIZE`

### ObjectManager

- ObjectManager is now a singleton


### Configuration

- Additional configuration option `fsfwconfig::FSFW_MAX_TM_PACKET_SIZE` which
  needs to be specified in FSFWConfig.h

### CMake

- Changed CMake `FSFW_ADDITIONAL_INC_PATH` to `FSFW_ADDITIONAL_INC_PATHS`

## Bugfixes

- timemanager/TimeStamperIF.h: The timestamp config was not used correctly, leading to different timestamp sizes than configured in `fsfwconfig::FSFW_MISSION_TIMESTAMP_SIZE`
- TCP server fixes

## Enhancements

### FreeRTOS Queue Handles

- Fixed an internal issue with how FreeRTOS MessageQueues were handled

### Linux OSAL

- Better printf error messages

### CMake

- Check for C++11 as the minimum required version

### Debug Output

- Changed the warning color to magenta, which is readable in both dark and light mode IDEs


# Changes from ASTP 0.0.1 to 1.0.0

### Host OSAL

- Bugfix in MessageQueue, which caused the sender not to be set properly

### FreeRTOS OSAL

- `vRequestContextSwitchFromISR` is declared `extern "C"` so it can be defined in
  a C file without issues

### PUS Services

- It is now possible to change the message queue depth for the telecommand verification service (PUS1)
- The same is possible for the event reporting service (PUS5)
- PUS Health Service added, which allows commanding and retrieving health via PUS packets


### EnhancedControllerBase

- New base class for a controller which also implements HasActionsIF and HasLocalDataPoolIF

### Local Pool

- The interface of LocalPools has changed. LocalPool is not a template anymore. Instead, the size and
  bucket number of the pools per page and the number of pages are passed to the ctor instead of
  two ctor arguments and a template parameter

### Parameter Service

- The API of the parameter service has been changed to prevent inconsistencies
  between documentation and actual code and to clarify usage.
- The parameter ID now consists of (see the sketch after this section):
  1. Domain ID (1 byte)
  2. Unique Identifier (1 byte)
  3. Linear Index (2 bytes)
  The linear index can be used for arrays as well as matrices.
  The parameter load command now explicitly expects the ECSS PTC and PFC
  information as well as the row and column number. Rows and columns will
  default to one, which is equivalent to one scalar parameter (the most
  important use case)

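The sketch below is editorial and not part of the original changelog. It only illustrates the byte layout listed above (1 byte domain ID, 1 byte unique identifier, 2 bytes linear index) as a packed 32-bit value; `composeParameterId` is a made-up helper, not an FSFW API.

```cpp
#include <cstdint>

// Made-up helper mirroring the layout described above:
// [domain ID (1 byte)][unique identifier (1 byte)][linear index (2 bytes)].
constexpr uint32_t composeParameterId(uint8_t domainId, uint8_t uniqueId, uint16_t linearIndex) {
  return (static_cast<uint32_t>(domainId) << 24) | (static_cast<uint32_t>(uniqueId) << 16) |
         linearIndex;
}

// Example: domain 2, unique identifier 5, scalar parameter (linear index 0).
constexpr uint32_t exampleParameterId = composeParameterId(2, 5, 0);
```
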
### File System Interface

- A new interface specifies the functions for a software object which exposes the file system of
  a given hardware to use message based file handling (e.g. PUS commanding)

### Internal Error Reporter

- The new internal error reporter uses the local data pools. The pool IDs for
  the existing three error values and the new error set will be hardcoded for
  now, so the constructor for the internal error reporter just takes an object
  ID for now.

### Device Handler Base

- There is an additional `PERFORM_OPERATION` step for the device handler base. It is important
  that DHB users adapt their polling sequence tables to perform this step. This step allows for
  a clear distinction between operation and communication steps
- `setNormalDatapoolEntriesInvalid` is not an abstract method anymore; a default implementation was provided
- `getTransitionDelayMs` is now an abstract method

### DeviceHandlerIF

- Typo fix for `UNKNOWN_DEVICE_REPLY`

### Events

- `makeEvent` function: Now takes three input parameters instead of two and
  allows setting a unique ID. The Event.cpp source file was removed; functions are now
  defined directly in the header. Namespaces renamed. Functions are declared `constexpr`
  now (see the sketch after this section)

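The sketch below is editorial and not part of the original changelog. It shows how an event might be defined with the three-parameter `makeEvent` described above; the `event` namespace, the `severity::LOW` constant and all numeric values are assumptions based on this entry, not quoted from it.

```cpp
#include "fsfw/events/Event.h"

namespace mymodule {
// Assumed usage: subsystem ID, unique ID within the subsystem, and a severity.
// All values here are made up for illustration.
static constexpr uint8_t SUBSYSTEM_ID = 120;
static constexpr Event SOME_FAILURE = event::makeEvent(SUBSYSTEM_ID, 0, severity::LOW);
}  // namespace mymodule
```
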
### Commanding Service Base

- CSB uses the new `fsfwconfig::FSFW_CSB_FIFO_DEPTH` variable to determine the FIFO depth for each
  CSB instance. This variable has to be set in the FSFWConfig.h file

### Service Interface

- Proper printf support contained in ServiceInterfacePrinter.h
- CPP ostream support is now optional (can reduce executable size by 150 - 250 kB)
- Amalgamated header which automatically determines which service interface to use depending on the FSFWConfig.h configuration.
  Users can just use `#include <fsfw/serviceinterface/ServiceInterface.h>`
- If CPP streams are excluded, sif:: calls won't work anymore and need to be replaced by their printf counterparts.
  For the FSFW, this can be done by checking the preprocessor define FSFW_CPP_OSTREAM_ENABLED from FSFWConfig.h.
  For mission code, developers need to replace sif:: calls by the printf counterparts, but only if the CPP streams are excluded.
  If this is not the case, everything should work as usual.

### ActionHelper and ActionMessage

- The ActionHelper finish function and `ActionMessage::setCompletionReply` now expect explicit
  information on whether to report a success or failure message instead of deriving it implicitly
  from the returnvalue

### PUS Parameter Service 20

Added PUS Parameter Service 20 (only custom subservices available).
CMakeLists.txt: 418 lines changed
@@ -1,418 +0,0 @@
cmake_minimum_required(VERSION 3.13)

set(MSG_PREFIX "fsfw |")

# Add the cmake folder so the FindSphinx module is found
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
list(APPEND CMAKE_MODULE_PATH
     "${CMAKE_CURRENT_SOURCE_DIR}/cmake/cmake-modules/bilke")
list(APPEND CMAKE_MODULE_PATH
     "${CMAKE_CURRENT_SOURCE_DIR}/cmake/cmake-modules/rpavlik")

# ##############################################################################
# Version file handling #
# ##############################################################################

set(FSFW_VERSION_IF_GIT_FAILS 6)
set(FSFW_SUBVERSION_IF_GIT_FAILS 0)
set(FSFW_REVISION_IF_GIT_FAILS 0)

set(FSFW_GIT_VER_HANDLING_OK FALSE)
# Version handling
if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/.git)
  message(STATUS "${MSG_PREFIX} Determining version information with git")
  include(FsfwHelpers)
  determine_version_with_git("--exclude" "docker_*")
  if(GIT_INFO)
    set(FSFW_GIT_INFO
        ${GIT_INFO}
        CACHE STRING "Version information retrieved with git describe")
    list(GET FSFW_GIT_INFO 1 FSFW_VERSION)
    list(GET FSFW_GIT_INFO 2 FSFW_SUBVERSION)
    list(GET FSFW_GIT_INFO 3 FSFW_REVISION)
    list(GET FSFW_GIT_INFO 4 FSFW_VCS_INFO)
    if(NOT FSFW_VERSION)
      set(FSFW_VERSION ${FSFW_VERSION_IF_GIT_FAILS})
    endif()
    if(NOT FSFW_SUBVERSION)
      set(FSFW_SUBVERSION ${FSFW_SUBVERSION_IF_GIT_FAILS})
    endif()
    if(NOT FSFW_REVISION)
      set(FSFW_REVISION ${FSFW_REVISION_IF_GIT_FAILS})
    endif()
    set(FSFW_GIT_VER_HANDLING_OK TRUE)
  else()
    set(FSFW_GIT_VER_HANDLING_OK FALSE)
  endif()
endif()
if(NOT FSFW_GIT_VER_HANDLING_OK)
  set(FSFW_VERSION ${FSFW_VERSION_IF_GIT_FAILS})
  set(FSFW_SUBVERSION ${FSFW_SUBVERSION_IF_GIT_FAILS})
  set(FSFW_REVISION ${FSFW_REVISION_IF_GIT_FAILS})
endif()

set(LIB_FSFW_NAME fsfw)
project(${LIB_FSFW_NAME}
        VERSION ${FSFW_VERSION}.${FSFW_SUBVERSION}.${FSFW_REVISION})

if(NOT CMAKE_CXX_STANDARD)
  set(CMAKE_CXX_STANDARD 17)
  set(CMAKE_CXX_STANDARD_REQUIRED True)
elseif(${CMAKE_CXX_STANDARD} LESS 17)
  message(
    FATAL_ERROR
      "${MSG_PREFIX} Compiling the FSFW requires a minimum of C++17 support")
endif()

set(FSFW_SOURCES_DIR "${CMAKE_SOURCE_DIR}/src/fsfw")

# Keep this off by default for now. See PR:
# https://egit.irs.uni-stuttgart.de/fsfw/fsfw/pulls/616 for information on why
# keeping this on by default is problematic
option(
  FSFW_ENABLE_IPO
  "Enable interprocedural optimization or link-time optimization if available"
  OFF)
if(FSFW_ENABLE_IPO)
  include(CheckIPOSupported)
  check_ipo_supported(RESULT IPO_SUPPORTED OUTPUT IPO_ERROR)
  if(NOT IPO_SUPPORTED)
    message(STATUS "FSFW | IPO/LTO not supported: ${IPO_ERROR}")
  endif()
endif()

option(FSFW_GENERATE_SECTIONS
       "Generate function and data sections. Required to remove unused code" ON)
if(FSFW_GENERATE_SECTIONS)
  option(FSFW_REMOVE_UNUSED_CODE "Remove unused code" ON)
endif()

option(FSFW_BUILD_TESTS
       "Build unittest binary in addition to static library. Requires Catch2"
       OFF)
option(FSFW_ADD_CATCH2 "Add Catch2 unittest framework dependency" OFF)
option(FSFW_CICD_BUILD "Build for CI/CD. This can disable problematic test" OFF)
option(FSFW_BUILD_DOCS "Build documentation with Sphinx and Doxygen" OFF)
if(FSFW_BUILD_TESTS)
  option(FSFW_TESTS_GEN_COV "Generate coverage data for unittests" ON)
endif()

option(FSFW_WARNING_SHADOW_LOCAL_GCC "Enable -Wshadow=local warning in GCC" ON)
# Options to exclude parts of the FSFW from compilation.
option(FSFW_ADD_INTERNAL_TESTS "Add internal unit tests" ON)
option(FSFW_ADD_HAL "Add Hardware Abstraction Layer" ON)

if(UNIX)
  option(FSFW_HAL_LINUX_ADD_PERIPHERAL_DRIVERS "Add Linux peripheral drivers"
         OFF)
  option(FSFW_HAL_LINUX_ADD_LIBGPIOD "Attempt to add Linux GPIOD drivers" OFF)
  option(FSFW_HAL_LINUX_ADD_SERIAL_DRIVERS "Add serial drivers" ON)
endif()

# Optional sources
option(FSFW_ADD_PUS "Compile with PUS sources" ON)
option(FSFW_ADD_MONITORING "Compile with monitoring components" ON)

option(FSFW_ADD_RMAP "Compile with RMAP" OFF)
option(FSFW_ADD_DATALINKLAYER "Compile with Data Link Layer" OFF)
option(FSFW_ADD_COORDINATES "Compile with coordinate components" OFF)
option(FSFW_ADD_TMSTORAGE "Compile with tm storage components" OFF)

# Contrib sources
option(FSFW_ADD_SGP4_PROPAGATOR "Add SGP4 propagator code" OFF)

set(FSFW_TEST_TGT fsfw-tests)
set(FSFW_DUMMY_TGT fsfw-dummy)

add_library(${LIB_FSFW_NAME})

if(IPO_SUPPORTED AND FSFW_ENABLE_IPO)
  set_property(TARGET ${LIB_FSFW_NAME} PROPERTY INTERPROCEDURAL_OPTIMIZATION
                                                TRUE)
endif()

if(FSFW_BUILD_TESTS)
  message(
    STATUS
      "${MSG_PREFIX} Building the FSFW unittests in addition to the static library"
  )

  set(FSFW_CONFIG_PATH unittests/testcfg)
  configure_file(unittests/testcfg/FSFWConfig.h.in FSFWConfig.h)
  configure_file(unittests/testcfg/TestsConfig.h.in tests/TestsConfig.h)

  project(${FSFW_TEST_TGT} CXX C)
  add_executable(${FSFW_TEST_TGT})
  if(IPO_SUPPORTED AND FSFW_ENABLE_IPO)
    set_property(TARGET ${FSFW_TEST_TGT} PROPERTY INTERPROCEDURAL_OPTIMIZATION
                                                  TRUE)
  endif()

  if(FSFW_TESTS_GEN_COV)
    message(STATUS "${MSG_PREFIX} Generating coverage data for the library")
    message(STATUS "${MSG_PREFIX} Targets linking against ${LIB_FSFW_NAME} "
                   "will be compiled with coverage data as well")
    set(CMAKE_BUILD_TYPE "Debug")
    include(CodeCoverage)
  endif()
endif()

# The documentation for FetchContent recommends declaring all the dependencies
# before making them available. We make all declared dependencies available
# here after their declaration
if(FSFW_FETCH_CONTENT_TARGETS)
  FetchContent_MakeAvailable(${FSFW_FETCH_CONTENT_TARGETS})
endif()

set(FSFW_CORE_INC_PATH "inc")

set_property(CACHE FSFW_OSAL PROPERTY STRINGS host linux rtems freertos)

# For configure files
target_include_directories(${LIB_FSFW_NAME} PRIVATE ${CMAKE_CURRENT_BINARY_DIR})
target_include_directories(${LIB_FSFW_NAME}
                           INTERFACE ${CMAKE_CURRENT_BINARY_DIR})

# Backwards compatibility
if(OS_FSFW AND NOT FSFW_OSAL)
  message(
    WARNING
      "${MSG_PREFIX} Please pass the FSFW OSAL as FSFW_OSAL instead of OS_FSFW")
  set(FSFW_OSAL ${OS_FSFW})
endif()

if(NOT FSFW_OSAL)
  message(STATUS "No OS for FSFW via FSFW_OSAL set. Assuming host OS")
  # Assume host OS and autodetermine from OS_FSFW
  if(UNIX)
    set(FSFW_OSAL
        "linux"
        CACHE STRING "OS abstraction layer used in the FSFW")
  elseif(WIN32)
    set(FSFW_OSAL
        "host"
        CACHE STRING "OS abstraction layer used in the FSFW")
  endif()
endif()

set(FSFW_OSAL_DEFINITION FSFW_OSAL_HOST)

if(FSFW_OSAL MATCHES host)
  set(FSFW_OS_NAME "Host")
  set(FSFW_OSAL_HOST ON)
elseif(FSFW_OSAL MATCHES linux)
  set(FSFW_OS_NAME "Linux")
  set(FSFW_OSAL_LINUX ON)
elseif(FSFW_OSAL MATCHES freertos)
  set(FSFW_OS_NAME "FreeRTOS")
  set(FSFW_OSAL_FREERTOS ON)
  target_link_libraries(${LIB_FSFW_NAME} PRIVATE ${LIB_OS_NAME})
elseif(FSFW_OSAL STREQUAL rtems)
  set(FSFW_OS_NAME "RTEMS")
  set(FSFW_OSAL_RTEMS ON)
else()
  message(
    WARNING
      "${MSG_PREFIX} Invalid operating system for FSFW specified! Setting to host.."
  )
  set(FSFW_OS_NAME "Host")
  set(OS_FSFW "host")
endif()

configure_file(src/fsfw/FSFW.h.in fsfw/FSFW.h)
configure_file(src/fsfw/FSFWVersion.h.in fsfw/FSFWVersion.h)

message(
  STATUS "${MSG_PREFIX} Compiling FSFW for the ${FSFW_OS_NAME} operating system"
)

add_subdirectory(src)
add_subdirectory(contrib)
if(FSFW_BUILD_TESTS)
  add_subdirectory(unittests)
endif()
if(FSFW_BUILD_DOCS)
  add_subdirectory(docs)
endif()

if(FSFW_BUILD_TESTS)
  if(FSFW_TESTS_GEN_COV)
    if(CMAKE_COMPILER_IS_GNUCXX)
      include(CodeCoverage)

      # Remove quotes.
      separate_arguments(COVERAGE_COMPILER_FLAGS NATIVE_COMMAND
                         "${COVERAGE_COMPILER_FLAGS}")

      # Add compile options manually, we don't want coverage for Catch2
      target_compile_options(${FSFW_TEST_TGT}
                             PRIVATE "${COVERAGE_COMPILER_FLAGS}")
      target_compile_options(${LIB_FSFW_NAME}
                             PRIVATE "${COVERAGE_COMPILER_FLAGS}")

      # Exclude directories here
      if(WIN32)
        set(GCOVR_ADDITIONAL_ARGS "--exclude-throw-branches"
                                  "--exclude-unreachable-branches")
        set(COVERAGE_EXCLUDES "/c/msys64/mingw64/*" "*/fsfw_hal/*")
      elseif(UNIX)
        set(COVERAGE_EXCLUDES
            "/usr/include/*"
            "/usr/bin/*"
            "Catch2/*"
            "/usr/local/include/*"
            "*/fsfw_tests/*"
            "*/catch2-src/*"
            "*/fsfw_hal/*"
            "unittests/*")
      endif()

      target_link_options(${FSFW_TEST_TGT} PRIVATE -fprofile-arcs
                          -ftest-coverage)
      target_link_options(${LIB_FSFW_NAME} PRIVATE -fprofile-arcs
                          -ftest-coverage)
      # Need to specify this as an interface, otherwise there will be compile
      # issues
      target_link_options(${LIB_FSFW_NAME} INTERFACE -fprofile-arcs
                          -ftest-coverage)

      if(WIN32)
        setup_target_for_coverage_gcovr_html(
          NAME ${FSFW_TEST_TGT}_coverage EXECUTABLE ${FSFW_TEST_TGT}
          DEPENDENCIES ${FSFW_TEST_TGT})
      else()
        setup_target_for_coverage_lcov(
          NAME
          ${FSFW_TEST_TGT}_coverage
          EXECUTABLE
          ${FSFW_TEST_TGT}
          DEPENDENCIES
          ${FSFW_TEST_TGT}
          GENHTML_ARGS
          --html-epilog
          ${CMAKE_SOURCE_DIR}/unittests/lcov_epilog.html)
      endif()
    endif()
  endif()
  target_link_libraries(${FSFW_TEST_TGT} PRIVATE Catch2::Catch2
                                                 ${LIB_FSFW_NAME})
endif()

# The project CMakeLists file has to set the FSFW_CONFIG_PATH and add it. If
# this is not given, we include the default configuration and emit a warning.
if(NOT FSFW_CONFIG_PATH)
  set(DEF_CONF_PATH misc/defaultcfg/fsfwconfig)
  if(NOT FSFW_BUILD_DOCS)
    message(
      WARNING
        "${MSG_PREFIX} Flight Software Framework configuration path FSFW_CONFIG_PATH not set"
    )
    message(
      WARNING
        "${MSG_PREFIX} Setting default configuration from ${DEF_CONF_PATH} ..")
  endif()
  add_subdirectory(${DEF_CONF_PATH})
  set(FSFW_CONFIG_PATH ${DEF_CONF_PATH})
endif()

# FSFW might be part of a possibly complicated folder structure, so we extract
# the absolute path of the fsfwconfig folder.
if(IS_ABSOLUTE ${FSFW_CONFIG_PATH})
  set(FSFW_CONFIG_PATH_ABSOLUTE ${FSFW_CONFIG_PATH})
else()
  get_filename_component(FSFW_CONFIG_PATH_ABSOLUTE ${FSFW_CONFIG_PATH} REALPATH
                         BASE_DIR ${CMAKE_SOURCE_DIR})
endif()

foreach(INCLUDE_PATH ${FSFW_ADDITIONAL_INC_PATHS})
  if(IS_ABSOLUTE ${INCLUDE_PATH})
    set(CURR_ABS_INC_PATH "${INCLUDE_PATH}")
  else()
    get_filename_component(CURR_ABS_INC_PATH ${INCLUDE_PATH} REALPATH BASE_DIR
                           ${CMAKE_SOURCE_DIR})
  endif()

  if(CMAKE_VERBOSE)
    message(STATUS "FSFW include path: ${CURR_ABS_INC_PATH}")
  endif()

  list(APPEND FSFW_ADD_INC_PATHS_ABS ${CURR_ABS_INC_PATH})
endforeach()

if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
  if(NOT DEFINED FSFW_WARNING_FLAGS)
    set(FSFW_WARNING_FLAGS
        -Wall
        -Wextra
        -Wimplicit-fallthrough=1
        -Wno-unused-parameter
        -Wno-psabi
        -Wduplicated-cond # check for duplicate conditions
        -Wduplicated-branches # check for duplicate branches
        -Wlogical-op # Search for bitwise operations instead of logical
        -Wnull-dereference # Search for NULL dereference
        -Wundef # Warn if undefined macros are used
        -Wformat=2 # Format string problem detection
        -Wformat-overflow=2 # Formatting issues in printf
        -Wformat-truncation=2 # Formatting issues in printf
        -Wformat-security # Search for dangerous printf operations
        -Wstrict-overflow=3 # Warn if integer overflows might happen
        -Warray-bounds=2 # Some array bounds violations will be found
        -Wshift-overflow=2 # Search for bit left shift overflows (<c++14)
        -Wcast-qual # Warn if the constness is cast away
        -Wstringop-overflow=4
        # -Wstack-protector # Emits a few false positives for low level access
        # -Wconversion # Creates many false positives -Warith-conversion # Use
        # with Wconversion to find more implicit conversions -fanalyzer # Should
        # be used to look through problems
    )
  endif()

  if(FSFW_GENERATE_SECTIONS)
    target_compile_options(${LIB_FSFW_NAME} PRIVATE "-ffunction-sections"
                                                    "-fdata-sections")
  endif()

  if(FSFW_REMOVE_UNUSED_CODE)
    target_link_options(${LIB_FSFW_NAME} PRIVATE "-Wl,--gc-sections")
  endif()

  if(FSFW_WARNING_SHADOW_LOCAL_GCC)
    list(APPEND WARNING_FLAGS "-Wshadow=local")
  endif()

endif()

if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
  set(COMPILER_FLAGS "/permissive-")
endif()

# Required include paths to compile the FSFW
target_include_directories(
  ${LIB_FSFW_NAME} INTERFACE ${CMAKE_SOURCE_DIR} ${FSFW_CONFIG_PATH_ABSOLUTE}
                   ${FSFW_CORE_INC_PATH} ${FSFW_ADD_INC_PATHS_ABS})

# Include paths required to compile the FSFW itself as well. We assume that the
# fsfwconfig folder uses includes relative to the project root here!
target_include_directories(
  ${LIB_FSFW_NAME} PRIVATE ${CMAKE_SOURCE_DIR} ${FSFW_CONFIG_PATH_ABSOLUTE}
                   ${FSFW_CORE_INC_PATH} ${FSFW_ADD_INC_PATHS_ABS})

target_compile_options(${LIB_FSFW_NAME} PRIVATE ${FSFW_WARNING_FLAGS}
                       ${COMPILER_FLAGS})

target_link_libraries(${LIB_FSFW_NAME} PRIVATE ${FSFW_ADDITIONAL_LINK_LIBS})
target_link_libraries(${LIB_FSFW_NAME} PUBLIC etl::etl)

string(
  CONCAT
  POST_BUILD_COMMENT
  "######################################################################\n"
  "Built FSFW v${FSFW_VERSION}.${FSFW_SUBVERSION}.${FSFW_REVISION}, "
  "Target OSAL: ${FSFW_OS_NAME}\n"
  "######################################################################\n")

add_custom_command(
  TARGET ${LIB_FSFW_NAME}
  POST_BUILD
  COMMENT ${POST_BUILD_COMMENT})
FSFWVersion.h (new file): 12 lines changed
@@ -0,0 +1,12 @@
#ifndef FSFW_DEFAULTCFG_VERSION_H_
#define FSFW_DEFAULTCFG_VERSION_H_

const char* const FSFW_VERSION_NAME = "ASTP";

#define FSFW_VERSION 0
#define FSFW_SUBVERSION 0
#define FSFW_REVISION 1



#endif /* FSFW_DEFAULTCFG_VERSION_H_ */
NOTICE: 2 lines changed
@@ -21,5 +21,3 @@ their own copyright notices and license terms:

under contrib/:
* sgp4: sgp4 code developed by david vallado under public domain, see https://www.celestrak.com/publications/AIAA/2006-6753/
* etl: Embedded Template Library (ETL) with own license
* Catch2: Unittest library with own license
307
README.md
307
README.md
@ -1,5 +1,4 @@
|
||||

|
||||
|
||||

|
||||
# Flight Software Framework (FSFW)
|
||||
|
||||
The Flight Software Framework is a C++ Object Oriented Framework for unmanned,
|
||||
@ -9,208 +8,152 @@ The initial version of the Flight Software Framework was developed during
|
||||
the Flying Laptop Project by the University of Stuttgart in cooperation
|
||||
with Airbus Defence and Space GmbH.
|
||||
|
||||
## Quick facts
|
||||
## Intended Use
|
||||
|
||||
The framework is designed for systems, which communicate with external devices, perform control loops,
|
||||
receive telecommands and send telemetry, and need to maintain a high level of availability. Therefore,
|
||||
a mode and health system provides control over the states of the software and the controlled devices.
|
||||
The framework is designed for systems, which communicate with external devices, perform control loops, receive telecommands and send telemetry, and need to maintain a high level of availability.
|
||||
Therefore, a mode and health system provides control over the states of the software and the controlled devices.
|
||||
In addition, a simple mechanism of event-based fault detection, isolation and recovery is implemented.
|
||||
|
||||
The FSFW provides abstraction layers for operating systems to provide a uniform operating system
|
||||
abstraction layer (OSAL). Some components of this OSAL are required internally by the FSFW but are
|
||||
also very useful for developers to implement the same application logic on different operating
|
||||
systems with a uniform interface.
|
||||
The recommended hardware is a microprocessor with more than 2 MB of RAM and 1 MB of non-volatile Memory.
|
||||
For reference, current Applications use a Cobham Gaisler UT699 (LEON3FT), a ISISPACE IOBC or a Zynq-7020 SoC.
|
||||
|
||||
Currently, the FSFW provides the following OSALs:
|
||||
|
||||
- Linux
|
||||
- Host
|
||||
- FreeRTOS
|
||||
- RTEMS
|
||||
## Structure
|
||||
|
||||
The recommended hardware is a microprocessor with more than 1 MB of RAM and 1 MB of non-volatile
|
||||
memory. For reference, current applications use a Cobham Gaisler UT699 (LEON3FT), a
|
||||
ISISPACE IOBC or a Zynq-7020 SoC. The `fsfw` was also successfully run on the
|
||||
STM32H743ZI-Nucleo board and on a Raspberry Pi and is currently running on the active
|
||||
satellite mission Flying Laptop.
|
||||
The general structure is driven by the usage of interfaces provided by objects. The FSFW uses C++11 as baseline. The intention behind this is that this C++ Standard should be widely available, even with older compilers.
|
||||
The FSFW uses dynamic allocation during the initialization but provides static containers during runtime.
|
||||
This simplifies the instantiation of objects and allows the usage of some standard containers.
|
||||
Dynamic allocation after initialization is discouraged, and the FSFW provides different solutions to avoid it.
|
||||
The FSFW uses run-time type information.
|
||||
Exceptions are not allowed.
|
||||
|
||||
## Getting started
|
||||
### Failure Handling
|
||||
|
||||
The [Hosted FSFW example](https://egit.irs.uni-stuttgart.de/fsfw/fsfw-example-hosted) provides a
|
||||
good starting point and a demo to see the FSFW capabilities.
|
||||
It is recommended to get started by building and playing around with the demo application.
|
||||
There are also other examples provided for all OSALs using the popular embedded platforms
|
||||
Raspberry Pi, Beagle Bone Black and STM32H7.
|
||||
Functions should return a defined ReturnValue_t to signal to the caller that something has gone wrong.
|
||||
Return values must be unique. For this, the function HasReturnvaluesIF::makeReturnCode or the macro MAKE_RETURN_CODE can be used.
|
||||
The CLASS_ID is a unique id for that type of object. See returnvalues/FwClassIds.
|
||||
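A minimal sketch of how a component can define and use such unique return values, following the interface headers in this repository; the interface name and class ID below are made up for illustration:

```cpp
class MyDeviceIF {
 public:
  //! Unique class ID, assumed to be defined in returnvalues/FwClassIds.
  static const uint8_t INTERFACE_ID = CLASS_ID::MY_DEVICE_IF;
  //! Unique because MAKE_RETURN_CODE combines the number with INTERFACE_ID.
  static const ReturnValue_t CRC_ERROR = MAKE_RETURN_CODE(0x01);
};

ReturnValue_t handleDeviceReply(bool crcOk) {
  if (!crcOk) {
    return MyDeviceIF::CRC_ERROR;
  }
  return HasReturnvaluesIF::RETURN_OK;
}
```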
|
||||
Generally, the FSFW is included in a project by providing
|
||||
a configuration folder, building the static library and linking against it.
|
||||
There are some functions like `printChar` which are different depending on the target architecture
|
||||
and need to be implemented by the mission developer.
|
||||
### OSAL
|
||||
The FSFW provides operating system abstraction layers for Linux, FreeRTOS and RTEMS. An independent OSAL called "host" is currently not finished. It aims to run on Windows as well.
|
||||
The OSAL provides periodic tasks, message queues, clocks, semaphores and mutexes.
|
||||
|
||||
A template configuration folder was provided and can be copied into the project root to have
|
||||
a starting point. The [configuration section](docs/README-config.md#top) provides more specific
|
||||
information about the possible options.
|
||||
### Core Components
|
||||
|
||||
## Prerequisites
|
||||
Clock:
|
||||
* This is a class of static functions that can be used at any time
|
||||
* Leap seconds must be set if any time conversion from UTC to other time formats is used (a short sketch follows below)
|
||||
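A short usage sketch; the leap second value is only an example and has to match the current UTC offset:

```cpp
#include "../timemanager/Clock.h"  // relative include style as used in this version

void setupTime() {
  // Must be done once before any UTC conversions are used.
  Clock::setLeapSeconds(37);
  timeval now = {};
  Clock::getClock_timeval(&now);
}
```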
|
||||
The Embedded Template Library (etl) is a dependency of the FSFW which is automatically
|
||||
installed and provided by the build system unless the correct version is already installed.
|
||||
The current recommended version can be found inside the fsfw `CMakeLists.txt` file or by using
|
||||
`ccmake` and looking up the `FSFW_ETL_LIB_MAJOR_VERSION` variable.
|
||||
ObjectManager (must be created):
|
||||
|
||||
You can install the ETL library like this. On Linux, it might be necessary to add `sudo` before
|
||||
the install call:
|
||||
* The component which handles all references. All SystemObjects register at this component.
|
||||
* Any SystemObject needs to have a unique ObjectId. Those can be managed in a namespace like objects::framework_objects.
|
||||
* A reference to an object can be obtained by calling the following function. T must be the specific interface you want to call. A usage sketch is shown below.
|
||||
A nullptr check of the returned pointer must be done. This function is based on run-time type information.
|
||||
|
||||
```sh
|
||||
git clone https://github.com/ETLCPP/etl
|
||||
cd etl
|
||||
git checkout <currentRecommendedVersion>
|
||||
mkdir build && cd build
|
||||
cmake ..
|
||||
cmake --install .
|
||||
``` c++
|
||||
template <typename T> T* ObjectManagerIF::get( object_id_t id )
|
||||
|
||||
```
|
||||
* A typical way to create all objects on startup is handing a static produce function to the ObjectManager on creation.
|
||||
By calling objectManager->initialize() the produce function will be called and all SystemObjects will be initialized afterwards.
|
||||
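A usage sketch for the lookup described above; the object ID and the requested interface are placeholders, while the pattern matches the helper classes contained in this framework:

```cpp
// T is the specific interface of the object, here HasActionsIF.
HasActionsIF* target = objectManager->get<HasActionsIF>(objects::MY_DEVICE_HANDLER);
if (target == nullptr) {
  // Object does not exist or does not implement HasActionsIF.
  return HasReturnvaluesIF::RETURN_FAILED;
}
MessageQueueId_t commandQueue = target->getCommandQueue();
```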
|
||||
Event Manager:
|
||||
|
||||
* Component which allows routing of events
|
||||
* Other objects can subscribe to specific events, ranges of events or all events of an object.
|
||||
* Subscriptions can be done during runtime but should be done during initialization
|
||||
* The number of allowed subscriptions must be configured by setting these parameters:
|
||||
|
||||
``` c++
|
||||
namespace fsfwconfig {
|
||||
//! Configure the allocated pool sizes for the event manager.
|
||||
static constexpr size_t FSFW_EVENTMGMR_MATCHTREE_NODES = 240;
|
||||
static constexpr size_t FSFW_EVENTMGMT_EVENTIDMATCHERS = 120;
|
||||
static constexpr size_t FSFW_EVENTMGMR_RANGEMATCHERS = 120;
|
||||
}
|
||||
```
|
||||
|
||||
It is recommended to install `20.27.2` or newer for the package version handling of
|
||||
ETL to work.
|
||||
|
||||
## Adding the library
|
||||
Health Table:
|
||||
|
||||
The following steps show how to add and use FSFW components. It is still recommended to
|
||||
try out the example mentioned above to get started, but the following steps show how to
|
||||
add and link against the FSFW library in general.
|
||||
* A component which holds every health state
|
||||
* Provides a thread safe way to access all health states without the need of message exchanges
|
||||
|
||||
1. Add this repository as a submodule
|
||||
Stores
|
||||
|
||||
```sh
|
||||
git submodule add https://egit.irs.uni-stuttgart.de/fsfw/fsfw.git fsfw
|
||||
```
|
||||
* The message based communication can only exchange a few bytes of information inside the message itself. Therefore, additional information can be exchanged with Stores. With this, only the store address must be exchanged in the message.
|
||||
* Internally, the FSFW uses an IPC Store to exchange data between processes. For incoming TCs a TC Store is used. For outgoing TM a TM store is used.
|
||||
* All of them should use the thread-safe class storagemanager/PoolManager. A usage sketch of the store interface is shown below.
|
||||
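A sketch of the store usage described above, using the same calls as the action helpers further down in this changeset; the data buffer and its size are placeholders:

```cpp
StorageManagerIF* ipcStore = objectManager->get<StorageManagerIF>(objects::IPC_STORE);
store_address_t storeId;
// Copy the data into the store; only the store address is sent in the message.
ReturnValue_t result = ipcStore->addData(&storeId, dataBuffer, dataSize);

// Receiving side: read the data via the address from the message, then delete it.
const uint8_t* readPtr = nullptr;
size_t readSize = 0;
result = ipcStore->getData(storeId, &readPtr, &readSize);
ipcStore->deleteData(storeId);
```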
|
||||
2. Add the following directive inside the uppermost `CMakeLists.txt` file of your project
|
||||
Tasks
|
||||
|
||||
```cmake
|
||||
add_subdirectory(fsfw)
|
||||
```
|
||||
|
||||
3. Make sure to provide a configuration folder and supply the path to that folder with
|
||||
the `FSFW_CONFIG_PATH` CMake variable from the uppermost `CMakeLists.txt` file.
|
||||
It is also necessary to provide the `printChar` function. You can find an example
|
||||
implementation for a hosted build
|
||||
[here](https://egit.irs.uni-stuttgart.de/fsfw/fsfw-example-hosted/src/branch/master/bsp_hosted/utility/printChar.c). A minimal sketch is shown below.
|
||||
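A possible hosted implementation, modeled on the linked example; treat the exact signature as an assumption for the FSFW version in use:

```cpp
#include <cstdio>

extern "C" void printChar(const char* character, bool errStream) {
  if (errStream) {
    std::putc(*character, stderr);
  } else {
    std::putc(*character, stdout);
  }
}
```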
|
||||
4. Link against the FSFW library
|
||||
|
||||
```cmake
|
||||
target_link_libraries(${YourProjectName} PRIVATE fsfw)
|
||||
```
|
||||
|
||||
5. It should now be possible to use the FSFW as a static library from the user code.
|
||||
|
||||
## Current dependencies
|
||||
|
||||
This library currently has the following vendored dependencies:
|
||||
|
||||
- [Embedded Template Library (etl) v20.39.4](https://github.com/ETLCPP/etl/releases/tag/20.39.4)
|
||||
- [Catch2 v3.7.1](https://github.com/catchorg/Catch2/releases/tag/v3.7.1)
|
||||
- sgp4 propagator
|
||||
|
||||
## Building the unittests
|
||||
|
||||
The FSFW also has unittests which use the [Catch2 library](https://github.com/catchorg/Catch2).
|
||||
These are built by setting the CMake option `FSFW_BUILD_UNITTESTS` to `ON` or `TRUE`
|
||||
from your project `CMakeLists.txt` file or from the command line.
|
||||
|
||||
You can install the Catch2 library, which prevents the build system from re-downloading
|
||||
the dependency whenever the unit tests are rebuilt from scratch. The current recommended version
|
||||
can be found inside the fsfw `CMakeLists.txt` file or by using `ccmake` and looking up
|
||||
the `FSFW_CATCH2_LIB_VERSION` variable.
|
||||
|
||||
```sh
|
||||
git clone https://github.com/catchorg/Catch2.git
|
||||
cd Catch2
|
||||
git checkout <currentRecommendedVersion>
|
||||
cmake -Bbuild -H. -DBUILD_TESTING=OFF
|
||||
sudo cmake --build build/ --target install
|
||||
```
|
||||
|
||||
The fsfw-tests binary will be built as part of the static library and dropped alongside it.
|
||||
If the unittests are built, the library and the tests will be built with coverage information by
|
||||
default. This can be disabled by setting the `FSFW_TESTS_COV_GEN` option to `OFF` or `FALSE`.
|
||||
|
||||
You can use the following commands inside the `fsfw` folder to set up the build system
|
||||
|
||||
```sh
|
||||
mkdir build-tests && cd build-tests
|
||||
cmake -DFSFW_BUILD_TESTS=ON -DFSFW_OSAL=host -DCMAKE_BUILD_TYPE=Debug ..
|
||||
```
|
||||
|
||||
You can also use `-DFSFW_OSAL=linux` on Linux systems.
|
||||
|
||||
Coverage data in HTML format can be generated using the `CodeCoverage`
|
||||
[CMake module](https://github.com/bilke/cmake-modules/tree/master).
|
||||
To build the unittests, run them and then generate the coverage data in this format,
|
||||
the following command can be used inside the build directory after the build system was set up
|
||||
|
||||
```sh
|
||||
cmake --build . -- fsfw-tests_coverage -j
|
||||
```
|
||||
|
||||
The `coverage.py` script located in the `script` folder can also be used to do this conveniently.
|
||||
|
||||
## Building the documentations
|
||||
|
||||
The FSFW documentation is built using the tools Sphinx, doxygen and breathe based on the
|
||||
instructions provided in [this blogpost](https://devblogs.microsoft.com/cppblog/clear-functional-c-documentation-with-sphinx-breathe-doxygen-cmake/). If you
|
||||
want to do this locally, set up the prerequisites first. This requires a `python3`
|
||||
installation as well. The example here is for Ubuntu.
|
||||
|
||||
```sh
|
||||
sudo apt-get install doxygen graphviz
|
||||
```
|
||||
|
||||
And the following Python packages
|
||||
|
||||
```sh
|
||||
python3 -m pip install sphinx breathe
|
||||
```
|
||||
|
||||
You can set up a documentation build system using the following commands
|
||||
|
||||
```sh
|
||||
mkdir build-docs && cd build-docs
|
||||
cmake -DFSFW_BUILD_DOCS=ON -DFSFW_OSAL=host ..
|
||||
```
|
||||
|
||||
Then you can generate the documentation using
|
||||
|
||||
```sh
|
||||
cmake --build . -- Sphinx -j
|
||||
```
|
||||
|
||||
You can find the generated documentation inside the `docs/sphinx` folder inside the build
|
||||
folder. Simply open the `index.html` in the web browser of your choice.
|
||||
|
||||
The `helper.py` script located in the `script` folder can also be used to create, build
|
||||
and open the documentation conveniently. Try `helper.py -h` for more information.
|
||||
|
||||
## Formatting the sources
|
||||
|
||||
The formatting is done by the `clang-format` tool. The configuration is contained within the
|
||||
`.clang-format` file in the repository root. As long as `clang-format` is installed, you
|
||||
can run the `auto-format.sh` helper script to format all source files consistently. Furthermore, `cmake-format` is required to format CMake files; it can be installed with:
|
||||
```sh
|
||||
sudo pip install cmakelang
|
||||
```
|
||||
|
||||
## Index
|
||||
|
||||
[1. High-level overview](docs/README-highlevel.md#top) <br>
|
||||
[2. Core components](docs/README-core.md#top) <br>
|
||||
[3. Configuration](docs/README-config.md#top) <br>
|
||||
[4. OSAL overview](docs/README-osal.md#top) <br>
|
||||
[5. PUS services](docs/README-pus.md#top) <br>
|
||||
[6. Device Handler overview](docs/README-devicehandlers.md#top) <br>
|
||||
[7. Controller overview](docs/README-controllers.md#top) <br>
|
||||
[8. Local Data Pools](docs/README-localpools.md#top) <br>
|
||||
There are two different types of tasks:
|
||||
* The PeriodicTask just executes objects of type ExecutableObjectIF in the order of their insertion into the task.
|
||||
* FixedTimeslotTask executes a list of calls in the order of the given list. This is intended for DeviceHandlers, where polling should happen in a defined order. An example can be found in defaultcfg/fsfwconfig/pollingSequence. A task creation sketch is shown below.
|
||||
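A creation sketch for both task types, assuming the TaskFactory API; task names, priorities, periods and object IDs are placeholders:

```cpp
// Periodic task which executes its components in insertion order.
PeriodicTaskIF* controllerTask = TaskFactory::instance()->createPeriodicTask(
    "CONTROLLER_TASK", 40, PeriodicTaskIF::MINIMUM_STACK_SIZE, 0.4, nullptr);
controllerTask->addComponent(objects::MY_CONTROLLER);
controllerTask->startTask();

// Fixed timeslot task with a polling sequence, typically used for device handlers.
FixedTimeslotTaskIF* pollingTask = TaskFactory::instance()->createFixedTimeslotTask(
    "POLLING_TASK", 50, PeriodicTaskIF::MINIMUM_STACK_SIZE, 1.0, nullptr);
pollingTask->addSlot(objects::MY_DEVICE_HANDLER, 0, DeviceHandlerIF::SEND_WRITE);
pollingTask->startTask();
```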
|
||||
|
||||
### Static Ids in the framework
|
||||
|
||||
Some parts of the framework use a static routing address for communication.
|
||||
An example setup of IDs can be found in the example config in "defaultcfg/fsfwconfig/objects/Factory::setStaticFrameworkObjectIds()".
|
||||
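A sketch of such a setup function; the referenced object IDs are mission-specific examples rather than framework defaults:

```cpp
void Factory::setStaticFrameworkObjectIds() {
  PusServiceBase::packetSource = objects::PUS_PACKET_DISTRIBUTOR;
  PusServiceBase::packetDestination = objects::TM_FUNNEL;
  CommandingServiceBase::defaultPacketSource = objects::PUS_PACKET_DISTRIBUTOR;
  CommandingServiceBase::defaultPacketDestination = objects::TM_FUNNEL;
  DeviceHandlerBase::powerSwitcherId = objects::NO_OBJECT;
}
```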
|
||||
### Events
|
||||
|
||||
Events are tied to objects. EventIds can be generated by calling the macro MAKE_EVENT. This works analogously to the return values.
|
||||
Every object that needs its own EventIds has to get a unique SUBSYSTEM_ID.
|
||||
Every SystemObject can call triggerEvent from the parent class.
|
||||
Therefore, event messages contain the specific EventId and the objectId of the object that has triggered the event.
|
||||
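A sketch of the event definition and triggering pattern described above; the subsystem ID name, the owning class and the temperature limit are placeholders:

```cpp
//! Every object which defines own events needs a unique subsystem ID.
static const uint8_t SUBSYSTEM_ID = SUBSYSTEM_ID::MY_COMPONENT;
//! Event number 0 of this subsystem with low severity.
static const Event TEMPERATURE_HIGH = MAKE_EVENT(0, severity::LOW);

void MyComponent::checkTemperature(float temperature) {
  if (temperature > TEMPERATURE_LIMIT) {
    // triggerEvent is inherited from SystemObject; both parameters are free to use.
    triggerEvent(TEMPERATURE_HIGH, static_cast<uint32_t>(temperature), 0);
  }
}
```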
|
||||
### Internal Communication
|
||||
|
||||
Components communicate mostly via messages exchanged through message queues.
|
||||
Those queues are created by calling the singleton QueueFactory::instance()->create().
|
||||
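A sketch of the queue based exchange; this assumes the factory method `createMessageQueue` as used by newer FSFW versions, and the queue depth, action ID, store ID and receiver queue ID are placeholders:

```cpp
// Create an own command queue with a depth of 10 messages.
MessageQueueIF* commandQueue = QueueFactory::instance()->createMessageQueue(10);

// Send an action command to another component ...
CommandMessage message;
ActionMessage::setCommand(&message, actionId, storeId);
ReturnValue_t result = commandQueue->sendMessage(receiverQueueId, &message);

// ... and poll the own queue for replies later on.
CommandMessage reply;
while (commandQueue->receiveMessage(&reply) == HasReturnvaluesIF::RETURN_OK) {
  // Handle the reply, e.g. by forwarding it to an ActionHelper or CommandActionHelper.
}
```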
|
||||
### External Communication
|
||||
|
||||
The external communication with the mission control system is mostly up to the user implementation.
|
||||
The FSFW provides PUS services which can be used but do not have to be used.
|
||||
The services can be seen as a conversion from a TC to a message based communication and back.
|
||||
|
||||
#### CCSDS Frames, CCSDS Space Packets and PUS
|
||||
|
||||
If the communication is based on CCSDS Frames and Space Packets, several classes can be used to distribute the packets to the corresponding services. Those can be found in tcdistribution.
|
||||
If Space Packets are used, a timestamper must be created.
|
||||
An example can be found in the timemanager folder, which uses CCSDSTime::CDS_short.
|
||||
|
||||
#### DeviceHandling
|
||||
|
||||
DeviceHandlers are a core component of the FSFW.
|
||||
The idea is to have a software counterpart of every physical device which provides a simple mode, health and commanding interface.
|
||||
By separating the underlying communication interface via DeviceCommunicationIF, a DH can be tested on different hardware.
|
||||
The DH has mechanisms to monitor the communication with the physical device which allow for FDIR reaction.
|
||||
A standard FDIR component for the DH will be created automatically but can be overwritten by the user.
|
||||
|
||||
#### Modes, Health
|
||||
|
||||
The two interfaces HasModesIF and HasHealthIF provide access for commanding and monitoring of components.
|
||||
On-board mode management is implemented as a hierarchical system.
|
||||
DeviceHandlers and Controllers are the lowest part of the hierarchy.
|
||||
The next layer consists of assemblies. Those assemblies act as components which handle redundancy of handlers.
|
||||
Assemblies share a common core with the next level which are the Subsystems.
|
||||
|
||||
Those Assemblies are intended to act as auto-generated components from a database which describes the subsystem modes.
|
||||
The definitions contain transition and target tables which contain the DH, Assembly and Controller Modes to be commanded.
|
||||
Transition tables contain as many steps as needed to reach the mode from any other mode, e.g. a switch into any higher AOCS mode might first turn on the sensors, then the actuators and finally the controller.
|
||||
The target table is used to describe the state that is checked continuously by the subsystem.
|
||||
All of this allows system modes to be generated as a Subsystem object as well from the same database.
|
||||
This system contains a list of subsystem modes in the transition and target tables.
|
||||
Therefore, it allows a modular system to create system modes and to command them easily, because only the highest component must be commanded.
|
||||
|
||||
The health state represents if the component is able to perform its tasks.
|
||||
This can be used to signal the system to avoid using this component instead of a redundant one.
|
||||
The on-board FDIR uses the health state for isolation and recovery.
|
||||
|
||||
## Example config
|
||||
|
||||
An example config can be found in defaultcfg/fsfwconfig.
|
||||
|
||||
## Unit Tests
|
||||
|
||||
Unit Tests are provided in the unittest folder. Those use the catch2 framework but do not include catch2 itself.
|
||||
See the README.md in the unittest folder.
|
162
action/ActionHelper.cpp
Normal file
162
action/ActionHelper.cpp
Normal file
@ -0,0 +1,162 @@
|
||||
#include "ActionHelper.h"
|
||||
#include "HasActionsIF.h"
|
||||
|
||||
#include "../ipc/MessageQueueSenderIF.h"
|
||||
#include "../objectmanager/ObjectManagerIF.h"
|
||||
|
||||
ActionHelper::ActionHelper(HasActionsIF* setOwner,
|
||||
MessageQueueIF* useThisQueue) :
|
||||
owner(setOwner), queueToUse(useThisQueue) {
|
||||
}
|
||||
|
||||
ActionHelper::~ActionHelper() {
|
||||
}
|
||||
|
||||
ReturnValue_t ActionHelper::handleActionMessage(CommandMessage* command) {
|
||||
if (command->getCommand() == ActionMessage::EXECUTE_ACTION) {
|
||||
ActionId_t currentAction = ActionMessage::getActionId(command);
|
||||
prepareExecution(command->getSender(), currentAction,
|
||||
ActionMessage::getStoreId(command));
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
} else {
|
||||
return CommandMessage::UNKNOWN_COMMAND;
|
||||
}
|
||||
}
|
||||
|
||||
ReturnValue_t ActionHelper::initialize(MessageQueueIF* queueToUse_) {
|
||||
ipcStore = objectManager->get<StorageManagerIF>(objects::IPC_STORE);
|
||||
if (ipcStore == nullptr) {
|
||||
return HasReturnvaluesIF::RETURN_FAILED;
|
||||
}
|
||||
if(queueToUse_ != nullptr) {
|
||||
setQueueToUse(queueToUse_);
|
||||
}
|
||||
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
}
|
||||
|
||||
void ActionHelper::step(uint8_t step, MessageQueueId_t reportTo,
|
||||
ActionId_t commandId, ReturnValue_t result) {
|
||||
CommandMessage reply;
|
||||
ActionMessage::setStepReply(&reply, commandId, step + STEP_OFFSET, result);
|
||||
queueToUse->sendMessage(reportTo, &reply);
|
||||
}
|
||||
|
||||
void ActionHelper::finish(MessageQueueId_t reportTo, ActionId_t commandId,
|
||||
ReturnValue_t result) {
|
||||
CommandMessage reply;
|
||||
ActionMessage::setCompletionReply(&reply, commandId, result);
|
||||
queueToUse->sendMessage(reportTo, &reply);
|
||||
}
|
||||
|
||||
void ActionHelper::setQueueToUse(MessageQueueIF* queue) {
|
||||
queueToUse = queue;
|
||||
}
|
||||
|
||||
void ActionHelper::prepareExecution(MessageQueueId_t commandedBy,
|
||||
ActionId_t actionId, store_address_t dataAddress) {
|
||||
const uint8_t* dataPtr = NULL;
|
||||
size_t size = 0;
|
||||
ReturnValue_t result = ipcStore->getData(dataAddress, &dataPtr, &size);
|
||||
if (result != HasReturnvaluesIF::RETURN_OK) {
|
||||
CommandMessage reply;
|
||||
ActionMessage::setStepReply(&reply, actionId, 0, result);
|
||||
queueToUse->sendMessage(commandedBy, &reply);
|
||||
return;
|
||||
}
|
||||
result = owner->executeAction(actionId, commandedBy, dataPtr, size);
|
||||
ipcStore->deleteData(dataAddress);
|
||||
if(result == HasActionsIF::EXECUTION_FINISHED) {
|
||||
CommandMessage reply;
|
||||
ActionMessage::setCompletionReply(&reply, actionId, result);
|
||||
queueToUse->sendMessage(commandedBy, &reply);
|
||||
}
|
||||
if (result != HasReturnvaluesIF::RETURN_OK) {
|
||||
CommandMessage reply;
|
||||
ActionMessage::setStepReply(&reply, actionId, 0, result);
|
||||
queueToUse->sendMessage(commandedBy, &reply);
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
ReturnValue_t ActionHelper::reportData(MessageQueueId_t reportTo,
|
||||
ActionId_t replyId, SerializeIF* data, bool hideSender) {
|
||||
CommandMessage reply;
|
||||
store_address_t storeAddress;
|
||||
uint8_t *dataPtr;
|
||||
size_t maxSize = data->getSerializedSize();
|
||||
if (maxSize == 0) {
|
||||
//No error, there's simply nothing to report.
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
}
|
||||
size_t size = 0;
|
||||
ReturnValue_t result = ipcStore->getFreeElement(&storeAddress, maxSize,
|
||||
&dataPtr);
|
||||
if (result != HasReturnvaluesIF::RETURN_OK) {
|
||||
return result;
|
||||
}
|
||||
result = data->serialize(&dataPtr, &size, maxSize,
|
||||
SerializeIF::Endianness::BIG);
|
||||
if (result != HasReturnvaluesIF::RETURN_OK) {
|
||||
ipcStore->deleteData(storeAddress);
|
||||
return result;
|
||||
}
|
||||
// We don't need to report the objectId, as we receive REQUESTED data
|
||||
// before the completion success message.
|
||||
// True aperiodic replies need to be reported with
|
||||
// another dedicated message.
|
||||
ActionMessage::setDataReply(&reply, replyId, storeAddress);
|
||||
|
||||
// If the sender needs to be hidden, for example to handle packet
|
||||
// as unrequested reply, this will be done here.
|
||||
if (hideSender) {
|
||||
result = MessageQueueSenderIF::sendMessage(reportTo, &reply);
|
||||
}
|
||||
else {
|
||||
result = queueToUse->sendMessage(reportTo, &reply);
|
||||
}
|
||||
|
||||
if (result != HasReturnvaluesIF::RETURN_OK){
|
||||
ipcStore->deleteData(storeAddress);
|
||||
}
|
||||
return result;
|
||||
}
|
||||
|
||||
void ActionHelper::resetHelper() {
|
||||
}
|
||||
|
||||
ReturnValue_t ActionHelper::reportData(MessageQueueId_t reportTo,
|
||||
ActionId_t replyId, const uint8_t *data, size_t dataSize,
|
||||
bool hideSender) {
|
||||
CommandMessage reply;
|
||||
store_address_t storeAddress;
|
||||
ReturnValue_t result = ipcStore->addData(&storeAddress, data, dataSize);
|
||||
if (result != HasReturnvaluesIF::RETURN_OK) {
|
||||
return result;
|
||||
}
|
||||
|
||||
if (result != HasReturnvaluesIF::RETURN_OK) {
|
||||
ipcStore->deleteData(storeAddress);
|
||||
return result;
|
||||
}
|
||||
|
||||
// We don't need to report the objectId, as we receive REQUESTED data
|
||||
// before the completion success message.
|
||||
// True aperiodic replies need to be reported with
|
||||
// another dedicated message.
|
||||
ActionMessage::setDataReply(&reply, replyId, storeAddress);
|
||||
|
||||
// If the sender needs to be hidden, for example to handle packet
|
||||
// as unrequested reply, this will be done here.
|
||||
if (hideSender) {
|
||||
result = MessageQueueSenderIF::sendMessage(reportTo, &reply);
|
||||
}
|
||||
else {
|
||||
result = queueToUse->sendMessage(reportTo, &reply);
|
||||
}
|
||||
|
||||
if (result != HasReturnvaluesIF::RETURN_OK){
|
||||
ipcStore->deleteData(storeAddress);
|
||||
}
|
||||
return result;
|
||||
}
|
125
action/ActionHelper.h
Normal file
125
action/ActionHelper.h
Normal file
@ -0,0 +1,125 @@
|
||||
#ifndef FSFW_ACTION_ACTIONHELPER_H_
|
||||
#define FSFW_ACTION_ACTIONHELPER_H_
|
||||
|
||||
#include "ActionMessage.h"
|
||||
#include "../serialize/SerializeIF.h"
|
||||
#include "../ipc/MessageQueueIF.h"
|
||||
/**
|
||||
* @brief Action Helper is a helper class which handles action messages
|
||||
*
|
||||
* Components which use the HasActionsIF can use this helper to handle
|
||||
* the action messages.
|
||||
* It does handle step messages as well as other answers to action calls.
|
||||
* It uses the executeAction function of its owner as callback.
|
||||
* The call of the initialize function is mandatory and needs a
|
||||
* valid MessageQueueIF pointer!
|
||||
*/
|
||||
class HasActionsIF;
|
||||
|
||||
class ActionHelper {
|
||||
public:
|
||||
/**
|
||||
* Constructor of the action helper
|
||||
* @param setOwner Pointer to the owner of the interface
|
||||
* @param useThisQueue messageQueue to be used, can be set during
|
||||
* initialize function as well.
|
||||
*/
|
||||
ActionHelper(HasActionsIF* setOwner, MessageQueueIF* useThisQueue);
|
||||
|
||||
virtual ~ActionHelper();
|
||||
/**
|
||||
* Function to be called from the owner with a new command message
|
||||
*
|
||||
* If the message is a valid action message the helper will use the
|
||||
* executeAction function from HasActionsIF.
|
||||
* If the message is invalid or the callback fails a message reply will be
|
||||
* sent to the sender of the message automatically.
|
||||
*
|
||||
* @param command Pointer to a command message received by the owner
|
||||
* @return HasReturnvaluesIF::RETURN_OK if the message is a action message,
|
||||
* CommandMessage::UNKNOWN_COMMAND if the message ID is unknown
|
||||
*/
|
||||
ReturnValue_t handleActionMessage(CommandMessage* command);
|
||||
/**
|
||||
* Helper initialize function. Must be called before use of any other
|
||||
* helper function
|
||||
* @param queueToUse_ Pointer to the messageQueue to be used, optional
|
||||
* if queue was set in constructor
|
||||
* @return Returns RETURN_OK if successful
|
||||
*/
|
||||
ReturnValue_t initialize(MessageQueueIF* queueToUse_ = nullptr);
|
||||
/**
|
||||
* Function to be called from the owner to send a step message.
|
||||
* Success or failure will be determined by the result value.
|
||||
*
|
||||
* @param step Number of steps already done
|
||||
* @param reportTo The messageQueueId to report the step message to
|
||||
* @param commandId ID of the executed command
|
||||
* @param result Result of the execution
|
||||
*/
|
||||
void step(uint8_t step, MessageQueueId_t reportTo,
|
||||
ActionId_t commandId,
|
||||
ReturnValue_t result = HasReturnvaluesIF::RETURN_OK);
|
||||
/**
|
||||
* Function to be called by the owner to send a action completion message
|
||||
*
|
||||
* @param reportTo MessageQueueId_t to report the action completion message to
|
||||
* @param commandId ID of the executed command
|
||||
* @param result Result of the execution
|
||||
*/
|
||||
void finish(MessageQueueId_t reportTo, ActionId_t commandId,
|
||||
ReturnValue_t result = HasReturnvaluesIF::RETURN_OK);
|
||||
/**
|
||||
* Function to be called by the owner if an action does report data.
|
||||
* Takes a SerializeIF* pointer and serializes it into the IPC store.
|
||||
* @param reportTo MessageQueueId_t to report the action completion
|
||||
* message to
|
||||
* @param replyId ID of the executed command
|
||||
* @param data Pointer to the data
|
||||
* @return Returns RETURN_OK if successful, otherwise failure code
|
||||
*/
|
||||
ReturnValue_t reportData(MessageQueueId_t reportTo, ActionId_t replyId,
|
||||
SerializeIF* data, bool hideSender = false);
|
||||
/**
|
||||
* Function to be called by the owner if an action does report data.
|
||||
* Takes the raw data and writes it into the IPC store.
|
||||
* @param reportTo MessageQueueId_t to report the action completion
|
||||
* message to
|
||||
* @param replyId ID of the executed command
|
||||
* @param data Pointer to the data
|
||||
* @return Returns RETURN_OK if successful, otherwise failure code
|
||||
*/
|
||||
ReturnValue_t reportData(MessageQueueId_t reportTo, ActionId_t replyId,
|
||||
const uint8_t* data, size_t dataSize, bool hideSender = false);
|
||||
/**
|
||||
* Function to setup the MessageQueueIF* of the helper. Can be used to
|
||||
* set the MessageQueueIF* if message queue is unavailable at construction
|
||||
* and initialize but must be setup before first call of other functions.
|
||||
* @param queue Queue to be used by the helper
|
||||
*/
|
||||
void setQueueToUse(MessageQueueIF *queue);
|
||||
protected:
|
||||
//! Offset added to the reported step number in step replies
|
||||
static const uint8_t STEP_OFFSET = 1;
|
||||
HasActionsIF* owner;//!< Pointer to the owner
|
||||
//! Queue to be used as response sender, has to be set in ctor or with
|
||||
//! setQueueToUse
|
||||
MessageQueueIF* queueToUse;
|
||||
//! Pointer to an IPC store, set during the initialize() call
|
||||
StorageManagerIF* ipcStore = nullptr;
|
||||
|
||||
/**
|
||||
* Internal function called by handleActionMessage
|
||||
* @param commandedBy MessageQueueID of Commander
|
||||
* @param actionId ID of action to be done
|
||||
* @param dataAddress Address of additional data in IPC Store
|
||||
*/
|
||||
virtual void prepareExecution(MessageQueueId_t commandedBy,
|
||||
ActionId_t actionId, store_address_t dataAddress);
|
||||
/**
|
||||
* @brief Default implementation is empty.
|
||||
*/
|
||||
virtual void resetHelper();
|
||||
};
|
||||
|
||||
#endif /* FSFW_ACTION_ACTIONHELPER_H_ */
|
79
action/ActionMessage.cpp
Normal file
79
action/ActionMessage.cpp
Normal file
@ -0,0 +1,79 @@
|
||||
#include "ActionMessage.h"
|
||||
#include "../objectmanager/ObjectManagerIF.h"
|
||||
#include "../storagemanager/StorageManagerIF.h"
|
||||
|
||||
ActionMessage::ActionMessage() {
|
||||
}
|
||||
|
||||
ActionMessage::~ActionMessage() {
|
||||
}
|
||||
|
||||
void ActionMessage::setCommand(CommandMessage* message, ActionId_t fid,
|
||||
store_address_t parameters) {
|
||||
message->setCommand(EXECUTE_ACTION);
|
||||
message->setParameter(fid);
|
||||
message->setParameter2(parameters.raw);
|
||||
}
|
||||
|
||||
ActionId_t ActionMessage::getActionId(const CommandMessage* message) {
|
||||
return ActionId_t(message->getParameter());
|
||||
}
|
||||
|
||||
store_address_t ActionMessage::getStoreId(const CommandMessage* message) {
|
||||
store_address_t temp;
|
||||
temp.raw = message->getParameter2();
|
||||
return temp;
|
||||
}
|
||||
|
||||
void ActionMessage::setStepReply(CommandMessage* message, ActionId_t fid, uint8_t step,
|
||||
ReturnValue_t result) {
|
||||
if (result == HasReturnvaluesIF::RETURN_OK) {
|
||||
message->setCommand(STEP_SUCCESS);
|
||||
} else {
|
||||
message->setCommand(STEP_FAILED);
|
||||
}
|
||||
message->setParameter(fid);
|
||||
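// Pack the step counter into the upper 16 bits and the return code into the lower 16 bits of parameter 2.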
message->setParameter2((step << 16) + result);
|
||||
}
|
||||
|
||||
uint8_t ActionMessage::getStep(const CommandMessage* message) {
|
||||
return uint8_t((message->getParameter2() >> 16) & 0xFF);
|
||||
}
|
||||
|
||||
ReturnValue_t ActionMessage::getReturnCode(const CommandMessage* message) {
|
||||
return message->getParameter2() & 0xFFFF;
|
||||
}
|
||||
|
||||
void ActionMessage::setDataReply(CommandMessage* message, ActionId_t actionId,
|
||||
store_address_t data) {
|
||||
message->setCommand(DATA_REPLY);
|
||||
message->setParameter(actionId);
|
||||
message->setParameter2(data.raw);
|
||||
}
|
||||
|
||||
void ActionMessage::setCompletionReply(CommandMessage* message,
|
||||
ActionId_t fid, ReturnValue_t result) {
|
||||
if (result == HasReturnvaluesIF::RETURN_OK) {
|
||||
message->setCommand(COMPLETION_SUCCESS);
|
||||
} else {
|
||||
message->setCommand(COMPLETION_FAILED);
|
||||
}
|
||||
message->setParameter(fid);
|
||||
message->setParameter2(result);
|
||||
}
|
||||
|
||||
void ActionMessage::clear(CommandMessage* message) {
|
||||
switch(message->getCommand()) {
|
||||
case EXECUTE_ACTION:
|
||||
case DATA_REPLY: {
|
||||
StorageManagerIF *ipcStore = objectManager->get<StorageManagerIF>(
|
||||
objects::IPC_STORE);
|
||||
if (ipcStore != NULL) {
|
||||
ipcStore->deleteData(getStoreId(message));
|
||||
}
|
||||
break;
|
||||
}
|
||||
default:
|
||||
break;
|
||||
}
|
||||
}
|
36
action/ActionMessage.h
Normal file
36
action/ActionMessage.h
Normal file
@ -0,0 +1,36 @@
|
||||
#ifndef FSFW_ACTION_ACTIONMESSAGE_H_
|
||||
#define FSFW_ACTION_ACTIONMESSAGE_H_
|
||||
|
||||
#include "../ipc/CommandMessage.h"
|
||||
#include "../objectmanager/ObjectManagerIF.h"
|
||||
#include "../storagemanager/StorageManagerIF.h"
|
||||
typedef uint32_t ActionId_t;
|
||||
|
||||
class ActionMessage {
|
||||
private:
|
||||
ActionMessage();
|
||||
public:
|
||||
static const uint8_t MESSAGE_ID = messagetypes::ACTION;
|
||||
static const Command_t EXECUTE_ACTION = MAKE_COMMAND_ID(1);
|
||||
static const Command_t STEP_SUCCESS = MAKE_COMMAND_ID(2);
|
||||
static const Command_t STEP_FAILED = MAKE_COMMAND_ID(3);
|
||||
static const Command_t DATA_REPLY = MAKE_COMMAND_ID(4);
|
||||
static const Command_t COMPLETION_SUCCESS = MAKE_COMMAND_ID(5);
|
||||
static const Command_t COMPLETION_FAILED = MAKE_COMMAND_ID(6);
|
||||
virtual ~ActionMessage();
|
||||
static void setCommand(CommandMessage* message, ActionId_t fid,
|
||||
store_address_t parameters);
|
||||
static ActionId_t getActionId(const CommandMessage* message );
|
||||
static store_address_t getStoreId(const CommandMessage* message );
|
||||
static void setStepReply(CommandMessage* message, ActionId_t fid,
|
||||
uint8_t step, ReturnValue_t result = HasReturnvaluesIF::RETURN_OK);
|
||||
static uint8_t getStep(const CommandMessage* message );
|
||||
static ReturnValue_t getReturnCode(const CommandMessage* message );
|
||||
static void setDataReply(CommandMessage* message, ActionId_t actionId,
|
||||
store_address_t data);
|
||||
static void setCompletionReply(CommandMessage* message, ActionId_t fid,
|
||||
ReturnValue_t result = HasReturnvaluesIF::RETURN_OK);
|
||||
static void clear(CommandMessage* message);
|
||||
};
|
||||
|
||||
#endif /* FSFW_ACTION_ACTIONMESSAGE_H_ */
|
127
action/CommandActionHelper.cpp
Normal file
127
action/CommandActionHelper.cpp
Normal file
@ -0,0 +1,127 @@
|
||||
#include "ActionMessage.h"
|
||||
#include "CommandActionHelper.h"
|
||||
#include "CommandsActionsIF.h"
|
||||
#include "HasActionsIF.h"
|
||||
#include "../objectmanager/ObjectManagerIF.h"
|
||||
|
||||
CommandActionHelper::CommandActionHelper(CommandsActionsIF *setOwner) :
|
||||
owner(setOwner), queueToUse(NULL), ipcStore(
|
||||
NULL), commandCount(0), lastTarget(0) {
|
||||
}
|
||||
|
||||
CommandActionHelper::~CommandActionHelper() {
|
||||
}
|
||||
|
||||
ReturnValue_t CommandActionHelper::commandAction(object_id_t commandTo,
|
||||
ActionId_t actionId, SerializeIF *data) {
|
||||
HasActionsIF *receiver = objectManager->get<HasActionsIF>(commandTo);
|
||||
if (receiver == NULL) {
|
||||
return CommandsActionsIF::OBJECT_HAS_NO_FUNCTIONS;
|
||||
}
|
||||
store_address_t storeId;
|
||||
uint8_t *storePointer;
|
||||
size_t maxSize = data->getSerializedSize();
|
||||
ReturnValue_t result = ipcStore->getFreeElement(&storeId, maxSize,
|
||||
&storePointer);
|
||||
if (result != HasReturnvaluesIF::RETURN_OK) {
|
||||
return result;
|
||||
}
|
||||
size_t size = 0;
|
||||
result = data->serialize(&storePointer, &size, maxSize,
|
||||
SerializeIF::Endianness::BIG);
|
||||
if (result != HasReturnvaluesIF::RETURN_OK) {
|
||||
return result;
|
||||
}
|
||||
return sendCommand(receiver->getCommandQueue(), actionId, storeId);
|
||||
}
|
||||
|
||||
ReturnValue_t CommandActionHelper::commandAction(object_id_t commandTo,
|
||||
ActionId_t actionId, const uint8_t *data, uint32_t size) {
|
||||
// if (commandCount != 0) {
|
||||
// return CommandsFunctionsIF::ALREADY_COMMANDING;
|
||||
// }
|
||||
HasActionsIF *receiver = objectManager->get<HasActionsIF>(commandTo);
|
||||
if (receiver == NULL) {
|
||||
return CommandsActionsIF::OBJECT_HAS_NO_FUNCTIONS;
|
||||
}
|
||||
store_address_t storeId;
|
||||
ReturnValue_t result = ipcStore->addData(&storeId, data, size);
|
||||
if (result != HasReturnvaluesIF::RETURN_OK) {
|
||||
return result;
|
||||
}
|
||||
return sendCommand(receiver->getCommandQueue(), actionId, storeId);
|
||||
}
|
||||
|
||||
ReturnValue_t CommandActionHelper::sendCommand(MessageQueueId_t queueId,
|
||||
ActionId_t actionId, store_address_t storeId) {
|
||||
CommandMessage command;
|
||||
ActionMessage::setCommand(&command, actionId, storeId);
|
||||
ReturnValue_t result = queueToUse->sendMessage(queueId, &command);
|
||||
if (result != HasReturnvaluesIF::RETURN_OK) {
|
||||
ipcStore->deleteData(storeId);
|
||||
}
|
||||
lastTarget = queueId;
|
||||
commandCount++;
|
||||
return result;
|
||||
}
|
||||
|
||||
ReturnValue_t CommandActionHelper::initialize() {
|
||||
ipcStore = objectManager->get<StorageManagerIF>(objects::IPC_STORE);
|
||||
if (ipcStore == NULL) {
|
||||
return HasReturnvaluesIF::RETURN_FAILED;
|
||||
}
|
||||
|
||||
queueToUse = owner->getCommandQueuePtr();
|
||||
if (queueToUse == NULL) {
|
||||
return HasReturnvaluesIF::RETURN_FAILED;
|
||||
}
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
}
|
||||
|
||||
ReturnValue_t CommandActionHelper::handleReply(CommandMessage *reply) {
|
||||
if (reply->getSender() != lastTarget) {
|
||||
return HasReturnvaluesIF::RETURN_FAILED;
|
||||
}
|
||||
switch (reply->getCommand()) {
|
||||
case ActionMessage::COMPLETION_SUCCESS:
|
||||
commandCount--;
|
||||
owner->completionSuccessfulReceived(ActionMessage::getActionId(reply));
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
case ActionMessage::COMPLETION_FAILED:
|
||||
commandCount--;
|
||||
owner->completionFailedReceived(ActionMessage::getActionId(reply),
|
||||
ActionMessage::getReturnCode(reply));
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
case ActionMessage::STEP_SUCCESS:
|
||||
owner->stepSuccessfulReceived(ActionMessage::getActionId(reply),
|
||||
ActionMessage::getStep(reply));
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
case ActionMessage::STEP_FAILED:
|
||||
commandCount--;
|
||||
owner->stepFailedReceived(ActionMessage::getActionId(reply),
|
||||
ActionMessage::getStep(reply),
|
||||
ActionMessage::getReturnCode(reply));
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
case ActionMessage::DATA_REPLY:
|
||||
extractDataForOwner(ActionMessage::getActionId(reply),
|
||||
ActionMessage::getStoreId(reply));
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
default:
|
||||
return HasReturnvaluesIF::RETURN_FAILED;
|
||||
}
|
||||
}
|
||||
|
||||
uint8_t CommandActionHelper::getCommandCount() const {
|
||||
return commandCount;
|
||||
}
|
||||
|
||||
void CommandActionHelper::extractDataForOwner(ActionId_t actionId, store_address_t storeId) {
|
||||
const uint8_t * data = NULL;
|
||||
size_t size = 0;
|
||||
ReturnValue_t result = ipcStore->getData(storeId, &data, &size);
|
||||
if (result != HasReturnvaluesIF::RETURN_OK) {
|
||||
return;
|
||||
}
|
||||
owner->dataReceived(actionId, data, size);
|
||||
ipcStore->deleteData(storeId);
|
||||
}
|
36
action/CommandActionHelper.h
Normal file
36
action/CommandActionHelper.h
Normal file
@ -0,0 +1,36 @@
|
||||
#ifndef COMMANDACTIONHELPER_H_
|
||||
#define COMMANDACTIONHELPER_H_
|
||||
|
||||
#include "ActionMessage.h"
|
||||
#include "../objectmanager/ObjectManagerIF.h"
|
||||
#include "../returnvalues/HasReturnvaluesIF.h"
|
||||
#include "../serialize/SerializeIF.h"
|
||||
#include "../storagemanager/StorageManagerIF.h"
|
||||
#include "../ipc/MessageQueueIF.h"
|
||||
|
||||
class CommandsActionsIF;
|
||||
|
||||
class CommandActionHelper {
|
||||
friend class CommandsActionsIF;
|
||||
public:
|
||||
CommandActionHelper(CommandsActionsIF* owner);
|
||||
virtual ~CommandActionHelper();
|
||||
ReturnValue_t commandAction(object_id_t commandTo,
|
||||
ActionId_t actionId, const uint8_t* data, uint32_t size);
|
||||
ReturnValue_t commandAction(object_id_t commandTo,
|
||||
ActionId_t actionId, SerializeIF* data);
|
||||
ReturnValue_t initialize();
|
||||
ReturnValue_t handleReply(CommandMessage* reply);
|
||||
uint8_t getCommandCount() const;
|
||||
private:
|
||||
CommandsActionsIF* owner;
|
||||
MessageQueueIF* queueToUse;
|
||||
StorageManagerIF* ipcStore;
|
||||
uint8_t commandCount;
|
||||
MessageQueueId_t lastTarget;
|
||||
void extractDataForOwner(ActionId_t actionId, store_address_t storeId);
|
||||
ReturnValue_t sendCommand(MessageQueueId_t queueId, ActionId_t actionId,
|
||||
store_address_t storeId);
|
||||
};
|
||||
|
||||
#endif /* COMMANDACTIONHELPER_H_ */
|
37
action/CommandsActionsIF.h
Normal file
37
action/CommandsActionsIF.h
Normal file
@ -0,0 +1,37 @@
|
||||
#ifndef FSFW_ACTION_COMMANDSACTIONSIF_H_
|
||||
#define FSFW_ACTION_COMMANDSACTIONSIF_H_
|
||||
|
||||
#include "CommandActionHelper.h"
|
||||
#include "../returnvalues/HasReturnvaluesIF.h"
|
||||
#include "../ipc/MessageQueueIF.h"
|
||||
|
||||
/**
|
||||
* Interface to separate commanding actions of other objects.
|
||||
* In next iteration, IF should be shortened to three calls:
|
||||
* - dataReceived(data)
|
||||
* - successReceived(id, step)
|
||||
* - failureReceived(id, step, cause)
|
||||
* or even
|
||||
* - replyReceived(id, step, cause) (if cause == OK, it's a success).
|
||||
*/
|
||||
class CommandsActionsIF {
|
||||
friend class CommandActionHelper;
|
||||
public:
|
||||
static const uint8_t INTERFACE_ID = CLASS_ID::COMMANDS_ACTIONS_IF;
|
||||
static const ReturnValue_t OBJECT_HAS_NO_FUNCTIONS = MAKE_RETURN_CODE(1);
|
||||
static const ReturnValue_t ALREADY_COMMANDING = MAKE_RETURN_CODE(2);
|
||||
virtual ~CommandsActionsIF() {}
|
||||
virtual MessageQueueIF* getCommandQueuePtr() = 0;
|
||||
protected:
|
||||
virtual void stepSuccessfulReceived(ActionId_t actionId, uint8_t step) = 0;
|
||||
virtual void stepFailedReceived(ActionId_t actionId, uint8_t step,
|
||||
ReturnValue_t returnCode) = 0;
|
||||
virtual void dataReceived(ActionId_t actionId, const uint8_t* data,
|
||||
uint32_t size) = 0;
|
||||
virtual void completionSuccessfulReceived(ActionId_t actionId) = 0;
|
||||
virtual void completionFailedReceived(ActionId_t actionId,
|
||||
ReturnValue_t returnCode) = 0;
|
||||
};
|
||||
|
||||
|
||||
#endif /* FSFW_ACTION_COMMANDSACTIONSIF_H_ */
|
63
action/HasActionsIF.h
Normal file
63
action/HasActionsIF.h
Normal file
@ -0,0 +1,63 @@
|
||||
#ifndef FSFW_ACTION_HASACTIONSIF_H_
|
||||
#define FSFW_ACTION_HASACTIONSIF_H_
|
||||
|
||||
#include "ActionHelper.h"
|
||||
#include "ActionMessage.h"
|
||||
#include "SimpleActionHelper.h"
|
||||
#include "../returnvalues/HasReturnvaluesIF.h"
|
||||
#include "../ipc/MessageQueueIF.h"
|
||||
|
||||
/**
|
||||
* @brief
|
||||
* Interface for component which uses actions
|
||||
*
|
||||
* @details
|
||||
* This interface is used to execute actions in the component. Actions, in the
|
||||
* sense of this interface, are activities with a well-defined beginning and
|
||||
* end in time. They may adjust sub-states of components, but are not supposed
|
||||
* to change the main mode of operation, which is handled with the HasModesIF
|
||||
* described below.
|
||||
*
|
||||
* The HasActionsIF allows components to define such actions and make them
|
||||
* available for other components to use. Implementing the interface is
|
||||
* straightforward: There’s a single executeAction call, which provides an
|
||||
* identifier for the action to execute, as well as arbitrary parameters for
|
||||
* input.
|
||||
* Aside from direct, software-based actions, it is used in device handler
|
||||
* components as an interface to forward commands to devices.
|
||||
* Implementing components of the interface are supposed to check identifier
|
||||
* (ID) and parameters and immediately start execution of the action.
|
||||
* It is, however, not required to immediately finish execution.
|
||||
* Instead, this may be deferred to a later point in time, at which the
|
||||
* component needs to inform the caller about finished or failed execution.
|
||||
*
|
||||
* @ingroup interfaces
|
||||
*/
|
||||
class HasActionsIF {
|
||||
public:
|
||||
static const uint8_t INTERFACE_ID = CLASS_ID::HAS_ACTIONS_IF;
|
||||
static const ReturnValue_t IS_BUSY = MAKE_RETURN_CODE(1);
|
||||
static const ReturnValue_t INVALID_PARAMETERS = MAKE_RETURN_CODE(2);
|
||||
static const ReturnValue_t EXECUTION_FINISHED = MAKE_RETURN_CODE(3);
|
||||
static const ReturnValue_t INVALID_ACTION_ID = MAKE_RETURN_CODE(4);
|
||||
virtual ~HasActionsIF() { }
|
||||
/**
|
||||
* Function to get the MessageQueueId_t of the implementing object
|
||||
* @return MessageQueueId_t of the object
|
||||
*/
|
||||
virtual MessageQueueId_t getCommandQueue() const = 0;
|
||||
/**
|
||||
* Execute or initialize the execution of a certain function.
|
||||
* The ActionHelpers will execute this function and behave differently
|
||||
* depending on the returnvalue.
|
||||
*
|
||||
* @return
|
||||
* -@c EXECUTION_FINISHED Finish reply will be generated
|
||||
* -@c Not RETURN_OK Step failure reply will be generated
|
||||
*/
|
||||
virtual ReturnValue_t executeAction(ActionId_t actionId,
|
||||
MessageQueueId_t commandedBy, const uint8_t* data, size_t size) = 0;
|
||||
};
|
||||
|
||||
|
||||
#endif /* FSFW_ACTION_HASACTIONSIF_H_ */
|
75
action/SimpleActionHelper.cpp
Normal file
75
action/SimpleActionHelper.cpp
Normal file
@ -0,0 +1,75 @@
|
||||
#include "HasActionsIF.h"
|
||||
#include "SimpleActionHelper.h"
|
||||
|
||||
SimpleActionHelper::SimpleActionHelper(HasActionsIF* setOwner,
|
||||
MessageQueueIF* useThisQueue) :
|
||||
ActionHelper(setOwner, useThisQueue), isExecuting(false) {
|
||||
}
|
||||
|
||||
SimpleActionHelper::~SimpleActionHelper() {
|
||||
}
|
||||
|
||||
void SimpleActionHelper::step(ReturnValue_t result) {
|
||||
// STEP_OFFSET is subtracted to compensate for adding the offset in the base
|
||||
// method, which is not necessary here.
|
||||
ActionHelper::step(stepCount - STEP_OFFSET, lastCommander, lastAction,
|
||||
result);
|
||||
if (result != HasReturnvaluesIF::RETURN_OK) {
|
||||
resetHelper();
|
||||
}
|
||||
}
|
||||
|
||||
void SimpleActionHelper::finish(ReturnValue_t result) {
|
||||
ActionHelper::finish(lastCommander, lastAction, result);
|
||||
resetHelper();
|
||||
}
|
||||
|
||||
ReturnValue_t SimpleActionHelper::reportData(SerializeIF* data) {
|
||||
return ActionHelper::reportData(lastCommander, lastAction, data);
|
||||
}
|
||||
|
||||
void SimpleActionHelper::resetHelper() {
|
||||
stepCount = 0;
|
||||
isExecuting = false;
|
||||
lastAction = 0;
|
||||
lastCommander = 0;
|
||||
}
|
||||
|
||||
void SimpleActionHelper::prepareExecution(MessageQueueId_t commandedBy,
|
||||
ActionId_t actionId, store_address_t dataAddress) {
|
||||
CommandMessage reply;
|
||||
if (isExecuting) {
|
||||
ipcStore->deleteData(dataAddress);
|
||||
ActionMessage::setStepReply(&reply, actionId, 0,
|
||||
HasActionsIF::IS_BUSY);
|
||||
queueToUse->sendMessage(commandedBy, &reply);
// Reject the new command while an action is still being executed.
return;
|
||||
}
|
||||
const uint8_t* dataPtr = NULL;
|
||||
size_t size = 0;
|
||||
ReturnValue_t result = ipcStore->getData(dataAddress, &dataPtr, &size);
|
||||
if (result != HasReturnvaluesIF::RETURN_OK) {
|
||||
ActionMessage::setStepReply(&reply, actionId, 0, result);
|
||||
queueToUse->sendMessage(commandedBy, &reply);
|
||||
return;
|
||||
}
|
||||
lastCommander = commandedBy;
|
||||
lastAction = actionId;
|
||||
result = owner->executeAction(actionId, commandedBy, dataPtr, size);
|
||||
ipcStore->deleteData(dataAddress);
|
||||
switch (result) {
|
||||
case HasReturnvaluesIF::RETURN_OK:
|
||||
isExecuting = true;
|
||||
stepCount++;
|
||||
break;
|
||||
case HasActionsIF::EXECUTION_FINISHED:
|
||||
ActionMessage::setCompletionReply(&reply, actionId,
|
||||
HasReturnvaluesIF::RETURN_OK);
|
||||
queueToUse->sendMessage(commandedBy, &reply);
|
||||
break;
|
||||
default:
|
||||
ActionMessage::setStepReply(&reply, actionId, 0, result);
|
||||
queueToUse->sendMessage(commandedBy, &reply);
|
||||
break;
|
||||
}
|
||||
|
||||
}
|
30
action/SimpleActionHelper.h
Normal file
30
action/SimpleActionHelper.h
Normal file
@ -0,0 +1,30 @@
|
||||
#ifndef FSFW_ACTION_SIMPLEACTIONHELPER_H_
|
||||
#define FSFW_ACTION_SIMPLEACTIONHELPER_H_
|
||||
|
||||
#include "ActionHelper.h"
|
||||
|
||||
/**
|
||||
* @brief This is an action helper which is only able to service one action
|
||||
* at a time but remembers last commander and last action which
|
||||
* simplifies usage
|
||||
*/
|
||||
class SimpleActionHelper: public ActionHelper {
|
||||
public:
|
||||
SimpleActionHelper(HasActionsIF* setOwner, MessageQueueIF* useThisQueue);
|
||||
virtual ~SimpleActionHelper();
|
||||
void step(ReturnValue_t result = HasReturnvaluesIF::RETURN_OK);
|
||||
void finish(ReturnValue_t result = HasReturnvaluesIF::RETURN_OK);
|
||||
ReturnValue_t reportData(SerializeIF* data);
|
||||
|
||||
protected:
|
||||
void prepareExecution(MessageQueueId_t commandedBy, ActionId_t actionId,
|
||||
store_address_t dataAddress);
|
||||
virtual void resetHelper();
|
||||
private:
|
||||
bool isExecuting;
|
||||
MessageQueueId_t lastCommander = MessageQueueIF::NO_QUEUE;
|
||||
ActionId_t lastAction = 0;
|
||||
uint8_t stepCount = 0;
|
||||
};
|
||||
|
||||
#endif /* SIMPLEACTIONHELPER_H_ */
|
@ -1,29 +0,0 @@
|
||||
FROM ubuntu:focal
|
||||
|
||||
RUN apt-get update
|
||||
RUN apt-get --yes upgrade
|
||||
|
||||
#tzdata is a dependency, won't install otherwise
|
||||
ARG DEBIAN_FRONTEND=noninteractive
|
||||
RUN apt-get --yes install gcc g++ cmake make lcov git valgrind nano iputils-ping python3 pip doxygen graphviz rsync
|
||||
|
||||
RUN python3 -m pip install sphinx breathe
|
||||
|
||||
RUN git clone https://github.com/catchorg/Catch2.git && \
|
||||
cd Catch2 && \
|
||||
git checkout v3.1.0 && \
|
||||
cmake -Bbuild -H. -DBUILD_TESTING=OFF && \
|
||||
cmake --build build/ --target install
|
||||
|
||||
RUN git clone https://github.com/ETLCPP/etl.git && \
|
||||
cd etl && \
|
||||
git checkout 20.28.0 && \
|
||||
cmake -B build . && \
|
||||
cmake --install build/
|
||||
|
||||
#ssh needs a valid user to work
|
||||
RUN adduser --uid 114 jenkins
|
||||
|
||||
#add documentation server to known hosts
|
||||
RUN echo "|1|/LzCV4BuTmTb2wKnD146l9fTKgQ=|NJJtVjvWbtRt8OYqFgcYRnMQyVw= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNL8ssTonYtgiR/6RRlSIK9WU1ywOcJmxFTLcEblAwH7oifZzmYq3XRfwXrgfMpylEfMFYfCU8JRqtmi19xc21A=" >> /etc/ssh/ssh_known_hosts
|
||||
RUN echo "|1|CcBvBc3EG03G+XM5rqRHs6gK/Gg=|oGeJQ+1I8NGI2THIkJsW92DpTzs= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNL8ssTonYtgiR/6RRlSIK9WU1ywOcJmxFTLcEblAwH7oifZzmYq3XRfwXrgfMpylEfMFYfCU8JRqtmi19xc21A=" >> /etc/ssh/ssh_known_hosts
|
127
automation/Jenkinsfile
vendored
127
automation/Jenkinsfile
vendored
@ -1,127 +0,0 @@
|
||||
pipeline {
|
||||
environment {
|
||||
BUILDDIR_HOST = 'cmake-build-tests-host'
|
||||
BUILDDIR_LINUX = 'cmake-build-tests-linux'
|
||||
DOCDDIR = 'cmake-build-documentation'
|
||||
}
|
||||
agent {
|
||||
docker {
|
||||
image 'fsfw-ci:d6'
|
||||
args '--network host --sysctl fs.mqueue.msg_max=100'
|
||||
}
|
||||
}
|
||||
stages {
|
||||
stage('Host') {
|
||||
stages{
|
||||
stage('Clean') {
|
||||
steps {
|
||||
sh 'rm -rf $BUILDDIR_HOST'
|
||||
}
|
||||
}
|
||||
stage('Configure') {
|
||||
steps {
|
||||
dir(BUILDDIR_HOST) {
|
||||
sh 'cmake -DFSFW_OSAL=host -DFSFW_BUILD_TESTS=ON -DFSFW_CICD_BUILD=ON ..'
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Build') {
|
||||
steps {
|
||||
dir(BUILDDIR_HOST) {
|
||||
sh 'cmake --build . -j4'
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Unittests') {
|
||||
steps {
|
||||
dir(BUILDDIR_HOST) {
|
||||
sh 'cmake --build . -- fsfw-tests_coverage -j4'
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Valgrind') {
|
||||
steps {
|
||||
dir(BUILDDIR_HOST) {
|
||||
sh 'valgrind --leak-check=full --error-exitcode=1 ./fsfw-tests'
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Linux') {
|
||||
stages{
|
||||
stage('Clean') {
|
||||
steps {
|
||||
sh 'rm -rf $BUILDDIR_LINUX'
|
||||
}
|
||||
}
|
||||
stage('Configure') {
|
||||
steps {
|
||||
dir(BUILDDIR_LINUX) {
|
||||
sh 'cmake -DFSFW_OSAL=linux -DFSFW_BUILD_TESTS=ON -DFSFW_CICD_BUILD=ON ..'
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Build') {
|
||||
steps {
|
||||
dir(BUILDDIR_LINUX) {
|
||||
sh 'cmake --build . -j4'
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Unittests') {
|
||||
steps {
|
||||
dir(BUILDDIR_LINUX) {
|
||||
sh 'cmake --build . -- fsfw-tests_coverage -j4'
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Valgrind') {
|
||||
steps {
|
||||
dir(BUILDDIR_LINUX) {
|
||||
sh 'valgrind --leak-check=full --error-exitcode=1 ./fsfw-tests'
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Documentation') {
|
||||
when {
|
||||
branch 'development'
|
||||
}
|
||||
steps {
|
||||
dir(DOCDDIR) {
|
||||
sh 'cmake -DFSFW_BUILD_DOCS=ON -DFSFW_OSAL=host ..'
|
||||
sh 'make Sphinx'
|
||||
sshagent(credentials: ['documentation-buildfix']) {
|
||||
sh 'rsync -r --delete docs/sphinx/* buildfix@documentation.irs.uni-stuttgart.de:/fsfw/development'
|
||||
}
|
||||
}
|
||||
dir(BUILDDIR_LINUX) {
|
||||
sshagent(credentials: ['documentation-buildfix']) {
|
||||
sh 'rsync -r --delete fsfw-tests_coverage/* buildfix@documentation.irs.uni-stuttgart.de:/fsfw/coverage/development'
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Master Documentation') {
|
||||
when {
|
||||
branch 'master'
|
||||
}
|
||||
steps {
|
||||
dir(DOCDDIR) {
|
||||
sh 'cmake -DFSFW_BUILD_DOCS=ON -DFSFW_OSAL=host ..'
|
||||
sh 'make Sphinx'
|
||||
sshagent(credentials: ['documentation-buildfix']) {
|
||||
sh 'rsync -r --delete docs/sphinx/* buildfix@documentation.irs.uni-stuttgart.de:/fsfw/master'
|
||||
}
|
||||
}
|
||||
dir(BUILDDIR_LINUX) {
|
||||
sshagent(credentials: ['documentation-buildfix']) {
|
||||
sh 'rsync -r --delete fsfw-tests_coverage/* buildfix@documentation.irs.uni-stuttgart.de:/fsfw/coverage/master'
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
@ -1,13 +0,0 @@
|
||||
# Look for an executable called sphinx-build
|
||||
find_program(SPHINX_EXECUTABLE
|
||||
NAMES sphinx-build
|
||||
DOC "Path to sphinx-build executable")
|
||||
|
||||
include(FindPackageHandleStandardArgs)
|
||||
|
||||
# Handle standard arguments to find_package like REQUIRED and QUIET
|
||||
find_package_handle_standard_args(
|
||||
Sphinx
|
||||
"Failed to find sphinx-build executable"
|
||||
SPHINX_EXECUTABLE
|
||||
)
|
@ -1,28 +0,0 @@
# Determines the git version with git describe and returns it by setting
# the GIT_INFO list in the parent scope. The list has the following entries
# 1. Full version string
# 2. Major version
# 3. Minor version
# 4. Revision
# 5. git SHA hash and commits since tag
function(determine_version_with_git)
  include(GetGitRevisionDescription)
  git_describe(VERSION ${ARGN})
  string(FIND ${VERSION} "." VALID_VERSION)
  if(VALID_VERSION EQUAL -1)
    message(WARNING "Version string ${VERSION} retrieved with git describe is invalid")
    return()
  endif()
  # Parse the version information into pieces.
  string(REGEX REPLACE "^v([0-9]+)\\..*" "\\1" _VERSION_MAJOR "${VERSION}")
  string(REGEX REPLACE "^v[0-9]+\\.([0-9]+).*" "\\1" _VERSION_MINOR "${VERSION}")
  string(REGEX REPLACE "^v[0-9]+\\.[0-9]+\\.([0-9]+).*" "\\1" _VERSION_PATCH "${VERSION}")
  string(REGEX REPLACE "^v[0-9]+\\.[0-9]+\\.[0-9]+-(.*)" "\\1" VERSION_SHA1 "${VERSION}")
  set(GIT_INFO ${VERSION})
  list(APPEND GIT_INFO ${_VERSION_MAJOR})
  list(APPEND GIT_INFO ${_VERSION_MINOR})
  list(APPEND GIT_INFO ${_VERSION_PATCH})
  list(APPEND GIT_INFO ${VERSION_SHA1})
  set(GIT_INFO ${GIT_INFO} PARENT_SCOPE)
  message(STATUS "${MSG_PREFIX} Set git version info into GIT_INFO from the git tag ${VERSION}")
endfunction()
@ -1,7 +0,0 @@
The files in the `bilke` folder were manually copied and pasted from the
[cmake-modules repository](https://github.com/bilke/cmake-modules). It was decided to do
this because only a small subset of its provided functions is needed.

The files in the `rpavlik` folder were manually copied and pasted from the
[cmake-modules repository](https://github.com/rpavlik/cmake-modules). It was decided to do
this because only a small subset of its provided functions is needed.
@ -1,719 +0,0 @@
|
||||
# Copyright (c) 2012 - 2017, Lars Bilke
|
||||
# All rights reserved.
|
||||
#
|
||||
# Redistribution and use in source and binary forms, with or without modification,
|
||||
# are permitted provided that the following conditions are met:
|
||||
#
|
||||
# 1. Redistributions of source code must retain the above copyright notice, this
|
||||
# list of conditions and the following disclaimer.
|
||||
#
|
||||
# 2. Redistributions in binary form must reproduce the above copyright notice,
|
||||
# this list of conditions and the following disclaimer in the documentation
|
||||
# and/or other materials provided with the distribution.
|
||||
#
|
||||
# 3. Neither the name of the copyright holder nor the names of its contributors
|
||||
# may be used to endorse or promote products derived from this software without
|
||||
# specific prior written permission.
|
||||
#
|
||||
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
|
||||
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
|
||||
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
|
||||
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
|
||||
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
|
||||
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
|
||||
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
|
||||
# ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
|
||||
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
#
|
||||
# CHANGES:
|
||||
#
|
||||
# 2012-01-31, Lars Bilke
|
||||
# - Enable Code Coverage
|
||||
#
|
||||
# 2013-09-17, Joakim Söderberg
|
||||
# - Added support for Clang.
|
||||
# - Some additional usage instructions.
|
||||
#
|
||||
# 2016-02-03, Lars Bilke
|
||||
# - Refactored functions to use named parameters
|
||||
#
|
||||
# 2017-06-02, Lars Bilke
|
||||
# - Merged with modified version from github.com/ufz/ogs
|
||||
#
|
||||
# 2019-05-06, Anatolii Kurotych
|
||||
# - Remove unnecessary --coverage flag
|
||||
#
|
||||
# 2019-12-13, FeRD (Frank Dana)
|
||||
# - Deprecate COVERAGE_LCOVR_EXCLUDES and COVERAGE_GCOVR_EXCLUDES lists in favor
|
||||
# of tool-agnostic COVERAGE_EXCLUDES variable, or EXCLUDE setup arguments.
|
||||
# - CMake 3.4+: All excludes can be specified relative to BASE_DIRECTORY
|
||||
# - All setup functions: accept BASE_DIRECTORY, EXCLUDE list
|
||||
# - Set lcov basedir with -b argument
|
||||
# - Add automatic --demangle-cpp in lcovr, if 'c++filt' is available (can be
|
||||
# overridden with NO_DEMANGLE option in setup_target_for_coverage_lcovr().)
|
||||
# - Delete output dir, .info file on 'make clean'
|
||||
# - Remove Python detection, since version mismatches will break gcovr
|
||||
# - Minor cleanup (lowercase function names, update examples...)
|
||||
#
|
||||
# 2019-12-19, FeRD (Frank Dana)
|
||||
# - Rename Lcov outputs, make filtered file canonical, fix cleanup for targets
|
||||
#
|
||||
# 2020-01-19, Bob Apthorpe
|
||||
# - Added gfortran support
|
||||
#
|
||||
# 2020-02-17, FeRD (Frank Dana)
|
||||
# - Make all add_custom_target()s VERBATIM to auto-escape wildcard characters
|
||||
# in EXCLUDEs, and remove manual escaping from gcovr targets
|
||||
#
|
||||
# 2021-01-19, Robin Mueller
|
||||
# - Add CODE_COVERAGE_VERBOSE option which allows printing out the commands which are run
|
||||
# - Added the option for users to set the GCOVR_ADDITIONAL_ARGS variable to supply additional
|
||||
# flags to the gcovr command
|
||||
#
|
||||
# 2020-05-04, Mihchael Davis
|
||||
# - Add -fprofile-abs-path to make gcno files contain absolute paths
|
||||
# - Fix BASE_DIRECTORY not working when defined
|
||||
# - Change BYPRODUCT from folder to index.html to stop ninja from complaining about double defines
|
||||
#
|
||||
# 2021-05-10, Martin Stump
|
||||
# - Check if the generator is multi-config before warning about non-Debug builds
|
||||
#
|
||||
# 2022-02-22, Marko Wehle
|
||||
# - Change gcovr output from -o <filename> for --xml <filename> and --html <filename> output respectively.
|
||||
# This will allow for Multiple Output Formats at the same time by making use of GCOVR_ADDITIONAL_ARGS, e.g. GCOVR_ADDITIONAL_ARGS "--txt".
|
||||
#
|
||||
# USAGE:
|
||||
#
|
||||
# 1. Copy this file into your cmake modules path.
|
||||
#
|
||||
# 2. Add the following line to your CMakeLists.txt (best inside an if-condition
|
||||
# using a CMake option() to enable it just optionally):
|
||||
# include(CodeCoverage)
|
||||
#
|
||||
# 3. Append necessary compiler flags for all supported source files:
|
||||
# append_coverage_compiler_flags()
|
||||
# Or for specific target:
|
||||
# append_coverage_compiler_flags_to_target(YOUR_TARGET_NAME)
|
||||
#
|
||||
# 3.a (OPTIONAL) Set appropriate optimization flags, e.g. -O0, -O1 or -Og
|
||||
#
|
||||
# 4. If you need to exclude additional directories from the report, specify them
|
||||
# using full paths in the COVERAGE_EXCLUDES variable before calling
|
||||
# setup_target_for_coverage_*().
|
||||
# Example:
|
||||
# set(COVERAGE_EXCLUDES
|
||||
# '${PROJECT_SOURCE_DIR}/src/dir1/*'
|
||||
# '/path/to/my/src/dir2/*')
|
||||
# Or, use the EXCLUDE argument to setup_target_for_coverage_*().
|
||||
# Example:
|
||||
# setup_target_for_coverage_lcov(
|
||||
# NAME coverage
|
||||
# EXECUTABLE testrunner
|
||||
# EXCLUDE "${PROJECT_SOURCE_DIR}/src/dir1/*" "/path/to/my/src/dir2/*")
|
||||
#
|
||||
# 4.a NOTE: With CMake 3.4+, COVERAGE_EXCLUDES or EXCLUDE can also be set
|
||||
# relative to the BASE_DIRECTORY (default: PROJECT_SOURCE_DIR)
|
||||
# Example:
|
||||
# set(COVERAGE_EXCLUDES "dir1/*")
|
||||
# setup_target_for_coverage_gcovr_html(
|
||||
# NAME coverage
|
||||
# EXECUTABLE testrunner
|
||||
# BASE_DIRECTORY "${PROJECT_SOURCE_DIR}/src"
|
||||
# EXCLUDE "dir2/*")
|
||||
#
|
||||
# 5. Use the functions described below to create a custom make target which
|
||||
# runs your test executable and produces a code coverage report.
|
||||
#
|
||||
# 6. Build a Debug build:
|
||||
# cmake -DCMAKE_BUILD_TYPE=Debug ..
|
||||
# make
|
||||
# make my_coverage_target
|
||||
#
|
||||
|
||||
include(CMakeParseArguments)
|
||||
|
||||
option(CODE_COVERAGE_VERBOSE "Verbose information" FALSE)
|
||||
|
||||
# Check prereqs
|
||||
find_program( GCOV_PATH gcov )
|
||||
find_program( LCOV_PATH NAMES lcov lcov.bat lcov.exe lcov.perl)
|
||||
find_program( FASTCOV_PATH NAMES fastcov fastcov.py )
|
||||
find_program( GENHTML_PATH NAMES genhtml genhtml.perl genhtml.bat )
|
||||
find_program( GCOVR_PATH gcovr )
|
||||
find_program( CPPFILT_PATH NAMES c++filt )
|
||||
|
||||
if(NOT GCOV_PATH)
|
||||
message(FATAL_ERROR "gcov not found! Aborting...")
|
||||
endif() # NOT GCOV_PATH
|
||||
|
||||
get_property(LANGUAGES GLOBAL PROPERTY ENABLED_LANGUAGES)
|
||||
list(GET LANGUAGES 0 LANG)
|
||||
|
||||
if("${CMAKE_${LANG}_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
|
||||
if("${CMAKE_${LANG}_COMPILER_VERSION}" VERSION_LESS 3)
|
||||
message(FATAL_ERROR "Clang version must be 3.0.0 or greater! Aborting...")
|
||||
endif()
|
||||
elseif(NOT CMAKE_COMPILER_IS_GNUCXX)
|
||||
if("${CMAKE_Fortran_COMPILER_ID}" MATCHES "[Ff]lang")
|
||||
# Do nothing; exit conditional without error if true
|
||||
elseif("${CMAKE_Fortran_COMPILER_ID}" MATCHES "GNU")
|
||||
# Do nothing; exit conditional without error if true
|
||||
else()
|
||||
message(FATAL_ERROR "Compiler is not GNU gcc! Aborting...")
|
||||
endif()
|
||||
endif()
|
||||
|
||||
set(COVERAGE_COMPILER_FLAGS "-g -fprofile-arcs -ftest-coverage"
|
||||
CACHE INTERNAL "")
|
||||
if(CMAKE_CXX_COMPILER_ID MATCHES "(GNU|Clang)")
|
||||
include(CheckCXXCompilerFlag)
|
||||
check_cxx_compiler_flag(-fprofile-abs-path HAVE_fprofile_abs_path)
|
||||
if(HAVE_fprofile_abs_path)
|
||||
set(COVERAGE_COMPILER_FLAGS "${COVERAGE_COMPILER_FLAGS} -fprofile-abs-path")
|
||||
endif()
|
||||
endif()
|
||||
|
||||
set(CMAKE_Fortran_FLAGS_COVERAGE
|
||||
${COVERAGE_COMPILER_FLAGS}
|
||||
CACHE STRING "Flags used by the Fortran compiler during coverage builds."
|
||||
FORCE )
|
||||
set(CMAKE_CXX_FLAGS_COVERAGE
|
||||
${COVERAGE_COMPILER_FLAGS}
|
||||
CACHE STRING "Flags used by the C++ compiler during coverage builds."
|
||||
FORCE )
|
||||
set(CMAKE_C_FLAGS_COVERAGE
|
||||
${COVERAGE_COMPILER_FLAGS}
|
||||
CACHE STRING "Flags used by the C compiler during coverage builds."
|
||||
FORCE )
|
||||
set(CMAKE_EXE_LINKER_FLAGS_COVERAGE
|
||||
""
|
||||
CACHE STRING "Flags used for linking binaries during coverage builds."
|
||||
FORCE )
|
||||
set(CMAKE_SHARED_LINKER_FLAGS_COVERAGE
|
||||
""
|
||||
CACHE STRING "Flags used by the shared libraries linker during coverage builds."
|
||||
FORCE )
|
||||
mark_as_advanced(
|
||||
CMAKE_Fortran_FLAGS_COVERAGE
|
||||
CMAKE_CXX_FLAGS_COVERAGE
|
||||
CMAKE_C_FLAGS_COVERAGE
|
||||
CMAKE_EXE_LINKER_FLAGS_COVERAGE
|
||||
CMAKE_SHARED_LINKER_FLAGS_COVERAGE )
|
||||
|
||||
get_property(GENERATOR_IS_MULTI_CONFIG GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
|
||||
if(NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG))
|
||||
message(WARNING "Code coverage results with an optimised (non-Debug) build may be misleading")
|
||||
endif() # NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG)
|
||||
|
||||
if(CMAKE_C_COMPILER_ID STREQUAL "GNU" OR CMAKE_Fortran_COMPILER_ID STREQUAL "GNU")
|
||||
link_libraries(gcov)
|
||||
endif()
|
||||
|
||||
# Defines a target for running and collecting code coverage information
|
||||
# Builds dependencies, runs the given executable and outputs reports.
|
||||
# NOTE! The executable should always have a ZERO as exit code otherwise
|
||||
# the coverage generation will not complete.
|
||||
#
|
||||
# setup_target_for_coverage_lcov(
|
||||
# NAME testrunner_coverage # New target name
|
||||
# EXECUTABLE testrunner -j ${PROCESSOR_COUNT} # Executable in PROJECT_BINARY_DIR
|
||||
# DEPENDENCIES testrunner # Dependencies to build first
|
||||
# BASE_DIRECTORY "../" # Base directory for report
|
||||
# # (defaults to PROJECT_SOURCE_DIR)
|
||||
# EXCLUDE "src/dir1/*" "src/dir2/*" # Patterns to exclude (can be relative
|
||||
# # to BASE_DIRECTORY, with CMake 3.4+)
|
||||
# NO_DEMANGLE # Don't demangle C++ symbols
|
||||
# # even if c++filt is found
|
||||
# )
|
||||
function(setup_target_for_coverage_lcov)
|
||||
|
||||
set(options NO_DEMANGLE)
|
||||
set(oneValueArgs BASE_DIRECTORY NAME)
|
||||
set(multiValueArgs EXCLUDE EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES LCOV_ARGS GENHTML_ARGS)
|
||||
cmake_parse_arguments(Coverage "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
|
||||
|
||||
if(NOT LCOV_PATH)
|
||||
message(FATAL_ERROR "lcov not found! Aborting...")
|
||||
endif() # NOT LCOV_PATH
|
||||
|
||||
if(NOT GENHTML_PATH)
|
||||
message(FATAL_ERROR "genhtml not found! Aborting...")
|
||||
endif() # NOT GENHTML_PATH
|
||||
|
||||
# Set base directory (as absolute path), or default to PROJECT_SOURCE_DIR
|
||||
if(DEFINED Coverage_BASE_DIRECTORY)
|
||||
get_filename_component(BASEDIR ${Coverage_BASE_DIRECTORY} ABSOLUTE)
|
||||
else()
|
||||
set(BASEDIR ${PROJECT_SOURCE_DIR})
|
||||
endif()
|
||||
|
||||
# Collect excludes (CMake 3.4+: Also compute absolute paths)
|
||||
set(LCOV_EXCLUDES "")
|
||||
foreach(EXCLUDE ${Coverage_EXCLUDE} ${COVERAGE_EXCLUDES} ${COVERAGE_LCOV_EXCLUDES})
|
||||
if(CMAKE_VERSION VERSION_GREATER 3.4)
|
||||
get_filename_component(EXCLUDE ${EXCLUDE} ABSOLUTE BASE_DIR ${BASEDIR})
|
||||
endif()
|
||||
list(APPEND LCOV_EXCLUDES "${EXCLUDE}")
|
||||
endforeach()
|
||||
list(REMOVE_DUPLICATES LCOV_EXCLUDES)
|
||||
|
||||
# Conditional arguments
|
||||
if(CPPFILT_PATH AND NOT ${Coverage_NO_DEMANGLE})
|
||||
set(GENHTML_EXTRA_ARGS "--demangle-cpp")
|
||||
endif()
|
||||
|
||||
# Setting up commands which will be run to generate coverage data.
|
||||
# Cleanup lcov
|
||||
set(LCOV_CLEAN_CMD
|
||||
${LCOV_PATH} ${Coverage_LCOV_ARGS} --gcov-tool ${GCOV_PATH} -directory .
|
||||
-b ${BASEDIR} --zerocounters
|
||||
)
|
||||
# Create baseline to make sure untouched files show up in the report
|
||||
set(LCOV_BASELINE_CMD
|
||||
${LCOV_PATH} ${Coverage_LCOV_ARGS} --gcov-tool ${GCOV_PATH} -c -i -d . -b
|
||||
${BASEDIR} -o ${Coverage_NAME}.base
|
||||
)
|
||||
# Run tests
|
||||
set(LCOV_EXEC_TESTS_CMD
|
||||
${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS}
|
||||
)
|
||||
# Capturing lcov counters and generating report
|
||||
set(LCOV_CAPTURE_CMD
|
||||
${LCOV_PATH} ${Coverage_LCOV_ARGS} --gcov-tool ${GCOV_PATH} --directory . -b
|
||||
${BASEDIR} --capture --output-file ${Coverage_NAME}.capture
|
||||
)
|
||||
# add baseline counters
|
||||
set(LCOV_BASELINE_COUNT_CMD
|
||||
${LCOV_PATH} ${Coverage_LCOV_ARGS} --gcov-tool ${GCOV_PATH} -a ${Coverage_NAME}.base
|
||||
-a ${Coverage_NAME}.capture --output-file ${Coverage_NAME}.total
|
||||
)
|
||||
# filter collected data to final coverage report
|
||||
set(LCOV_FILTER_CMD
|
||||
${LCOV_PATH} ${Coverage_LCOV_ARGS} --gcov-tool ${GCOV_PATH} --remove
|
||||
${Coverage_NAME}.total ${LCOV_EXCLUDES} --output-file ${Coverage_NAME}.info
|
||||
)
|
||||
# Generate HTML output
|
||||
set(LCOV_GEN_HTML_CMD
|
||||
${GENHTML_PATH} ${GENHTML_EXTRA_ARGS} ${Coverage_GENHTML_ARGS} -o
|
||||
${Coverage_NAME} ${Coverage_NAME}.info
|
||||
)
|
||||
|
||||
|
||||
if(CODE_COVERAGE_VERBOSE)
|
||||
message(STATUS "Executed command report")
|
||||
message(STATUS "Command to clean up lcov: ")
|
||||
string(REPLACE ";" " " LCOV_CLEAN_CMD_SPACED "${LCOV_CLEAN_CMD}")
|
||||
message(STATUS "${LCOV_CLEAN_CMD_SPACED}")
|
||||
|
||||
message(STATUS "Command to create baseline: ")
|
||||
string(REPLACE ";" " " LCOV_BASELINE_CMD_SPACED "${LCOV_BASELINE_CMD}")
|
||||
message(STATUS "${LCOV_BASELINE_CMD_SPACED}")
|
||||
|
||||
message(STATUS "Command to run the tests: ")
|
||||
string(REPLACE ";" " " LCOV_EXEC_TESTS_CMD_SPACED "${LCOV_EXEC_TESTS_CMD}")
|
||||
message(STATUS "${LCOV_EXEC_TESTS_CMD_SPACED}")
|
||||
|
||||
message(STATUS "Command to capture counters and generate report: ")
|
||||
string(REPLACE ";" " " LCOV_CAPTURE_CMD_SPACED "${LCOV_CAPTURE_CMD}")
|
||||
message(STATUS "${LCOV_CAPTURE_CMD_SPACED}")
|
||||
|
||||
message(STATUS "Command to add baseline counters: ")
|
||||
string(REPLACE ";" " " LCOV_BASELINE_COUNT_CMD_SPACED "${LCOV_BASELINE_COUNT_CMD}")
|
||||
message(STATUS "${LCOV_BASELINE_COUNT_CMD_SPACED}")
|
||||
|
||||
message(STATUS "Command to filter collected data: ")
|
||||
string(REPLACE ";" " " LCOV_FILTER_CMD_SPACED "${LCOV_FILTER_CMD}")
|
||||
message(STATUS "${LCOV_FILTER_CMD_SPACED}")
|
||||
|
||||
message(STATUS "Command to generate lcov HTML output: ")
|
||||
string(REPLACE ";" " " LCOV_GEN_HTML_CMD_SPACED "${LCOV_GEN_HTML_CMD}")
|
||||
message(STATUS "${LCOV_GEN_HTML_CMD_SPACED}")
|
||||
endif()
|
||||
|
||||
# Setup target
|
||||
add_custom_target(${Coverage_NAME}
|
||||
COMMAND ${LCOV_CLEAN_CMD}
|
||||
COMMAND ${LCOV_BASELINE_CMD}
|
||||
COMMAND ${LCOV_EXEC_TESTS_CMD}
|
||||
COMMAND ${LCOV_CAPTURE_CMD}
|
||||
COMMAND ${LCOV_BASELINE_COUNT_CMD}
|
||||
COMMAND ${LCOV_FILTER_CMD}
|
||||
COMMAND ${LCOV_GEN_HTML_CMD}
|
||||
|
||||
# Set output files as GENERATED (will be removed on 'make clean')
|
||||
BYPRODUCTS
|
||||
${Coverage_NAME}.base
|
||||
${Coverage_NAME}.capture
|
||||
${Coverage_NAME}.total
|
||||
${Coverage_NAME}.info
|
||||
${Coverage_NAME}/index.html
|
||||
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
|
||||
DEPENDS ${Coverage_DEPENDENCIES}
|
||||
VERBATIM # Protect arguments to commands
|
||||
COMMENT "Resetting code coverage counters to zero.\nProcessing code coverage counters and generating report."
|
||||
)
|
||||
|
||||
# Show where to find the lcov info report
|
||||
add_custom_command(TARGET ${Coverage_NAME} POST_BUILD
|
||||
COMMAND ;
|
||||
COMMENT "Lcov code coverage info report saved in ${Coverage_NAME}.info."
|
||||
)
|
||||
|
||||
# Show info where to find the report
|
||||
add_custom_command(TARGET ${Coverage_NAME} POST_BUILD
|
||||
COMMAND ;
|
||||
COMMENT "Open ./${Coverage_NAME}/index.html in your browser to view the coverage report."
|
||||
)
|
||||
|
||||
endfunction() # setup_target_for_coverage_lcov
|
||||
|
||||
# Defines a target for running and collecting code coverage information
|
||||
# Builds dependencies, runs the given executable and outputs reports.
|
||||
# NOTE! The executable should always have a ZERO as exit code otherwise
|
||||
# the coverage generation will not complete.
|
||||
#
|
||||
# setup_target_for_coverage_gcovr_xml(
|
||||
# NAME ctest_coverage # New target name
|
||||
# EXECUTABLE ctest -j ${PROCESSOR_COUNT} # Executable in PROJECT_BINARY_DIR
|
||||
# DEPENDENCIES executable_target # Dependencies to build first
|
||||
# BASE_DIRECTORY "../" # Base directory for report
|
||||
# # (defaults to PROJECT_SOURCE_DIR)
|
||||
# EXCLUDE "src/dir1/*" "src/dir2/*" # Patterns to exclude (can be relative
|
||||
# # to BASE_DIRECTORY, with CMake 3.4+)
|
||||
# )
|
||||
# The user can set the variable GCOVR_ADDITIONAL_ARGS to supply additional flags to the
|
||||
# GCOVR command.
|
||||
function(setup_target_for_coverage_gcovr_xml)
|
||||
|
||||
set(options NONE)
|
||||
set(oneValueArgs BASE_DIRECTORY NAME)
|
||||
set(multiValueArgs EXCLUDE EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES)
|
||||
cmake_parse_arguments(Coverage "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
|
||||
|
||||
if(NOT GCOVR_PATH)
|
||||
message(FATAL_ERROR "gcovr not found! Aborting...")
|
||||
endif() # NOT GCOVR_PATH
|
||||
|
||||
# Set base directory (as absolute path), or default to PROJECT_SOURCE_DIR
|
||||
if(DEFINED Coverage_BASE_DIRECTORY)
|
||||
get_filename_component(BASEDIR ${Coverage_BASE_DIRECTORY} ABSOLUTE)
|
||||
else()
|
||||
set(BASEDIR ${PROJECT_SOURCE_DIR})
|
||||
endif()
|
||||
|
||||
# Collect excludes (CMake 3.4+: Also compute absolute paths)
|
||||
set(GCOVR_EXCLUDES "")
|
||||
foreach(EXCLUDE ${Coverage_EXCLUDE} ${COVERAGE_EXCLUDES} ${COVERAGE_GCOVR_EXCLUDES})
|
||||
if(CMAKE_VERSION VERSION_GREATER 3.4)
|
||||
get_filename_component(EXCLUDE ${EXCLUDE} ABSOLUTE BASE_DIR ${BASEDIR})
|
||||
endif()
|
||||
list(APPEND GCOVR_EXCLUDES "${EXCLUDE}")
|
||||
endforeach()
|
||||
list(REMOVE_DUPLICATES GCOVR_EXCLUDES)
|
||||
|
||||
# Combine excludes to several -e arguments
|
||||
set(GCOVR_EXCLUDE_ARGS "")
|
||||
foreach(EXCLUDE ${GCOVR_EXCLUDES})
|
||||
list(APPEND GCOVR_EXCLUDE_ARGS "-e")
|
||||
list(APPEND GCOVR_EXCLUDE_ARGS "${EXCLUDE}")
|
||||
endforeach()
|
||||
|
||||
# Set up commands which will be run to generate coverage data
|
||||
# Run tests
|
||||
set(GCOVR_XML_EXEC_TESTS_CMD
|
||||
${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS}
|
||||
)
|
||||
# Running gcovr
|
||||
set(GCOVR_XML_CMD
|
||||
${GCOVR_PATH} --xml ${Coverage_NAME}.xml -r ${BASEDIR} ${GCOVR_ADDITIONAL_ARGS}
|
||||
${GCOVR_EXCLUDE_ARGS} --object-directory=${PROJECT_BINARY_DIR}
|
||||
)
|
||||
|
||||
if(CODE_COVERAGE_VERBOSE)
|
||||
message(STATUS "Executed command report")
|
||||
|
||||
message(STATUS "Command to run tests: ")
|
||||
string(REPLACE ";" " " GCOVR_XML_EXEC_TESTS_CMD_SPACED "${GCOVR_XML_EXEC_TESTS_CMD}")
|
||||
message(STATUS "${GCOVR_XML_EXEC_TESTS_CMD_SPACED}")
|
||||
|
||||
message(STATUS "Command to generate gcovr XML coverage data: ")
|
||||
string(REPLACE ";" " " GCOVR_XML_CMD_SPACED "${GCOVR_XML_CMD}")
|
||||
message(STATUS "${GCOVR_XML_CMD_SPACED}")
|
||||
endif()
|
||||
|
||||
add_custom_target(${Coverage_NAME}
|
||||
COMMAND ${GCOVR_XML_EXEC_TESTS_CMD}
|
||||
COMMAND ${GCOVR_XML_CMD}
|
||||
|
||||
BYPRODUCTS ${Coverage_NAME}.xml
|
||||
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
|
||||
DEPENDS ${Coverage_DEPENDENCIES}
|
||||
VERBATIM # Protect arguments to commands
|
||||
COMMENT "Running gcovr to produce Cobertura code coverage report."
|
||||
)
|
||||
|
||||
# Show info where to find the report
|
||||
add_custom_command(TARGET ${Coverage_NAME} POST_BUILD
|
||||
COMMAND ;
|
||||
COMMENT "Cobertura code coverage report saved in ${Coverage_NAME}.xml."
|
||||
)
|
||||
endfunction() # setup_target_for_coverage_gcovr_xml
|
||||
|
||||
# Defines a target for running and collecting code coverage information
|
||||
# Builds dependencies, runs the given executable and outputs reports.
|
||||
# NOTE! The executable should always have a ZERO as exit code otherwise
|
||||
# the coverage generation will not complete.
|
||||
#
|
||||
# setup_target_for_coverage_gcovr_html(
|
||||
# NAME ctest_coverage # New target name
|
||||
# EXECUTABLE ctest -j ${PROCESSOR_COUNT} # Executable in PROJECT_BINARY_DIR
|
||||
# DEPENDENCIES executable_target # Dependencies to build first
|
||||
# BASE_DIRECTORY "../" # Base directory for report
|
||||
# # (defaults to PROJECT_SOURCE_DIR)
|
||||
# EXCLUDE "src/dir1/*" "src/dir2/*" # Patterns to exclude (can be relative
|
||||
# # to BASE_DIRECTORY, with CMake 3.4+)
|
||||
# )
|
||||
# The user can set the variable GCOVR_ADDITIONAL_ARGS to supply additional flags to the
|
||||
# GCOVR command.
|
||||
function(setup_target_for_coverage_gcovr_html)
|
||||
|
||||
set(options NONE)
|
||||
set(oneValueArgs BASE_DIRECTORY NAME)
|
||||
set(multiValueArgs EXCLUDE EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES)
|
||||
cmake_parse_arguments(Coverage "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
|
||||
|
||||
if(NOT GCOVR_PATH)
|
||||
message(FATAL_ERROR "gcovr not found! Aborting...")
|
||||
endif() # NOT GCOVR_PATH
|
||||
|
||||
# Set base directory (as absolute path), or default to PROJECT_SOURCE_DIR
|
||||
if(DEFINED Coverage_BASE_DIRECTORY)
|
||||
get_filename_component(BASEDIR ${Coverage_BASE_DIRECTORY} ABSOLUTE)
|
||||
else()
|
||||
set(BASEDIR ${PROJECT_SOURCE_DIR})
|
||||
endif()
|
||||
|
||||
# Collect excludes (CMake 3.4+: Also compute absolute paths)
|
||||
set(GCOVR_EXCLUDES "")
|
||||
foreach(EXCLUDE ${Coverage_EXCLUDE} ${COVERAGE_EXCLUDES} ${COVERAGE_GCOVR_EXCLUDES})
|
||||
if(CMAKE_VERSION VERSION_GREATER 3.4)
|
||||
get_filename_component(EXCLUDE ${EXCLUDE} ABSOLUTE BASE_DIR ${BASEDIR})
|
||||
endif()
|
||||
list(APPEND GCOVR_EXCLUDES "${EXCLUDE}")
|
||||
endforeach()
|
||||
list(REMOVE_DUPLICATES GCOVR_EXCLUDES)
|
||||
|
||||
# Combine excludes to several -e arguments
|
||||
set(GCOVR_EXCLUDE_ARGS "")
|
||||
foreach(EXCLUDE ${GCOVR_EXCLUDES})
|
||||
list(APPEND GCOVR_EXCLUDE_ARGS "-e")
|
||||
list(APPEND GCOVR_EXCLUDE_ARGS "${EXCLUDE}")
|
||||
endforeach()
|
||||
|
||||
# Set up commands which will be run to generate coverage data
|
||||
# Run tests
|
||||
set(GCOVR_HTML_EXEC_TESTS_CMD
|
||||
${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS}
|
||||
)
|
||||
# Create folder
|
||||
set(GCOVR_HTML_FOLDER_CMD
|
||||
${CMAKE_COMMAND} -E make_directory ${PROJECT_BINARY_DIR}/${Coverage_NAME}
|
||||
)
|
||||
# Running gcovr
|
||||
set(GCOVR_HTML_CMD
|
||||
${GCOVR_PATH} --html ${Coverage_NAME}/index.html --html-details -r ${BASEDIR} ${GCOVR_ADDITIONAL_ARGS}
|
||||
${GCOVR_EXCLUDE_ARGS} --object-directory=${PROJECT_BINARY_DIR}
|
||||
)
|
||||
|
||||
if(CODE_COVERAGE_VERBOSE)
|
||||
message(STATUS "Executed command report")
|
||||
|
||||
message(STATUS "Command to run tests: ")
|
||||
string(REPLACE ";" " " GCOVR_HTML_EXEC_TESTS_CMD_SPACED "${GCOVR_HTML_EXEC_TESTS_CMD}")
|
||||
message(STATUS "${GCOVR_HTML_EXEC_TESTS_CMD_SPACED}")
|
||||
|
||||
message(STATUS "Command to create a folder: ")
|
||||
string(REPLACE ";" " " GCOVR_HTML_FOLDER_CMD_SPACED "${GCOVR_HTML_FOLDER_CMD}")
|
||||
message(STATUS "${GCOVR_HTML_FOLDER_CMD_SPACED}")
|
||||
|
||||
message(STATUS "Command to generate gcovr HTML coverage data: ")
|
||||
string(REPLACE ";" " " GCOVR_HTML_CMD_SPACED "${GCOVR_HTML_CMD}")
|
||||
message(STATUS "${GCOVR_HTML_CMD_SPACED}")
|
||||
endif()
|
||||
|
||||
add_custom_target(${Coverage_NAME}
|
||||
COMMAND ${GCOVR_HTML_EXEC_TESTS_CMD}
|
||||
COMMAND ${GCOVR_HTML_FOLDER_CMD}
|
||||
COMMAND ${GCOVR_HTML_CMD}
|
||||
|
||||
BYPRODUCTS ${PROJECT_BINARY_DIR}/${Coverage_NAME}/index.html # report directory
|
||||
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
|
||||
DEPENDS ${Coverage_DEPENDENCIES}
|
||||
VERBATIM # Protect arguments to commands
|
||||
COMMENT "Running gcovr to produce HTML code coverage report."
|
||||
)
|
||||
|
||||
# Show info where to find the report
|
||||
add_custom_command(TARGET ${Coverage_NAME} POST_BUILD
|
||||
COMMAND ;
|
||||
COMMENT "Open ./${Coverage_NAME}/index.html in your browser to view the coverage report."
|
||||
)
|
||||
|
||||
endfunction() # setup_target_for_coverage_gcovr_html
|
||||
|
||||
# Defines a target for running and collecting code coverage information
|
||||
# Builds dependencies, runs the given executable and outputs reports.
|
||||
# NOTE! The executable should always have a ZERO as exit code otherwise
|
||||
# the coverage generation will not complete.
|
||||
#
|
||||
# setup_target_for_coverage_fastcov(
|
||||
# NAME testrunner_coverage # New target name
|
||||
# EXECUTABLE testrunner -j ${PROCESSOR_COUNT} # Executable in PROJECT_BINARY_DIR
|
||||
# DEPENDENCIES testrunner # Dependencies to build first
|
||||
# BASE_DIRECTORY "../" # Base directory for report
|
||||
# # (defaults to PROJECT_SOURCE_DIR)
|
||||
# EXCLUDE "src/dir1/" "src/dir2/" # Patterns to exclude.
|
||||
# NO_DEMANGLE # Don't demangle C++ symbols
|
||||
# # even if c++filt is found
|
||||
# SKIP_HTML # Don't create html report
|
||||
# POST_CMD perl -i -pe s!${PROJECT_SOURCE_DIR}/!!g ctest_coverage.json # E.g. for stripping source dir from file paths
|
||||
# )
|
||||
function(setup_target_for_coverage_fastcov)
|
||||
|
||||
set(options NO_DEMANGLE SKIP_HTML)
|
||||
set(oneValueArgs BASE_DIRECTORY NAME)
|
||||
set(multiValueArgs EXCLUDE EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES FASTCOV_ARGS GENHTML_ARGS POST_CMD)
|
||||
cmake_parse_arguments(Coverage "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
|
||||
|
||||
if(NOT FASTCOV_PATH)
|
||||
message(FATAL_ERROR "fastcov not found! Aborting...")
|
||||
endif()
|
||||
|
||||
if(NOT Coverage_SKIP_HTML AND NOT GENHTML_PATH)
|
||||
message(FATAL_ERROR "genhtml not found! Aborting...")
|
||||
endif()
|
||||
|
||||
# Set base directory (as absolute path), or default to PROJECT_SOURCE_DIR
|
||||
if(Coverage_BASE_DIRECTORY)
|
||||
get_filename_component(BASEDIR ${Coverage_BASE_DIRECTORY} ABSOLUTE)
|
||||
else()
|
||||
set(BASEDIR ${PROJECT_SOURCE_DIR})
|
||||
endif()
|
||||
|
||||
# Collect excludes (Patterns, not paths, for fastcov)
|
||||
set(FASTCOV_EXCLUDES "")
|
||||
foreach(EXCLUDE ${Coverage_EXCLUDE} ${COVERAGE_EXCLUDES} ${COVERAGE_FASTCOV_EXCLUDES})
|
||||
list(APPEND FASTCOV_EXCLUDES "${EXCLUDE}")
|
||||
endforeach()
|
||||
list(REMOVE_DUPLICATES FASTCOV_EXCLUDES)
|
||||
|
||||
# Conditional arguments
|
||||
if(CPPFILT_PATH AND NOT ${Coverage_NO_DEMANGLE})
|
||||
set(GENHTML_EXTRA_ARGS "--demangle-cpp")
|
||||
endif()
|
||||
|
||||
# Set up commands which will be run to generate coverage data
|
||||
set(FASTCOV_EXEC_TESTS_CMD ${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS})
|
||||
|
||||
set(FASTCOV_CAPTURE_CMD ${FASTCOV_PATH} ${Coverage_FASTCOV_ARGS} --gcov ${GCOV_PATH}
|
||||
--search-directory ${BASEDIR}
|
||||
--process-gcno
|
||||
--output ${Coverage_NAME}.json
|
||||
--exclude ${FASTCOV_EXCLUDES}
|
||||
--exclude ${FASTCOV_EXCLUDES}
|
||||
)
|
||||
|
||||
set(FASTCOV_CONVERT_CMD ${FASTCOV_PATH}
|
||||
-C ${Coverage_NAME}.json --lcov --output ${Coverage_NAME}.info
|
||||
)
|
||||
|
||||
if(Coverage_SKIP_HTML)
|
||||
set(FASTCOV_HTML_CMD ";")
|
||||
else()
|
||||
set(FASTCOV_HTML_CMD ${GENHTML_PATH} ${GENHTML_EXTRA_ARGS} ${Coverage_GENHTML_ARGS}
|
||||
-o ${Coverage_NAME} ${Coverage_NAME}.info
|
||||
)
|
||||
endif()
|
||||
|
||||
set(FASTCOV_POST_CMD ";")
|
||||
if(Coverage_POST_CMD)
|
||||
set(FASTCOV_POST_CMD ${Coverage_POST_CMD})
|
||||
endif()
|
||||
|
||||
if(CODE_COVERAGE_VERBOSE)
|
||||
message(STATUS "Code coverage commands for target ${Coverage_NAME} (fastcov):")
|
||||
|
||||
message(" Running tests:")
|
||||
string(REPLACE ";" " " FASTCOV_EXEC_TESTS_CMD_SPACED "${FASTCOV_EXEC_TESTS_CMD}")
|
||||
message(" ${FASTCOV_EXEC_TESTS_CMD_SPACED}")
|
||||
|
||||
message(" Capturing fastcov counters and generating report:")
|
||||
string(REPLACE ";" " " FASTCOV_CAPTURE_CMD_SPACED "${FASTCOV_CAPTURE_CMD}")
|
||||
message(" ${FASTCOV_CAPTURE_CMD_SPACED}")
|
||||
|
||||
message(" Converting fastcov .json to lcov .info:")
|
||||
string(REPLACE ";" " " FASTCOV_CONVERT_CMD_SPACED "${FASTCOV_CONVERT_CMD}")
|
||||
message(" ${FASTCOV_CONVERT_CMD_SPACED}")
|
||||
|
||||
if(NOT Coverage_SKIP_HTML)
|
||||
message(" Generating HTML report: ")
|
||||
string(REPLACE ";" " " FASTCOV_HTML_CMD_SPACED "${FASTCOV_HTML_CMD}")
|
||||
message(" ${FASTCOV_HTML_CMD_SPACED}")
|
||||
endif()
|
||||
if(Coverage_POST_CMD)
|
||||
message(" Running post command: ")
|
||||
string(REPLACE ";" " " FASTCOV_POST_CMD_SPACED "${FASTCOV_POST_CMD}")
|
||||
message(" ${FASTCOV_POST_CMD_SPACED}")
|
||||
endif()
|
||||
endif()
|
||||
|
||||
# Setup target
|
||||
add_custom_target(${Coverage_NAME}
|
||||
|
||||
# Cleanup fastcov
|
||||
COMMAND ${FASTCOV_PATH} ${Coverage_FASTCOV_ARGS} --gcov ${GCOV_PATH}
|
||||
--search-directory ${BASEDIR}
|
||||
--zerocounters
|
||||
|
||||
COMMAND ${FASTCOV_EXEC_TESTS_CMD}
|
||||
COMMAND ${FASTCOV_CAPTURE_CMD}
|
||||
COMMAND ${FASTCOV_CONVERT_CMD}
|
||||
COMMAND ${FASTCOV_HTML_CMD}
|
||||
COMMAND ${FASTCOV_POST_CMD}
|
||||
|
||||
# Set output files as GENERATED (will be removed on 'make clean')
|
||||
BYPRODUCTS
|
||||
${Coverage_NAME}.info
|
||||
${Coverage_NAME}.json
|
||||
${Coverage_NAME}/index.html # report directory
|
||||
|
||||
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
|
||||
DEPENDS ${Coverage_DEPENDENCIES}
|
||||
VERBATIM # Protect arguments to commands
|
||||
COMMENT "Resetting code coverage counters to zero. Processing code coverage counters and generating report."
|
||||
)
|
||||
|
||||
set(INFO_MSG "fastcov code coverage info report saved in ${Coverage_NAME}.info and ${Coverage_NAME}.json.")
|
||||
if(NOT Coverage_SKIP_HTML)
|
||||
string(APPEND INFO_MSG " Open ${PROJECT_BINARY_DIR}/${Coverage_NAME}/index.html in your browser to view the coverage report.")
|
||||
endif()
|
||||
# Show where to find the fastcov info report
|
||||
add_custom_command(TARGET ${Coverage_NAME} POST_BUILD
|
||||
COMMAND ${CMAKE_COMMAND} -E echo ${INFO_MSG}
|
||||
)
|
||||
|
||||
endfunction() # setup_target_for_coverage_fastcov
|
||||
|
||||
function(append_coverage_compiler_flags)
|
||||
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${COVERAGE_COMPILER_FLAGS}" PARENT_SCOPE)
|
||||
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${COVERAGE_COMPILER_FLAGS}" PARENT_SCOPE)
|
||||
set(CMAKE_Fortran_FLAGS "${CMAKE_Fortran_FLAGS} ${COVERAGE_COMPILER_FLAGS}" PARENT_SCOPE)
|
||||
message(STATUS "Appending code coverage compiler flags: ${COVERAGE_COMPILER_FLAGS}")
|
||||
endfunction() # append_coverage_compiler_flags
|
||||
|
||||
# Setup coverage for specific library
|
||||
function(append_coverage_compiler_flags_to_target name)
|
||||
target_compile_options(${name}
|
||||
PRIVATE ${COVERAGE_COMPILER_FLAGS})
|
||||
endfunction()
|
@ -1,23 +0,0 @@
|
||||
Boost Software License - Version 1.0 - August 17th, 2003
|
||||
|
||||
Permission is hereby granted, free of charge, to any person or organization
|
||||
obtaining a copy of the software and accompanying documentation covered by
|
||||
this license (the "Software") to use, reproduce, display, distribute,
|
||||
execute, and transmit the Software, and to prepare derivative works of the
|
||||
Software, and to permit third-parties to whom the Software is furnished to
|
||||
do so, all subject to the following:
|
||||
|
||||
The copyright notices in the Software and this entire statement, including
|
||||
the above license grant, this restriction and the following disclaimer,
|
||||
must be included in all copies of the Software, in whole or in part, and
|
||||
all derivative works of the Software, unless such copies or derivative
|
||||
works are solely in the form of machine-executable object code generated by
|
||||
a source language processor.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
|
||||
SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
|
||||
FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
|
||||
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
|
||||
DEALINGS IN THE SOFTWARE.
|
@ -1,284 +0,0 @@
|
||||
# - Returns a version string from Git
|
||||
#
|
||||
# These functions force a re-configure on each git commit so that you can
|
||||
# trust the values of the variables in your build system.
|
||||
#
|
||||
# get_git_head_revision(<refspecvar> <hashvar> [ALLOW_LOOKING_ABOVE_CMAKE_SOURCE_DIR])
|
||||
#
|
||||
# Returns the refspec and sha hash of the current head revision
|
||||
#
|
||||
# git_describe(<var> [<additional arguments to git describe> ...])
|
||||
#
|
||||
# Returns the results of git describe on the source tree, and adjusting
|
||||
# the output so that it tests false if an error occurs.
|
||||
#
|
||||
# git_describe_working_tree(<var> [<additional arguments to git describe> ...])
|
||||
#
|
||||
# Returns the results of git describe on the working tree (--dirty option),
|
||||
# and adjusting the output so that it tests false if an error occurs.
|
||||
#
|
||||
# git_get_exact_tag(<var> [<additional arguments to git describe> ...])
|
||||
#
|
||||
# Returns the results of git describe --exact-match on the source tree,
|
||||
# and adjusting the output so that it tests false if there was no exact
|
||||
# matching tag.
|
||||
#
|
||||
# git_local_changes(<var>)
|
||||
#
|
||||
# Returns either "CLEAN" or "DIRTY" with respect to uncommitted changes.
|
||||
# Uses the return code of "git diff-index --quiet HEAD --".
|
||||
# Does not regard untracked files.
|
||||
#
|
||||
# Requires CMake 2.6 or newer (uses the 'function' command)
|
||||
#
|
||||
# Original Author:
|
||||
# 2009-2020 Ryan Pavlik <ryan.pavlik@gmail.com> <abiryan@ryand.net>
|
||||
# http://academic.cleardefinition.com
|
||||
#
|
||||
# Copyright 2009-2013, Iowa State University.
|
||||
# Copyright 2013-2020, Ryan Pavlik
|
||||
# Copyright 2013-2020, Contributors
|
||||
# SPDX-License-Identifier: BSL-1.0
|
||||
# Distributed under the Boost Software License, Version 1.0.
|
||||
# (See accompanying file LICENSE_1_0.txt or copy at
|
||||
# http://www.boost.org/LICENSE_1_0.txt)
|
||||
|
||||
if(__get_git_revision_description)
|
||||
return()
|
||||
endif()
|
||||
set(__get_git_revision_description YES)
|
||||
|
||||
# We must run the following at "include" time, not at function call time,
|
||||
# to find the path to this module rather than the path to a calling list file
|
||||
get_filename_component(_gitdescmoddir ${CMAKE_CURRENT_LIST_FILE} PATH)
|
||||
|
||||
# Function _git_find_closest_git_dir finds the next closest .git directory
|
||||
# that is part of any directory in the path defined by _start_dir.
|
||||
# The result is returned in the parent scope variable whose name is passed
|
||||
# as variable _git_dir_var. If no .git directory can be found, the
|
||||
# function returns an empty string via _git_dir_var.
|
||||
#
|
||||
# Example: Given a path C:/bla/foo/bar and assuming C:/bla/.git exists and
|
||||
# neither foo nor bar contains a file/directory .git. This will return
|
||||
# C:/bla/.git
|
||||
#
|
||||
function(_git_find_closest_git_dir _start_dir _git_dir_var)
|
||||
set(cur_dir "${_start_dir}")
|
||||
set(git_dir "${_start_dir}/.git")
|
||||
while(NOT EXISTS "${git_dir}")
|
||||
# .git dir not found, search parent directories
|
||||
set(git_previous_parent "${cur_dir}")
|
||||
get_filename_component(cur_dir "${cur_dir}" DIRECTORY)
|
||||
if(cur_dir STREQUAL git_previous_parent)
|
||||
# We have reached the root directory, we are not in git
|
||||
set(${_git_dir_var}
|
||||
""
|
||||
PARENT_SCOPE)
|
||||
return()
|
||||
endif()
|
||||
set(git_dir "${cur_dir}/.git")
|
||||
endwhile()
|
||||
set(${_git_dir_var}
|
||||
"${git_dir}"
|
||||
PARENT_SCOPE)
|
||||
endfunction()
|
||||
|
||||
function(get_git_head_revision _refspecvar _hashvar)
|
||||
_git_find_closest_git_dir("${CMAKE_CURRENT_SOURCE_DIR}" GIT_DIR)
|
||||
|
||||
if("${ARGN}" STREQUAL "ALLOW_LOOKING_ABOVE_CMAKE_SOURCE_DIR")
|
||||
set(ALLOW_LOOKING_ABOVE_CMAKE_SOURCE_DIR TRUE)
|
||||
else()
|
||||
set(ALLOW_LOOKING_ABOVE_CMAKE_SOURCE_DIR FALSE)
|
||||
endif()
|
||||
if(NOT "${GIT_DIR}" STREQUAL "")
|
||||
file(RELATIVE_PATH _relative_to_source_dir "${CMAKE_SOURCE_DIR}"
|
||||
"${GIT_DIR}")
|
||||
if("${_relative_to_source_dir}" MATCHES "[.][.]" AND NOT ALLOW_LOOKING_ABOVE_CMAKE_SOURCE_DIR)
|
||||
# We've gone above the CMake root dir.
|
||||
set(GIT_DIR "")
|
||||
endif()
|
||||
endif()
|
||||
if("${GIT_DIR}" STREQUAL "")
|
||||
set(${_refspecvar}
|
||||
"GITDIR-NOTFOUND"
|
||||
PARENT_SCOPE)
|
||||
set(${_hashvar}
|
||||
"GITDIR-NOTFOUND"
|
||||
PARENT_SCOPE)
|
||||
return()
|
||||
endif()
|
||||
|
||||
# Check if the current source dir is a git submodule or a worktree.
|
||||
# In both cases .git is a file instead of a directory.
|
||||
#
|
||||
if(NOT IS_DIRECTORY ${GIT_DIR})
|
||||
# The following git command will return a non empty string that
|
||||
# points to the super project working tree if the current
|
||||
# source dir is inside a git submodule.
|
||||
# Otherwise the command will return an empty string.
|
||||
#
|
||||
execute_process(
|
||||
COMMAND "${GIT_EXECUTABLE}" rev-parse
|
||||
--show-superproject-working-tree
|
||||
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
|
||||
OUTPUT_VARIABLE out
|
||||
ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)
|
||||
if(NOT "${out}" STREQUAL "")
|
||||
# If out is empty, GIT_DIR/CMAKE_CURRENT_SOURCE_DIR is in a submodule
|
||||
file(READ ${GIT_DIR} submodule)
|
||||
string(REGEX REPLACE "gitdir: (.*)$" "\\1" GIT_DIR_RELATIVE
|
||||
${submodule})
|
||||
string(STRIP ${GIT_DIR_RELATIVE} GIT_DIR_RELATIVE)
|
||||
get_filename_component(SUBMODULE_DIR ${GIT_DIR} PATH)
|
||||
get_filename_component(GIT_DIR ${SUBMODULE_DIR}/${GIT_DIR_RELATIVE}
|
||||
ABSOLUTE)
|
||||
set(HEAD_SOURCE_FILE "${GIT_DIR}/HEAD")
|
||||
else()
|
||||
# GIT_DIR/CMAKE_CURRENT_SOURCE_DIR is in a worktree
|
||||
file(READ ${GIT_DIR} worktree_ref)
|
||||
# The .git directory contains a path to the worktree information directory
|
||||
# inside the parent git repo of the worktree.
|
||||
#
|
||||
string(REGEX REPLACE "gitdir: (.*)$" "\\1" git_worktree_dir
|
||||
${worktree_ref})
|
||||
string(STRIP ${git_worktree_dir} git_worktree_dir)
|
||||
_git_find_closest_git_dir("${git_worktree_dir}" GIT_DIR)
|
||||
set(HEAD_SOURCE_FILE "${git_worktree_dir}/HEAD")
|
||||
endif()
|
||||
else()
|
||||
set(HEAD_SOURCE_FILE "${GIT_DIR}/HEAD")
|
||||
endif()
|
||||
set(GIT_DATA "${CMAKE_CURRENT_BINARY_DIR}/CMakeFiles/git-data")
|
||||
if(NOT EXISTS "${GIT_DATA}")
|
||||
file(MAKE_DIRECTORY "${GIT_DATA}")
|
||||
endif()
|
||||
|
||||
if(NOT EXISTS "${HEAD_SOURCE_FILE}")
|
||||
return()
|
||||
endif()
|
||||
set(HEAD_FILE "${GIT_DATA}/HEAD")
|
||||
configure_file("${HEAD_SOURCE_FILE}" "${HEAD_FILE}" COPYONLY)
|
||||
|
||||
configure_file("${_gitdescmoddir}/GetGitRevisionDescription.cmake.in"
|
||||
"${GIT_DATA}/grabRef.cmake" @ONLY)
|
||||
include("${GIT_DATA}/grabRef.cmake")
|
||||
|
||||
set(${_refspecvar}
|
||||
"${HEAD_REF}"
|
||||
PARENT_SCOPE)
|
||||
set(${_hashvar}
|
||||
"${HEAD_HASH}"
|
||||
PARENT_SCOPE)
|
||||
endfunction()
|
||||
|
||||
function(git_describe _var)
|
||||
if(NOT GIT_FOUND)
|
||||
find_package(Git QUIET)
|
||||
endif()
|
||||
get_git_head_revision(refspec hash)
|
||||
if(NOT GIT_FOUND)
|
||||
set(${_var}
|
||||
"GIT-NOTFOUND"
|
||||
PARENT_SCOPE)
|
||||
return()
|
||||
endif()
|
||||
if(NOT hash)
|
||||
set(${_var}
|
||||
"HEAD-HASH-NOTFOUND"
|
||||
PARENT_SCOPE)
|
||||
return()
|
||||
endif()
|
||||
|
||||
# TODO sanitize
|
||||
#if((${ARGN}" MATCHES "&&") OR
|
||||
# (ARGN MATCHES "||") OR
|
||||
# (ARGN MATCHES "\\;"))
|
||||
# message("Please report the following error to the project!")
|
||||
# message(FATAL_ERROR "Looks like someone's doing something nefarious with git_describe! Passed arguments ${ARGN}")
|
||||
#endif()
|
||||
|
||||
#message(STATUS "Arguments to execute_process: ${ARGN}")
|
||||
|
||||
execute_process(
|
||||
COMMAND "${GIT_EXECUTABLE}" describe --tags --always ${hash} ${ARGN}
|
||||
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
|
||||
RESULT_VARIABLE res
|
||||
OUTPUT_VARIABLE out
|
||||
ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)
|
||||
if(NOT res EQUAL 0)
|
||||
set(out "${out}-${res}-NOTFOUND")
|
||||
endif()
|
||||
|
||||
set(${_var}
|
||||
"${out}"
|
||||
PARENT_SCOPE)
|
||||
endfunction()
|
||||
|
||||
function(git_describe_working_tree _var)
|
||||
if(NOT GIT_FOUND)
|
||||
find_package(Git QUIET)
|
||||
endif()
|
||||
if(NOT GIT_FOUND)
|
||||
set(${_var}
|
||||
"GIT-NOTFOUND"
|
||||
PARENT_SCOPE)
|
||||
return()
|
||||
endif()
|
||||
|
||||
execute_process(
|
||||
COMMAND "${GIT_EXECUTABLE}" describe --dirty ${ARGN}
|
||||
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
|
||||
RESULT_VARIABLE res
|
||||
OUTPUT_VARIABLE out
|
||||
ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)
|
||||
if(NOT res EQUAL 0)
|
||||
set(out "${out}-${res}-NOTFOUND")
|
||||
endif()
|
||||
|
||||
set(${_var}
|
||||
"${out}"
|
||||
PARENT_SCOPE)
|
||||
endfunction()
|
||||
|
||||
function(git_get_exact_tag _var)
|
||||
git_describe(out --exact-match ${ARGN})
|
||||
set(${_var}
|
||||
"${out}"
|
||||
PARENT_SCOPE)
|
||||
endfunction()
|
||||
|
||||
function(git_local_changes _var)
|
||||
if(NOT GIT_FOUND)
|
||||
find_package(Git QUIET)
|
||||
endif()
|
||||
get_git_head_revision(refspec hash)
|
||||
if(NOT GIT_FOUND)
|
||||
set(${_var}
|
||||
"GIT-NOTFOUND"
|
||||
PARENT_SCOPE)
|
||||
return()
|
||||
endif()
|
||||
if(NOT hash)
|
||||
set(${_var}
|
||||
"HEAD-HASH-NOTFOUND"
|
||||
PARENT_SCOPE)
|
||||
return()
|
||||
endif()
|
||||
|
||||
execute_process(
|
||||
COMMAND "${GIT_EXECUTABLE}" diff-index --quiet HEAD --
|
||||
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
|
||||
RESULT_VARIABLE res
|
||||
OUTPUT_VARIABLE out
|
||||
ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)
|
||||
if(res EQUAL 0)
|
||||
set(${_var}
|
||||
"CLEAN"
|
||||
PARENT_SCOPE)
|
||||
else()
|
||||
set(${_var}
|
||||
"DIRTY"
|
||||
PARENT_SCOPE)
|
||||
endif()
|
||||
endfunction()
|
@ -1,43 +0,0 @@
|
||||
#
|
||||
# Internal file for GetGitRevisionDescription.cmake
|
||||
#
|
||||
# Requires CMake 2.6 or newer (uses the 'function' command)
|
||||
#
|
||||
# Original Author:
|
||||
# 2009-2010 Ryan Pavlik <rpavlik@iastate.edu> <abiryan@ryand.net>
|
||||
# http://academic.cleardefinition.com
|
||||
# Iowa State University HCI Graduate Program/VRAC
|
||||
#
|
||||
# Copyright 2009-2012, Iowa State University
|
||||
# Copyright 2011-2015, Contributors
|
||||
# Distributed under the Boost Software License, Version 1.0.
|
||||
# (See accompanying file LICENSE_1_0.txt or copy at
|
||||
# http://www.boost.org/LICENSE_1_0.txt)
|
||||
# SPDX-License-Identifier: BSL-1.0
|
||||
|
||||
set(HEAD_HASH)
|
||||
|
||||
file(READ "@HEAD_FILE@" HEAD_CONTENTS LIMIT 1024)
|
||||
|
||||
string(STRIP "${HEAD_CONTENTS}" HEAD_CONTENTS)
|
||||
if(HEAD_CONTENTS MATCHES "ref")
|
||||
# named branch
|
||||
string(REPLACE "ref: " "" HEAD_REF "${HEAD_CONTENTS}")
|
||||
if(EXISTS "@GIT_DIR@/${HEAD_REF}")
|
||||
configure_file("@GIT_DIR@/${HEAD_REF}" "@GIT_DATA@/head-ref" COPYONLY)
|
||||
else()
|
||||
configure_file("@GIT_DIR@/packed-refs" "@GIT_DATA@/packed-refs" COPYONLY)
|
||||
file(READ "@GIT_DATA@/packed-refs" PACKED_REFS)
|
||||
if(${PACKED_REFS} MATCHES "([0-9a-z]*) ${HEAD_REF}")
|
||||
set(HEAD_HASH "${CMAKE_MATCH_1}")
|
||||
endif()
|
||||
endif()
|
||||
else()
|
||||
# detached HEAD
|
||||
configure_file("@GIT_DIR@/HEAD" "@GIT_DATA@/head-ref" COPYONLY)
|
||||
endif()
|
||||
|
||||
if(NOT HEAD_HASH)
|
||||
file(READ "@GIT_DATA@/head-ref" HEAD_HASH LIMIT 1024)
|
||||
string(STRIP "${HEAD_HASH}" HEAD_HASH)
|
||||
endif()
|
@ -1,26 +0,0 @@
|
||||
Copyright (c) <year> <owner>. All rights reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without modification,
|
||||
are permitted provided that the following conditions are met:
|
||||
|
||||
1. Redistributions of source code must retain the above copyright notice,
|
||||
this list of conditions and the following disclaimer.
|
||||
|
||||
2. Redistributions in binary form must reproduce the above copyright notice,
|
||||
this list of conditions and the following disclaimer in the documentation
|
||||
and/or other materials provided with the distribution.
|
||||
|
||||
3. Neither the name of the copyright holder nor the names of its contributors
|
||||
may be used to endorse or promote products derived from this software without
|
||||
specific prior written permission.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
|
||||
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
|
||||
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
|
||||
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
|
||||
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
|
||||
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
|
||||
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
|
||||
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
|
||||
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
|
||||
USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
@ -1,23 +0,0 @@
|
||||
Boost Software License - Version 1.0 - August 17th, 2003
|
||||
|
||||
Permission is hereby granted, free of charge, to any person or organization
|
||||
obtaining a copy of the software and accompanying documentation covered by
|
||||
this license (the "Software") to use, reproduce, display, distribute, execute,
|
||||
and transmit the Software, and to prepare derivative works of the Software,
|
||||
and to permit third-parties to whom the Software is furnished to do so, all
|
||||
subject to the following:
|
||||
|
||||
The copyright notices in the Software and this entire statement, including
|
||||
the above license grant, this restriction and the following disclaimer, must
|
||||
be included in all copies of the Software, in whole or in part, and all derivative
|
||||
works of the Software, unless such copies or derivative works are solely in
|
||||
the form of machine-executable object code generated by a source language
|
||||
processor.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
|
||||
FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE
|
||||
COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE FOR ANY DAMAGES
|
||||
OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
THE SOFTWARE.
|
@ -1,23 +0,0 @@
|
||||
Boost Software License - Version 1.0 - August 17th, 2003
|
||||
|
||||
Permission is hereby granted, free of charge, to any person or organization
|
||||
obtaining a copy of the software and accompanying documentation covered by
|
||||
this license (the "Software") to use, reproduce, display, distribute,
|
||||
execute, and transmit the Software, and to prepare derivative works of the
|
||||
Software, and to permit third-parties to whom the Software is furnished to
|
||||
do so, all subject to the following:
|
||||
|
||||
The copyright notices in the Software and this entire statement, including
|
||||
the above license grant, this restriction and the following disclaimer,
|
||||
must be included in all copies of the Software, in whole or in part, and
|
||||
all derivative works of the Software, unless such copies or derivative
|
||||
works are solely in the form of machine-executable object code generated by
|
||||
a source language processor.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
|
||||
SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
|
||||
FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
|
||||
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
|
||||
DEALINGS IN THE SOFTWARE.
|
253
container/ArrayList.h
Normal file
@ -0,0 +1,253 @@
#ifndef FSFW_CONTAINER_ARRAYLIST_H_
#define FSFW_CONTAINER_ARRAYLIST_H_

#include "../returnvalues/HasReturnvaluesIF.h"
#include "../serialize/SerializeAdapter.h"
#include "../serialize/SerializeIF.h"

/**
 * @brief A List that stores its values in an array.
 * @details
 * The underlying storage is an array that can be allocated by the class
 * itself or supplied via ctor.
 *
 * @ingroup container
 */
template<typename T, typename count_t = uint8_t>
class ArrayList {
    template<typename U, typename count> friend class SerialArrayListAdapter;
public:
    static const uint8_t INTERFACE_ID = CLASS_ID::ARRAY_LIST;
    static const ReturnValue_t FULL = MAKE_RETURN_CODE(0x01);

    /**
     * This is the allocating constructor.
     * It allocates an array of the specified size.
     * @param maxSize
     */
    ArrayList(count_t maxSize) :
            size(0), maxSize_(maxSize), allocated(true) {
        entries = new T[maxSize];
    }

    /**
     * This is the non-allocating constructor.
     *
     * It expects a pointer to an array of a certain size and initializes
     * itself to it.
     *
     * @param storage the array to use as backend
     * @param maxSize size of storage
     * @param size size of data already present in storage
     */
    ArrayList(T *storage, count_t maxSize, count_t size = 0) :
            size(size), entries(storage), maxSize_(maxSize), allocated(false) {
    }

    /**
     * Copying is forbidden by declaring the copy ctor and copy assignment deleted.
     * It is too ambiguous in this case.
     * (Allocate a new backend? Use the same? What to do in a modifying call?)
     */
    ArrayList(const ArrayList& other) = delete;
    const ArrayList& operator=(const ArrayList& other) = delete;

    /**
     * Number of elements stored in this list
     */
    count_t size;

    /**
     * Destructor. If the allocating constructor was used, it deletes the array.
     */
    virtual ~ArrayList() {
        if (allocated) {
            delete[] entries;
        }
    }

    /**
     * An iterator to go through an ArrayList.
     *
     * It stores a pointer to an element and increments the
     * pointer when incremented itself.
     */
    class Iterator {
    public:
        /**
         * Empty ctor, points to NULL
         */
        Iterator(): value(0) {}

        /**
         * Initializes the Iterator to point to an element
         *
         * @param initialize
         */
        Iterator(T *initialize) {
            value = initialize;
        }

        /**
         * The current element the iterator points to
         */
        T *value;

        Iterator& operator++() {
            value++;
            return *this;
        }

        Iterator operator++(int) {
            Iterator tmp(*this);
            operator++();
            return tmp;
        }

        Iterator& operator--() {
            value--;
            return *this;
        }

        Iterator operator--(int) {
            Iterator tmp(*this);
            operator--();
            return tmp;
        }

        T& operator*() {
            return *value;
        }

        const T& operator*() const {
            return *value;
        }

        T *operator->() {
            return value;
        }

        const T *operator->() const {
            return value;
        }
    };

    friend bool operator==(const ArrayList::Iterator& lhs,
            const ArrayList::Iterator& rhs) {
        return (lhs.value == rhs.value);
    }

    friend bool operator!=(const ArrayList::Iterator& lhs,
            const ArrayList::Iterator& rhs) {
        return not (lhs.value == rhs.value);
    }

    /**
     * Iterator pointing to the first stored element
     *
     * @return Iterator to the first element
     */
    Iterator begin() const {
        return Iterator(&entries[0]);
    }

    /**
     * Returns an Iterator pointing to the element after the last stored entry
     *
     * @return Iterator to the element after the last entry
     */
    Iterator end() const {
        return Iterator(&entries[size]);
    }

    T & operator[](count_t i) const {
        return entries[i];
    }

    /**
     * The first element
     *
     * @return pointer to the first stored element
     */
    T *front() {
        return entries;
    }

    /**
     * The last element
     *
     * Does not return a valid pointer if called on an empty list.
     *
     * @return pointer to the last stored element
     */
    T *back() {
        return &entries[size - 1];
        //Alternative solution
        //return const_cast<T*>(static_cast<const T*>(*this).back());
    }

    const T* back() const {
        return &entries[size - 1];
    }

    /**
     * The maximum number of elements this list can contain
     *
     * @return maximum number of elements
     */
    size_t maxSize() const {
        return this->maxSize_;
    }

    /**
     * Insert a new element into the list.
     *
     * The new element is inserted after the last stored element.
     *
     * @param entry
     * @return
     *    -@c FULL if the list is full
     *    -@c RETURN_OK else
     */
    ReturnValue_t insert(T entry) {
        if (size >= maxSize_) {
            return FULL;
        }
        entries[size] = entry;
        ++size;
        return HasReturnvaluesIF::RETURN_OK;
    }

    /**
     * Clear the list
     *
     * This does not actually clear all entries, it only sets the size to 0.
     */
    void clear() {
        size = 0;
    }

    count_t remaining() {
        return (maxSize_ - size);
    }

protected:
    /**
     * Pointer to the array in which the entries are stored
     */
    T *entries;
    /**
     * Remembering the maximum size
     */
    size_t maxSize_;

    /**
     * True if the array was allocated and needs to be deleted in the destructor.
     */
    bool allocated;
};

#endif /* FSFW_CONTAINER_ARRAYLIST_H_ */
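The `ArrayList` added above is a fixed-capacity container: `insert()` returns `FULL` once `maxSize()` entries are stored, and the nested `Iterator` simply walks the backing array. The following is a minimal usage sketch, not part of this change; the include path, template parameters, and `main()` harness are illustrative assumptions and presume the framework headers and configuration (e.g. `CLASS_ID`) are available in the build.

```cpp
// Illustrative include path; adjust to where ArrayList.h lives in your build.
#include "container/ArrayList.h"

#include <cstdint>
#include <iostream>

int main() {
    // Allocating constructor: the list owns a backing array of 16 entries.
    ArrayList<uint32_t, uint8_t> list(16);
    for (uint32_t value = 0; value < 5; value++) {
        // insert() returns ArrayList::FULL once maxSize() entries are stored.
        if (list.insert(value) != HasReturnvaluesIF::RETURN_OK) {
            break;
        }
    }
    // The nested Iterator walks the backing array from begin() to end().
    for (auto iter = list.begin(); iter != list.end(); ++iter) {
        std::cout << *iter << std::endl;
    }
    return 0;
}
```

Because `count_t` defaults to `uint8_t`, lists with more than 255 entries need a wider count type as the second template argument.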
153
container/BinaryTree.h
Normal file
@ -0,0 +1,153 @@
#ifndef FRAMEWORK_CONTAINER_BINARYTREE_H_
#define FRAMEWORK_CONTAINER_BINARYTREE_H_

#include <stddef.h>
#include <stdint.h>
#include <map>

template<typename Tp>
class BinaryNode {
public:
    BinaryNode(Tp* setValue) :
            value(setValue), left(NULL), right(NULL), parent(NULL) {
    }
    Tp *value;
    BinaryNode* left;
    BinaryNode* right;
    BinaryNode* parent;
};

template<typename Tp>
class ExplicitNodeIterator {
public:
    typedef ExplicitNodeIterator<Tp> _Self;
    typedef BinaryNode<Tp> _Node;
    typedef Tp value_type;
    typedef Tp* pointer;
    typedef Tp& reference;
    ExplicitNodeIterator() :
            element(NULL) {
    }
    ExplicitNodeIterator(_Node* node) :
            element(node) {
    }
    BinaryNode<Tp>* element;
    _Self up() {
        return _Self(element->parent);
    }
    _Self left() {
        if (element != NULL) {
            return _Self(element->left);
        } else {
            return _Self(NULL);
        }
    }
    _Self right() {
        if (element != NULL) {
            return _Self(element->right);
        } else {
            return _Self(NULL);
        }
    }
    bool operator==(const _Self& __x) const {
        return element == __x.element;
    }
    bool operator!=(const _Self& __x) const {
        return element != __x.element;
    }
    pointer operator->() const {
        if (element != NULL) {
            return element->value;
        } else {
            return NULL;
        }
    }
    pointer operator*() const {
        return this->operator->();
    }
};

/**
 * Pretty rudimentary version of a simple binary tree (not a binary search tree!).
 */
template<typename Tp>
class BinaryTree {
public:
    typedef ExplicitNodeIterator<Tp> iterator;
    typedef BinaryNode<Tp> Node;
    typedef std::pair<iterator, iterator> children;
    BinaryTree() :
            rootNode(NULL) {
    }
    BinaryTree(Node* rootNode) :
            rootNode(rootNode) {
    }
    iterator begin() const {
        return iterator(rootNode);
    }
    static iterator end() {
        return iterator(NULL);
    }
    iterator insert(bool insertLeft, iterator parentNode, Node* newNode) {
        newNode->parent = parentNode.element;
        if (parentNode.element != NULL) {
            if (insertLeft) {
                parentNode.element->left = newNode;
            } else {
                parentNode.element->right = newNode;
            }
        } else {
            //Insert first element.
            rootNode = newNode;
        }
        return iterator(newNode);
    }
    //No recursion to children. Needs to be done externally.
    children erase(iterator node) {
        if (node.element == rootNode) {
            //We're root node
            rootNode = NULL;
        } else {
            //Delete parent's reference
            if (node.up().left() == node) {
                node.up().element->left = NULL;
            } else {
                node.up().element->right = NULL;
            }
        }
        return children(node.element->left, node.element->right);
    }
    static uint32_t countLeft(iterator start) {
        if (start == end()) {
            return 0;
        }
        //We also count the start node itself.
        uint32_t count = 1;
        while (start.left() != end()) {
            count++;
            start = start.left();
        }
        return count;
    }
    static uint32_t countRight(iterator start) {
        if (start == end()) {
            return 0;
        }
        //We also count the start node itself.
        uint32_t count = 1;
        while (start.right() != end()) {
            count++;
            start = start.right();
        }
        return count;
    }

protected:
    Node* rootNode;
};

#endif /* FRAMEWORK_CONTAINER_BINARYTREE_H_ */
55
container/DynamicFIFO.h
Normal file
55
container/DynamicFIFO.h
Normal file
@ -0,0 +1,55 @@
|
||||
#ifndef FSFW_CONTAINER_DYNAMICFIFO_H_
|
||||
#define FSFW_CONTAINER_DYNAMICFIFO_H_
|
||||
|
||||
#include "FIFOBase.h"
|
||||
#include <vector>
|
||||
|
||||
/**
|
||||
* @brief Simple First-In-First-Out data structure. The maximum size
|
||||
* can be set in the constructor.
|
||||
* @details
|
||||
* The maximum capacity can be determined at run-time, so this container
|
||||
* performs dynamic memory allocation!
|
||||
* The public interface of FIFOBase exposes the user interface for the FIFO.
|
||||
* @tparam T Entry Type
|
||||
* @tparam capacity Maximum capacity
|
||||
*/
|
||||
template<typename T>
|
||||
class DynamicFIFO: public FIFOBase<T> {
|
||||
public:
|
||||
DynamicFIFO(size_t maxCapacity): FIFOBase<T>(nullptr, maxCapacity),
|
||||
fifoVector(maxCapacity) {
|
||||
// trying to pass the pointer of the uninitialized vector
|
||||
// to the FIFOBase constructor directly lead to a super evil bug.
|
||||
// So we do it like this now.
|
||||
this->setContainer(fifoVector.data());
|
||||
};
|
||||
|
||||
/**
|
||||
* @brief Custom copy constructor which prevents setting the
|
||||
* underlying pointer wrong. This function allocates memory!
|
||||
* @details This is a very heavy operation so try to avoid this!
|
||||
*
|
||||
*/
|
||||
DynamicFIFO(const DynamicFIFO& other): FIFOBase<T>(other),
|
||||
fifoVector(other.maxCapacity) {
|
||||
this->fifoVector = other.fifoVector;
|
||||
this->setContainer(fifoVector.data());
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief Custom assignment operator
|
||||
* @details This is a very heavy operation so try to avoid this!
|
||||
* @param other DyamicFIFO to copy from
|
||||
*/
|
||||
DynamicFIFO& operator=(const DynamicFIFO& other){
|
||||
FIFOBase<T>::operator=(other);
|
||||
this->fifoVector = other.fifoVector;
|
||||
this->setContainer(fifoVector.data());
|
||||
return *this;
|
||||
}
|
||||
private:
|
||||
std::vector<T> fifoVector;
|
||||
};
|
||||
|
||||
#endif /* FSFW_CONTAINER_DYNAMICFIFO_H_ */
|
47
container/FIFO.h
Normal file
47
container/FIFO.h
Normal file
@ -0,0 +1,47 @@
|
||||
#ifndef FSFW_CONTAINER_FIFO_H_
|
||||
#define FSFW_CONTAINER_FIFO_H_
|
||||
|
||||
#include "FIFOBase.h"
|
||||
#include <array>
|
||||
|
||||
/**
|
||||
* @brief Simple First-In-First-Out data structure with size fixed at
|
||||
* compile time
|
||||
* @details
|
||||
* Performs no dynamic memory allocation.
|
||||
* The public interface of FIFOBase exposes the user interface for the FIFO.
|
||||
* @tparam T Entry Type
|
||||
* @tparam capacity Maximum capacity
|
||||
*/
|
||||
template<typename T, size_t capacity>
|
||||
class FIFO: public FIFOBase<T> {
|
||||
public:
|
||||
FIFO(): FIFOBase<T>(nullptr, capacity) {
|
||||
this->setContainer(fifoArray.data());
|
||||
};
|
||||
|
||||
/**
|
||||
* @brief Custom copy constructor to set pointer correctly.
|
||||
* @param other
|
||||
*/
|
||||
FIFO(const FIFO& other): FIFOBase<T>(other) {
|
||||
this->fifoArray = other.fifoArray;
|
||||
this->setContainer(fifoArray.data());
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief Custom assignment operator
|
||||
* @param other
|
||||
*/
|
||||
FIFO& operator=(const FIFO& other){
|
||||
FIFOBase<T>::operator=(other);
|
||||
this->fifoArray = other.fifoArray;
|
||||
this->setContainer(fifoArray.data());
|
||||
return *this;
|
||||
}
|
||||
|
||||
private:
|
||||
std::array<T, capacity> fifoArray;
|
||||
};
|
||||
|
||||
#endif /* FSFW_CONTAINER_FIFO_H_ */
|
79
container/FIFOBase.h
Normal file
79
container/FIFOBase.h
Normal file
@ -0,0 +1,79 @@
|
||||
#ifndef FSFW_CONTAINER_FIFOBASE_H_
|
||||
#define FSFW_CONTAINER_FIFOBASE_H_
|
||||
|
||||
#include "../returnvalues/HasReturnvaluesIF.h"
|
||||
#include <cstddef>
|
||||
#include <cstring>
|
||||
|
||||
template <typename T>
|
||||
class FIFOBase {
|
||||
public:
|
||||
static const uint8_t INTERFACE_ID = CLASS_ID::FIFO_CLASS;
|
||||
static const ReturnValue_t FULL = MAKE_RETURN_CODE(1);
|
||||
static const ReturnValue_t EMPTY = MAKE_RETURN_CODE(2);
|
||||
|
||||
/** Default ctor, takes pointer to first entry of underlying container
|
||||
* and maximum capacity */
|
||||
FIFOBase(T* values, const size_t maxCapacity);
|
||||
|
||||
/**
|
||||
* Insert value into FIFO
|
||||
* @param value
|
||||
* @return RETURN_OK on success, FULL if full
|
||||
*/
|
||||
ReturnValue_t insert(T value);
|
||||
/**
|
||||
* Retrieve item from FIFO. This removes the item from the FIFO.
|
||||
* @param value Must point to a valid T
|
||||
* @return RETURN_OK on success, EMPTY if empty and FAILED if nullptr check failed
|
||||
*/
|
||||
ReturnValue_t retrieve(T *value);
|
||||
/**
|
||||
* Retrieve item from FIFO without removing it from FIFO.
|
||||
* @param value Must point to a valid T
|
||||
* @return RETURN_OK on success, EMPTY if empty and FAILED if nullptr check failed
|
||||
*/
|
||||
ReturnValue_t peek(T * value);
|
||||
/**
|
||||
* Remove item from FIFO.
|
||||
* @return RETURN_OK on success, EMPTY if empty
|
||||
*/
|
||||
ReturnValue_t pop();
|
||||
|
||||
/***
|
||||
* Check if FIFO is empty
|
||||
* @return True if empty, False if not
|
||||
*/
|
||||
bool empty();
|
||||
/***
|
||||
* Check if FIFO is Full
|
||||
* @return True if full, False if not
|
||||
*/
|
||||
bool full();
|
||||
/***
|
||||
* Current used size (elements) used
|
||||
* @return size_t in elements
|
||||
*/
|
||||
size_t size();
|
||||
/***
|
||||
* Get maximal capacity of fifo
|
||||
* @return size_t with max capacity of this fifo
|
||||
*/
|
||||
size_t getMaxCapacity() const;
|
||||
|
||||
protected:
|
||||
void setContainer(T* data);
|
||||
size_t maxCapacity = 0;
|
||||
|
||||
T* values;
|
||||
|
||||
size_t readIndex = 0;
|
||||
size_t writeIndex = 0;
|
||||
size_t currentSize = 0;
|
||||
|
||||
size_t next(size_t current);
|
||||
};
|
||||
|
||||
#include "FIFOBase.tpp"
|
||||
|
||||
#endif /* FSFW_CONTAINER_FIFOBASE_H_ */
|
93
container/FIFOBase.tpp
Normal file
93
container/FIFOBase.tpp
Normal file
@ -0,0 +1,93 @@
|
||||
#ifndef FSFW_CONTAINER_FIFOBASE_TPP_
|
||||
#define FSFW_CONTAINER_FIFOBASE_TPP_
|
||||
|
||||
#ifndef FSFW_CONTAINER_FIFOBASE_H_
|
||||
#error Include FIFOBase.h before FIFOBase.tpp!
|
||||
#endif
|
||||
|
||||
template<typename T>
|
||||
inline FIFOBase<T>::FIFOBase(T* values, const size_t maxCapacity):
|
||||
maxCapacity(maxCapacity), values(values){};
|
||||
|
||||
template<typename T>
|
||||
inline ReturnValue_t FIFOBase<T>::insert(T value) {
|
||||
if (full()) {
|
||||
return FULL;
|
||||
} else {
|
||||
values[writeIndex] = value;
|
||||
writeIndex = next(writeIndex);
|
||||
++currentSize;
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
}
|
||||
};
|
||||
|
||||
template<typename T>
|
||||
inline ReturnValue_t FIFOBase<T>::retrieve(T* value) {
|
||||
if (empty()) {
|
||||
return EMPTY;
|
||||
} else {
|
||||
if (value == nullptr){
|
||||
return HasReturnvaluesIF::RETURN_FAILED;
|
||||
}
|
||||
*value = values[readIndex];
|
||||
readIndex = next(readIndex);
|
||||
--currentSize;
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
}
|
||||
};
|
||||
|
||||
template<typename T>
|
||||
inline ReturnValue_t FIFOBase<T>::peek(T* value) {
|
||||
if(empty()) {
|
||||
return EMPTY;
|
||||
} else {
|
||||
if (value == nullptr){
|
||||
return HasReturnvaluesIF::RETURN_FAILED;
|
||||
}
|
||||
*value = values[readIndex];
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
}
|
||||
};
|
||||
|
||||
template<typename T>
|
||||
inline ReturnValue_t FIFOBase<T>::pop() {
|
||||
T value;
|
||||
return this->retrieve(&value);
|
||||
};
|
||||
|
||||
template<typename T>
|
||||
inline bool FIFOBase<T>::empty() {
|
||||
return (currentSize == 0);
|
||||
};
|
||||
|
||||
template<typename T>
|
||||
inline bool FIFOBase<T>::full() {
|
||||
return (currentSize == maxCapacity);
|
||||
}
|
||||
|
||||
template<typename T>
|
||||
inline size_t FIFOBase<T>::size() {
|
||||
return currentSize;
|
||||
}
|
||||
|
||||
template<typename T>
|
||||
inline size_t FIFOBase<T>::next(size_t current) {
|
||||
++current;
|
||||
if (current == maxCapacity) {
|
||||
current = 0;
|
||||
}
|
||||
return current;
|
||||
}
|
||||
|
||||
template<typename T>
|
||||
inline size_t FIFOBase<T>::getMaxCapacity() const {
|
||||
return maxCapacity;
|
||||
}
|
||||
|
||||
|
||||
template<typename T>
|
||||
inline void FIFOBase<T>::setContainer(T *data) {
|
||||
this->values = data;
|
||||
}
|
||||
|
||||
#endif
|
38
container/FixedArrayList.h
Normal file
38
container/FixedArrayList.h
Normal file
@ -0,0 +1,38 @@
|
||||
#ifndef FIXEDARRAYLIST_H_
|
||||
#define FIXEDARRAYLIST_H_
|
||||
|
||||
#include "ArrayList.h"
|
||||
#include <cmath>
|
||||
/**
|
||||
* \ingroup container
|
||||
*/
|
||||
template<typename T, size_t MAX_SIZE, typename count_t = uint8_t>
|
||||
class FixedArrayList: public ArrayList<T, count_t> {
|
||||
static_assert(MAX_SIZE <= (pow(2,sizeof(count_t)*8)-1), "count_t is not large enough to hold MAX_SIZE");
|
||||
private:
|
||||
T data[MAX_SIZE];
|
||||
public:
|
||||
FixedArrayList() :
|
||||
ArrayList<T, count_t>(data, MAX_SIZE) {
|
||||
}
|
||||
|
||||
FixedArrayList(const FixedArrayList& other) :
|
||||
ArrayList<T, count_t>(data, MAX_SIZE) {
|
||||
memcpy(this->data, other.data, sizeof(this->data));
|
||||
this->entries = data;
|
||||
this->size = other.size;
|
||||
}
|
||||
|
||||
FixedArrayList& operator=(FixedArrayList other) {
|
||||
memcpy(this->data, other.data, sizeof(this->data));
|
||||
this->entries = data;
|
||||
this->size = other.size;
|
||||
return *this;
|
||||
}
|
||||
|
||||
virtual ~FixedArrayList() {
|
||||
}
|
||||
|
||||
};
|
||||
|
||||
#endif /* FIXEDARRAYLIST_H_ */
|
230
container/FixedMap.h
Normal file
230
container/FixedMap.h
Normal file
@ -0,0 +1,230 @@
|
||||
#ifndef FSFW_CONTAINER_FIXEDMAP_H_
|
||||
#define FSFW_CONTAINER_FIXEDMAP_H_
|
||||
|
||||
#include "ArrayList.h"
|
||||
#include "../returnvalues/HasReturnvaluesIF.h"
|
||||
#include <utility>
|
||||
#include <type_traits>
|
||||
|
||||
/**
|
||||
* @brief Map implementation for maps with a pre-defined size.
|
||||
* @details
|
||||
* Can be initialized with desired maximum size.
|
||||
* Iterator is used to access <key,value> pair and iterate through map entries.
|
||||
* Complexity O(n).
|
||||
* @warning Iterators return a non-const key_t in the pair.
|
||||
* @warning A User is not allowed to change the key, otherwise the map is corrupted.
|
||||
* @ingroup container
|
||||
*/
|
||||
template<typename key_t, typename T>
|
||||
class FixedMap: public SerializeIF {
|
||||
static_assert (std::is_trivially_copyable<T>::value or
|
||||
std::is_base_of<SerializeIF, T>::value,
|
||||
"Types used in FixedMap must either be trivial copy-able or a "
|
||||
"derived class from SerializeIF to be serialize-able");
|
||||
public:
|
||||
static const uint8_t INTERFACE_ID = CLASS_ID::FIXED_MAP;
|
||||
static const ReturnValue_t KEY_ALREADY_EXISTS = MAKE_RETURN_CODE(0x01);
|
||||
static const ReturnValue_t MAP_FULL = MAKE_RETURN_CODE(0x02);
|
||||
static const ReturnValue_t KEY_DOES_NOT_EXIST = MAKE_RETURN_CODE(0x03);
|
||||
|
||||
private:
|
||||
static const key_t EMPTY_SLOT = -1;
|
||||
ArrayList<std::pair<key_t, T>, uint32_t> theMap;
|
||||
uint32_t _size;
|
||||
|
||||
uint32_t findIndex(key_t key) const {
|
||||
if (_size == 0) {
|
||||
return 1;
|
||||
}
|
||||
uint32_t i = 0;
|
||||
for (i = 0; i < _size; ++i) {
|
||||
if (theMap[i].first == key) {
|
||||
return i;
|
||||
}
|
||||
}
|
||||
return i;
|
||||
}
|
||||
public:
|
||||
FixedMap(uint32_t maxSize) :
|
||||
theMap(maxSize), _size(0) {
|
||||
}
|
||||
|
||||
class Iterator: public ArrayList<std::pair<key_t, T>, uint32_t>::Iterator {
|
||||
public:
|
||||
Iterator() :
|
||||
ArrayList<std::pair<key_t, T>, uint32_t>::Iterator() {
|
||||
}
|
||||
|
||||
Iterator(std::pair<key_t, T> *pair) :
|
||||
ArrayList<std::pair<key_t, T>, uint32_t>::Iterator(pair) {
|
||||
}
|
||||
};
|
||||
|
||||
friend bool operator==(const typename FixedMap::Iterator& lhs,
|
||||
const typename FixedMap::Iterator& rhs) {
|
||||
return (lhs.value == rhs.value);
|
||||
}
|
||||
|
||||
friend bool operator!=(const typename FixedMap::Iterator& lhs,
|
||||
const typename FixedMap::Iterator& rhs) {
|
||||
return not (lhs.value == rhs.value);
|
||||
}
|
||||
|
||||
Iterator begin() const {
|
||||
return Iterator(&theMap[0]);
|
||||
}
|
||||
|
||||
Iterator end() const {
|
||||
return Iterator(&theMap[_size]);
|
||||
}
|
||||
|
||||
uint32_t size() const {
|
||||
return _size;
|
||||
}
|
||||
|
||||
ReturnValue_t insert(key_t key, T value, Iterator *storedValue = nullptr) {
|
||||
if (exists(key) == HasReturnvaluesIF::RETURN_OK) {
|
||||
return KEY_ALREADY_EXISTS;
|
||||
}
|
||||
if (_size == theMap.maxSize()) {
|
||||
return MAP_FULL;
|
||||
}
|
||||
theMap[_size].first = key;
|
||||
theMap[_size].second = value;
|
||||
if (storedValue != nullptr) {
|
||||
*storedValue = Iterator(&theMap[_size]);
|
||||
}
|
||||
++_size;
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
}
|
||||
|
||||
ReturnValue_t insert(std::pair<key_t, T> pair) {
|
||||
return insert(pair.first, pair.second);
|
||||
}
|
||||
|
||||
ReturnValue_t exists(key_t key) const {
|
||||
ReturnValue_t result = KEY_DOES_NOT_EXIST;
|
||||
if (findIndex(key) < _size) {
|
||||
result = HasReturnvaluesIF::RETURN_OK;
|
||||
}
|
||||
return result;
|
||||
}
|
||||
|
||||
ReturnValue_t erase(Iterator *iter) {
|
||||
uint32_t i;
|
||||
if ((i = findIndex((*iter).value->first)) >= _size) {
|
||||
return KEY_DOES_NOT_EXIST;
|
||||
}
|
||||
theMap[i] = theMap[_size - 1];
|
||||
--_size;
|
||||
--((*iter).value);
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
}
|
||||
|
||||
ReturnValue_t erase(key_t key) {
|
||||
uint32_t i;
|
||||
if ((i = findIndex(key)) >= _size) {
|
||||
return KEY_DOES_NOT_EXIST;
|
||||
}
|
||||
theMap[i] = theMap[_size - 1];
|
||||
--_size;
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
}
|
||||
|
||||
T *findValue(key_t key) const {
|
||||
return &theMap[findIndex(key)].second;
|
||||
}
|
||||
|
||||
Iterator find(key_t key) const {
|
||||
ReturnValue_t result = exists(key);
|
||||
if (result != HasReturnvaluesIF::RETURN_OK) {
|
||||
return end();
|
||||
}
|
||||
return Iterator(&theMap[findIndex(key)]);
|
||||
}
|
||||
|
||||
ReturnValue_t find(key_t key, T **value) const {
|
||||
ReturnValue_t result = exists(key);
|
||||
if (result != HasReturnvaluesIF::RETURN_OK) {
|
||||
return result;
|
||||
}
|
||||
*value = &theMap[findIndex(key)].second;
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
}
|
||||
|
||||
bool empty() {
|
||||
if(_size == 0) {
|
||||
return true;
|
||||
}
|
||||
else {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
bool full() {
|
||||
if(_size >= theMap.maxSize()) {
|
||||
return true;
|
||||
}
|
||||
else {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
void clear() {
|
||||
_size = 0;
|
||||
}
|
||||
|
||||
uint32_t maxSize() const {
|
||||
return theMap.maxSize();
|
||||
}
|
||||
|
||||
virtual ReturnValue_t serialize(uint8_t** buffer, size_t* size,
|
||||
size_t maxSize, Endianness streamEndianness) const {
|
||||
ReturnValue_t result = SerializeAdapter::serialize(&this->_size,
|
||||
buffer, size, maxSize, streamEndianness);
|
||||
uint32_t i = 0;
|
||||
while ((result == HasReturnvaluesIF::RETURN_OK) && (i < this->_size)) {
|
||||
result = SerializeAdapter::serialize(&theMap[i].first, buffer,
|
||||
size, maxSize, streamEndianness);
|
||||
result = SerializeAdapter::serialize(&theMap[i].second, buffer, size,
|
||||
maxSize, streamEndianness);
|
||||
++i;
|
||||
}
|
||||
return result;
|
||||
}
|
||||
|
||||
virtual size_t getSerializedSize() const {
|
||||
uint32_t printSize = sizeof(_size);
|
||||
uint32_t i = 0;
|
||||
|
||||
for (i = 0; i < _size; ++i) {
|
||||
printSize += SerializeAdapter::getSerializedSize(
|
||||
&theMap[i].first);
|
||||
printSize += SerializeAdapter::getSerializedSize(&theMap[i].second);
|
||||
}
|
||||
|
||||
return printSize;
|
||||
}
|
||||
|
||||
virtual ReturnValue_t deSerialize(const uint8_t** buffer, size_t* size,
|
||||
Endianness streamEndianness) {
|
||||
ReturnValue_t result = SerializeAdapter::deSerialize(&this->_size,
|
||||
buffer, size, streamEndianness);
|
||||
if (this->_size > theMap.maxSize()) {
|
||||
return SerializeIF::TOO_MANY_ELEMENTS;
|
||||
}
|
||||
uint32_t i = 0;
|
||||
while ((result == HasReturnvaluesIF::RETURN_OK) && (i < this->_size)) {
|
||||
result = SerializeAdapter::deSerialize(&theMap[i].first, buffer,
|
||||
size, streamEndianness);
|
||||
result = SerializeAdapter::deSerialize(&theMap[i].second, buffer, size,
|
||||
streamEndianness);
|
||||
++i;
|
||||
}
|
||||
return result;
|
||||
}
|
||||
|
||||
};
|
||||
|
||||
#endif /* FSFW_CONTAINER_FIXEDMAP_H_ */
|
206
container/FixedOrderedMultimap.h
Normal file
206
container/FixedOrderedMultimap.h
Normal file
@ -0,0 +1,206 @@
|
||||
#ifndef FSFW_CONTAINER_FIXEDORDEREDMULTIMAP_H_
|
||||
#define FSFW_CONTAINER_FIXEDORDEREDMULTIMAP_H_
|
||||
|
||||
#include "ArrayList.h"
|
||||
#include <cstring>
|
||||
|
||||
/**
|
||||
* @brief An associative container which allows multiple entries of the same key.
|
||||
* @details
|
||||
* Same keys are ordered by KEY_COMPARE function which is std::less<key_t> > by default.
|
||||
*
|
||||
* It uses the ArrayList, so technically this is not a real map, it is an array of pairs
|
||||
* of type key_t, T. It is ordered by key_t as FixedMap but allows same keys. Thus it has a linear
|
||||
* complexity O(n). As long as the number of entries remains low, this
|
||||
* should not be an issue.
|
||||
* The number of insertion and deletion operation should be minimized
|
||||
* as those incur extensive memory move operations (the underlying container
|
||||
* is not node based).
|
||||
*
|
||||
* Its of fixed size so no allocations are performed after the construction.
|
||||
*
|
||||
* The maximum size is given as first parameter of the constructor.
|
||||
*
|
||||
* It provides an iterator to do list iterations.
|
||||
*
|
||||
* The type T must have a copy constructor if it is not trivial copy-able.
|
||||
*
|
||||
* @warning Iterators return a non-const key_t in the pair.
|
||||
* @warning A User is not allowed to change the key, otherwise the map is corrupted.
|
||||
*
|
||||
* \ingroup container
|
||||
*/
|
||||
template<typename key_t, typename T, typename KEY_COMPARE = std::less<key_t>>
|
||||
class FixedOrderedMultimap {
|
||||
public:
|
||||
static const uint8_t INTERFACE_ID = CLASS_ID::FIXED_MULTIMAP;
|
||||
static const ReturnValue_t MAP_FULL = MAKE_RETURN_CODE(0x01);
|
||||
static const ReturnValue_t KEY_DOES_NOT_EXIST = MAKE_RETURN_CODE(0x02);
|
||||
|
||||
/***
|
||||
* Constructor which needs a size_t for the maximum allowed size
|
||||
*
|
||||
* Can not be resized during runtime
|
||||
*
|
||||
* Allocates memory at construction
|
||||
* @param maxSize size_t of Maximum allowed size
|
||||
*/
|
||||
FixedOrderedMultimap(size_t maxSize):theMap(maxSize), _size(0){
|
||||
}
|
||||
|
||||
/***
|
||||
* Virtual destructor frees Memory by deleting its member
|
||||
*/
|
||||
virtual ~FixedOrderedMultimap() {
|
||||
}
|
||||
|
||||
/***
|
||||
* Special iterator for FixedOrderedMultimap
|
||||
*/
|
||||
class Iterator: public ArrayList<std::pair<key_t, T>, size_t>::Iterator {
|
||||
public:
|
||||
Iterator() :
|
||||
ArrayList<std::pair<key_t, T>, size_t>::Iterator() {
|
||||
}
|
||||
|
||||
Iterator(std::pair<key_t, T> *pair) :
|
||||
ArrayList<std::pair<key_t, T>, size_t>::Iterator(pair) {
|
||||
}
|
||||
};
|
||||
|
||||
/***
|
||||
* Returns an iterator pointing to the first element
|
||||
* @return Iterator pointing to first element
|
||||
*/
|
||||
Iterator begin() const {
|
||||
return Iterator(&theMap[0]);
|
||||
}
|
||||
|
||||
/**
|
||||
* Returns an iterator pointing to one element past the end
|
||||
* @return Iterator pointing to one element past the end
|
||||
*/
|
||||
Iterator end() const {
|
||||
return Iterator(&theMap[_size]);
|
||||
}
|
||||
|
||||
/***
|
||||
* Returns the current size of the map (not maximum size!)
|
||||
* @return Current size
|
||||
*/
|
||||
size_t size() const{
|
||||
return _size;
|
||||
}
|
||||
|
||||
/**
|
||||
* Clears the map, does not deallocate any memory
|
||||
*/
|
||||
void clear(){
|
||||
_size = 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Returns the maximum size of the map
|
||||
* @return Maximum size of the map
|
||||
*/
|
||||
size_t maxSize() const{
|
||||
return theMap.maxSize();
|
||||
}
|
||||
|
||||
/***
|
||||
* Used to insert a key and value separately.
|
||||
*
|
||||
* @param[in] key Key of the new element
|
||||
* @param[in] value Value of the new element
|
||||
* @param[in/out] (optional) storedValue On success this points to the new value, otherwise a nullptr
|
||||
* @return RETURN_OK if insert was successful, MAP_FULL if no space is available
|
||||
*/
|
||||
ReturnValue_t insert(key_t key, T value, Iterator *storedValue = nullptr);
|
||||
|
||||
/***
|
||||
* Used to insert new pair instead of single values
|
||||
*
|
||||
* @param pair Pair to be inserted
|
||||
* @return RETURN_OK if insert was successful, MAP_FULL if no space is available
|
||||
*/
|
||||
ReturnValue_t insert(std::pair<key_t, T> pair);
|
||||
|
||||
/***
|
||||
* Can be used to check if a certain key is in the map
|
||||
* @param key Key to be checked
|
||||
* @return RETURN_OK if the key exists KEY_DOES_NOT_EXIST otherwise
|
||||
*/
|
||||
ReturnValue_t exists(key_t key) const;
|
||||
|
||||
/***
|
||||
* Used to delete the element in the iterator
|
||||
*
|
||||
* The iterator will point to the element before or begin(),
|
||||
* but never to one element in front of the map.
|
||||
*
|
||||
* @warning The iterator needs to be valid and dereferenceable
|
||||
* @param[in/out] iter Pointer to iterator to the element that needs to be ereased
|
||||
* @return RETURN_OK if erased, KEY_DOES_NOT_EXIST if the there is no element like this
|
||||
*/
|
||||
ReturnValue_t erase(Iterator *iter);
|
||||
|
||||
/***
|
||||
* Used to erase by key
|
||||
* @param key Key to be erased
|
||||
* @return RETURN_OK if erased, KEY_DOES_NOT_EXIST if the there is no element like this
|
||||
*/
|
||||
ReturnValue_t erase(key_t key);
|
||||
|
||||
/***
|
||||
* Find returns the first appearance of the key
|
||||
*
|
||||
* If the key does not exist, it points to end()
|
||||
*
|
||||
* @param key Key to search for
|
||||
* @return Iterator pointing to the first entry of key
|
||||
*/
|
||||
Iterator find(key_t key) const{
|
||||
ReturnValue_t result = exists(key);
|
||||
if (result != HasReturnvaluesIF::RETURN_OK) {
|
||||
return end();
|
||||
}
|
||||
return Iterator(&theMap[findFirstIndex(key)]);
|
||||
};
|
||||
|
||||
/***
|
||||
* Finds first entry of the given key and returns a
|
||||
* pointer to the value
|
||||
*
|
||||
* @param key Key to search for
|
||||
* @param value Found value
|
||||
* @return RETURN_OK if it points to the value,
|
||||
* KEY_DOES_NOT_EXIST if the key is not in the map
|
||||
*/
|
||||
ReturnValue_t find(key_t key, T **value) const;
|
||||
|
||||
friend bool operator==(const typename FixedOrderedMultimap::Iterator& lhs,
|
||||
const typename FixedOrderedMultimap::Iterator& rhs) {
|
||||
return (lhs.value == rhs.value);
|
||||
}
|
||||
|
||||
friend bool operator!=(const typename FixedOrderedMultimap::Iterator& lhs,
|
||||
const typename FixedOrderedMultimap::Iterator& rhs) {
|
||||
return not (lhs.value == rhs.value);
|
||||
}
|
||||
|
||||
private:
|
||||
typedef KEY_COMPARE compare;
|
||||
compare myComp;
|
||||
ArrayList<std::pair<key_t, T>, size_t> theMap;
|
||||
size_t _size;
|
||||
|
||||
size_t findFirstIndex(key_t key, size_t startAt = 0) const;
|
||||
|
||||
size_t findNicePlace(key_t key) const;
|
||||
|
||||
void removeFromPosition(size_t position);
|
||||
};
|
||||
|
||||
#include "FixedOrderedMultimap.tpp"
|
||||
|
||||
#endif /* FSFW_CONTAINER_FIXEDORDEREDMULTIMAP_H_ */
|
109
container/FixedOrderedMultimap.tpp
Normal file
109
container/FixedOrderedMultimap.tpp
Normal file
@ -0,0 +1,109 @@
|
||||
#ifndef FRAMEWORK_CONTAINER_FIXEDORDEREDMULTIMAP_TPP_
|
||||
#define FRAMEWORK_CONTAINER_FIXEDORDEREDMULTIMAP_TPP_
|
||||
|
||||
|
||||
template<typename key_t, typename T, typename KEY_COMPARE>
|
||||
inline ReturnValue_t FixedOrderedMultimap<key_t, T, KEY_COMPARE>::insert(key_t key, T value, Iterator *storedValue) {
|
||||
if (_size == theMap.maxSize()) {
|
||||
return MAP_FULL;
|
||||
}
|
||||
size_t position = findNicePlace(key);
|
||||
memmove(static_cast<void*>(&theMap[position + 1]),static_cast<void*>(&theMap[position]),
|
||||
(_size - position) * sizeof(std::pair<key_t,T>));
|
||||
theMap[position].first = key;
|
||||
theMap[position].second = value;
|
||||
++_size;
|
||||
if (storedValue != nullptr) {
|
||||
*storedValue = Iterator(&theMap[position]);
|
||||
}
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
}
|
||||
template<typename key_t, typename T, typename KEY_COMPARE>
|
||||
inline ReturnValue_t FixedOrderedMultimap<key_t, T, KEY_COMPARE>::insert(std::pair<key_t, T> pair) {
|
||||
return insert(pair.first, pair.second);
|
||||
}
|
||||
|
||||
template<typename key_t, typename T, typename KEY_COMPARE>
|
||||
inline ReturnValue_t FixedOrderedMultimap<key_t, T, KEY_COMPARE>::exists(key_t key) const {
|
||||
ReturnValue_t result = KEY_DOES_NOT_EXIST;
|
||||
if (findFirstIndex(key) < _size) {
|
||||
result = HasReturnvaluesIF::RETURN_OK;
|
||||
}
|
||||
return result;
|
||||
}
|
||||
|
||||
template<typename key_t, typename T, typename KEY_COMPARE>
|
||||
inline ReturnValue_t FixedOrderedMultimap<key_t, T, KEY_COMPARE>::erase(Iterator *iter) {
|
||||
size_t i;
|
||||
if ((i = findFirstIndex((*iter).value->first)) >= _size) {
|
||||
return KEY_DOES_NOT_EXIST;
|
||||
}
|
||||
removeFromPosition(i);
|
||||
if (*iter != begin()) {
|
||||
(*iter)--;
|
||||
} else {
|
||||
*iter = begin();
|
||||
}
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
}
|
||||
|
||||
template<typename key_t, typename T, typename KEY_COMPARE>
|
||||
inline ReturnValue_t FixedOrderedMultimap<key_t, T, KEY_COMPARE>::erase(key_t key) {
|
||||
size_t i;
|
||||
if ((i = findFirstIndex(key)) >= _size) {
|
||||
return KEY_DOES_NOT_EXIST;
|
||||
}
|
||||
do {
|
||||
removeFromPosition(i);
|
||||
i = findFirstIndex(key, i);
|
||||
} while (i < _size);
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
}
|
||||
|
||||
template<typename key_t, typename T, typename KEY_COMPARE>
|
||||
inline ReturnValue_t FixedOrderedMultimap<key_t, T, KEY_COMPARE>::find(key_t key, T **value) const {
|
||||
ReturnValue_t result = exists(key);
|
||||
if (result != HasReturnvaluesIF::RETURN_OK) {
|
||||
return result;
|
||||
}
|
||||
*value = &theMap[findFirstIndex(key)].second;
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
}
|
||||
|
||||
template<typename key_t, typename T, typename KEY_COMPARE>
|
||||
inline size_t FixedOrderedMultimap<key_t, T, KEY_COMPARE>::findFirstIndex(key_t key, size_t startAt) const {
|
||||
if (startAt >= _size) {
|
||||
return startAt + 1;
|
||||
}
|
||||
size_t i = startAt;
|
||||
for (i = startAt; i < _size; ++i) {
|
||||
if (theMap[i].first == key) {
|
||||
return i;
|
||||
}
|
||||
}
|
||||
return i;
|
||||
}
|
||||
|
||||
template<typename key_t, typename T, typename KEY_COMPARE>
|
||||
inline size_t FixedOrderedMultimap<key_t, T, KEY_COMPARE>::findNicePlace(key_t key) const {
|
||||
size_t i = 0;
|
||||
for (i = 0; i < _size; ++i) {
|
||||
if (myComp(key, theMap[i].first)) {
|
||||
return i;
|
||||
}
|
||||
}
|
||||
return i;
|
||||
}
|
||||
|
||||
template<typename key_t, typename T, typename KEY_COMPARE>
|
||||
inline void FixedOrderedMultimap<key_t, T, KEY_COMPARE>::removeFromPosition(size_t position) {
|
||||
if (_size <= position) {
|
||||
return;
|
||||
}
|
||||
memmove(static_cast<void*>(&theMap[position]), static_cast<void*>(&theMap[position + 1]),
|
||||
(_size - position - 1) * sizeof(std::pair<key_t,T>));
|
||||
--_size;
|
||||
}
|
||||
|
||||
|
||||
#endif /* FRAMEWORK_CONTAINER_FIXEDORDEREDMULTIMAP_TPP_ */
|
90
container/HybridIterator.h
Normal file
90
container/HybridIterator.h
Normal file
@ -0,0 +1,90 @@
|
||||
#ifndef FRAMEWORK_CONTAINER_HYBRIDITERATOR_H_
|
||||
#define FRAMEWORK_CONTAINER_HYBRIDITERATOR_H_
|
||||
|
||||
#include "ArrayList.h"
|
||||
#include "SinglyLinkedList.h"
|
||||
|
||||
template<typename T, typename count_t = uint8_t>
|
||||
class HybridIterator: public LinkedElement<T>::Iterator,
|
||||
public ArrayList<T, count_t>::Iterator {
|
||||
public:
|
||||
HybridIterator() {}
|
||||
|
||||
HybridIterator(typename LinkedElement<T>::Iterator *iter) :
|
||||
LinkedElement<T>::Iterator(*iter), value(iter->value),
|
||||
linked(true) {
|
||||
|
||||
}
|
||||
|
||||
HybridIterator(LinkedElement<T> *start) :
|
||||
LinkedElement<T>::Iterator(start), value(start->value),
|
||||
linked(true) {
|
||||
|
||||
}
|
||||
|
||||
HybridIterator(typename ArrayList<T, count_t>::Iterator start,
|
||||
typename ArrayList<T, count_t>::Iterator end) :
|
||||
ArrayList<T, count_t>::Iterator(start), value(start.value),
|
||||
linked(false), end(end.value) {
|
||||
if (value == this->end) {
|
||||
value = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
HybridIterator(T *firstElement, T *lastElement) :
|
||||
ArrayList<T, count_t>::Iterator(firstElement), value(firstElement),
|
||||
linked(false), end(++lastElement) {
|
||||
if (value == end) {
|
||||
value = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
HybridIterator& operator++() {
|
||||
if (linked) {
|
||||
LinkedElement<T>::Iterator::operator++();
|
||||
if (LinkedElement<T>::Iterator::value != nullptr) {
|
||||
value = LinkedElement<T>::Iterator::value->value;
|
||||
} else {
|
||||
value = nullptr;
|
||||
}
|
||||
} else {
|
||||
ArrayList<T, count_t>::Iterator::operator++();
|
||||
value = ArrayList<T, count_t>::Iterator::value;
|
||||
|
||||
if (value == end) {
|
||||
value = nullptr;
|
||||
}
|
||||
}
|
||||
return *this;
|
||||
}
|
||||
|
||||
HybridIterator operator++(int) {
|
||||
HybridIterator tmp(*this);
|
||||
operator++();
|
||||
return tmp;
|
||||
}
|
||||
|
||||
bool operator==(const HybridIterator& other) const {
|
||||
return value == other.value;
|
||||
}
|
||||
|
||||
bool operator!=(const HybridIterator& other) const {
|
||||
return !(*this == other);
|
||||
}
|
||||
|
||||
T operator*() {
|
||||
return *value;
|
||||
}
|
||||
|
||||
T *operator->() {
|
||||
return value;
|
||||
}
|
||||
|
||||
T* value = nullptr;
|
||||
|
||||
private:
|
||||
bool linked = false;
|
||||
T *end = nullptr;
|
||||
};
|
||||
|
||||
#endif /* FRAMEWORK_CONTAINER_HYBRIDITERATOR_H_ */
|
700
container/IndexedRingMemoryArray.h
Normal file
700
container/IndexedRingMemoryArray.h
Normal file
@ -0,0 +1,700 @@
|
||||
#ifndef FRAMEWORK_CONTAINER_INDEXEDRINGMEMORY_H_
|
||||
#define FRAMEWORK_CONTAINER_INDEXEDRINGMEMORY_H_
|
||||
|
||||
#include "ArrayList.h"
|
||||
#include "../globalfunctions/CRC.h"
|
||||
#include "../serviceinterface/ServiceInterfaceStream.h"
|
||||
#include "../returnvalues/HasReturnvaluesIF.h"
|
||||
#include "../serialize/SerialArrayListAdapter.h"
|
||||
#include <cmath>
|
||||
|
||||
template<typename T>
|
||||
class Index: public SerializeIF{
|
||||
/**
|
||||
* Index is the Type used for the list of indices. The template parameter is the type which describes the index, it needs to be a child of SerializeIF to be able to make it persistent
|
||||
*/
|
||||
static_assert(std::is_base_of<SerializeIF,T>::value,"Wrong Type for Index, Type must implement SerializeIF");
|
||||
public:
|
||||
Index():blockStartAddress(0),size(0),storedPackets(0){}
|
||||
|
||||
Index(uint32_t startAddress):blockStartAddress(startAddress),size(0),storedPackets(0){
|
||||
|
||||
}
|
||||
|
||||
void setBlockStartAddress(uint32_t newAddress){
|
||||
this->blockStartAddress = newAddress;
|
||||
}
|
||||
|
||||
uint32_t getBlockStartAddress() const {
|
||||
return blockStartAddress;
|
||||
}
|
||||
|
||||
const T* getIndexType() const {
|
||||
return &indexType;
|
||||
}
|
||||
|
||||
T* modifyIndexType(){
|
||||
return &indexType;
|
||||
}
|
||||
/**
|
||||
* Updates the index Type. Uses = operator
|
||||
* @param indexType Type to copy from
|
||||
*/
|
||||
void setIndexType(T* indexType) {
|
||||
this->indexType = *indexType;
|
||||
}
|
||||
|
||||
uint32_t getSize() const {
|
||||
return size;
|
||||
}
|
||||
|
||||
void setSize(uint32_t size) {
|
||||
this->size = size;
|
||||
}
|
||||
|
||||
void addSize(uint32_t size){
|
||||
this->size += size;
|
||||
}
|
||||
|
||||
void setStoredPackets(uint32_t newStoredPackets){
|
||||
this->storedPackets = newStoredPackets;
|
||||
}
|
||||
|
||||
void addStoredPackets(uint32_t packets){
|
||||
this->storedPackets += packets;
|
||||
}
|
||||
|
||||
uint32_t getStoredPackets() const{
|
||||
return this->storedPackets;
|
||||
}
|
||||
|
||||
ReturnValue_t serialize(uint8_t** buffer, size_t* size,
|
||||
size_t maxSize, Endianness streamEndianness) const {
|
||||
ReturnValue_t result = SerializeAdapter::serialize(&blockStartAddress,buffer,size,maxSize,streamEndianness);
|
||||
if(result != HasReturnvaluesIF::RETURN_OK){
|
||||
return result;
|
||||
}
|
||||
result = indexType.serialize(buffer,size,maxSize,streamEndianness);
|
||||
if(result != HasReturnvaluesIF::RETURN_OK){
|
||||
return result;
|
||||
}
|
||||
result = SerializeAdapter::serialize(&this->size,buffer,size,maxSize,streamEndianness);
|
||||
if(result != HasReturnvaluesIF::RETURN_OK){
|
||||
return result;
|
||||
}
|
||||
result = SerializeAdapter::serialize(&this->storedPackets,buffer,size,maxSize,streamEndianness);
|
||||
return result;
|
||||
}
|
||||
|
||||
ReturnValue_t deSerialize(const uint8_t** buffer, size_t* size,
|
||||
Endianness streamEndianness){
|
||||
ReturnValue_t result = SerializeAdapter::deSerialize(&blockStartAddress,buffer,size,streamEndianness);
|
||||
if(result != HasReturnvaluesIF::RETURN_OK){
|
||||
return result;
|
||||
}
|
||||
result = indexType.deSerialize(buffer,size,streamEndianness);
|
||||
if(result != HasReturnvaluesIF::RETURN_OK){
|
||||
return result;
|
||||
}
|
||||
result = SerializeAdapter::deSerialize(&this->size,buffer,size,streamEndianness);
|
||||
if(result != HasReturnvaluesIF::RETURN_OK){
|
||||
return result;
|
||||
}
|
||||
result = SerializeAdapter::deSerialize(&this->storedPackets,buffer,size,streamEndianness);
|
||||
if(result != HasReturnvaluesIF::RETURN_OK){
|
||||
return result;
|
||||
}
|
||||
return result;
|
||||
}
|
||||
|
||||
size_t getSerializedSize() const {
|
||||
uint32_t size = SerializeAdapter::getSerializedSize(&blockStartAddress);
|
||||
size += indexType.getSerializedSize();
|
||||
size += SerializeAdapter::getSerializedSize(&this->size);
|
||||
size += SerializeAdapter::getSerializedSize(&this->storedPackets);
|
||||
return size;
|
||||
}
|
||||
|
||||
|
||||
bool operator==(const Index<T>& other){
|
||||
return ((blockStartAddress == other.getBlockStartAddress()) && (size==other.getSize())) && (indexType == *(other.getIndexType()));
|
||||
}
|
||||
|
||||
private:
|
||||
uint32_t blockStartAddress;
|
||||
uint32_t size;
|
||||
uint32_t storedPackets;
|
||||
T indexType;
|
||||
};
|
||||
|
||||
|
||||
|
||||
template<typename T>
|
||||
class IndexedRingMemoryArray: public SerializeIF, public ArrayList<Index<T>, uint32_t>{
|
||||
/**
|
||||
* Indexed Ring Memory Array is a class for a ring memory with indices. It assumes that the newest data comes in last
|
||||
* It uses the currentWriteBlock as pointer to the current writing position
|
||||
* The currentReadBlock must be set manually
|
||||
*/
|
||||
public:
|
||||
IndexedRingMemoryArray(uint32_t startAddress, uint32_t size, uint32_t bytesPerBlock, SerializeIF* additionalInfo,
|
||||
bool overwriteOld) :ArrayList<Index<T>,uint32_t>(NULL,(uint32_t)10,(uint32_t)0),totalSize(size),indexAddress(startAddress),currentReadSize(0),currentReadBlockSizeCached(0),lastBlockToReadSize(0), additionalInfo(additionalInfo),overwriteOld(overwriteOld){
|
||||
|
||||
//Calculate the maximum number of indices needed for this blocksize
|
||||
uint32_t maxNrOfIndices = floor(static_cast<double>(size)/static_cast<double>(bytesPerBlock));
|
||||
|
||||
//Calculate the Size needeed for the index itself
|
||||
uint32_t serializedSize = 0;
|
||||
if(additionalInfo!=NULL){
|
||||
serializedSize += additionalInfo->getSerializedSize();
|
||||
}
|
||||
//Size of current iterator type
|
||||
Index<T> tempIndex;
|
||||
serializedSize += tempIndex.getSerializedSize();
|
||||
|
||||
//Add Size of Array
|
||||
serializedSize += sizeof(uint32_t); //size of array
|
||||
serializedSize += (tempIndex.getSerializedSize() * maxNrOfIndices); //size of elements
|
||||
serializedSize += sizeof(uint16_t); //size of crc
|
||||
|
||||
//Calculate new size after index
|
||||
if(serializedSize > totalSize){
|
||||
error << "IndexedRingMemory: Store is too small for index" << std::endl;
|
||||
}
|
||||
uint32_t useableSize = totalSize - serializedSize;
|
||||
//Update the totalSize for calculations
|
||||
totalSize = useableSize;
|
||||
|
||||
//True StartAddress
|
||||
uint32_t trueStartAddress = indexAddress + serializedSize;
|
||||
|
||||
//Calculate True number of Blocks and reset size of true Number of Blocks
|
||||
uint32_t trueNumberOfBlocks = floor(static_cast<double>(totalSize) / static_cast<double>(bytesPerBlock));
|
||||
|
||||
//allocate memory now
|
||||
this->entries = new Index<T>[trueNumberOfBlocks];
|
||||
this->size = trueNumberOfBlocks;
|
||||
this->maxSize_ = trueNumberOfBlocks;
|
||||
this->allocated = true;
|
||||
|
||||
//Check trueNumberOfBlocks
|
||||
if(trueNumberOfBlocks<1){
|
||||
error << "IndexedRingMemory: Invalid Number of Blocks: " << trueNumberOfBlocks;
|
||||
}
|
||||
|
||||
|
||||
|
||||
//Fill address into index
|
||||
uint32_t address = trueStartAddress;
|
||||
for (typename IndexedRingMemoryArray<T>::Iterator it = this->begin();it!=this->end();++it) {
|
||||
it->setBlockStartAddress(address);
|
||||
it->setSize(0);
|
||||
it->setStoredPackets(0);
|
||||
address += bytesPerBlock;
|
||||
}
|
||||
|
||||
|
||||
//Initialize iterators
|
||||
currentWriteBlock = this->begin();
|
||||
currentReadBlock = this->begin();
|
||||
lastBlockToRead = this->begin();
|
||||
|
||||
//Check last blockSize
|
||||
uint32_t lastBlockSize = (trueStartAddress + useableSize) - (this->back()->getBlockStartAddress());
|
||||
if((lastBlockSize<bytesPerBlock) && (this->size > 1)){
|
||||
//remove the last Block so the second last block has more size
|
||||
this->size -= 1;
|
||||
debug << "IndexedRingMemory: Last Block is smaller than bytesPerBlock, removed last block" << std::endl;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Resets the whole index, the iterators and executes the given reset function on every index type
|
||||
* @param typeResetFnc static reset function which accepts a pointer to the index Type
|
||||
*/
|
||||
void reset(void (*typeResetFnc)(T*)){
|
||||
currentReadBlock = this->begin();
|
||||
currentWriteBlock = this->begin();
|
||||
lastBlockToRead = this->begin();
|
||||
currentReadSize = 0;
|
||||
currentReadBlockSizeCached = 0;
|
||||
lastBlockToReadSize = 0;
|
||||
for(typename IndexedRingMemoryArray<T>::Iterator it = this->begin();it!=this->end();++it){
|
||||
it->setSize(0);
|
||||
it->setStoredPackets(0);
|
||||
(*typeResetFnc)(it->modifyIndexType());
|
||||
}
|
||||
}
|
||||
|
||||
void resetBlock(typename IndexedRingMemoryArray<T>::Iterator it,void (*typeResetFnc)(T*)){
|
||||
it->setSize(0);
|
||||
it->setStoredPackets(0);
|
||||
(*typeResetFnc)(it->modifyIndexType());
|
||||
}
|
||||
|
||||
/*
|
||||
* Reading
|
||||
*/
|
||||
|
||||
void setCurrentReadBlock(typename IndexedRingMemoryArray<T>::Iterator it){
|
||||
currentReadBlock = it;
|
||||
currentReadBlockSizeCached = it->getSize();
|
||||
}
|
||||
|
||||
void resetRead(){
|
||||
currentReadBlock = this->begin();
|
||||
currentReadSize = 0;
|
||||
currentReadBlockSizeCached = this->begin()->getSize();
|
||||
lastBlockToRead = currentWriteBlock;
|
||||
lastBlockToReadSize = currentWriteBlock->getSize();
|
||||
}
|
||||
/**
|
||||
* Sets the last block to read to this iterator.
|
||||
* Can be used to dump until block x
|
||||
* @param it The iterator for the last read block
|
||||
*/
|
||||
void setLastBlockToRead(typename IndexedRingMemoryArray<T>::Iterator it){
|
||||
lastBlockToRead = it;
|
||||
lastBlockToReadSize = it->getSize();
|
||||
}
|
||||
|
||||
/**
|
||||
* Set the read pointer to the first written Block, which is the first non empty block in front of the write block
|
||||
* Can be the currentWriteBlock as well
|
||||
*/
|
||||
void readOldest(){
|
||||
resetRead();
|
||||
currentReadBlock = getNextNonEmptyBlock();
|
||||
currentReadBlockSizeCached = currentReadBlock->getSize();
|
||||
|
||||
}
|
||||
|
||||
/**
|
||||
* Sets the current read iterator to the next Block and resets the current read size
|
||||
* The current size of the block will be cached to avoid race condition between write and read
|
||||
* If the end of the ring is reached the read pointer will be set to the begin
|
||||
*/
|
||||
void readNext(){
|
||||
currentReadSize = 0;
|
||||
if((this->size != 0) && (currentReadBlock.value ==this->back())){
|
||||
currentReadBlock = this->begin();
|
||||
}else{
|
||||
currentReadBlock++;
|
||||
}
|
||||
|
||||
currentReadBlockSizeCached = currentReadBlock->getSize();
|
||||
}
|
||||
|
||||
/**
|
||||
* Returns the address which is currently read from
|
||||
* @return Address to read from
|
||||
*/
|
||||
uint32_t getCurrentReadAddress() const {
|
||||
return getAddressOfCurrentReadBlock() + currentReadSize;
|
||||
}
|
||||
/**
|
||||
* Adds readSize to the current size and checks if the read has no more data left and advances the read block
|
||||
* @param readSize The size that was read
|
||||
* @return Returns true if the read can go on
|
||||
*/
|
||||
bool addReadSize(uint32_t readSize) {
|
||||
if(currentReadBlock == lastBlockToRead){
|
||||
//The current read block is the last to read
|
||||
if((currentReadSize+readSize)<lastBlockToReadSize){
|
||||
//the block has more data -> return true
|
||||
currentReadSize += readSize;
|
||||
return true;
|
||||
}else{
|
||||
//Reached end of read -> return false
|
||||
currentReadSize = lastBlockToReadSize;
|
||||
return false;
|
||||
}
|
||||
}else{
|
||||
//We are not in the last Block
|
||||
if((currentReadSize + readSize)<currentReadBlockSizeCached){
|
||||
//The current Block has more data
|
||||
currentReadSize += readSize;
|
||||
return true;
|
||||
}else{
|
||||
//The current block is written completely
|
||||
readNext();
|
||||
if(currentReadBlockSizeCached==0){
|
||||
//Next block is empty
|
||||
typename IndexedRingMemoryArray<T>::Iterator it(currentReadBlock);
|
||||
//Search if any block between this and the last block is not empty
|
||||
for(;it!=lastBlockToRead;++it){
|
||||
if(it == this->end()){
|
||||
//This is the end, next block is the begin
|
||||
it = this->begin();
|
||||
if(it == lastBlockToRead){
|
||||
//Break if the begin is the lastBlockToRead
|
||||
break;
|
||||
}
|
||||
}
|
||||
if(it->getSize()!=0){
|
||||
//This is a non empty block. Go on reading with this block
|
||||
currentReadBlock = it;
|
||||
currentReadBlockSizeCached = it->getSize();
|
||||
return true;
|
||||
}
|
||||
}
|
||||
//reached lastBlockToRead and every block was empty, check if the last block is also empty
|
||||
if(lastBlockToReadSize!=0){
|
||||
//go on with last Block
|
||||
currentReadBlock = lastBlockToRead;
|
||||
currentReadBlockSizeCached = lastBlockToReadSize;
|
||||
return true;
|
||||
}
|
||||
//There is no non empty block left
|
||||
return false;
|
||||
}
|
||||
//Size is larger than 0
|
||||
return true;
|
||||
}
|
||||
}
|
||||
}
|
||||
uint32_t getRemainigSizeOfCurrentReadBlock() const{
|
||||
if(currentReadBlock == lastBlockToRead){
|
||||
return (lastBlockToReadSize - currentReadSize);
|
||||
}else{
|
||||
return (currentReadBlockSizeCached - currentReadSize);
|
||||
}
|
||||
}
|
||||
|
||||
uint32_t getAddressOfCurrentReadBlock() const {
|
||||
return currentReadBlock->getBlockStartAddress();
|
||||
}
|
||||
|
||||
/**
|
||||
* Gets the next non empty Block after the current write block,
|
||||
* @return Returns the iterator to the block. If there is non, the current write block is returned
|
||||
*/
|
||||
typename IndexedRingMemoryArray<T>::Iterator getNextNonEmptyBlock() const {
|
||||
for(typename IndexedRingMemoryArray<T>::Iterator it = getNextWrite();it!=currentWriteBlock;++it){
|
||||
if(it == this->end()){
|
||||
it = this->begin();
|
||||
if(it == currentWriteBlock){
|
||||
break;
|
||||
}
|
||||
}
|
||||
if(it->getSize()!=0){
|
||||
return it;
|
||||
}
|
||||
}
|
||||
return currentWriteBlock;
|
||||
}
|
||||
|
||||
/**
|
||||
* Returns a copy of the oldest Index type
|
||||
* @return Type of Index
|
||||
*/
|
||||
T* getOldest(){
|
||||
return (getNextNonEmptyBlock()->modifyIndexType());
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
* Writing
|
||||
*/
|
||||
uint32_t getAddressOfCurrentWriteBlock() const{
|
||||
return currentWriteBlock->getBlockStartAddress();
|
||||
}
|
||||
|
||||
uint32_t getSizeOfCurrentWriteBlock() const{
|
||||
return currentWriteBlock->getSize();
|
||||
}
|
||||
|
||||
uint32_t getCurrentWriteAddress() const{
|
||||
return getAddressOfCurrentWriteBlock() + getSizeOfCurrentWriteBlock();
|
||||
}
|
||||
|
||||
void clearCurrentWriteBlock(){
|
||||
currentWriteBlock->setSize(0);
|
||||
currentWriteBlock->setStoredPackets(0);
|
||||
}
|
||||
|
||||
void addCurrentWriteBlock(uint32_t size, uint32_t storedPackets){
|
||||
currentWriteBlock->addSize(size);
|
||||
currentWriteBlock->addStoredPackets(storedPackets);
|
||||
}
|
||||
|
||||
T* modifyCurrentWriteBlockIndexType(){
|
||||
return currentWriteBlock->modifyIndexType();
|
||||
}
|
||||
void updatePreviousWriteSize(uint32_t size, uint32_t storedPackets){
|
||||
typename IndexedRingMemoryArray<T>::Iterator it = getPreviousBlock(currentWriteBlock);
|
||||
it->addSize(size);
|
||||
it->addStoredPackets(storedPackets);
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Checks if the block has enough space for sizeToWrite
|
||||
* @param sizeToWrite The data to be written in the Block
|
||||
* @return Returns true if size to write is smaller the remaining size of the block
|
||||
*/
|
||||
bool hasCurrentWriteBlockEnoughSpace(uint32_t sizeToWrite){
|
||||
typename IndexedRingMemoryArray<T>::Iterator next = getNextWrite();
|
||||
uint32_t addressOfNextBlock = next->getBlockStartAddress();
|
||||
uint32_t availableSize = ((addressOfNextBlock+totalSize) - (getAddressOfCurrentWriteBlock()+getSizeOfCurrentWriteBlock()))%totalSize;
|
||||
return (sizeToWrite < availableSize);
|
||||
}
|
||||
|
||||
/**
|
||||
* Checks if the store is full if overwrite old is false
|
||||
* @return Returns true if it is writeable and false if not
|
||||
*/
|
||||
bool isNextBlockWritable(){
|
||||
//First check if this is the end of the list
|
||||
typename IndexedRingMemoryArray<T>::Iterator next;
|
||||
next = getNextWrite();
|
||||
if((next->getSize()!=0) && (!overwriteOld)){
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* Updates current write Block Index Type
|
||||
* @param infoOfNewBlock
|
||||
*/
|
||||
void updateCurrentBlock(T* infoOfNewBlock){
|
||||
currentWriteBlock->setIndexType(infoOfNewBlock);
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Succeed to next block, returns FAILED if overwrite is false and the store is full
|
||||
* @return
|
||||
*/
|
||||
ReturnValue_t writeNext(){
|
||||
//Check Next Block
|
||||
if(!isNextBlockWritable()){
|
||||
//The Index is full and does not overwrite old
|
||||
return HasReturnvaluesIF::RETURN_FAILED;
|
||||
}
|
||||
//Next block can be written, update Metadata
|
||||
currentWriteBlock = getNextWrite();
|
||||
currentWriteBlock->setSize(0);
|
||||
currentWriteBlock->setStoredPackets(0);
|
||||
return HasReturnvaluesIF::RETURN_OK;
|
||||
}
|
||||
|
||||
/**
|
||||
* Serializes the Index and calculates the CRC.
|
||||
* Parameters according to HasSerializeIF
|
||||
* @param buffer
|
||||
* @param size
|
||||
* @param maxSize
|
||||
* @param streamEndianness
|
||||
* @return
|
||||
*/
|
||||
ReturnValue_t serialize(uint8_t** buffer, size_t* size,
|
||||
size_t maxSize, Endianness streamEndianness) const{
|
||||
uint8_t* crcBuffer = *buffer;
|
||||
uint32_t oldSize = *size;
|
||||
if(additionalInfo!=NULL){
|
||||
additionalInfo->serialize(buffer,size,maxSize,streamEndianness);
|
||||
}
|
||||
ReturnValue_t result = currentWriteBlock->serialize(buffer,size,maxSize,streamEndianness);
|
||||
if(result != HasReturnvaluesIF::RETURN_OK){
|
||||
return result;
|
||||
}
|
||||
result = SerializeAdapter::serialize(&this->size,buffer,size,maxSize,streamEndianness);
|
||||
if(result != HasReturnvaluesIF::RETURN_OK){
|
||||
return result;
|
||||
}
|
||||
|
||||
uint32_t i = 0;
|
||||
while ((result == HasReturnvaluesIF::RETURN_OK) && (i < this->size)) {
|
||||
result = SerializeAdapter::serialize(&this->entries[i], buffer, size,
|
||||
maxSize, streamEndianness);
|
||||
++i;
|
||||
}
|
||||
if(result != HasReturnvaluesIF::RETURN_OK){
|
||||
return result;
|
||||
}
|
||||
uint16_t crc = Calculate_CRC(crcBuffer,(*size-oldSize));
|
||||
result = SerializeAdapter::serialize(&crc,buffer,size,maxSize,streamEndianness);
|
||||
return result;
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Get the serialized Size of the index
|
||||
* @return The serialized size of the index
|
||||
*/
|
||||
size_t getSerializedSize() const {
|
||||
|
||||
uint32_t size = 0;
|
||||
if(additionalInfo!=NULL){
|
||||
size += additionalInfo->getSerializedSize();
|
||||
}
|
||||
size += currentWriteBlock->getSerializedSize();
|
||||
size += SerializeAdapter::getSerializedSize(&this->size);
|
||||
size += (this->entries[0].getSerializedSize()) * this->size;
|
||||
uint16_t crc = 0;
|
||||
size += SerializeAdapter::getSerializedSize(&crc);
|
||||
return size;
|
||||
}
|
||||
/**
|
||||
   * Deserializes the indexed ring from a buffer, including the current write iterator.
   * The CRC has to be checked beforehand!
   * @param buffer
   * @param size
   * @param streamEndianness
   * @return
   */
  ReturnValue_t deSerialize(const uint8_t** buffer, size_t* size,
      Endianness streamEndianness) {
    ReturnValue_t result = HasReturnvaluesIF::RETURN_OK;
    if (additionalInfo != NULL) {
      result = additionalInfo->deSerialize(buffer, size, streamEndianness);
    }
    if (result != HasReturnvaluesIF::RETURN_OK) {
      return result;
    }

    Index<T> tempIndex;
    result = tempIndex.deSerialize(buffer, size, streamEndianness);
    if (result != HasReturnvaluesIF::RETURN_OK) {
      return result;
    }
    uint32_t tempSize = 0;
    result = SerializeAdapter::deSerialize(&tempSize, buffer, size, streamEndianness);
    if (result != HasReturnvaluesIF::RETURN_OK) {
      return result;
    }
    if (this->size != tempSize) {
      return HasReturnvaluesIF::RETURN_FAILED;
    }
    uint32_t i = 0;
    while ((result == HasReturnvaluesIF::RETURN_OK) && (i < this->size)) {
      result = SerializeAdapter::deSerialize(&this->entries[i], buffer, size,
          streamEndianness);
      ++i;
    }
    if (result != HasReturnvaluesIF::RETURN_OK) {
      return result;
    }
    typename IndexedRingMemoryArray<T>::Iterator cmp(&tempIndex);
    for (typename IndexedRingMemoryArray<T>::Iterator it = this->begin(); it != this->end(); ++it) {
      if (*(cmp.value) == *(it.value)) {
        currentWriteBlock = it;
        return HasReturnvaluesIF::RETURN_OK;
      }
    }
    // Reached if the current write block iterator was not found.
    return HasReturnvaluesIF::RETURN_FAILED;
  }

  uint32_t getIndexAddress() const {
    return indexAddress;
  }

  /*
   * Statistics
   */
  uint32_t getStoredPackets() const {
    uint32_t size = 0;
    for (typename IndexedRingMemoryArray<T>::Iterator it = this->begin(); it != this->end(); ++it) {
      size += it->getStoredPackets();
    }
    return size;
  }

  uint32_t getTotalSize() const {
    return totalSize;
  }

  uint32_t getCurrentSize() const {
    uint32_t size = 0;
    for (typename IndexedRingMemoryArray<T>::Iterator it = this->begin(); it != this->end(); ++it) {
      size += it->getSize();
    }
    return size;
  }

  bool isEmpty() const {
    return getCurrentSize() == 0;
  }

  double getPercentageFilled() const {
    uint32_t filledSize = 0;
    for (typename IndexedRingMemoryArray<T>::Iterator it = this->begin(); it != this->end(); ++it) {
      filledSize += it->getSize();
    }
    return (double) filledSize / (double) this->totalSize;
  }

  typename IndexedRingMemoryArray<T>::Iterator getCurrentWriteBlock() const {
    return currentWriteBlock;
  }

  /**
   * Get the next block after the currentWriteBlock.
   * Returns the first block if currentWriteBlock is the last one.
   * @return Iterator pointing to the next block after currentWriteBlock
   */
  typename IndexedRingMemoryArray<T>::Iterator getNextWrite() const {
    typename IndexedRingMemoryArray<T>::Iterator next(currentWriteBlock);
    if ((this->size != 0) && (currentWriteBlock.value == this->back())) {
      next = this->begin();
    } else {
      ++next;
    }
    return next;
  }

  /**
   * Get the block in front of the given iterator.
   * Returns the last block if the iterator points to the first block.
   * @param it Iterator whose previous block is requested
   * @return Iterator pointing to the block before it
   */
  typename IndexedRingMemoryArray<T>::Iterator getPreviousBlock(
      typename IndexedRingMemoryArray<T>::Iterator it) {
    if (this->begin() == it) {
      typename IndexedRingMemoryArray<T>::Iterator next((this->back()));
      return next;
    }
    typename IndexedRingMemoryArray<T>::Iterator next(it);
    --next;
    return next;
  }

 private:
  // The total size used by the blocks (without the index)
  uint32_t totalSize;

  // The address of the index
  const uint32_t indexAddress;

  // The iterators for writing and reading
  typename IndexedRingMemoryArray<T>::Iterator currentWriteBlock;
  typename IndexedRingMemoryArray<T>::Iterator currentReadBlock;

  // How much of the current read block has already been read
  uint32_t currentReadSize;

  // Cached size of the current read block
  uint32_t currentReadBlockSizeCached;

  // Last block of the current write (should be the write block)
  typename IndexedRingMemoryArray<T>::Iterator lastBlockToRead;
  // Current size of the last block to read
  uint32_t lastBlockToReadSize;

  // Additional info to be serialized with the index
  SerializeIF* additionalInfo;

  // Does it overwrite old blocks?
  const bool overwriteOld;
};

#endif /* FRAMEWORK_CONTAINER_INDEXEDRINGMEMORY_H_ */
container/PlacementFactory.h (new file, 71 lines)
@@ -0,0 +1,71 @@
#ifndef FRAMEWORK_CONTAINER_PLACEMENTFACTORY_H_
#define FRAMEWORK_CONTAINER_PLACEMENTFACTORY_H_

#include "../storagemanager/StorageManagerIF.h"
#include <utility>

/**
 * The PlacementFactory is used to create objects at runtime in a specific pool.
 * In general, this should be avoided and it should only be used if you know what you are doing.
 * You are not allowed to use this container with a type that allocates memory internally,
 * like ArrayList.
 *
 * Also, you have to check the pointer returned by generate against nullptr!
 *
 * A backend of type StorageManagerIF must be given as a place to store the new objects.
 * Therefore, thread safety is only provided by your StorageManagerIF implementation.
 *
 * Objects must be destroyed by the user with destroy()! Otherwise the pool will not be cleared.
 *
 * The concept is based on the placement new operator.
 *
 * @warning Do not use with any type that allocates memory internally!
 * @ingroup container
 */
class PlacementFactory {
 public:
  PlacementFactory(StorageManagerIF* backend) : dataBackend(backend) {}

  /**
   * Generates an object of type T in the backend storage.
   *
   * @warning Do not use with any type that allocates memory internally!
   *
   * @tparam T Type of object
   * @param args Constructor arguments to be passed
   * @return A pointer to the new object or a nullptr in case of failure
   */
  template<typename T, typename... Args>
  T* generate(Args&&... args) {
    store_address_t tempId;
    uint8_t* pData = nullptr;
    ReturnValue_t result = dataBackend->getFreeElement(&tempId, sizeof(T), &pData);
    if (result != HasReturnvaluesIF::RETURN_OK) {
      return nullptr;
    }
    T* temp = new (pData) T(std::forward<Args>(args)...);
    return temp;
  }

  /**
   * Destroys an object allocated with generate and frees the space in the backend.
   * This must be called by the user.
   *
   * @param thisElement Element to be destroyed
   * @return RETURN_OK if the element was destroyed, different errors on failure
   */
  template<typename T>
  ReturnValue_t destroy(T* thisElement) {
    if (thisElement == nullptr) {
      return HasReturnvaluesIF::RETURN_FAILED;
    }
    // The destructor needs to be called first, in case something was allocated by the
    // object (it should not do that, however).
    thisElement->~T();
    uint8_t* pointer = (uint8_t*) (thisElement);
    return dataBackend->deleteData(pointer, sizeof(T));
  }

 private:
  StorageManagerIF* dataBackend;
};

#endif /* FRAMEWORK_CONTAINER_PLACEMENTFACTORY_H_ */
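// Usage sketch for PlacementFactory (not part of the diff above). It assumes some
// StorageManagerIF implementation is already available; the DeviceCommand type and
// its constructor arguments are hypothetical and only illustrate generate()/destroy().
#include "PlacementFactory.h"

struct DeviceCommand {
  DeviceCommand(uint32_t id, uint8_t flags) : id(id), flags(flags) {}
  uint32_t id;
  uint8_t flags;
};

ReturnValue_t placementFactoryDemo(StorageManagerIF* somePool) {
  PlacementFactory factory(somePool);
  // The object is constructed in place inside the pool memory.
  DeviceCommand* cmd = factory.generate<DeviceCommand>(0x42, 0x01);
  if (cmd == nullptr) {
    // Pool full or element too large for the pool configuration.
    return HasReturnvaluesIF::RETURN_FAILED;
  }
  // ... use cmd ...
  // The user is responsible for destroying the object again to free the pool slot.
  return factory.destroy(cmd);
}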
container/RingBufferBase.h (new file, 113 lines)
@@ -0,0 +1,113 @@
#ifndef FSFW_CONTAINER_RINGBUFFERBASE_H_
#define FSFW_CONTAINER_RINGBUFFERBASE_H_

#include "../returnvalues/HasReturnvaluesIF.h"
#include <cstddef>

template<uint8_t N_READ_PTRS = 1>
class RingBufferBase {
 public:
  RingBufferBase(size_t startAddress, const size_t size, bool overwriteOld) :
      start(startAddress), write(startAddress), size(size),
      overwriteOld(overwriteOld) {
    for (uint8_t count = 0; count < N_READ_PTRS; count++) {
      read[count] = startAddress;
    }
  }

  virtual ~RingBufferBase() {}

  bool isFull(uint8_t n = 0) {
    return (availableWriteSpace(n) == 0);
  }
  bool isEmpty(uint8_t n = 0) {
    return (getAvailableReadData(n) == 0);
  }

  size_t getAvailableReadData(uint8_t n = 0) const {
    return ((write + size) - read[n]) % size;
  }
  size_t availableWriteSpace(uint8_t n = 0) const {
    // One less to avoid ambiguous full/empty problem.
    return (((read[n] + size) - write - 1) % size);
  }

  bool overwritesOld() const {
    return overwriteOld;
  }

  size_t getMaxSize() const {
    return size - 1;
  }

  void clear() {
    write = start;
    for (uint8_t count = 0; count < N_READ_PTRS; count++) {
      read[count] = start;
    }
  }

  size_t writeTillWrap() {
    return (start + size) - write;
  }

  size_t readTillWrap(uint8_t n = 0) {
    return (start + size) - read[n];
  }

  size_t getStart() const {
    return start;
  }

 protected:
  const size_t start;
  size_t write;
  size_t read[N_READ_PTRS];
  const size_t size;
  const bool overwriteOld;

  void incrementWrite(uint32_t amount) {
    write = ((write + amount - start) % size) + start;
  }
  void incrementRead(uint32_t amount, uint8_t n = 0) {
    read[n] = ((read[n] + amount - start) % size) + start;
  }

  ReturnValue_t readData(uint32_t amount, uint8_t n = 0) {
    if (getAvailableReadData(n) >= amount) {
      incrementRead(amount, n);
      return HasReturnvaluesIF::RETURN_OK;
    } else {
      return HasReturnvaluesIF::RETURN_FAILED;
    }
  }

  ReturnValue_t writeData(uint32_t amount) {
    if (availableWriteSpace() >= amount or overwriteOld) {
      incrementWrite(amount);
      return HasReturnvaluesIF::RETURN_OK;
    } else {
      return HasReturnvaluesIF::RETURN_FAILED;
    }
  }

  size_t getRead(uint8_t n = 0) const {
    return read[n];
  }

  void setRead(uint32_t read, uint8_t n = 0) {
    if (read >= start && read < (start + size)) {
      this->read[n] = read;
    }
  }

  uint32_t getWrite() const {
    return write;
  }

  void setWrite(uint32_t write) {
    this->write = write;
  }
};

#endif /* FSFW_CONTAINER_RINGBUFFERBASE_H_ */
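// Minimal sketch (not part of the diff) illustrating the wrap-around index
// arithmetic of RingBufferBase. The class only manages indices; derived classes
// such as SimpleRingBuffer attach actual memory. TestRingIndices is a hypothetical
// helper that calls the protected bookkeeping functions for demonstration.
#include "RingBufferBase.h"

class TestRingIndices : public RingBufferBase<1> {
 public:
  explicit TestRingIndices(size_t size) : RingBufferBase<1>(0, size, false) {}

  void demo() {
    // One slot is always kept free to distinguish "full" from "empty", so a
    // ring of size 16 reports 15 writable bytes when empty.
    size_t freeBytes = availableWriteSpace();  // 15 for size 16
    writeData(10);                             // advances the write index to 10
    readData(4);                               // advances the read index to 4
    // Both indices wrap via ((index + amount - start) % size) + start.
    size_t pending = getAvailableReadData();   // 6 bytes left to read
    (void) freeBytes;
    (void) pending;
  }
};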
container/SharedRingBuffer.cpp (new file, 55 lines)
@@ -0,0 +1,55 @@
#include "SharedRingBuffer.h"
#include "../ipc/MutexFactory.h"
#include "../ipc/MutexHelper.h"

SharedRingBuffer::SharedRingBuffer(object_id_t objectId, const size_t size,
    bool overwriteOld, size_t maxExcessBytes):
    SystemObject(objectId), SimpleRingBuffer(size, overwriteOld, maxExcessBytes) {
  mutex = MutexFactory::instance()->createMutex();
}

SharedRingBuffer::SharedRingBuffer(object_id_t objectId, uint8_t *buffer,
    const size_t size, bool overwriteOld, size_t maxExcessBytes):
    SystemObject(objectId), SimpleRingBuffer(buffer, size, overwriteOld, maxExcessBytes) {
  mutex = MutexFactory::instance()->createMutex();
}

void SharedRingBuffer::setToUseReceiveSizeFIFO(size_t fifoDepth) {
  this->fifoDepth = fifoDepth;
}

ReturnValue_t SharedRingBuffer::lockRingBufferMutex(
    MutexIF::TimeoutType timeoutType, dur_millis_t timeout) {
  return mutex->lockMutex(timeoutType, timeout);
}

ReturnValue_t SharedRingBuffer::unlockRingBufferMutex() {
  return mutex->unlockMutex();
}

MutexIF* SharedRingBuffer::getMutexHandle() const {
  return mutex;
}

ReturnValue_t SharedRingBuffer::initialize() {
  if (fifoDepth > 0) {
    receiveSizesFIFO = new DynamicFIFO<size_t>(fifoDepth);
  }
  return SystemObject::initialize();
}

DynamicFIFO<size_t>* SharedRingBuffer::getReceiveSizesFIFO() {
  if (receiveSizesFIFO == nullptr) {
    // Configuration error.
    sif::warning << "SharedRingBuffer::getReceiveSizesFIFO: Ring buffer"
        << " was not configured to have sizes FIFO, returning nullptr!"
        << std::endl;
  }
  return receiveSizesFIFO;
}
container/SharedRingBuffer.h (new file, 92 lines)
@@ -0,0 +1,92 @@
#ifndef FSFW_CONTAINER_SHAREDRINGBUFFER_H_
#define FSFW_CONTAINER_SHAREDRINGBUFFER_H_

#include "SimpleRingBuffer.h"
#include "DynamicFIFO.h"
#include "../ipc/MutexIF.h"
#include "../objectmanager/SystemObject.h"
#include "../timemanager/Clock.h"

/**
 * @brief   Ring buffer which can be shared among multiple objects
 * @details
 * This class offers a mutex to perform thread-safe operations on the ring
 * buffer. It is still up to the developer to actually perform the lock
 * and unlock operations.
 */
class SharedRingBuffer: public SystemObject,
    public SimpleRingBuffer {
 public:
  /**
   * This constructor allocates a new internal buffer with the supplied size.
   * @param size
   * @param overwriteOld
   * If the ring buffer overflows during a write operation, the oldest data
   * will be overwritten.
   */
  SharedRingBuffer(object_id_t objectId, const size_t size,
      bool overwriteOld, size_t maxExcessBytes);

  /**
   * @brief   This function can be used to add an optional FIFO to the class.
   * @details
   * This FIFO will be allocated in the initialize function (and will
   * have a fixed maximum size after that). It can be used to store
   * values like packet sizes, for example for a shared ring buffer
   * used by producer/consumer tasks.
   */
  void setToUseReceiveSizeFIFO(size_t fifoDepth);

  /**
   * This constructor takes an external buffer with the specified size.
   * @param buffer
   * @param size
   * @param overwriteOld
   * If the ring buffer overflows during a write operation, the oldest data
   * will be overwritten.
   */
  SharedRingBuffer(object_id_t objectId, uint8_t* buffer, const size_t size,
      bool overwriteOld, size_t maxExcessBytes);

  /**
   * Unless a read-only constant value is read, all operations on the
   * shared ring buffer should be protected by calling this function.
   * @param timeoutType
   * @param timeout
   * @return
   */
  virtual ReturnValue_t lockRingBufferMutex(MutexIF::TimeoutType timeoutType,
      dur_millis_t timeout);
  /**
   * Any locked mutex also has to be unlocked, otherwise access to the
   * shared ring buffer will be blocked.
   * @return
   */
  virtual ReturnValue_t unlockRingBufferMutex();

  /**
   * The mutex handle can be accessed directly, for example to perform
   * the lock with the #MutexHelper for a RAII compliant lock operation.
   * @return
   */
  MutexIF* getMutexHandle() const;

  ReturnValue_t initialize() override;

  /**
   * If the shared ring buffer was configured to have a sizes FIFO, a handle
   * to that FIFO can be retrieved with this function.
   * Do not forget to protect access with a lock if required!
   * @return
   */
  DynamicFIFO<size_t>* getReceiveSizesFIFO();

 private:
  MutexIF* mutex = nullptr;

  size_t fifoDepth = 0;
  DynamicFIFO<size_t>* receiveSizesFIFO = nullptr;
};

#endif /* FSFW_CONTAINER_SHAREDRINGBUFFER_H_ */
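// Hedged usage sketch (not from the diff): a producer writes a packet into the
// shared ring buffer while holding its mutex. The timeout value, the packet and
// the use of the optional sizes FIFO are illustrative; DynamicFIFO::insert() and
// MutexIF::TimeoutType::WAITING are assumed from other parts of the framework.
#include "SharedRingBuffer.h"

ReturnValue_t producerWrite(SharedRingBuffer& ringBuffer, const uint8_t* packet,
    size_t packetLen) {
  // All mutating operations should be protected by the internal mutex.
  ReturnValue_t result = ringBuffer.lockRingBufferMutex(
      MutexIF::TimeoutType::WAITING, 20);
  if (result != HasReturnvaluesIF::RETURN_OK) {
    return result;
  }
  result = ringBuffer.writeData(packet, packetLen);
  if (result == HasReturnvaluesIF::RETURN_OK) {
    // The optional sizes FIFO lets a consumer know how large each packet is.
    DynamicFIFO<size_t>* sizesFifo = ringBuffer.getReceiveSizesFIFO();
    if (sizesFifo != nullptr) {
      sizesFifo->insert(packetLen);
    }
  }
  ringBuffer.unlockRingBufferMutex();
  return result;
}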
container/SimpleRingBuffer.cpp (new file, 131 lines)
@@ -0,0 +1,131 @@
#include "SimpleRingBuffer.h"
#include <cstring>

SimpleRingBuffer::SimpleRingBuffer(const size_t size, bool overwriteOld,
    size_t maxExcessBytes) :
    RingBufferBase<>(0, size, overwriteOld),
    maxExcessBytes(maxExcessBytes) {
  if (maxExcessBytes > size) {
    this->maxExcessBytes = size;
  }
  else {
    this->maxExcessBytes = maxExcessBytes;
  }
  buffer = new uint8_t[size + maxExcessBytes];
}

SimpleRingBuffer::SimpleRingBuffer(uint8_t *buffer, const size_t size,
    bool overwriteOld, size_t maxExcessBytes):
    RingBufferBase<>(0, size, overwriteOld), buffer(buffer) {
  if (maxExcessBytes > size) {
    this->maxExcessBytes = size;
  }
  else {
    this->maxExcessBytes = maxExcessBytes;
  }
}

SimpleRingBuffer::~SimpleRingBuffer() {
  delete[] buffer;
}

ReturnValue_t SimpleRingBuffer::getFreeElement(uint8_t **writePointer,
    size_t amount) {
  if (availableWriteSpace() >= amount or overwriteOld) {
    size_t amountTillWrap = writeTillWrap();
    if (amountTillWrap < amount) {
      if ((amount - amountTillWrap + excessBytes) > maxExcessBytes) {
        return HasReturnvaluesIF::RETURN_FAILED;
      }
      excessBytes = amount - amountTillWrap;
    }
    *writePointer = &buffer[write];
    return HasReturnvaluesIF::RETURN_OK;
  }
  else {
    return HasReturnvaluesIF::RETURN_FAILED;
  }
}

void SimpleRingBuffer::confirmBytesWritten(size_t amount) {
  if (getExcessBytes() > 0) {
    moveExcessBytesToStart();
  }
  incrementWrite(amount);
}

ReturnValue_t SimpleRingBuffer::writeData(const uint8_t* data,
    size_t amount) {
  if (availableWriteSpace() >= amount or overwriteOld) {
    size_t amountTillWrap = writeTillWrap();
    if (amountTillWrap >= amount) {
      // Remaining size in the buffer is sufficient to fit the full amount.
      memcpy(&buffer[write], data, amount);
    }
    else {
      memcpy(&buffer[write], data, amountTillWrap);
      memcpy(buffer, data + amountTillWrap, amount - amountTillWrap);
    }
    incrementWrite(amount);
    return HasReturnvaluesIF::RETURN_OK;
  } else {
    return HasReturnvaluesIF::RETURN_FAILED;
  }
}

ReturnValue_t SimpleRingBuffer::readData(uint8_t* data, size_t amount,
    bool incrementReadPtr, bool readRemaining, size_t* trueAmount) {
  size_t availableData = getAvailableReadData(READ_PTR);
  size_t amountTillWrap = readTillWrap(READ_PTR);
  if (availableData < amount) {
    if (readRemaining) {
      // Less data available than the amount specified, read the remainder.
      amount = availableData;
    } else {
      return HasReturnvaluesIF::RETURN_FAILED;
    }
  }
  if (trueAmount != nullptr) {
    *trueAmount = amount;
  }
  if (amountTillWrap >= amount) {
    memcpy(data, &buffer[read[READ_PTR]], amount);
  } else {
    memcpy(data, &buffer[read[READ_PTR]], amountTillWrap);
    memcpy(data + amountTillWrap, buffer, amount - amountTillWrap);
  }

  if (incrementReadPtr) {
    deleteData(amount, readRemaining);
  }
  return HasReturnvaluesIF::RETURN_OK;
}

size_t SimpleRingBuffer::getExcessBytes() const {
  return excessBytes;
}

void SimpleRingBuffer::moveExcessBytesToStart() {
  if (excessBytes > 0) {
    std::memcpy(buffer, &buffer[size], excessBytes);
    excessBytes = 0;
  }
}

ReturnValue_t SimpleRingBuffer::deleteData(size_t amount,
    bool deleteRemaining, size_t* trueAmount) {
  size_t availableData = getAvailableReadData(READ_PTR);
  if (availableData < amount) {
    if (deleteRemaining) {
      amount = availableData;
    } else {
      return HasReturnvaluesIF::RETURN_FAILED;
    }
  }
  if (trueAmount != nullptr) {
    *trueAmount = amount;
  }
  incrementRead(amount, READ_PTR);
  return HasReturnvaluesIF::RETURN_OK;
}
container/SimpleRingBuffer.h (new file, 129 lines)
@@ -0,0 +1,129 @@
#ifndef FSFW_CONTAINER_SIMPLERINGBUFFER_H_
#define FSFW_CONTAINER_SIMPLERINGBUFFER_H_

#include "RingBufferBase.h"
#include <cstddef>

/**
 * @brief   Circular buffer implementation, useful for buffering
 *          into data streams.
 * @details
 * Note that deleteData() has to be called to increment the read pointer.
 * The first constructor allocates the internal buffer dynamically.
 * @ingroup containers
 */
class SimpleRingBuffer: public RingBufferBase<> {
 public:
  /**
   * This constructor allocates a new internal buffer with the supplied size.
   *
   * @param size
   * @param overwriteOld If the ring buffer overflows during a write
   * operation, the oldest data will be overwritten.
   * @param maxExcessBytes These additional bytes will be allocated in addition
   * to the specified size to accommodate contiguous write operations
   * with getFreeElement.
   */
  SimpleRingBuffer(const size_t size, bool overwriteOld,
      size_t maxExcessBytes = 0);
  /**
   * This constructor takes an external buffer with the specified size.
   * @param buffer
   * @param size
   * @param overwriteOld
   * If the ring buffer overflows during a write operation, the oldest data
   * will be overwritten.
   * @param maxExcessBytes
   * If the buffer can accommodate additional bytes for contiguous write
   * operations with getFreeElement, this is the maximum allowed additional
   * size.
   */
  SimpleRingBuffer(uint8_t* buffer, const size_t size, bool overwriteOld,
      size_t maxExcessBytes = 0);

  virtual ~SimpleRingBuffer();

  /**
   * Write to the circular buffer and increment the write pointer by amount.
   * @param data
   * @param amount
   * @return -@c RETURN_OK if the write operation was successful
   *         -@c RETURN_FAILED if there is not enough space and old data
   *          may not be overwritten
   */
  ReturnValue_t writeData(const uint8_t* data, size_t amount);

  /**
   * Returns a pointer to a free element. If the remaining buffer is
   * not large enough, the data will be written past the actual size
   * and the amount of excess bytes will be cached. This function
   * does not increment the write pointer!
   * @param writePointer Pointer to a pointer which can be used to write
   * contiguous blocks into the ring buffer
   * @param amount
   * @return
   */
  ReturnValue_t getFreeElement(uint8_t** writePointer, size_t amount);

  /**
   * This increments the write pointer and also copies the excess bytes
   * to the beginning. It should be called after the write operation
   * prepared with getFreeElement() was performed.
   * @return
   */
  void confirmBytesWritten(size_t amount);

  virtual size_t getExcessBytes() const;
  /**
   * Helper function which moves any excess bytes to the start
   * of the ring buffer.
   * @return
   */
  virtual void moveExcessBytesToStart();

  /**
   * Read from the circular buffer at the read pointer.
   * @param data
   * @param amount
   * @param incrementReadPtr
   * If this is set to true, the read pointer will be incremented.
   * If readRemaining is set to true, the read pointer will be incremented
   * accordingly.
   * @param readRemaining
   * If this is set to true, the data will be read even if the amount
   * specified exceeds the available read data.
   * @param trueAmount [out]
   * If readRemaining was set to true, the true amount read will be assigned
   * to the passed value.
   * @return
   * - @c RETURN_OK if data was read successfully
   * - @c RETURN_FAILED if not enough data was available and readRemaining
   *   was set to false.
   */
  ReturnValue_t readData(uint8_t* data, size_t amount,
      bool incrementReadPtr = false, bool readRemaining = false,
      size_t* trueAmount = nullptr);

  /**
   * Delete data by incrementing the read pointer.
   * @param amount
   * @param deleteRemaining
   * If the amount specified is larger than the remaining size to read and this
   * is set to true, the remaining amount will be deleted as well.
   * @param trueAmount [out]
   * If deleteRemaining was set to true, the amount deleted will be assigned
   * to the passed value.
   * @return
   */
  ReturnValue_t deleteData(size_t amount, bool deleteRemaining = false,
      size_t* trueAmount = nullptr);

 private:
  static const uint8_t READ_PTR = 0;
  uint8_t* buffer = nullptr;
  size_t maxExcessBytes;
  size_t excessBytes = 0;
};

#endif /* FSFW_CONTAINER_SIMPLERINGBUFFER_H_ */
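// Minimal sketch (not part of the diff) of the two SimpleRingBuffer write paths:
// a plain copy with writeData(), and a zero-copy write via getFreeElement()
// followed by confirmBytesWritten(). Sizes and data are placeholders.
#include "SimpleRingBuffer.h"

void ringBufferDemo() {
  // 64 byte ring, no overwrite, up to 16 excess bytes for contiguous writes.
  SimpleRingBuffer ringBuffer(64, false, 16);

  uint8_t packet[12] = {};
  ringBuffer.writeData(packet, sizeof(packet));

  // Zero-copy variant: request a contiguous region, fill it, then confirm.
  uint8_t* writePtr = nullptr;
  if (ringBuffer.getFreeElement(&writePtr, 8) == HasReturnvaluesIF::RETURN_OK) {
    // ... fill writePtr[0..7], e.g. from a driver read ...
    ringBuffer.confirmBytesWritten(8);
  }

  // Reading does not free the data; deleteData() advances the read pointer.
  uint8_t readBuf[20] = {};
  size_t trueAmount = 0;
  ringBuffer.readData(readBuf, sizeof(readBuf), false, true, &trueAmount);
  ringBuffer.deleteData(trueAmount);
}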
container/SinglyLinkedList.h (new file, 154 lines)
@@ -0,0 +1,154 @@
#ifndef FRAMEWORK_CONTAINER_SINGLYLINKEDLIST_H_
#define FRAMEWORK_CONTAINER_SINGLYLINKEDLIST_H_

#include <cstddef>
#include <cstdint>

/**
 * @brief   Linked list data structure where
 *          each entry has a pointer to the next entry (singly linked)
 * @ingroup container
 */
template<typename T>
class LinkedElement {
 public:
  T *value;
  class Iterator {
   public:
    LinkedElement<T> *value = nullptr;
    Iterator() {}

    Iterator(LinkedElement<T> *element) :
        value(element) {
    }

    Iterator& operator++() {
      value = value->getNext();
      return *this;
    }

    Iterator operator++(int) {
      Iterator tmp(*this);
      operator++();
      return tmp;
    }

    bool operator==(Iterator other) {
      return value == other.value;
    }

    bool operator!=(Iterator other) {
      return !(*this == other);
    }
    T *operator->() {
      return value->value;
    }
  };

  LinkedElement(T* setElement, LinkedElement<T>* setNext = nullptr):
      value(setElement), next(setNext) {}

  virtual ~LinkedElement(){}

  virtual LinkedElement* getNext() const {
    return next;
  }

  virtual void setNext(LinkedElement* next) {
    this->next = next;
  }

  virtual void setEnd() {
    this->next = nullptr;
  }

  LinkedElement* begin() {
    return this;
  }
  LinkedElement* end() {
    return nullptr;
  }
 private:
  LinkedElement *next;
};

template<typename T>
class SinglyLinkedList {
 public:
  using ElementIterator = typename LinkedElement<T>::Iterator;

  SinglyLinkedList() {}

  SinglyLinkedList(ElementIterator start) :
      start(start.value) {}

  SinglyLinkedList(LinkedElement<T>* startElement) :
      start(startElement) {}

  ElementIterator begin() const {
    return ElementIterator::Iterator(start);
  }

  /** Returns an iterator to nullptr */
  ElementIterator end() const {
    return ElementIterator::Iterator();
  }

  /**
   * Returns the last element in the singly linked list.
   * @return
   */
  ElementIterator back() const {
    LinkedElement<T> *element = start;
    while (element->getNext() != nullptr) {
      element = element->getNext();
    }
    return ElementIterator::Iterator(element);
  }

  size_t getSize() const {
    size_t size = 0;
    LinkedElement<T> *element = start;
    while (element != nullptr) {
      size++;
      element = element->getNext();
    }
    return size;
  }
  void setStart(LinkedElement<T>* firstElement) {
    start = firstElement;
  }

  void setNext(LinkedElement<T>* currentElement,
      LinkedElement<T>* nextElement) {
    currentElement->setNext(nextElement);
  }

  void setLast(LinkedElement<T>* lastElement) {
    lastElement->setEnd();
  }

  void insertElement(LinkedElement<T>* element, size_t position) {
    LinkedElement<T> *currentElement = start;
    for (size_t count = 0; count < position; count++) {
      if (currentElement == nullptr) {
        return;
      }
      currentElement = currentElement->getNext();
    }
    LinkedElement<T>* elementAfterCurrent = currentElement->next;
    currentElement->setNext(element);
    if (elementAfterCurrent != nullptr) {
      element->setNext(elementAfterCurrent);
    }
  }

  void insertBack(LinkedElement<T>* lastElement) {
    back().value->setNext(lastElement);
  }

 protected:
  LinkedElement<T> *start = nullptr;
};

#endif /* SINGLYLINKEDLIST_H_ */
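// Small sketch (not part of the diff) showing how LinkedElement and
// SinglyLinkedList are wired together. The elements do not own the payload;
// both payloads and nodes live in user-provided storage (here stack variables).
#include "SinglyLinkedList.h"

void linkedListDemo() {
  uint32_t first = 1;
  uint32_t second = 2;
  LinkedElement<uint32_t> firstElement(&first);
  LinkedElement<uint32_t> secondElement(&second);

  SinglyLinkedList<uint32_t> list(&firstElement);
  firstElement.setNext(&secondElement);

  uint32_t sum = 0;
  for (auto it = list.begin(); it != list.end(); ++it) {
    // it.value is the LinkedElement node, its value member is the payload pointer.
    sum += *(it.value->value);
  }
  // sum == 3, list.getSize() == 2
}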
@@ -11,4 +11,5 @@
 * as an Adapter to swap the endianness.
 */


#endif /* GROUP_H_ */
@@ -1,9 +0,0 @@
target_include_directories(${LIB_FSFW_NAME} PRIVATE
  ${CMAKE_CURRENT_SOURCE_DIR}
)

target_include_directories(${LIB_FSFW_NAME} INTERFACE
  ${CMAKE_CURRENT_SOURCE_DIR}
)

add_subdirectory(fsfw_contrib)
@@ -1,16 +0,0 @@
if(FSFW_ADD_SGP4_PROPAGATOR)
  target_sources(${LIB_FSFW_NAME} PRIVATE
    sgp4/sgp4unit.cpp
  )
  target_include_directories(${LIB_FSFW_NAME} PRIVATE
    ${CMAKE_CURRENT_SOURCE_DIR}/sgp4
  )
  target_include_directories(${LIB_FSFW_NAME} INTERFACE
    ${CMAKE_CURRENT_SOURCE_DIR}/sgp4
  )
endif()

add_subdirectory(etl-20.39.4)
if(FSFW_BUILD_TESTS OR FSFW_ADD_CATCH2)
  add_subdirectory(Catch2-3.7.1)
endif()
@@ -1,11 +0,0 @@
build --enable_platform_specific_config

build:gcc9 --cxxopt=-std=c++2a
build:gcc11 --cxxopt=-std=c++2a
build:clang13 --cxxopt=-std=c++17
build:vs2019 --cxxopt=/std:c++17
build:vs2022 --cxxopt=/std:c++17

build:windows --config=vs2022
build:linux --config=gcc11
build:macos --cxxopt=-std=c++2b
@ -1,45 +0,0 @@
|
||||
---
|
||||
Language: Cpp
|
||||
Standard: c++14
|
||||
|
||||
# Note that we cannot use IncludeIsMainRegex functionality, because it
|
||||
# does not support includes in angle brackets (<>)
|
||||
SortIncludes: true
|
||||
IncludeBlocks: Regroup
|
||||
IncludeCategories:
|
||||
- Regex: <catch2/.*\.hpp>
|
||||
Priority: 1
|
||||
- Regex: <.*/.*\.hpp>
|
||||
Priority: 2
|
||||
- Regex: <.*>
|
||||
Priority: 3
|
||||
|
||||
AllowShortBlocksOnASingleLine: Always
|
||||
AllowShortEnumsOnASingleLine: false
|
||||
AllowShortFunctionsOnASingleLine: All
|
||||
AllowShortIfStatementsOnASingleLine: WithoutElse
|
||||
AllowShortLambdasOnASingleLine: Inline
|
||||
|
||||
AccessModifierOffset: "-4"
|
||||
AlignEscapedNewlines: Left
|
||||
AllowAllConstructorInitializersOnNextLine: "true"
|
||||
BinPackArguments: "false"
|
||||
BinPackParameters: "false"
|
||||
BreakConstructorInitializers: AfterColon
|
||||
ConstructorInitializerAllOnOneLineOrOnePerLine: "true"
|
||||
DerivePointerAlignment: "false"
|
||||
FixNamespaceComments: "true"
|
||||
IndentCaseLabels: "false"
|
||||
IndentPPDirectives: AfterHash
|
||||
IndentWidth: "4"
|
||||
NamespaceIndentation: All
|
||||
PointerAlignment: Left
|
||||
SpaceBeforeCtorInitializerColon: "false"
|
||||
SpaceInEmptyParentheses: "false"
|
||||
SpacesInParentheses: "true"
|
||||
TabWidth: "4"
|
||||
UseTab: Never
|
||||
AlwaysBreakTemplateDeclarations: Yes
|
||||
SpaceAfterTemplateKeyword: true
|
||||
SortUsingDeclarations: true
|
||||
ReflowComments: true
|
@ -1,81 +0,0 @@
|
||||
---
|
||||
# Note: Alas, `Checks` is a string, not an array.
|
||||
# Comments in the block string are not parsed and are passed in the value.
|
||||
# They must thus be delimited by ',' from either side - then they are
|
||||
# harmless. It's terrible, but it works.
|
||||
Checks: >-
|
||||
clang-diagnostic-*,
|
||||
clang-analyzer-*,
|
||||
-clang-analyzer-optin.core.EnumCastOutOfRange,
|
||||
|
||||
bugprone-*,
|
||||
-bugprone-unchecked-optional-access,
|
||||
,# This is ridiculous, as it triggers on constants,
|
||||
-bugprone-implicit-widening-of-multiplication-result,
|
||||
-bugprone-easily-swappable-parameters,
|
||||
,# Is not really useful, has false positives, triggers for no-noexcept move constructors ...,
|
||||
-bugprone-exception-escape,
|
||||
-bugprone-narrowing-conversions,
|
||||
-bugprone-chained-comparison,# RIP decomposers,
|
||||
|
||||
modernize-*,
|
||||
-modernize-avoid-c-arrays,
|
||||
-modernize-use-auto,
|
||||
-modernize-use-emplace,
|
||||
-modernize-use-nullptr,# it went crazy with three-way comparison operators,
|
||||
-modernize-use-trailing-return-type,
|
||||
-modernize-return-braced-init-list,
|
||||
-modernize-concat-nested-namespaces,
|
||||
-modernize-use-nodiscard,
|
||||
-modernize-use-default-member-init,
|
||||
-modernize-type-traits,# we need to support C++14,
|
||||
-modernize-deprecated-headers,
|
||||
,# There's a lot of these and most of them are probably not useful,
|
||||
-modernize-pass-by-value,
|
||||
|
||||
performance-*,
|
||||
-performance-enum-size,
|
||||
|
||||
portability-*,
|
||||
|
||||
readability-*,
|
||||
-readability-braces-around-statements,
|
||||
-readability-container-size-empty,
|
||||
-readability-convert-member-functions-to-static,
|
||||
-readability-else-after-return,
|
||||
-readability-function-cognitive-complexity,
|
||||
-readability-function-size,
|
||||
-readability-identifier-length,
|
||||
-readability-implicit-bool-conversion,
|
||||
-readability-isolate-declaration,
|
||||
-readability-magic-numbers,
|
||||
-readability-named-parameter,
|
||||
-readability-qualified-auto,
|
||||
-readability-redundant-access-specifiers,
|
||||
-readability-simplify-boolean-expr,
|
||||
-readability-static-definition-in-anonymous-namespace,
|
||||
-readability-uppercase-literal-suffix,
|
||||
-readability-use-anyofallof,
|
||||
-readability-avoid-return-with-void-value,
|
||||
|
||||
,# time hogs,
|
||||
-bugprone-throw-keyword-missing,
|
||||
-modernize-replace-auto-ptr,
|
||||
-readability-identifier-naming,
|
||||
|
||||
,# We cannot use this until clang-tidy supports custom unique_ptr,
|
||||
-bugprone-use-after-move,
|
||||
,# Doesn't recognize unevaluated context in CATCH_MOVE and CATCH_FORWARD,
|
||||
-bugprone-macro-repeated-side-effects,
|
||||
WarningsAsErrors: >-
|
||||
clang-analyzer-core.*,
|
||||
clang-analyzer-cplusplus.*,
|
||||
clang-analyzer-security.*,
|
||||
clang-analyzer-unix.*,
|
||||
performance-move-const-arg,
|
||||
performance-unnecessary-value-param,
|
||||
readability-duplicate-include,
|
||||
HeaderFilterRegex: '.*\.(c|cxx|cpp)$'
|
||||
FormatStyle: none
|
||||
CheckOptions: {}
|
||||
...
|
@ -1,94 +0,0 @@
|
||||
#!/usr/bin/env python
|
||||
# -*- coding: utf-8 -*-
|
||||
|
||||
import os
|
||||
import re
|
||||
from cpt.packager import ConanMultiPackager
|
||||
from cpt.ci_manager import CIManager
|
||||
from cpt.printer import Printer
|
||||
|
||||
|
||||
class BuilderSettings(object):
|
||||
@property
|
||||
def username(self):
|
||||
""" Set catchorg as package's owner
|
||||
"""
|
||||
return os.getenv("CONAN_USERNAME", "catchorg")
|
||||
|
||||
@property
|
||||
def login_username(self):
|
||||
""" Set Bintray login username
|
||||
"""
|
||||
return os.getenv("CONAN_LOGIN_USERNAME", "horenmar")
|
||||
|
||||
@property
|
||||
def upload(self):
|
||||
""" Set Catch2 repository to be used on upload.
|
||||
The upload server address could be customized by env var
|
||||
CONAN_UPLOAD. If not defined, the method will check the branch name.
|
||||
Only devel or CONAN_STABLE_BRANCH_PATTERN will be accepted.
|
||||
The devel branch will be pushed to testing channel, because it does
|
||||
not match the stable pattern. Otherwise it will upload to stable
|
||||
channel.
|
||||
"""
|
||||
return os.getenv("CONAN_UPLOAD", "https://api.bintray.com/conan/catchorg/catch2")
|
||||
|
||||
@property
|
||||
def upload_only_when_stable(self):
|
||||
""" Force to upload when running over tag branch
|
||||
"""
|
||||
return os.getenv("CONAN_UPLOAD_ONLY_WHEN_STABLE", "True").lower() in ["true", "1", "yes"]
|
||||
|
||||
@property
|
||||
def stable_branch_pattern(self):
|
||||
""" Only upload the package the branch name is like a tag
|
||||
"""
|
||||
return os.getenv("CONAN_STABLE_BRANCH_PATTERN", r"v\d+\.\d+\.\d+")
|
||||
|
||||
@property
|
||||
def reference(self):
|
||||
""" Read project version from branch create Conan reference
|
||||
"""
|
||||
return os.getenv("CONAN_REFERENCE", "catch2/{}".format(self._version))
|
||||
|
||||
@property
|
||||
def channel(self):
|
||||
""" Default Conan package channel when not stable
|
||||
"""
|
||||
return os.getenv("CONAN_CHANNEL", "testing")
|
||||
|
||||
@property
|
||||
def _version(self):
|
||||
""" Get version name from cmake file
|
||||
"""
|
||||
pattern = re.compile(r"project\(Catch2 LANGUAGES CXX VERSION (\d+\.\d+\.\d+)\)")
|
||||
version = "latest"
|
||||
with open("CMakeLists.txt") as file:
|
||||
for line in file:
|
||||
result = pattern.search(line)
|
||||
if result:
|
||||
version = result.group(1)
|
||||
return version
|
||||
|
||||
@property
|
||||
def _branch(self):
|
||||
""" Get branch name from CI manager
|
||||
"""
|
||||
printer = Printer(None)
|
||||
ci_manager = CIManager(printer)
|
||||
return ci_manager.get_branch()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
settings = BuilderSettings()
|
||||
builder = ConanMultiPackager(
|
||||
reference=settings.reference,
|
||||
channel=settings.channel,
|
||||
upload=settings.upload,
|
||||
upload_only_when_stable=False,
|
||||
stable_branch_pattern=settings.stable_branch_pattern,
|
||||
login_username=settings.login_username,
|
||||
username=settings.username,
|
||||
test_folder=os.path.join(".conan", "test_package"))
|
||||
builder.add()
|
||||
builder.run()
|
@ -1,8 +0,0 @@
|
||||
cmake_minimum_required(VERSION 3.15)
|
||||
project(PackageTest CXX)
|
||||
|
||||
find_package(Catch2 CONFIG REQUIRED)
|
||||
|
||||
add_executable(test_package test_package.cpp)
|
||||
target_link_libraries(test_package Catch2::Catch2WithMain)
|
||||
target_compile_features(test_package PRIVATE cxx_std_14)
|
@ -1,40 +0,0 @@
|
||||
#!/usr/bin/env python
|
||||
# -*- coding: utf-8 -*-
|
||||
from conan import ConanFile
|
||||
from conan.tools.cmake import CMake, cmake_layout
|
||||
from conan.tools.build import can_run
|
||||
from conan.tools.files import save, load
|
||||
import os
|
||||
|
||||
|
||||
class TestPackageConan(ConanFile):
|
||||
settings = "os", "compiler", "build_type", "arch"
|
||||
generators = "CMakeToolchain", "CMakeDeps", "VirtualRunEnv"
|
||||
test_type = "explicit"
|
||||
|
||||
def requirements(self):
|
||||
self.requires(self.tested_reference_str)
|
||||
|
||||
def layout(self):
|
||||
cmake_layout(self)
|
||||
|
||||
def generate(self):
|
||||
save(self, os.path.join(self.build_folder, "package_folder"),
|
||||
self.dependencies[self.tested_reference_str].package_folder)
|
||||
save(self, os.path.join(self.build_folder, "license"),
|
||||
self.dependencies[self.tested_reference_str].license)
|
||||
|
||||
def build(self):
|
||||
cmake = CMake(self)
|
||||
cmake.configure()
|
||||
cmake.build()
|
||||
|
||||
def test(self):
|
||||
if can_run(self):
|
||||
cmd = os.path.join(self.cpp.build.bindir, "test_package")
|
||||
self.run(cmd, env="conanrun")
|
||||
|
||||
package_folder = load(self, os.path.join(self.build_folder, "package_folder"))
|
||||
license = load(self, os.path.join(self.build_folder, "license"))
|
||||
assert os.path.isfile(os.path.join(package_folder, "licenses", "LICENSE.txt"))
|
||||
assert license == 'BSL-1.0'
|
@ -1,13 +0,0 @@
|
||||
#include <catch2/catch_test_macros.hpp>
|
||||
|
||||
int Factorial( int number ) {
|
||||
return number <= 1 ? 1 : Factorial( number - 1 ) * number;
|
||||
}
|
||||
|
||||
TEST_CASE( "Factorial Tests", "[single-file]" ) {
|
||||
REQUIRE( Factorial(0) == 1 );
|
||||
REQUIRE( Factorial(1) == 1 );
|
||||
REQUIRE( Factorial(2) == 2 );
|
||||
REQUIRE( Factorial(3) == 6 );
|
||||
REQUIRE( Factorial(10) == 3628800 );
|
||||
}
|
22
contrib/fsfw_contrib/Catch2-3.7.1/.gitattributes
vendored
22
contrib/fsfw_contrib/Catch2-3.7.1/.gitattributes
vendored
@ -1,22 +0,0 @@
|
||||
# This sets the default behaviour, overriding core.autocrlf
|
||||
* text=auto
|
||||
|
||||
# All source files should have unix line-endings in the repository,
|
||||
# but convert to native line-endings on checkout
|
||||
*.cpp text
|
||||
*.h text
|
||||
*.hpp text
|
||||
|
||||
# Windows specific files should retain windows line-endings
|
||||
*.sln text eol=crlf
|
||||
|
||||
# Keep executable scripts with LFs so they can be run after being
|
||||
# checked out on Windows
|
||||
*.py text eol=lf
|
||||
|
||||
|
||||
# Keep the single include header with LFs to make sure it is uploaded,
|
||||
# hashed etc with LF
|
||||
single_include/**/*.hpp eol=lf
|
||||
# Also keep the LICENCE file with LFs for the same reason
|
||||
LICENCE.txt eol=lf
|
@ -1,2 +0,0 @@
|
||||
github: "horenmar"
|
||||
custom: "https://www.paypal.me/horenmar"
|
@ -1,29 +0,0 @@
|
||||
---
|
||||
name: Bug report
|
||||
about: Create an issue that documents a bug
|
||||
title: ''
|
||||
labels: ''
|
||||
assignees: ''
|
||||
|
||||
---
|
||||
|
||||
**Describe the bug**
|
||||
A clear and concise description of what the bug is.
|
||||
|
||||
**Expected behavior**
|
||||
A clear and concise description of what you expected to happen.
|
||||
|
||||
**Reproduction steps**
|
||||
Steps to reproduce the bug.
|
||||
<!-- Usually this means a small and self-contained piece of code that uses Catch and specifying compiler flags if relevant. -->
|
||||
|
||||
|
||||
**Platform information:**
|
||||
<!-- Fill in any extra information that might be important for your issue. -->
|
||||
- OS: **Windows NT**
|
||||
- Compiler+version: **GCC v2.9.5**
|
||||
- Catch version: **v1.2.3**
|
||||
|
||||
|
||||
**Additional context**
|
||||
Add any other context about the problem here.
|
@ -1,14 +0,0 @@
|
||||
---
|
||||
name: Feature request
|
||||
about: Create an issue that requests a feature or other improvement
|
||||
title: ''
|
||||
labels: ''
|
||||
assignees: ''
|
||||
|
||||
---
|
||||
|
||||
**Description**
|
||||
Describe the feature/change you request and why do you want it.
|
||||
|
||||
**Additional context**
|
||||
Add any other context or screenshots about the feature request here.
|
@ -1,28 +0,0 @@
|
||||
<!--
|
||||
Please do not submit pull requests changing the `version.hpp`
|
||||
or the single-include `catch.hpp` file, these are changed
|
||||
only when a new release is made.
|
||||
|
||||
Before submitting a PR you should probably read the contributor documentation
|
||||
at docs/contributing.md. It will tell you how to properly test your changes.
|
||||
-->
|
||||
|
||||
|
||||
## Description
|
||||
<!--
|
||||
Describe the what and the why of your pull request. Remember that these two
|
||||
are usually a bit different. As an example, if you have made various changes
|
||||
to decrease the number of new strings allocated, that's what. The why probably
|
||||
was that you have a large set of tests and found that this speeds them up.
|
||||
-->
|
||||
|
||||
## GitHub Issues
|
||||
<!--
|
||||
If this PR was motivated by some existing issues, reference them here.
|
||||
|
||||
If it is a simple bug-fix, please also add a line like 'Closes #123'
|
||||
to your commit message, so that it is automatically closed.
|
||||
If it is not, don't, as it might take several iterations for a feature
|
||||
to be done properly. If in doubt, leave it open and reference it in the
|
||||
PR itself, so that maintainers can decide.
|
||||
-->
|
@ -1,24 +0,0 @@
|
||||
name: Bazel build
|
||||
|
||||
on: [push, pull_request]
|
||||
|
||||
jobs:
|
||||
build_and_test_ubuntu:
|
||||
name: Linux Ubuntu 22.04 Bazel build <GCC 11.2.0>
|
||||
runs-on: ubuntu-22.04
|
||||
strategy:
|
||||
matrix:
|
||||
compilation_mode: [fastbuild, dbg, opt]
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Mount bazel cache
|
||||
uses: actions/cache@v3
|
||||
with:
|
||||
path: "/home/runner/.cache/bazel"
|
||||
key: bazel-ubuntu22-gcc11
|
||||
|
||||
- name: Build Catch2
|
||||
run: |
|
||||
bazelisk build --compilation_mode=${{matrix.compilation_mode}} //...
|
@ -1,44 +0,0 @@
|
||||
name: Linux builds (meson)
|
||||
|
||||
on: [push, pull_request]
|
||||
|
||||
jobs:
|
||||
build:
|
||||
name: meson ${{matrix.cxx}}, C++${{matrix.std}}, ${{matrix.build_type}}
|
||||
runs-on: ubuntu-22.04
|
||||
strategy:
|
||||
matrix:
|
||||
cxx:
|
||||
- g++-11
|
||||
- clang++-11
|
||||
build_type: [debug, release]
|
||||
std: [14, 17]
|
||||
include:
|
||||
- cxx: clang++-11
|
||||
other_pkgs: clang-11
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Prepare environment
|
||||
run: |
|
||||
sudo apt-get update
|
||||
sudo apt-get install -y meson ninja-build ${{matrix.other_pkgs}}
|
||||
|
||||
- name: Configure build
|
||||
env:
|
||||
CXX: ${{matrix.cxx}}
|
||||
CXXFLAGS: -std=c++${{matrix.std}} ${{matrix.cxxflags}}
|
||||
# Note: $GITHUB_WORKSPACE is distinct from ${{runner.workspace}}.
|
||||
# This is important
|
||||
run: |
|
||||
meson -Dbuildtype=${{matrix.build_type}} ${{runner.workspace}}/meson-build
|
||||
|
||||
- name: Build tests + lib
|
||||
working-directory: ${{runner.workspace}}/meson-build
|
||||
run: ninja
|
||||
|
||||
- name: Run tests
|
||||
working-directory: ${{runner.workspace}}/meson-build
|
||||
run: |
|
||||
meson test --verbose
|
@ -1,154 +0,0 @@
|
||||
# The builds in this file are more complex (e.g. they need custom CMake
|
||||
# configuration) and thus are unsuitable to the simple build matrix
|
||||
# approach used in simple-builds
|
||||
name: Linux builds (complex)
|
||||
|
||||
on: [push, pull_request]
|
||||
|
||||
jobs:
|
||||
build:
|
||||
name: ${{matrix.build_description}}, ${{matrix.cxx}}, C++${{matrix.std}} ${{matrix.build_type}}
|
||||
runs-on: ubuntu-20.04
|
||||
strategy:
|
||||
matrix:
|
||||
# We add builds one by one in this case, because there are no
|
||||
# dimensions that are shared across the builds
|
||||
include:
|
||||
|
||||
# Single surrogate header build
|
||||
- cxx: clang++-10
|
||||
build_description: Surrogates build
|
||||
build_type: Debug
|
||||
std: 14
|
||||
other_pkgs: clang-10
|
||||
cmake_configurations: -DCATCH_BUILD_SURROGATES=ON
|
||||
|
||||
# Extras and examples with gcc-7
|
||||
- cxx: g++-7
|
||||
build_description: Extras + Examples
|
||||
build_type: Debug
|
||||
std: 14
|
||||
other_pkgs: g++-7
|
||||
cmake_configurations: -DCATCH_BUILD_EXTRA_TESTS=ON -DCATCH_BUILD_EXAMPLES=ON -DCATCH_ENABLE_CMAKE_HELPER_TESTS=ON
|
||||
- cxx: g++-7
|
||||
build_description: Extras + Examples
|
||||
build_type: Release
|
||||
std: 14
|
||||
other_pkgs: g++-7
|
||||
cmake_configurations: -DCATCH_BUILD_EXTRA_TESTS=ON -DCATCH_BUILD_EXAMPLES=ON -DCATCH_ENABLE_CMAKE_HELPER_TESTS=ON
|
||||
|
||||
# Extras and examples with Clang-10
|
||||
- cxx: clang++-10
|
||||
build_description: Extras + Examples
|
||||
build_type: Debug
|
||||
std: 17
|
||||
other_pkgs: clang-10
|
||||
cmake_configurations: -DCATCH_BUILD_EXTRA_TESTS=ON -DCATCH_BUILD_EXAMPLES=ON -DCATCH_ENABLE_CMAKE_HELPER_TESTS=ON
|
||||
- cxx: clang++-10
|
||||
build_description: Extras + Examples
|
||||
build_type: Release
|
||||
std: 17
|
||||
other_pkgs: clang-10
|
||||
cmake_configurations: -DCATCH_BUILD_EXTRA_TESTS=ON -DCATCH_BUILD_EXAMPLES=ON -DCATCH_ENABLE_CMAKE_HELPER_TESTS=ON
|
||||
|
||||
# Configure tests with Clang-10
|
||||
- cxx: clang++-10
|
||||
build_description: CMake configuration tests
|
||||
build_type: Debug
|
||||
std: 14
|
||||
other_pkgs: clang-10
|
||||
cmake_configurations: -DCATCH_ENABLE_CONFIGURE_TESTS=ON
|
||||
|
||||
# Valgrind test Clang-10
|
||||
- cxx: clang++-10
|
||||
build_description: Valgrind tests
|
||||
build_type: Debug
|
||||
std: 14
|
||||
other_pkgs: clang-10 valgrind
|
||||
cmake_configurations: -DMEMORYCHECK_COMMAND=`which valgrind` -DMEMORYCHECK_COMMAND_OPTIONS="-q --track-origins=yes --leak-check=full --num-callers=50 --show-leak-kinds=definite --error-exitcode=1"
|
||||
other_ctest_args: -T memcheck -LE uses-python
|
||||
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Prepare environment
|
||||
run: |
|
||||
sudo apt-get update
|
||||
sudo apt-get install -y ninja-build ${{matrix.other_pkgs}}
|
||||
|
||||
- name: Configure build
|
||||
working-directory: ${{runner.workspace}}
|
||||
env:
|
||||
CXX: ${{matrix.cxx}}
|
||||
CXXFLAGS: ${{matrix.cxxflags}}
|
||||
# Note: $GITHUB_WORKSPACE is distinct from ${{runner.workspace}}.
|
||||
# This is important
|
||||
run: |
|
||||
cmake -Bbuild -H$GITHUB_WORKSPACE \
|
||||
-DCMAKE_BUILD_TYPE=${{matrix.build_type}} \
|
||||
-DCMAKE_CXX_STANDARD=${{matrix.std}} \
|
||||
-DCMAKE_CXX_STANDARD_REQUIRED=ON \
|
||||
-DCMAKE_CXX_EXTENSIONS=OFF \
|
||||
-DCATCH_DEVELOPMENT_BUILD=ON \
|
||||
${{matrix.cmake_configurations}} \
|
||||
-G Ninja
|
||||
|
||||
- name: Build tests + lib
|
||||
working-directory: ${{runner.workspace}}/build
|
||||
run: ninja
|
||||
|
||||
- name: Run tests
|
||||
env:
|
||||
CTEST_OUTPUT_ON_FAILURE: 1
|
||||
working-directory: ${{runner.workspace}}/build
|
||||
run: ctest -C ${{matrix.build_type}} -j `nproc` ${{matrix.other_ctest_args}}
|
||||
clang-tidy:
|
||||
name: clang-tidy ${{matrix.version}}, ${{matrix.build_description}}, C++${{matrix.std}} ${{matrix.build_type}}
|
||||
runs-on: ubuntu-22.04
|
||||
strategy:
|
||||
matrix:
|
||||
include:
|
||||
- version: "15"
|
||||
build_description: all
|
||||
build_type: Debug
|
||||
std: 17
|
||||
other_pkgs: ''
|
||||
cmake_configurations: -DCATCH_BUILD_EXAMPLES=ON -DCATCH_ENABLE_CMAKE_HELPER_TESTS=ON
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Prepare environment
|
||||
run: |
|
||||
sudo apt-get update
|
||||
sudo apt-get install -y ninja-build clang-${{matrix.version}} clang-tidy-${{matrix.version}} ${{matrix.other_pkgs}}
|
||||
|
||||
- name: Configure build
|
||||
working-directory: ${{runner.workspace}}
|
||||
env:
|
||||
CXX: clang++-${{matrix.version}}
|
||||
CXXFLAGS: ${{matrix.cxxflags}}
|
||||
# Note: $GITHUB_WORKSPACE is distinct from ${{runner.workspace}}.
|
||||
# This is important
|
||||
run: |
|
||||
clangtidy="clang-tidy-${{matrix.version}};-use-color"
|
||||
# Use a dummy compiler/linker/ar/ranlib to effectively disable the
|
||||
# compilation and only run clang-tidy.
|
||||
cmake -Bbuild -H$GITHUB_WORKSPACE \
|
||||
-DCMAKE_BUILD_TYPE=${{matrix.build_type}} \
|
||||
-DCMAKE_CXX_STANDARD=${{matrix.std}} \
|
||||
-DCMAKE_CXX_STANDARD_REQUIRED=ON \
|
||||
-DCMAKE_CXX_EXTENSIONS=OFF \
|
||||
-DCATCH_DEVELOPMENT_BUILD=ON \
|
||||
-DCMAKE_CXX_CLANG_TIDY="$clangtidy" \
|
||||
-DCMAKE_CXX_COMPILER_LAUNCHER=/usr/bin/true \
|
||||
-DCMAKE_AR=/usr/bin/true \
|
||||
-DCMAKE_CXX_COMPILER_AR=/usr/bin/true \
|
||||
-DCMAKE_RANLIB=/usr/bin/true \
|
||||
-DCMAKE_CXX_LINK_EXECUTABLE=/usr/bin/true \
|
||||
${{matrix.cmake_configurations}} \
|
||||
-G Ninja
|
||||
|
||||
- name: Run clang-tidy
|
||||
working-directory: ${{runner.workspace}}/build
|
||||
run: ninja
|
@ -1,123 +0,0 @@
|
||||
name: Linux builds (basic)
|
||||
|
||||
on: [push, pull_request]
|
||||
|
||||
jobs:
|
||||
build:
|
||||
name: ${{matrix.cxx}}, C++${{matrix.std}}, ${{matrix.build_type}}
|
||||
runs-on: ubuntu-20.04
|
||||
strategy:
|
||||
matrix:
|
||||
cxx:
|
||||
- g++-5
|
||||
- g++-6
|
||||
- g++-7
|
||||
- g++-8
|
||||
- g++-9
|
||||
- g++-10
|
||||
- clang++-6.0
|
||||
- clang++-7
|
||||
- clang++-8
|
||||
- clang++-9
|
||||
- clang++-10
|
||||
build_type: [Debug, Release]
|
||||
std: [14]
|
||||
include:
|
||||
- cxx: g++-5
|
||||
other_pkgs: g++-5
|
||||
- cxx: g++-6
|
||||
other_pkgs: g++-6
|
||||
- cxx: g++-7
|
||||
other_pkgs: g++-7
|
||||
- cxx: g++-8
|
||||
other_pkgs: g++-8
|
||||
- cxx: g++-9
|
||||
other_pkgs: g++-9
|
||||
- cxx: g++-10
|
||||
other_pkgs: g++-10
|
||||
- cxx: clang++-6.0
|
||||
other_pkgs: clang-6.0
|
||||
- cxx: clang++-7
|
||||
other_pkgs: clang-7
|
||||
- cxx: clang++-8
|
||||
other_pkgs: clang-8
|
||||
- cxx: clang++-9
|
||||
other_pkgs: clang-9
|
||||
- cxx: clang++-10
|
||||
other_pkgs: clang-10
|
||||
# Clang 6 + C++17
|
||||
# does not work with the default libstdc++ version thanks
|
||||
# to a disagreement on variant implementation.
|
||||
# - cxx: clang++-6.0
|
||||
# build_type: Debug
|
||||
# std: 17
|
||||
# other_pkgs: clang-6.0
|
||||
# - cxx: clang++-6.0
|
||||
# build_type: Release
|
||||
# std: 17
|
||||
# other_pkgs: clang-6.0
|
||||
# Clang 10 + C++17
|
||||
- cxx: clang++-10
|
||||
build_type: Debug
|
||||
std: 17
|
||||
other_pkgs: clang-10
|
||||
- cxx: clang++-10
|
||||
build_type: Release
|
||||
std: 17
|
||||
other_pkgs: clang-10
|
||||
- cxx: clang++-10
|
||||
build_type: Debug
|
||||
std: 20
|
||||
other_pkgs: clang-10
|
||||
- cxx: clang++-10
|
||||
build_type: Release
|
||||
std: 20
|
||||
other_pkgs: clang-10
|
||||
- cxx: g++-10
|
||||
build_type: Debug
|
||||
std: 20
|
||||
other_pkgs: g++-10
|
||||
- cxx: g++-10
|
||||
build_type: Release
|
||||
std: 20
|
||||
other_pkgs: g++-10
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Add repositories for older GCC
|
||||
run: |
|
||||
sudo apt-add-repository 'deb http://azure.archive.ubuntu.com/ubuntu/ bionic main'
|
||||
sudo apt-add-repository 'deb http://azure.archive.ubuntu.com/ubuntu/ bionic universe'
|
||||
if: ${{ matrix.cxx == 'g++-5' || matrix.cxx == 'g++-6' }}
|
||||
|
||||
- name: Prepare environment
|
||||
run: |
|
||||
sudo apt-get update
|
||||
sudo apt-get install -y ninja-build ${{matrix.other_pkgs}}
|
||||
|
||||
- name: Configure build
|
||||
working-directory: ${{runner.workspace}}
|
||||
env:
|
||||
CXX: ${{matrix.cxx}}
|
||||
CXXFLAGS: ${{matrix.cxxflags}}
|
||||
# Note: $GITHUB_WORKSPACE is distinct from ${{runner.workspace}}.
|
||||
# This is important
|
||||
run: |
|
||||
cmake -Bbuild -H$GITHUB_WORKSPACE \
|
||||
-DCMAKE_BUILD_TYPE=${{matrix.build_type}} \
|
||||
-DCMAKE_CXX_STANDARD=${{matrix.std}} \
|
||||
-DCMAKE_CXX_STANDARD_REQUIRED=ON \
|
||||
-DCMAKE_CXX_EXTENSIONS=OFF \
|
||||
-DCATCH_DEVELOPMENT_BUILD=ON \
|
||||
-G Ninja
|
||||
|
||||
- name: Build tests + lib
|
||||
working-directory: ${{runner.workspace}}/build
|
||||
run: ninja
|
||||
|
||||
- name: Run tests
|
||||
env:
|
||||
CTEST_OUTPUT_ON_FAILURE: 1
|
||||
working-directory: ${{runner.workspace}}/build
|
||||
run: ctest -C ${{matrix.build_type}} -j `nproc`
|
@ -1,44 +0,0 @@
|
||||
name: M1 Mac builds
|
||||
|
||||
on: [push, pull_request]
|
||||
|
||||
jobs:
|
||||
build:
|
||||
runs-on: macos-14
|
||||
strategy:
|
||||
matrix:
|
||||
cxx:
|
||||
- clang++
|
||||
build_type: [Debug, Release]
|
||||
std: [14, 17]
|
||||
include:
|
||||
- build_type: Debug
|
||||
examples: ON
|
||||
extra_tests: ON
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Configure build
|
||||
working-directory: ${{runner.workspace}}
|
||||
env:
|
||||
CXX: ${{matrix.cxx}}
|
||||
CXXFLAGS: ${{matrix.cxxflags}}
|
||||
run: |
|
||||
cmake -Bbuild -H$GITHUB_WORKSPACE \
|
||||
-DCMAKE_BUILD_TYPE=${{matrix.build_type}} \
|
||||
-DCMAKE_CXX_STANDARD=${{matrix.std}} \
|
||||
-DCMAKE_CXX_STANDARD_REQUIRED=ON \
|
||||
-DCATCH_DEVELOPMENT_BUILD=ON \
|
||||
-DCATCH_BUILD_EXAMPLES=${{matrix.examples}} \
|
||||
-DCATCH_BUILD_EXTRA_TESTS=${{matrix.examples}}
|
||||
|
||||
- name: Build tests + lib
|
||||
working-directory: ${{runner.workspace}}/build
|
||||
run: make -j `sysctl -n hw.ncpu`
|
||||
|
||||
- name: Run tests
|
||||
env:
|
||||
CTEST_OUTPUT_ON_FAILURE: 1
|
||||
working-directory: ${{runner.workspace}}/build
|
||||
run: ctest -C ${{matrix.build_type}} -j `sysctl -n hw.ncpu`
|
@ -1,44 +0,0 @@
|
||||
name: Mac builds
|
||||
|
||||
on: [push, pull_request]
|
||||
|
||||
jobs:
|
||||
build:
|
||||
runs-on: macos-12
|
||||
strategy:
|
||||
matrix:
|
||||
cxx:
|
||||
- clang++
|
||||
build_type: [Debug, Release]
|
||||
std: [14, 17]
|
||||
include:
|
||||
- build_type: Debug
|
||||
examples: ON
|
||||
extra_tests: ON
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Configure build
|
||||
working-directory: ${{runner.workspace}}
|
||||
env:
|
||||
CXX: ${{matrix.cxx}}
|
||||
CXXFLAGS: ${{matrix.cxxflags}}
|
||||
run: |
|
||||
cmake -Bbuild -H$GITHUB_WORKSPACE \
|
||||
-DCMAKE_BUILD_TYPE=${{matrix.build_type}} \
|
||||
-DCMAKE_CXX_STANDARD=${{matrix.std}} \
|
||||
-DCMAKE_CXX_STANDARD_REQUIRED=ON \
|
||||
-DCATCH_DEVELOPMENT_BUILD=ON \
|
||||
-DCATCH_BUILD_EXAMPLES=${{matrix.examples}} \
|
||||
-DCATCH_BUILD_EXTRA_TESTS=${{matrix.examples}}
|
||||
|
||||
- name: Build tests + lib
|
||||
working-directory: ${{runner.workspace}}/build
|
||||
run: make -j `sysctl -n hw.ncpu`
|
||||
|
||||
- name: Run tests
|
||||
env:
|
||||
CTEST_OUTPUT_ON_FAILURE: 1
|
||||
working-directory: ${{runner.workspace}}/build
|
||||
run: ctest -C ${{matrix.build_type}} -j `sysctl -n hw.ncpu`
|
@ -1,31 +0,0 @@
|
||||
name: Package Manager Builds
|
||||
|
||||
on: [push, pull_request]
|
||||
|
||||
jobs:
|
||||
conan_builds:
|
||||
name: Conan ${{matrix.conan_version}}
|
||||
runs-on: ubuntu-20.04
|
||||
strategy:
|
||||
matrix:
|
||||
conan_version:
|
||||
- '1.63'
|
||||
- '2.1'
|
||||
|
||||
include:
|
||||
# Conan 1 has default profiles installed
|
||||
- conan_version: '1.63'
|
||||
profile_generate: 'false'
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Install conan
|
||||
run: pip install conan==${{matrix.conan_version}}
|
||||
|
||||
- name: Setup conan profiles
|
||||
if: matrix.profile_generate != 'false'
|
||||
run: conan profile detect
|
||||
|
||||
- name: Run conan package create
|
||||
run: conan create . -tf .conan/test_package
|
@ -1,36 +0,0 @@
|
||||
name: Check header guards
|
||||
|
||||
on: [push, pull_request]
|
||||
|
||||
jobs:
|
||||
build:
|
||||
# Set the type of machine to run on
|
||||
runs-on: ubuntu-20.04
|
||||
steps:
|
||||
|
||||
- name: Checkout source code
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Setup Dependencies
|
||||
uses: actions/setup-python@v2
|
||||
with:
|
||||
python-version: '3.7'
|
||||
- name: Install checkguard
|
||||
run: pip install guardonce
|
||||
|
||||
- name: Check that include guards are properly named
|
||||
run: |
|
||||
wrong_files=$(checkguard -r src/catch2/ -p "name | append _INCLUDED | upper")
|
||||
if [[ $wrong_files ]]; then
|
||||
echo "Files with wrong header guard:"
|
||||
echo $wrong_files
|
||||
exit 1
|
||||
fi
|
||||
|
||||
- name: Check that there are no duplicated filenames
|
||||
run: |
|
||||
./tools/scripts/checkDuplicateFilenames.py
|
||||
|
||||
- name: Check that all source files have the correct license header
|
||||
run: |
|
||||
./tools/scripts/checkLicense.py
|
@@ -1,37 +0,0 @@
name: Windows builds (basic)

on: [push, pull_request]

jobs:
  build:
    name: ${{matrix.os}}, ${{matrix.std}}, ${{matrix.build_type}}, ${{matrix.platform}}
    runs-on: ${{matrix.os}}
    strategy:
      matrix:
        os: [windows-2019, windows-2022]
        platform: [Win32, x64]
        build_type: [Debug, Release]
        std: [14, 17]
    steps:
    - uses: actions/checkout@v4

    - name: Configure build
      working-directory: ${{runner.workspace}}
      run: |
        cmake -S $Env:GITHUB_WORKSPACE `
              -B ${{runner.workspace}}/build `
              -DCMAKE_CXX_STANDARD=${{matrix.std}} `
              -A ${{matrix.platform}} `
              --preset all-tests

    - name: Build tests
      working-directory: ${{runner.workspace}}
      run: cmake --build build --config ${{matrix.build_type}} --parallel %NUMBER_OF_PROCESSORS%
      shell: cmd

    - name: Run tests
      working-directory: ${{runner.workspace}}/build
      env:
        CTEST_OUTPUT_ON_FAILURE: 1
      run: ctest -C ${{matrix.build_type}} -j %NUMBER_OF_PROCESSORS%
      shell: cmd
40
contrib/fsfw_contrib/Catch2-3.7.1/.gitignore
vendored
@@ -1,40 +0,0 @@
*.build
!meson.build
*.pbxuser
*.mode1v3
*.ncb
*.suo
Debug
Release
*.user
*.xcuserstate
.DS_Store
xcuserdata
CatchSelfTest.xcscheme
Breakpoints.xcbkptlist
UpgradeLog.XML
Resources/DWARF
projects/Generated
*.pyc
DerivedData
*.xccheckout
Build
.idea
.vs
.vscode
cmake-build-*
benchmark-dir
.conan/test_package/build
**/CMakeUserPresets.json
bazel-*
MODULE.bazel.lock
build-fuzzers
debug-build
.vscode
msvc-sln*
# Currently we use Doxygen for dep graphs and the full docs are only slowly
# being filled in, so we definitely do not want git to deal with the docs.
docs/doxygen
*.cache
compile_commands.json
**/*.unapproved.txt
@@ -1,95 +0,0 @@
load("@bazel_skylib//rules:expand_template.bzl", "expand_template")

expand_template(
    name = "catch_user_config",
    out = "catch2/catch_user_config.hpp",
    substitutions = {
        "@CATCH_CONFIG_CONSOLE_WIDTH@": "80",
        "@CATCH_CONFIG_DEFAULT_REPORTER@": "console",
        "#cmakedefine CATCH_CONFIG_ANDROID_LOGWRITE": "",
        "#cmakedefine CATCH_CONFIG_BAZEL_SUPPORT": "#define CATCH_CONFIG_BAZEL_SUPPORT",
        "#cmakedefine CATCH_CONFIG_COLOUR_WIN32": "",
        "#cmakedefine CATCH_CONFIG_COUNTER": "",
        "#cmakedefine CATCH_CONFIG_CPP11_TO_STRING": "",
        "#cmakedefine CATCH_CONFIG_CPP17_BYTE": "",
        "#cmakedefine CATCH_CONFIG_CPP17_OPTIONAL": "",
        "#cmakedefine CATCH_CONFIG_CPP17_STRING_VIEW": "",
        "#cmakedefine CATCH_CONFIG_CPP17_UNCAUGHT_EXCEPTIONS": "",
        "#cmakedefine CATCH_CONFIG_CPP17_VARIANT": "",
        "#cmakedefine CATCH_CONFIG_DISABLE_EXCEPTIONS_CUSTOM_HANDLER": "",
        "#cmakedefine CATCH_CONFIG_DISABLE_EXCEPTIONS": "",
        "#cmakedefine CATCH_CONFIG_DISABLE_STRINGIFICATION": "",
        "#cmakedefine CATCH_CONFIG_DISABLE": "",
        "#cmakedefine CATCH_CONFIG_ENABLE_ALL_STRINGMAKERS": "",
        "#cmakedefine CATCH_CONFIG_ENABLE_OPTIONAL_STRINGMAKER": "",
        "#cmakedefine CATCH_CONFIG_ENABLE_PAIR_STRINGMAKER": "",
        "#cmakedefine CATCH_CONFIG_ENABLE_TUPLE_STRINGMAKER": "",
        "#cmakedefine CATCH_CONFIG_ENABLE_VARIANT_STRINGMAKER": "",
        "#cmakedefine CATCH_CONFIG_EXPERIMENTAL_REDIRECT": "",
        "#cmakedefine CATCH_CONFIG_FALLBACK_STRINGIFIER @CATCH_CONFIG_FALLBACK_STRINGIFIER@": "",
        "#cmakedefine CATCH_CONFIG_FAST_COMPILE": "",
        "#cmakedefine CATCH_CONFIG_GETENV": "",
        "#cmakedefine CATCH_CONFIG_GLOBAL_NEXTAFTER": "",
        "#cmakedefine CATCH_CONFIG_NO_ANDROID_LOGWRITE": "",
        "#cmakedefine CATCH_CONFIG_NO_COLOUR_WIN32": "",
        "#cmakedefine CATCH_CONFIG_NO_COUNTER": "",
        "#cmakedefine CATCH_CONFIG_NO_CPP11_TO_STRING": "",
        "#cmakedefine CATCH_CONFIG_NO_CPP17_BYTE": "",
        "#cmakedefine CATCH_CONFIG_NO_CPP17_OPTIONAL": "",
        "#cmakedefine CATCH_CONFIG_NO_CPP17_STRING_VIEW": "",
        "#cmakedefine CATCH_CONFIG_NO_CPP17_UNCAUGHT_EXCEPTIONS": "",
        "#cmakedefine CATCH_CONFIG_NO_CPP17_VARIANT": "",
        "#cmakedefine CATCH_CONFIG_NO_GETENV": "",
        "#cmakedefine CATCH_CONFIG_NO_GLOBAL_NEXTAFTER": "",
        "#cmakedefine CATCH_CONFIG_NO_POSIX_SIGNALS": "",
        "#cmakedefine CATCH_CONFIG_NO_USE_ASYNC": "",
        "#cmakedefine CATCH_CONFIG_NO_EXPERIMENTAL_STATIC_ANALYSIS_SUPPORT": "",
        "#cmakedefine CATCH_CONFIG_NO_WCHAR": "",
        "#cmakedefine CATCH_CONFIG_NO_WINDOWS_SEH": "",
        "#cmakedefine CATCH_CONFIG_NOSTDOUT": "",
        "#cmakedefine CATCH_CONFIG_POSIX_SIGNALS": "",
        "#cmakedefine CATCH_CONFIG_PREFIX_ALL": "",
        "#cmakedefine CATCH_CONFIG_PREFIX_MESSAGES": "",
        "#cmakedefine CATCH_CONFIG_SHARED_LIBRARY": "",
        "#cmakedefine CATCH_CONFIG_EXPERIMENTAL_STATIC_ANALYSIS_SUPPORT": "",
        "#cmakedefine CATCH_CONFIG_USE_ASYNC": "",
        "#cmakedefine CATCH_CONFIG_WCHAR": "",
        "#cmakedefine CATCH_CONFIG_WINDOWS_CRTDBG": "",
        "#cmakedefine CATCH_CONFIG_WINDOWS_SEH": "",
    },
    template = "src/catch2/catch_user_config.hpp.in",
)

# Generated header library, modifies the include prefix to account for
# generation path so that we can include <catch2/catch_user_config.hpp>
# correctly.
cc_library(
    name = "catch2_generated",
    hdrs = ["catch2/catch_user_config.hpp"],
    include_prefix = ".",  # to manipulate -I of dependenices
    visibility = ["//visibility:public"],
)

# Static library, without main.
cc_library(
    name = "catch2",
    srcs = glob(
        ["src/catch2/**/*.cpp"],
        exclude = ["src/catch2/internal/catch_main.cpp"],
    ),
    hdrs = glob(["src/catch2/**/*.hpp"]),
    includes = ["src/"],
    linkstatic = True,
    visibility = ["//visibility:public"],
    deps = [":catch2_generated"],
)

# Static library, with main.
cc_library(
    name = "catch2_main",
    srcs = ["src/catch2/internal/catch_main.cpp"],
    includes = ["src/"],
    linkstatic = True,
    visibility = ["//visibility:public"],
    deps = [":catch2"],
)
@@ -1,10 +0,0 @@
@PACKAGE_INIT@


# Avoid repeatedly including the targets
if(NOT TARGET Catch2::Catch2)
    # Provide path for scripts
    list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_LIST_DIR}")

    include(${CMAKE_CURRENT_LIST_DIR}/Catch2Targets.cmake)
endif()
@@ -1,89 +0,0 @@

# Copyright Catch2 Authors
# Distributed under the Boost Software License, Version 1.0.
# (See accompanying file LICENSE.txt or copy at
# https://www.boost.org/LICENSE_1_0.txt)

# SPDX-License-Identifier: BSL-1.0

##
# This file contains options that are materialized into the Catch2
# compiled library. All of them default to OFF, as even the positive
# forms correspond to the user _forcing_ them to ON, while being OFF
# means that Catch2 can use its own autodetection.
#
# For detailed docs look into docs/configuration.md


macro(AddOverridableConfigOption OptionBaseName)
  option(CATCH_CONFIG_${OptionBaseName} "Read docs/configuration.md for details" OFF)
  option(CATCH_CONFIG_NO_${OptionBaseName} "Read docs/configuration.md for details" OFF)
  mark_as_advanced(CATCH_CONFIG_${OptionBaseName} CATCH_CONFIG_NO_${OptionBaseName})
endmacro()

macro(AddConfigOption OptionBaseName)
  option(CATCH_CONFIG_${OptionBaseName} "Read docs/configuration.md for details" OFF)
  mark_as_advanced(CATCH_CONFIG_${OptionBaseName})
endmacro()

set(_OverridableOptions
  "ANDROID_LOGWRITE"
  "BAZEL_SUPPORT"
  "COLOUR_WIN32"
  "COUNTER"
  "CPP11_TO_STRING"
  "CPP17_BYTE"
  "CPP17_OPTIONAL"
  "CPP17_STRING_VIEW"
  "CPP17_UNCAUGHT_EXCEPTIONS"
  "CPP17_VARIANT"
  "GLOBAL_NEXTAFTER"
  "POSIX_SIGNALS"
  "USE_ASYNC"
  "WCHAR"
  "WINDOWS_SEH"
  "GETENV"
  "EXPERIMENTAL_STATIC_ANALYSIS_SUPPORT"
)

foreach(OptionName ${_OverridableOptions})
  AddOverridableConfigOption(${OptionName})
endforeach()

set(_OtherConfigOptions
  "DISABLE_EXCEPTIONS"
  "DISABLE_EXCEPTIONS_CUSTOM_HANDLER"
  "DISABLE"
  "DISABLE_STRINGIFICATION"
  "ENABLE_ALL_STRINGMAKERS"
  "ENABLE_OPTIONAL_STRINGMAKER"
  "ENABLE_PAIR_STRINGMAKER"
  "ENABLE_TUPLE_STRINGMAKER"
  "ENABLE_VARIANT_STRINGMAKER"
  "EXPERIMENTAL_REDIRECT"
  "FAST_COMPILE"
  "NOSTDOUT"
  "PREFIX_ALL"
  "PREFIX_MESSAGES"
  "WINDOWS_CRTDBG"
)


foreach(OptionName ${_OtherConfigOptions})
  AddConfigOption(${OptionName})
endforeach()
if(DEFINED BUILD_SHARED_LIBS)
  set(CATCH_CONFIG_SHARED_LIBRARY ${BUILD_SHARED_LIBS})
else()
  set(CATCH_CONFIG_SHARED_LIBRARY "")
endif()

set(CATCH_CONFIG_DEFAULT_REPORTER "console" CACHE STRING "Read docs/configuration.md for details. The name of the reporter should be without quotes.")
set(CATCH_CONFIG_CONSOLE_WIDTH "80" CACHE STRING "Read docs/configuration.md for details. Must form a valid integer literal.")

mark_as_advanced(CATCH_CONFIG_SHARED_LIBRARY CATCH_CONFIG_DEFAULT_REPORTER CATCH_CONFIG_CONSOLE_WIDTH)

# There is no good way to both turn this into a CMake cache variable,
# and keep reasonable default semantics inside the project. Thus we do
# not define it and users have to provide it as an outside variable.
#set(CATCH_CONFIG_FALLBACK_STRINGIFIER "" CACHE STRING "Read docs/configuration.md for details.")
@@ -1,122 +0,0 @@

# Copyright Catch2 Authors
# Distributed under the Boost Software License, Version 1.0.
# (See accompanying file LICENSE.txt or copy at
# https://www.boost.org/LICENSE_1_0.txt)

# SPDX-License-Identifier: BSL-1.0

include(CheckCXXCompilerFlag)
function(add_cxx_flag_if_supported_to_targets flagname targets)
  string(MAKE_C_IDENTIFIER ${flagname} flag_identifier )
  check_cxx_compiler_flag("${flagname}" HAVE_FLAG_${flag_identifier})

  if (HAVE_FLAG_${flag_identifier})
    foreach(target ${targets})
      target_compile_options(${target} PRIVATE ${flagname})
    endforeach()
  endif()
endfunction()

# Assumes that it is only called for development builds, where warnings
# and Werror is desired, so it also enables Werror.
function(add_warnings_to_targets targets)
  LIST(LENGTH targets TARGETS_LEN)
  # For now we just assume 2 possibilities: msvc and msvc-like compilers,
  # and other.
  if (MSVC)
    foreach(target ${targets})
      # Force MSVC to consider everything as encoded in utf-8
      target_compile_options( ${target} PRIVATE /utf-8 )
      # Enable Werror equivalent
      if (CATCH_ENABLE_WERROR)
        target_compile_options( ${target} PRIVATE /WX )
      endif()

      # MSVC is currently handled specially
      if ( CMAKE_CXX_COMPILER_ID MATCHES "MSVC" )
        STRING(REGEX REPLACE "/W[0-9]" "/W4" CMAKE_CXX_FLAGS ${CMAKE_CXX_FLAGS}) # override default warning level
        target_compile_options( ${target} PRIVATE /w44265 /w44061 /w44062 /w45038 )
      endif()
    endforeach()

  endif()

  if (NOT MSVC)
    set(CHECKED_WARNING_FLAGS
      "-Wabsolute-value"
      "-Wall"
      "-Wcall-to-pure-virtual-from-ctor-dtor"
      "-Wcast-align"
      "-Wcatch-value"
      "-Wdangling"
      "-Wdeprecated"
      "-Wdeprecated-register"
      "-Wexceptions"
      "-Wexit-time-destructors"
      "-Wextra"
      "-Wextra-semi"
      "-Wfloat-equal"
      "-Wglobal-constructors"
      "-Winit-self"
      "-Wmisleading-indentation"
      "-Wmismatched-new-delete"
      "-Wmismatched-return-types"
      "-Wmismatched-tags"
      "-Wmissing-braces"
      "-Wmissing-declarations"
      "-Wmissing-noreturn"
      "-Wmissing-prototypes"
      "-Wmissing-variable-declarations"
      "-Wnon-virtual-dtor"
      "-Wnull-dereference"
      "-Wold-style-cast"
      "-Woverloaded-virtual"
      "-Wparentheses"
      "-Wpedantic"
      "-Wredundant-decls"
      "-Wreorder"
      "-Wreturn-std-move"
      "-Wshadow"
      "-Wstrict-aliasing"
      "-Wsubobject-linkage"
      "-Wsuggest-destructor-override"
      "-Wsuggest-override"
      "-Wundef"
      "-Wuninitialized"
      "-Wunneeded-internal-declaration"
      "-Wunreachable-code-aggressive"
      "-Wunused"
      "-Wunused-function"
      "-Wunused-parameter"
      "-Wvla"
      "-Wweak-vtables"

      # This is a useful warning, but our tests sometimes rely on
      # functions being present, but not picked (e.g. various checks
      # for stringification implementation ordering).
      # Ergo, we should use it every now and then, but we cannot
      # enable it by default.
      # "-Wunused-member-function"
    )
    foreach(warning ${CHECKED_WARNING_FLAGS})
      add_cxx_flag_if_supported_to_targets(${warning} "${targets}")
    endforeach()

    if (CATCH_ENABLE_WERROR)
      foreach(target ${targets})
        # Enable Werror equivalent
        target_compile_options( ${target} PRIVATE -Werror )
      endforeach()
    endif()
  endif()
endfunction()

# Adds flags required for reproducible build to the target
# Currently only supports GCC and Clang
function(add_build_reproducibility_settings target)
  # Make the build reproducible on versions of g++ and clang that supports -ffile-prefix-map
  if((CMAKE_CXX_COMPILER_ID STREQUAL "GNU") OR (CMAKE_CXX_COMPILER_ID MATCHES "Clang"))
    add_cxx_flag_if_supported_to_targets("-ffile-prefix-map=${CATCH_DIR}/=" "${target}")
  endif()
endfunction()
@@ -1,157 +0,0 @@
# This file is part of CMake-codecov.
#
# Copyright (c)
#   2015-2017 RWTH Aachen University, Federal Republic of Germany
#
# See the LICENSE file in the package base directory for details
#
# Written by Alexander Haase, alexander.haase@rwth-aachen.de
#


# include required Modules
include(FindPackageHandleStandardArgs)


# Search for gcov binary.
set(CMAKE_REQUIRED_QUIET_SAVE ${CMAKE_REQUIRED_QUIET})
set(CMAKE_REQUIRED_QUIET ${codecov_FIND_QUIETLY})

get_property(ENABLED_LANGUAGES GLOBAL PROPERTY ENABLED_LANGUAGES)
foreach (LANG ${ENABLED_LANGUAGES})
  # Gcov evaluation is dependent on the used compiler. Check gcov support for
  # each compiler that is used. If gcov binary was already found for this
  # compiler, do not try to find it again.
  if (NOT GCOV_${CMAKE_${LANG}_COMPILER_ID}_BIN)
    get_filename_component(COMPILER_PATH "${CMAKE_${LANG}_COMPILER}" PATH)

    if ("${CMAKE_${LANG}_COMPILER_ID}" STREQUAL "GNU")
      # Some distributions like OSX (homebrew) ship gcov with the compiler
      # version appended as gcov-x. To find this binary we'll build the
      # suggested binary name with the compiler version.
      string(REGEX MATCH "^[0-9]+" GCC_VERSION
        "${CMAKE_${LANG}_COMPILER_VERSION}")

      find_program(GCOV_BIN NAMES gcov-${GCC_VERSION} gcov
        HINTS ${COMPILER_PATH})

    elseif ("${CMAKE_${LANG}_COMPILER_ID}" STREQUAL "Clang")
      # Some distributions like Debian ship llvm-cov with the compiler
      # version appended as llvm-cov-x.y. To find this binary we'll build
      # the suggested binary name with the compiler version.
      string(REGEX MATCH "^[0-9]+.[0-9]+" LLVM_VERSION
        "${CMAKE_${LANG}_COMPILER_VERSION}")

      # llvm-cov prior version 3.5 seems to be not working with coverage
      # evaluation tools, but these versions are compatible with the gcc
      # gcov tool.
      if(LLVM_VERSION VERSION_GREATER 3.4)
        find_program(LLVM_COV_BIN NAMES "llvm-cov-${LLVM_VERSION}"
          "llvm-cov" HINTS ${COMPILER_PATH})
        mark_as_advanced(LLVM_COV_BIN)

        if (LLVM_COV_BIN)
          find_program(LLVM_COV_WRAPPER "llvm-cov-wrapper" PATHS
            ${CMAKE_MODULE_PATH})
          if (LLVM_COV_WRAPPER)
            set(GCOV_BIN "${LLVM_COV_WRAPPER}" CACHE FILEPATH "")

            # set additional parameters
            set(GCOV_${CMAKE_${LANG}_COMPILER_ID}_ENV
              "LLVM_COV_BIN=${LLVM_COV_BIN}" CACHE STRING
              "Environment variables for llvm-cov-wrapper.")
            mark_as_advanced(GCOV_${CMAKE_${LANG}_COMPILER_ID}_ENV)
          endif ()
        endif ()
      endif ()

      if (NOT GCOV_BIN)
        # Fall back to gcov binary if llvm-cov was not found or is
        # incompatible. This is the default on OSX, but may crash on
        # recent Linux versions.
        find_program(GCOV_BIN gcov HINTS ${COMPILER_PATH})
      endif ()
    endif ()


    if (GCOV_BIN)
      set(GCOV_${CMAKE_${LANG}_COMPILER_ID}_BIN "${GCOV_BIN}" CACHE STRING
        "${LANG} gcov binary.")

      if (NOT CMAKE_REQUIRED_QUIET)
        message("-- Found gcov evaluation for "
          "${CMAKE_${LANG}_COMPILER_ID}: ${GCOV_BIN}")
      endif()

      unset(GCOV_BIN CACHE)
    endif ()
  endif ()
endforeach ()




# Add a new global target for all gcov targets. This target could be used to
# generate the gcov files for the whole project instead of calling <TARGET>-gcov
# for each target.
if (NOT TARGET gcov)
  add_custom_target(gcov)
endif (NOT TARGET gcov)



# This function will add gcov evaluation for target <TNAME>. Only sources of
# this target will be evaluated and no dependencies will be added. It will call
# Gcov on any source file of <TNAME> once and store the gcov file in the same
# directory.
function (add_gcov_target TNAME)
  set(TDIR ${CMAKE_CURRENT_BINARY_DIR}/CMakeFiles/${TNAME}.dir)

  # We don't have to check, if the target has support for coverage, thus this
  # will be checked by add_coverage_target in Findcoverage.cmake. Instead we
  # have to determine which gcov binary to use.
  get_target_property(TSOURCES ${TNAME} SOURCES)
  set(SOURCES "")
  set(TCOMPILER "")
  foreach (FILE ${TSOURCES})
    codecov_path_of_source(${FILE} FILE)
    if (NOT "${FILE}" STREQUAL "")
      codecov_lang_of_source(${FILE} LANG)
      if (NOT "${LANG}" STREQUAL "")
        list(APPEND SOURCES "${FILE}")
        set(TCOMPILER ${CMAKE_${LANG}_COMPILER_ID})
      endif ()
    endif ()
  endforeach ()

  # If no gcov binary was found, coverage data can't be evaluated.
  if (NOT GCOV_${TCOMPILER}_BIN)
    message(WARNING "No coverage evaluation binary found for ${TCOMPILER}.")
    return()
  endif ()

  set(GCOV_BIN "${GCOV_${TCOMPILER}_BIN}")
  set(GCOV_ENV "${GCOV_${TCOMPILER}_ENV}")


  set(BUFFER "")
  foreach(FILE ${SOURCES})
    get_filename_component(FILE_PATH "${TDIR}/${FILE}" PATH)

    # call gcov
    add_custom_command(OUTPUT ${TDIR}/${FILE}.gcov
      COMMAND ${GCOV_ENV} ${GCOV_BIN} ${TDIR}/${FILE}.gcno > /dev/null
      DEPENDS ${TNAME} ${TDIR}/${FILE}.gcno
      WORKING_DIRECTORY ${FILE_PATH}
    )

    list(APPEND BUFFER ${TDIR}/${FILE}.gcov)
  endforeach()


  # add target for gcov evaluation of <TNAME>
  add_custom_target(${TNAME}-gcov DEPENDS ${BUFFER})

  # add evaluation target to the global gcov target.
  add_dependencies(gcov ${TNAME}-gcov)
endfunction (add_gcov_target)
@@ -1,354 +0,0 @@
# This file is part of CMake-codecov.
#
# Copyright (c)
#   2015-2017 RWTH Aachen University, Federal Republic of Germany
#
# See the LICENSE file in the package base directory for details
#
# Written by Alexander Haase, alexander.haase@rwth-aachen.de
#


# configuration
set(LCOV_DATA_PATH "${CMAKE_BINARY_DIR}/lcov/data")
set(LCOV_DATA_PATH_INIT "${LCOV_DATA_PATH}/init")
set(LCOV_DATA_PATH_CAPTURE "${LCOV_DATA_PATH}/capture")
set(LCOV_HTML_PATH "${CMAKE_BINARY_DIR}/lcov/html")




# Search for Gcov which is used by Lcov.
find_package(Gcov)




# This function will add lcov evaluation for target <TNAME>. Only sources of
# this target will be evaluated and no dependencies will be added. It will call
# geninfo on any source file of <TNAME> once and store the info file in the same
# directory.
#
# Note: This function is only a wrapper to define this function always, even if
#   coverage is not supported by the compiler or disabled. This function must
#   be defined here, because the module will be exited, if there is no coverage
#   support by the compiler or it is disabled by the user.
function (add_lcov_target TNAME)
  if (LCOV_FOUND)
    # capture initial coverage data
    lcov_capture_initial_tgt(${TNAME})

    # capture coverage data after execution
    lcov_capture_tgt(${TNAME})
  endif ()
endfunction (add_lcov_target)




# include required Modules
include(FindPackageHandleStandardArgs)

# Search for required lcov binaries.
find_program(LCOV_BIN lcov)
find_program(GENINFO_BIN geninfo)
find_program(GENHTML_BIN genhtml)
find_package_handle_standard_args(lcov
  REQUIRED_VARS LCOV_BIN GENINFO_BIN GENHTML_BIN
)

# enable genhtml C++ demangeling, if c++filt is found.
set(GENHTML_CPPFILT_FLAG "")
find_program(CPPFILT_BIN c++filt)
if (NOT CPPFILT_BIN STREQUAL "")
  set(GENHTML_CPPFILT_FLAG "--demangle-cpp")
endif (NOT CPPFILT_BIN STREQUAL "")

# enable no-external flag for lcov, if available.
if (GENINFO_BIN AND NOT DEFINED GENINFO_EXTERN_FLAG)
  set(FLAG "")
  execute_process(COMMAND ${GENINFO_BIN} --help OUTPUT_VARIABLE GENINFO_HELP)
  string(REGEX MATCH "external" GENINFO_RES "${GENINFO_HELP}")
  if (GENINFO_RES)
    set(FLAG "--no-external")
  endif ()

  set(GENINFO_EXTERN_FLAG "${FLAG}"
    CACHE STRING "Geninfo flag to exclude system sources.")
endif ()

# If Lcov was not found, exit module now.
if (NOT LCOV_FOUND)
  return()
endif (NOT LCOV_FOUND)




# Create directories to be used.
file(MAKE_DIRECTORY ${LCOV_DATA_PATH_INIT})
file(MAKE_DIRECTORY ${LCOV_DATA_PATH_CAPTURE})

set(LCOV_REMOVE_PATTERNS "")

# This function will merge lcov files to a single target file. Additional lcov
# flags may be set with setting LCOV_EXTRA_FLAGS before calling this function.
function (lcov_merge_files OUTFILE ...)
  # Remove ${OUTFILE} from ${ARGV} and generate lcov parameters with files.
  list(REMOVE_AT ARGV 0)

  # Generate merged file.
  string(REPLACE "${CMAKE_BINARY_DIR}/" "" FILE_REL "${OUTFILE}")
  add_custom_command(OUTPUT "${OUTFILE}.raw"
    COMMAND cat ${ARGV} > ${OUTFILE}.raw
    DEPENDS ${ARGV}
    COMMENT "Generating ${FILE_REL}"
  )

  add_custom_command(OUTPUT "${OUTFILE}"
    COMMAND ${LCOV_BIN} --quiet -a ${OUTFILE}.raw --output-file ${OUTFILE}
      --base-directory ${PROJECT_SOURCE_DIR} ${LCOV_EXTRA_FLAGS}
    COMMAND ${LCOV_BIN} --quiet -r ${OUTFILE} ${LCOV_REMOVE_PATTERNS}
      --output-file ${OUTFILE} ${LCOV_EXTRA_FLAGS}
    DEPENDS ${OUTFILE}.raw
    COMMENT "Post-processing ${FILE_REL}"
  )
endfunction ()




# Add a new global target to generate initial coverage reports for all targets.
# This target will be used to generate the global initial info file, which is
# used to gather even empty report data.
if (NOT TARGET lcov-capture-init)
  add_custom_target(lcov-capture-init)
  set(LCOV_CAPTURE_INIT_FILES "" CACHE INTERNAL "")
endif (NOT TARGET lcov-capture-init)


# This function will add initial capture of coverage data for target <TNAME>,
# which is needed to get also data for objects, which were not loaded at
# execution time. It will call geninfo for every source file of <TNAME> once and
# store the info file in the same directory.
function (lcov_capture_initial_tgt TNAME)
  # We don't have to check, if the target has support for coverage, thus this
  # will be checked by add_coverage_target in Findcoverage.cmake. Instead we
  # have to determine which gcov binary to use.
  get_target_property(TSOURCES ${TNAME} SOURCES)
  set(SOURCES "")
  set(TCOMPILER "")
  foreach (FILE ${TSOURCES})
    codecov_path_of_source(${FILE} FILE)
    if (NOT "${FILE}" STREQUAL "")
      codecov_lang_of_source(${FILE} LANG)
      if (NOT "${LANG}" STREQUAL "")
        list(APPEND SOURCES "${FILE}")
        set(TCOMPILER ${CMAKE_${LANG}_COMPILER_ID})
      endif ()
    endif ()
  endforeach ()

  # If no gcov binary was found, coverage data can't be evaluated.
  if (NOT GCOV_${TCOMPILER}_BIN)
    message(WARNING "No coverage evaluation binary found for ${TCOMPILER}.")
    return()
  endif ()

  set(GCOV_BIN "${GCOV_${TCOMPILER}_BIN}")
  set(GCOV_ENV "${GCOV_${TCOMPILER}_ENV}")


  set(TDIR ${CMAKE_CURRENT_BINARY_DIR}/CMakeFiles/${TNAME}.dir)
  set(GENINFO_FILES "")
  foreach(FILE ${SOURCES})
    # generate empty coverage files
    set(OUTFILE "${TDIR}/${FILE}.info.init")
    list(APPEND GENINFO_FILES ${OUTFILE})

    add_custom_command(OUTPUT ${OUTFILE} COMMAND ${GCOV_ENV} ${GENINFO_BIN}
        --quiet --base-directory ${PROJECT_SOURCE_DIR} --initial
        --gcov-tool ${GCOV_BIN} --output-filename ${OUTFILE}
        ${GENINFO_EXTERN_FLAG} ${TDIR}/${FILE}.gcno
      DEPENDS ${TNAME}
      COMMENT "Capturing initial coverage data for ${FILE}"
    )
  endforeach()

  # Concatenate all files generated by geninfo to a single file per target.
  set(OUTFILE "${LCOV_DATA_PATH_INIT}/${TNAME}.info")
  set(LCOV_EXTRA_FLAGS "--initial")
  lcov_merge_files("${OUTFILE}" ${GENINFO_FILES})
  add_custom_target(${TNAME}-capture-init ALL DEPENDS ${OUTFILE})

  # add geninfo file generation to global lcov-geninfo target
  add_dependencies(lcov-capture-init ${TNAME}-capture-init)
  set(LCOV_CAPTURE_INIT_FILES "${LCOV_CAPTURE_INIT_FILES}"
    "${OUTFILE}" CACHE INTERNAL ""
  )
endfunction (lcov_capture_initial_tgt)


# This function will generate the global info file for all targets. It has to be
# called after all other CMake functions in the root CMakeLists.txt file, to get
# a full list of all targets that generate coverage data.
function (lcov_capture_initial)
  # Skip this function (and do not create the following targets), if there are
  # no input files.
  if ("${LCOV_CAPTURE_INIT_FILES}" STREQUAL "")
    return()
  endif ()

  # Add a new target to merge the files of all targets.
  set(OUTFILE "${LCOV_DATA_PATH_INIT}/all_targets.info")
  lcov_merge_files("${OUTFILE}" ${LCOV_CAPTURE_INIT_FILES})
  add_custom_target(lcov-geninfo-init ALL DEPENDS ${OUTFILE}
    lcov-capture-init
  )
endfunction (lcov_capture_initial)




# Add a new global target to generate coverage reports for all targets. This
# target will be used to generate the global info file.
if (NOT TARGET lcov-capture)
  add_custom_target(lcov-capture)
  set(LCOV_CAPTURE_FILES "" CACHE INTERNAL "")
endif (NOT TARGET lcov-capture)


# This function will add capture of coverage data for target <TNAME>, which is
# needed to get also data for objects, which were not loaded at execution time.
# It will call geninfo for every source file of <TNAME> once and store the info
# file in the same directory.
function (lcov_capture_tgt TNAME)
  # We don't have to check, if the target has support for coverage, thus this
  # will be checked by add_coverage_target in Findcoverage.cmake. Instead we
  # have to determine which gcov binary to use.
  get_target_property(TSOURCES ${TNAME} SOURCES)
  set(SOURCES "")
  set(TCOMPILER "")
  foreach (FILE ${TSOURCES})
    codecov_path_of_source(${FILE} FILE)
    if (NOT "${FILE}" STREQUAL "")
      codecov_lang_of_source(${FILE} LANG)
      if (NOT "${LANG}" STREQUAL "")
        list(APPEND SOURCES "${FILE}")
        set(TCOMPILER ${CMAKE_${LANG}_COMPILER_ID})
      endif ()
    endif ()
  endforeach ()

  # If no gcov binary was found, coverage data can't be evaluated.
  if (NOT GCOV_${TCOMPILER}_BIN)
    message(WARNING "No coverage evaluation binary found for ${TCOMPILER}.")
    return()
  endif ()

  set(GCOV_BIN "${GCOV_${TCOMPILER}_BIN}")
  set(GCOV_ENV "${GCOV_${TCOMPILER}_ENV}")


  set(TDIR ${CMAKE_CURRENT_BINARY_DIR}/CMakeFiles/${TNAME}.dir)
  set(GENINFO_FILES "")
  foreach(FILE ${SOURCES})
    # Generate coverage files. If no .gcda file was generated during
    # execution, the empty coverage file will be used instead.
    set(OUTFILE "${TDIR}/${FILE}.info")
    list(APPEND GENINFO_FILES ${OUTFILE})

    add_custom_command(OUTPUT ${OUTFILE}
      COMMAND test -f "${TDIR}/${FILE}.gcda"
        && ${GCOV_ENV} ${GENINFO_BIN} --quiet --base-directory
          ${PROJECT_SOURCE_DIR} --gcov-tool ${GCOV_BIN}
          --output-filename ${OUTFILE} ${GENINFO_EXTERN_FLAG}
          ${TDIR}/${FILE}.gcda
        || cp ${OUTFILE}.init ${OUTFILE}
      DEPENDS ${TNAME} ${TNAME}-capture-init
      COMMENT "Capturing coverage data for ${FILE}"
    )
  endforeach()

  # Concatenate all files generated by geninfo to a single file per target.
  set(OUTFILE "${LCOV_DATA_PATH_CAPTURE}/${TNAME}.info")
  lcov_merge_files("${OUTFILE}" ${GENINFO_FILES})
  add_custom_target(${TNAME}-geninfo DEPENDS ${OUTFILE})

  # add geninfo file generation to global lcov-capture target
  add_dependencies(lcov-capture ${TNAME}-geninfo)
  set(LCOV_CAPTURE_FILES "${LCOV_CAPTURE_FILES}" "${OUTFILE}" CACHE INTERNAL
    ""
  )

  # Add target for generating html output for this target only.
  file(MAKE_DIRECTORY ${LCOV_HTML_PATH}/${TNAME})
  add_custom_target(${TNAME}-genhtml
    COMMAND ${GENHTML_BIN} --quiet --sort --prefix ${PROJECT_SOURCE_DIR}
      --baseline-file ${LCOV_DATA_PATH_INIT}/${TNAME}.info
      --output-directory ${LCOV_HTML_PATH}/${TNAME}
      --title "${CMAKE_PROJECT_NAME} - target ${TNAME}"
      ${GENHTML_CPPFILT_FLAG} ${OUTFILE}
    DEPENDS ${TNAME}-geninfo ${TNAME}-capture-init
  )
endfunction (lcov_capture_tgt)


# This function will generate the global info file for all targets. It has to be
# called after all other CMake functions in the root CMakeLists.txt file, to get
# a full list of all targets that generate coverage data.
function (lcov_capture)
  # Skip this function (and do not create the following targets), if there are
  # no input files.
  if ("${LCOV_CAPTURE_FILES}" STREQUAL "")
    return()
  endif ()

  # Add a new target to merge the files of all targets.
  set(OUTFILE "${LCOV_DATA_PATH_CAPTURE}/all_targets.info")
  lcov_merge_files("${OUTFILE}" ${LCOV_CAPTURE_FILES})
  add_custom_target(lcov-geninfo DEPENDS ${OUTFILE} lcov-capture)

  # Add a new global target for all lcov targets. This target could be used to
  # generate the lcov html output for the whole project instead of calling
  # <TARGET>-geninfo and <TARGET>-genhtml for each target. It will also be
  # used to generate a html site for all project data together instead of one
  # for each target.
  if (NOT TARGET lcov)
    file(MAKE_DIRECTORY ${LCOV_HTML_PATH}/all_targets)
    add_custom_target(lcov
      COMMAND ${GENHTML_BIN} --quiet --sort
        --baseline-file ${LCOV_DATA_PATH_INIT}/all_targets.info
        --output-directory ${LCOV_HTML_PATH}/all_targets
        --title "${CMAKE_PROJECT_NAME}" --prefix "${PROJECT_SOURCE_DIR}"
        ${GENHTML_CPPFILT_FLAG} ${OUTFILE}
      DEPENDS lcov-geninfo-init lcov-geninfo
    )
  endif ()
endfunction (lcov_capture)




# Add a new global target to generate the lcov html report for the whole project
# instead of calling <TARGET>-genhtml for each target (to create an own report
# for each target). Instead of the lcov target it does not require geninfo for
# all targets, so you have to call <TARGET>-geninfo to generate the info files
# the targets you'd like to have in your report or lcov-geninfo for generating
# info files for all targets before calling lcov-genhtml.
file(MAKE_DIRECTORY ${LCOV_HTML_PATH}/selected_targets)
if (NOT TARGET lcov-genhtml)
  add_custom_target(lcov-genhtml
    COMMAND ${GENHTML_BIN}
      --quiet
      --output-directory ${LCOV_HTML_PATH}/selected_targets
      --title \"${CMAKE_PROJECT_NAME} - targets `find
        ${LCOV_DATA_PATH_CAPTURE} -name \"*.info\" ! -name
        \"all_targets.info\" -exec basename {} .info \\\;`\"
      --prefix ${PROJECT_SOURCE_DIR}
      --sort
      ${GENHTML_CPPFILT_FLAG}
      `find ${LCOV_DATA_PATH_CAPTURE} -name \"*.info\" ! -name
        \"all_targets.info\"`
  )
endif (NOT TARGET lcov-genhtml)
@@ -1,258 +0,0 @@
# This file is part of CMake-codecov.
#
# Copyright (c)
#   2015-2017 RWTH Aachen University, Federal Republic of Germany
#
# See the LICENSE file in the package base directory for details
#
# Written by Alexander Haase, alexander.haase@rwth-aachen.de
#


# Add an option to choose, if coverage should be enabled or not. If enabled
# marked targets will be build with coverage support and appropriate targets
# will be added. If disabled coverage will be ignored for *ALL* targets.
option(ENABLE_COVERAGE "Enable coverage build." OFF)

set(COVERAGE_FLAG_CANDIDATES
  # gcc and clang
  "-O0 -g -fprofile-arcs -ftest-coverage"

  # gcc and clang fallback
  "-O0 -g --coverage"
)


# Add coverage support for target ${TNAME} and register target for coverage
# evaluation. If coverage is disabled or not supported, this function will
# simply do nothing.
#
# Note: This function is only a wrapper to define this function always, even if
#   coverage is not supported by the compiler or disabled. This function must
#   be defined here, because the module will be exited, if there is no coverage
#   support by the compiler or it is disabled by the user.
function (add_coverage TNAME)
  # only add coverage for target, if coverage is support and enabled.
  if (ENABLE_COVERAGE)
    foreach (TNAME ${ARGV})
      add_coverage_target(${TNAME})
    endforeach ()
  endif ()
endfunction (add_coverage)


# Add global target to gather coverage information after all targets have been
# added. Other evaluation functions could be added here, after checks for the
# specific module have been passed.
#
# Note: This function is only a wrapper to define this function always, even if
#   coverage is not supported by the compiler or disabled. This function must
#   be defined here, because the module will be exited, if there is no coverage
#   support by the compiler or it is disabled by the user.
function (coverage_evaluate)
  # add lcov evaluation
  if (LCOV_FOUND)
    lcov_capture_initial()
    lcov_capture()
  endif (LCOV_FOUND)
endfunction ()


# Exit this module, if coverage is disabled. add_coverage is defined before this
# return, so this module can be exited now safely without breaking any build-
# scripts.
if (NOT ENABLE_COVERAGE)
  return()
endif ()




# Find the reuired flags foreach language.
set(CMAKE_REQUIRED_QUIET_SAVE ${CMAKE_REQUIRED_QUIET})
set(CMAKE_REQUIRED_QUIET ${codecov_FIND_QUIETLY})

get_property(ENABLED_LANGUAGES GLOBAL PROPERTY ENABLED_LANGUAGES)
foreach (LANG ${ENABLED_LANGUAGES})
  # Coverage flags are not dependent on language, but the used compiler. So
  # instead of searching flags foreach language, search flags foreach compiler
  # used.
  set(COMPILER ${CMAKE_${LANG}_COMPILER_ID})
  if (NOT COVERAGE_${COMPILER}_FLAGS)
    foreach (FLAG ${COVERAGE_FLAG_CANDIDATES})
      if(NOT CMAKE_REQUIRED_QUIET)
        message(STATUS "Try ${COMPILER} code coverage flag = [${FLAG}]")
      endif()

      set(CMAKE_REQUIRED_FLAGS "${FLAG}")
      unset(COVERAGE_FLAG_DETECTED CACHE)

      if (${LANG} STREQUAL "C")
        include(CheckCCompilerFlag)
        check_c_compiler_flag("${FLAG}" COVERAGE_FLAG_DETECTED)

      elseif (${LANG} STREQUAL "CXX")
        include(CheckCXXCompilerFlag)
        check_cxx_compiler_flag("${FLAG}" COVERAGE_FLAG_DETECTED)

      elseif (${LANG} STREQUAL "Fortran")
        # CheckFortranCompilerFlag was introduced in CMake 3.x. To be
        # compatible with older Cmake versions, we will check if this
        # module is present before we use it. Otherwise we will define
        # Fortran coverage support as not available.
        include(CheckFortranCompilerFlag OPTIONAL
          RESULT_VARIABLE INCLUDED)
        if (INCLUDED)
          check_fortran_compiler_flag("${FLAG}"
            COVERAGE_FLAG_DETECTED)
        elseif (NOT CMAKE_REQUIRED_QUIET)
          message("-- Performing Test COVERAGE_FLAG_DETECTED")
          message("-- Performing Test COVERAGE_FLAG_DETECTED - Failed"
            " (Check not supported)")
        endif ()
      endif()

      if (COVERAGE_FLAG_DETECTED)
        set(COVERAGE_${COMPILER}_FLAGS "${FLAG}"
          CACHE STRING "${COMPILER} flags for code coverage.")
        mark_as_advanced(COVERAGE_${COMPILER}_FLAGS)
        break()
      else ()
        message(WARNING "Code coverage is not available for ${COMPILER}"
          " compiler. Targets using this compiler will be "
          "compiled without it.")
      endif ()
    endforeach ()
  endif ()
endforeach ()

set(CMAKE_REQUIRED_QUIET ${CMAKE_REQUIRED_QUIET_SAVE})




# Helper function to get the language of a source file.
function (codecov_lang_of_source FILE RETURN_VAR)
  get_filename_component(FILE_EXT "${FILE}" EXT)
  string(TOLOWER "${FILE_EXT}" FILE_EXT)
  string(SUBSTRING "${FILE_EXT}" 1 -1 FILE_EXT)

  get_property(ENABLED_LANGUAGES GLOBAL PROPERTY ENABLED_LANGUAGES)
  foreach (LANG ${ENABLED_LANGUAGES})
    list(FIND CMAKE_${LANG}_SOURCE_FILE_EXTENSIONS "${FILE_EXT}" TEMP)
    if (NOT ${TEMP} EQUAL -1)
      set(${RETURN_VAR} "${LANG}" PARENT_SCOPE)
      return()
    endif ()
  endforeach()

  set(${RETURN_VAR} "" PARENT_SCOPE)
endfunction ()


# Helper function to get the relative path of the source file destination path.
# This path is needed by FindGcov and FindLcov cmake files to locate the
# captured data.
function (codecov_path_of_source FILE RETURN_VAR)
  string(REGEX MATCH "TARGET_OBJECTS:([^ >]+)" _source ${FILE})

  # If expression was found, SOURCEFILE is a generator-expression for an
  # object library. Currently we found no way to call this function automatic
  # for the referenced target, so it must be called in the directoryso of the
  # object library definition.
  if (NOT "${_source}" STREQUAL "")
    set(${RETURN_VAR} "" PARENT_SCOPE)
    return()
  endif ()


  string(REPLACE "${CMAKE_CURRENT_BINARY_DIR}/" "" FILE "${FILE}")
  if(IS_ABSOLUTE ${FILE})
    file(RELATIVE_PATH FILE ${CMAKE_CURRENT_SOURCE_DIR} ${FILE})
  endif()

  # get the right path for file
  string(REPLACE ".." "__" PATH "${FILE}")

  set(${RETURN_VAR} "${PATH}" PARENT_SCOPE)
endfunction()




# Add coverage support for target ${TNAME} and register target for coverage
# evaluation.
function(add_coverage_target TNAME)
  # Check if all sources for target use the same compiler. If a target uses
  # e.g. C and Fortran mixed and uses different compilers (e.g. clang and
  # gfortran) this can trigger huge problems, because different compilers may
  # use different implementations for code coverage.
  get_target_property(TSOURCES ${TNAME} SOURCES)
  set(TARGET_COMPILER "")
  set(ADDITIONAL_FILES "")
  foreach (FILE ${TSOURCES})
    # If expression was found, FILE is a generator-expression for an object
    # library. Object libraries will be ignored.
    string(REGEX MATCH "TARGET_OBJECTS:([^ >]+)" _file ${FILE})
    if ("${_file}" STREQUAL "")
      codecov_lang_of_source(${FILE} LANG)
      if (LANG)
        list(APPEND TARGET_COMPILER ${CMAKE_${LANG}_COMPILER_ID})

        list(APPEND ADDITIONAL_FILES "${FILE}.gcno")
        list(APPEND ADDITIONAL_FILES "${FILE}.gcda")
      endif ()
    endif ()
  endforeach ()

  list(REMOVE_DUPLICATES TARGET_COMPILER)
  list(LENGTH TARGET_COMPILER NUM_COMPILERS)

  if (NUM_COMPILERS GREATER 1)
    message(WARNING "Can't use code coverage for target ${TNAME}, because "
      "it will be compiled by incompatible compilers. Target will be "
      "compiled without code coverage.")
    return()

  elseif (NUM_COMPILERS EQUAL 0)
    message(WARNING "Can't use code coverage for target ${TNAME}, because "
      "it uses an unknown compiler. Target will be compiled without "
      "code coverage.")
    return()

  elseif (NOT DEFINED "COVERAGE_${TARGET_COMPILER}_FLAGS")
    # A warning has been printed before, so just return if flags for this
    # compiler aren't available.
    return()
  endif()


  # enable coverage for target
  set_property(TARGET ${TNAME} APPEND_STRING
    PROPERTY COMPILE_FLAGS " ${COVERAGE_${TARGET_COMPILER}_FLAGS}")
  set_property(TARGET ${TNAME} APPEND_STRING
    PROPERTY LINK_FLAGS " ${COVERAGE_${TARGET_COMPILER}_FLAGS}")


  # Add gcov files generated by compiler to clean target.
  set(CLEAN_FILES "")
  foreach (FILE ${ADDITIONAL_FILES})
    codecov_path_of_source(${FILE} FILE)
    list(APPEND CLEAN_FILES "CMakeFiles/${TNAME}.dir/${FILE}")
  endforeach()

  set_directory_properties(PROPERTIES ADDITIONAL_MAKE_CLEAN_FILES
    "${CLEAN_FILES}")


  add_gcov_target(${TNAME})
  add_lcov_target(${TNAME})
endfunction(add_coverage_target)




# Include modules for parsing the collected data and output it in a readable
# format (like gcov and lcov).
find_package(Gcov)
find_package(Lcov)
@@ -1,10 +0,0 @@
includedir=@CMAKE_INSTALL_FULL_INCLUDEDIR@
libdir=@CMAKE_INSTALL_FULL_LIBDIR@
pkg_version=@Catch2_VERSION@

Name: Catch2-With-Main
Description: A modern, C++-native test framework for C++14 and above (links in default main)
Version: ${pkg_version}
Requires: catch2 = ${pkg_version}
Cflags: -I${includedir}
Libs: -L${libdir} -lCatch2Main
@@ -1,11 +0,0 @@
prefix=@CMAKE_INSTALL_PREFIX@
exec_prefix=${prefix}
includedir=@CMAKE_INSTALL_FULL_INCLUDEDIR@
libdir=@CMAKE_INSTALL_FULL_LIBDIR@

Name: Catch2
Description: A modern, C++-native, test framework for C++14 and above
URL: https://github.com/catchorg/Catch2
Version: @Catch2_VERSION@
Cflags: -I${includedir}
Libs: -L${libdir} -lCatch2
@@ -1,56 +0,0 @@
#!/bin/sh

# This file is part of CMake-codecov.
#
# Copyright (c)
#   2015-2017 RWTH Aachen University, Federal Republic of Germany
#
# See the LICENSE file in the package base directory for details
#
# Written by Alexander Haase, alexander.haase@rwth-aachen.de
#

if [ -z "$LLVM_COV_BIN" ]
then
    echo "LLVM_COV_BIN not set!" >& 2
    exit 1
fi


# Get LLVM version to find out.
LLVM_VERSION=$($LLVM_COV_BIN -version | grep -i "LLVM version" \
    | sed "s/^\([A-Za-z ]*\)\([0-9]\).\([0-9]\).*$/\2.\3/g")

if [ "$1" = "-v" ]
then
    echo "llvm-cov-wrapper $LLVM_VERSION"
    exit 0
fi


if [ -n "$LLVM_VERSION" ]
then
    MAJOR=$(echo $LLVM_VERSION | cut -d'.' -f1)
    MINOR=$(echo $LLVM_VERSION | cut -d'.' -f2)

    if [ $MAJOR -eq 3 ] && [ $MINOR -le 4 ]
    then
        if [ -f "$1" ]
        then
            filename=$(basename "$1")
            extension="${filename##*.}"

            case "$extension" in
                "gcno") exec $LLVM_COV_BIN --gcno="$1" ;;
                "gcda") exec $LLVM_COV_BIN --gcda="$1" ;;
            esac
        fi
    fi

    if [ $MAJOR -eq 3 ] && [ $MINOR -le 5 ]
    then
        exec $LLVM_COV_BIN $@
    fi
fi

exec $LLVM_COV_BIN gcov $@
@@ -1,201 +0,0 @@
cmake_minimum_required(VERSION 3.10)

# detect if Catch is being bundled,
# disable testsuite in that case
if(NOT DEFINED PROJECT_NAME)
  set(NOT_SUBPROJECT ON)
else()
  set(NOT_SUBPROJECT OFF)
endif()

option(CATCH_INSTALL_DOCS "Install documentation alongside library" ON)
option(CATCH_INSTALL_EXTRAS "Install extras (CMake scripts, debugger helpers) alongside library" ON)
option(CATCH_DEVELOPMENT_BUILD "Build tests, enable warnings, enable Werror, etc" OFF)
option(CATCH_ENABLE_REPRODUCIBLE_BUILD "Add compiler flags for improving build reproducibility" ON)

include(CMakeDependentOption)
cmake_dependent_option(CATCH_BUILD_TESTING "Build the SelfTest project" ON "CATCH_DEVELOPMENT_BUILD" OFF)
cmake_dependent_option(CATCH_BUILD_EXAMPLES "Build code examples" OFF "CATCH_DEVELOPMENT_BUILD" OFF)
cmake_dependent_option(CATCH_BUILD_EXTRA_TESTS "Build extra tests" OFF "CATCH_DEVELOPMENT_BUILD" OFF)
cmake_dependent_option(CATCH_BUILD_FUZZERS "Build fuzzers" OFF "CATCH_DEVELOPMENT_BUILD" OFF)
cmake_dependent_option(CATCH_ENABLE_COVERAGE "Generate coverage for codecov.io" OFF "CATCH_DEVELOPMENT_BUILD" OFF)
cmake_dependent_option(CATCH_ENABLE_WERROR "Enables Werror during build" ON "CATCH_DEVELOPMENT_BUILD" OFF)
cmake_dependent_option(CATCH_BUILD_SURROGATES "Enable generating and building surrogate TUs for the main headers" OFF "CATCH_DEVELOPMENT_BUILD" OFF)
cmake_dependent_option(CATCH_ENABLE_CONFIGURE_TESTS "Enable CMake configuration tests. WARNING: VERY EXPENSIVE" OFF "CATCH_DEVELOPMENT_BUILD" OFF)
cmake_dependent_option(CATCH_ENABLE_CMAKE_HELPER_TESTS "Enable CMake helper tests. WARNING: VERY EXPENSIVE" OFF "CATCH_DEVELOPMENT_BUILD" OFF)


# Catch2's build breaks if done in-tree. You probably should not build
# things in tree anyway, but we can allow projects that include Catch2
# as a subproject to build in-tree as long as it is not in our tree.
if (CMAKE_BINARY_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR)
  message(FATAL_ERROR "Building in-source is not supported! Create a build dir and remove ${CMAKE_SOURCE_DIR}/CMakeCache.txt")
endif()

project(Catch2
  VERSION 3.7.1 # CML version placeholder, don't delete
  LANGUAGES CXX
  # HOMEPAGE_URL is not supported until CMake version 3.12, which
  # we do not target yet.
  # HOMEPAGE_URL "https://github.com/catchorg/Catch2"
  DESCRIPTION "A modern, C++-native, unit test framework."
)


# Provide path for scripts. We first add path to the scripts we don't use,
# but projects including us might, and set the path up to parent scope.
# Then we also add path that we use to configure the project, but is of
# no use to top level projects.
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_LIST_DIR}/extras")
if (NOT NOT_SUBPROJECT)
  set(CMAKE_MODULE_PATH "${CMAKE_MODULE_PATH}" PARENT_SCOPE)
endif()
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_LIST_DIR}/CMake")

include(GNUInstallDirs)
include(CMakePackageConfigHelpers)
include(CatchConfigOptions)
if(CATCH_DEVELOPMENT_BUILD)
  include(CTest)
endif()

# This variable is used in some subdirectories, so we need it here, rather
# than later in the install block
set(CATCH_CMAKE_CONFIG_DESTINATION "${CMAKE_INSTALL_LIBDIR}/cmake/Catch2")

# We have some Windows builds that test `wmain` entry point,
# and we need this change to be present in all binaries that
# are built during these tests, so this is required here, before
# the subdirectories are added.
if(CATCH_TEST_USE_WMAIN)
  set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /ENTRY:wmainCRTStartup")
endif()


# Basic paths
set(CATCH_DIR ${CMAKE_CURRENT_SOURCE_DIR})
set(SOURCES_DIR ${CATCH_DIR}/src/catch2)
set(SELF_TEST_DIR ${CATCH_DIR}/tests/SelfTest)

# We need to bring-in the variables defined there to this scope
add_subdirectory(src)

# Build tests only if requested
if (BUILD_TESTING AND CATCH_BUILD_TESTING AND NOT_SUBPROJECT)
  find_package(PythonInterp 3 REQUIRED)
  if (NOT PYTHONINTERP_FOUND)
    message(FATAL_ERROR "Python not found, but required for tests")
  endif()
  add_subdirectory(tests)
endif()

if(CATCH_BUILD_EXAMPLES)
  add_subdirectory(examples)
endif()

if(CATCH_BUILD_EXTRA_TESTS)
  add_subdirectory(tests/ExtraTests)
endif()

if(CATCH_BUILD_FUZZERS)
  add_subdirectory(fuzzing)
endif()

if (CATCH_DEVELOPMENT_BUILD)
  add_warnings_to_targets("${CATCH_WARNING_TARGETS}")
endif()

# Only perform the installation steps when Catch is not being used as
# a subproject via `add_subdirectory`, or the destinations will break,
# see https://github.com/catchorg/Catch2/issues/1373
if (NOT_SUBPROJECT)
  configure_package_config_file(
    ${CMAKE_CURRENT_LIST_DIR}/CMake/Catch2Config.cmake.in
    ${CMAKE_CURRENT_BINARY_DIR}/Catch2Config.cmake
    INSTALL_DESTINATION
      ${CATCH_CMAKE_CONFIG_DESTINATION}
  )

  write_basic_package_version_file(
    "${CMAKE_CURRENT_BINARY_DIR}/Catch2ConfigVersion.cmake"
    COMPATIBILITY
      SameMajorVersion
  )

  install(
    FILES
      "${CMAKE_CURRENT_BINARY_DIR}/Catch2Config.cmake"
      "${CMAKE_CURRENT_BINARY_DIR}/Catch2ConfigVersion.cmake"
    DESTINATION
      ${CATCH_CMAKE_CONFIG_DESTINATION}
  )

  # Install documentation
  if(CATCH_INSTALL_DOCS)
    install(
      DIRECTORY
        docs/
      DESTINATION
        "${CMAKE_INSTALL_DOCDIR}"
      PATTERN "doxygen" EXCLUDE
    )
  endif()

  if(CATCH_INSTALL_EXTRAS)
    # Install CMake scripts
    install(
      FILES
        "extras/ParseAndAddCatchTests.cmake"
        "extras/Catch.cmake"
        "extras/CatchAddTests.cmake"
        "extras/CatchShardTests.cmake"
        "extras/CatchShardTestsImpl.cmake"
      DESTINATION
        ${CATCH_CMAKE_CONFIG_DESTINATION}
    )

    # Install debugger helpers
    install(
      FILES
        "extras/gdbinit"
        "extras/lldbinit"
      DESTINATION
        ${CMAKE_INSTALL_DATAROOTDIR}/Catch2
    )
  endif()

  ## Provide some pkg-config integration
  set(PKGCONFIG_INSTALL_DIR
    "${CMAKE_INSTALL_DATAROOTDIR}/pkgconfig"
    CACHE PATH "Path where catch2.pc is installed"
  )
  configure_file(
    ${CMAKE_CURRENT_SOURCE_DIR}/CMake/catch2.pc.in
    ${CMAKE_CURRENT_BINARY_DIR}/catch2.pc
    @ONLY
  )
  configure_file(
    ${CMAKE_CURRENT_SOURCE_DIR}/CMake/catch2-with-main.pc.in
    ${CMAKE_CURRENT_BINARY_DIR}/catch2-with-main.pc
    @ONLY
  )
  install(
    FILES
      "${CMAKE_CURRENT_BINARY_DIR}/catch2.pc"
      "${CMAKE_CURRENT_BINARY_DIR}/catch2-with-main.pc"
    DESTINATION
      ${PKGCONFIG_INSTALL_DIR}
  )

  # CPack/CMake started taking the package version from project version 3.12
  # So we need to set the version manually for older CMake versions
  if(${CMAKE_VERSION} VERSION_LESS "3.12.0")
    set(CPACK_PACKAGE_VERSION ${PROJECT_VERSION})
  endif()

  set(CPACK_PACKAGE_CONTACT "https://github.com/catchorg/Catch2/")


  include( CPack )

endif()
@@ -1,26 +0,0 @@
{
  "version": 3,
  "configurePresets": [
    {
      "name": "basic-tests",
      "displayName": "Basic development build",
      "description": "Enables development build with basic tests that are cheap to build and run",
      "cacheVariables": {
        "CATCH_DEVELOPMENT_BUILD": "ON"
      }
    },
    {
      "name": "all-tests",
      "inherits": "basic-tests",
      "displayName": "Full development build",
      "description": "Enables development build with examples and ALL tests",
      "cacheVariables": {
        "CATCH_BUILD_EXAMPLES": "ON",
        "CATCH_BUILD_EXTRA_TESTS": "ON",
        "CATCH_BUILD_SURROGATES": "ON",
        "CATCH_ENABLE_CONFIGURE_TESTS": "ON",
        "CATCH_ENABLE_CMAKE_HELPER_TESTS": "ON"
      }
    }
  ]
}
@@ -1,46 +0,0 @@
# Contributor Covenant Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

## Scope

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at github@philnash.me. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]

[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/
File diff suppressed because it is too large
@ -1,23 +0,0 @@
Boost Software License - Version 1.0 - August 17th, 2003

Permission is hereby granted, free of charge, to any person or organization
obtaining a copy of the software and accompanying documentation covered by
this license (the "Software") to use, reproduce, display, distribute,
execute, and transmit the Software, and to prepare derivative works of the
Software, and to permit third-parties to whom the Software is furnished to
do so, all subject to the following:

The copyright notices in the Software and this entire statement, including
the above license grant, this restriction and the following disclaimer,
must be included in all copies of the Software, in whole or in part, and
all derivative works of the Software, unless such copies or derivative
works are solely in the form of machine-executable object code generated by
a source language processor.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
@ -1,3 +0,0 @@
module(name = "catch2")

bazel_dep(name = "bazel_skylib", version = "1.7.1")
@ -1,103 +0,0 @@
<a id="top"></a>

[Releases](https://github.com/catchorg/catch2/releases)
[Linux simple builds](https://github.com/catchorg/Catch2/actions/workflows/linux-simple-builds.yml)
[Linux other builds](https://github.com/catchorg/Catch2/actions/workflows/linux-other-builds.yml)
[Mac builds](https://github.com/catchorg/Catch2/actions/workflows/mac-builds.yml)
[AppVeyor builds](https://ci.appveyor.com/project/catchorg/catch2)
[Code coverage](https://codecov.io/gh/catchorg/Catch2)
[Try it on Compiler Explorer](https://godbolt.org/z/EdoY15q9G)
[Discord](https://discord.gg/4CWS9zD)

## What is Catch2?

Catch2 is mainly a unit testing framework for C++, but it also
provides basic micro-benchmarking features and simple BDD macros.

Catch2's main advantage is that using it is both simple and natural.
Test names do not have to be valid identifiers, assertions look like
normal C++ boolean expressions, and sections provide a nice and local way
to share set-up and tear-down code in tests.

**Example unit test**
```cpp
#include <catch2/catch_test_macros.hpp>

#include <cstdint>

uint32_t factorial( uint32_t number ) {
    return number <= 1 ? number : factorial(number-1) * number;
}

TEST_CASE( "Factorials are computed", "[factorial]" ) {
    REQUIRE( factorial( 1) == 1 );
    REQUIRE( factorial( 2) == 2 );
    REQUIRE( factorial( 3) == 6 );
    REQUIRE( factorial(10) == 3'628'800 );
}
```
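
The factorial example above does not show the sections mentioned earlier, so here is a minimal sketch of how `SECTION` blocks share the set-up and tear-down of their enclosing `TEST_CASE` (the vector scenario is illustrative and not part of the original example):

```cpp
#include <catch2/catch_test_macros.hpp>

#include <vector>

TEST_CASE( "vectors can be sized and resized", "[vector]" ) {
    // Set-up shared by every SECTION below; it runs again for each section.
    std::vector<int> v( 5 );
    REQUIRE( v.size() == 5 );

    SECTION( "resizing bigger changes size and capacity" ) {
        v.resize( 10 );
        REQUIRE( v.size() == 10 );
        REQUIRE( v.capacity() >= 10 );
    }
    SECTION( "resizing smaller changes size but not capacity" ) {
        v.resize( 0 );
        REQUIRE( v.size() == 0 );
        REQUIRE( v.capacity() >= 5 );
    }
}
```

Each `SECTION` gets a fresh pass through the test case, so the vector is reconstructed before every branch.
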

**Example microbenchmark**
```cpp
#include <catch2/catch_test_macros.hpp>
#include <catch2/benchmark/catch_benchmark.hpp>

#include <cstdint>

uint64_t fibonacci(uint64_t number) {
    return number < 2 ? number : fibonacci(number - 1) + fibonacci(number - 2);
}

TEST_CASE("Benchmark Fibonacci", "[!benchmark]") {
    REQUIRE(fibonacci(5) == 5);

    REQUIRE(fibonacci(20) == 6'765);
    BENCHMARK("fibonacci 20") {
        return fibonacci(20);
    };

    REQUIRE(fibonacci(25) == 75'025);
    BENCHMARK("fibonacci 25") {
        return fibonacci(25);
    };
}
```

_Note that benchmarks are not run by default, so you need to run them explicitly
with the `[!benchmark]` tag._


## Catch2 v3 has been released!

You are on the `devel` branch, where the v3 version is being developed.
v3 brings a bunch of significant changes, the big one being that Catch2
is no longer a single-header library. Catch2 now behaves as a normal
library, with multiple headers and separately compiled implementation.
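
In practice the change shows up in how a test file is included and linked; a minimal sketch for contrast (the v2 lines describe the old single-header usage and are not taken from this README):

```cpp
// Catch2 v2: a single amalgamated header, with the implementation
// pulled in by defining a macro in exactly one translation unit:
//   #define CATCH_CONFIG_MAIN
//   #include <catch2/catch.hpp>

// Catch2 v3: split headers plus a separately compiled library;
// link against Catch2::Catch2WithMain to get a default main().
#include <catch2/catch_test_macros.hpp>

TEST_CASE( "v3-style include" ) {
    REQUIRE( 1 + 1 == 2 );
}
```
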

The documentation is slowly being updated to take these changes into
account, but this work is currently still ongoing.

For migrating from the v2 releases to v3, you should look at [our
documentation](docs/migrate-v2-to-v3.md#top). It provides simple
guidelines on getting started, and collects the most common migration
problems.

For the previous major version of Catch2 [look into the `v2.x` branch
here on GitHub](https://github.com/catchorg/Catch2/tree/v2.x).


## How to use it
This documentation comprises these three parts:

* [Why do we need yet another C++ Test Framework?](docs/why-catch.md#top)
* [Tutorial](docs/tutorial.md#top) - getting started
* [Reference section](docs/Readme.md#top) - all the details


## More
* Issues and bugs can be raised on the [Issue tracker on GitHub](https://github.com/catchorg/Catch2/issues)
* For discussion or questions please use [our Discord](https://discord.gg/4CWS9zD)
* See who else is using Catch2 in [Open Source Software](docs/opensource-users.md#top)
  or [commercially](docs/commercial-users.md#top).
@ -1,19 +0,0 @@
# Security Policy

## Supported Versions

* Versions 1.x (branch Catch1.x) are no longer supported.
* Versions 2.x (branch v2.x) are currently supported.
* The `devel` branch serves for stable-ish development and is supported,
  but branches `devel-*` are considered short-lived and are not supported separately.


## Reporting a Vulnerability

Due to its nature as a _unit_ test framework, Catch2 shouldn't interact
with untrusted inputs and there shouldn't be many security vulnerabilities
in it.

However, if you find one, send an email to martin <dot> horenovsky <at>
gmail <dot> com. If you want to encrypt the email, my PGP key is
`E29C 46F3 B8A7 5028 6079 3B7D ECC9 C20E 314B 2360`.
@ -1,16 +0,0 @@
workspace(name = "catch2")

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "bazel_skylib",
    sha256 = "bc283cdfcd526a52c3201279cda4bc298652efa898b10b4db0837dc51652756f",
    urls = [
        "https://mirror.bazel.build/github.com/bazelbuild/bazel-skylib/releases/download/1.7.1/bazel-skylib-1.7.1.tar.gz",
        "https://github.com/bazelbuild/bazel-skylib/releases/download/1.7.1/bazel-skylib-1.7.1.tar.gz",
    ],
)

load("@bazel_skylib//:workspace.bzl", "bazel_skylib_workspace")

bazel_skylib_workspace()
@ -1,83 +0,0 @@
version: "{build}-{branch}"

# If we ever get a backlog larger than clone_depth, builds will fail
# spuriously. I do not think we will ever get 20 commits deep though.
clone_depth: 20

# We want to build everything, except for branches that are explicitly
# for messing around with Github Actions.
branches:
  except:
    - /devel-gha.+/


# We need a more up to date pip because Python 2.7 is EOL soon
init:
  - set PATH=C:\Python35;C:\Python35\Scripts;%PATH%


install:
  - ps: if (($env:CONFIGURATION) -eq "Debug" -And ($env:coverage) -eq "1" ) { pip --disable-pip-version-check install codecov }
  # This removes our changes to PATH. Keep this step last!
  - ps: if (($env:CONFIGURATION) -eq "Debug" -And ($env:coverage) -eq "1" ) { .\tools\misc\installOpenCppCoverage.ps1 }


before_build:
  # We need to modify PATH again, because it was reset since the "init" step
  - set PATH=C:\Python35;C:\Python35\Scripts;%PATH%
  - set CXXFLAGS=%additional_flags%
  # If we are building examples/extra-tests, we need to regenerate the amalgamated files
  - cmd: if "%examples%"=="1" ( python .\tools\scripts\generateAmalgamatedFiles.py )
  # Indirection because appveyor doesn't handle multiline batch scripts properly
  # https://stackoverflow.com/questions/37627248/how-to-split-a-command-over-multiple-lines-in-appveyor-yml/37647169#37647169
  # https://help.appveyor.com/discussions/questions/3888-multi-line-cmd-or-powershell-warning-ignore
  - cmd: .\tools\misc\appveyorBuildConfigurationScript.bat


# build with MSBuild
build:
  project: Build\Catch2.sln    # path to Visual Studio solution or project
  parallel: true               # enable MSBuild parallel builds
  verbosity: normal            # MSBuild verbosity level {quiet|minimal|normal|detailed}

test_script:
  - set CTEST_OUTPUT_ON_FAILURE=1
  - cmd: .\tools\misc\appveyorTestRunScript.bat


# Sadly we cannot use the standard "dimensions" based approach towards
# specifying the different builds, as there is no way to add one-off
# builds afterwards. This means that we will painfully specify each
# build explicitly.
environment:
  matrix:
    - FLAVOR: VS 2019 x64 Debug Coverage Examples
      APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2019
      examples: 1
      coverage: 1
      platform: x64
      configuration: Debug

    - FLAVOR: VS 2019 x64 Debug WMain
      APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2019
      wmain: 1
      additional_flags: "/D_UNICODE /DUNICODE"
      platform: x64
      configuration: Debug

    - FLAVOR: VS 2019 x64 Debug Latest Strict
      APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2019
      additional_flags: "/permissive- /std:c++latest"
      platform: x64
      configuration: Debug

    - FLAVOR: VS 2017 x64 Debug
      APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2017
      platform: x64
      configuration: Debug

    - FLAVOR: VS 2017 x64 Release Coverage
      APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2017
      coverage: 1
      platform: x64
      configuration: Debug
@ -1,22 +0,0 @@
coverage:
  precision: 2
  round: nearest
  range: "60...90"
  status:
    project:
      default:
        threshold: 2%
    patch:
      default:
        target: 80%
ignore:
  - "**/external/clara.hpp"
  - "tests"


codecov:
  branch: devel
  max_report_age: off

comment:
  layout: "diff"
@ -1,129 +0,0 @@
#!/usr/bin/env python
from conan import ConanFile
from conan.tools.cmake import CMake, CMakeToolchain, CMakeDeps, cmake_layout
from conan.tools.files import copy, rmdir
from conan.tools.build import check_min_cppstd
from conan.tools.scm import Version
from conan.errors import ConanInvalidConfiguration
import os
import re

required_conan_version = ">=1.53.0"

class CatchConan(ConanFile):
    name = "catch2"
    description = "A modern, C++-native, framework for unit-tests, TDD and BDD"
    topics = ("conan", "catch2", "unit-test", "tdd", "bdd")
    url = "https://github.com/catchorg/Catch2"
    homepage = url
    license = "BSL-1.0"
    version = "latest"
    settings = "os", "compiler", "build_type", "arch"
    extension_properties = {"compatibility_cppstd": False}

    options = {
        "shared": [True, False],
        "fPIC": [True, False],
    }
    default_options = {
        "shared": False,
        "fPIC": True,
    }

    @property
    def _min_cppstd(self):
        return "14"

    @property
    def _compilers_minimum_version(self):
        return {
            "gcc": "7",
            "Visual Studio": "15",
            "msvc": "191",
            "clang": "5",
            "apple-clang": "10",
        }


    def set_version(self):
        pattern = re.compile(r"\w*VERSION (\d+\.\d+\.\d+) # CML version placeholder, don't delete")
        with open("CMakeLists.txt") as file:
            for line in file:
                result = pattern.search(line)
                if result:
                    self.version = result.group(1)

        self.output.info(f'Using version: {self.version}')

    def export(self):
        copy(self, "LICENSE.txt", src=self.recipe_folder, dst=self.export_folder)

    def export_sources(self):
        copy(self, "CMakeLists.txt", src=self.recipe_folder, dst=self.export_sources_folder)
        copy(self, "src/*", src=self.recipe_folder, dst=self.export_sources_folder)
        copy(self, "extras/*", src=self.recipe_folder, dst=self.export_sources_folder)
        copy(self, "CMake/*", src=self.recipe_folder, dst=self.export_sources_folder)

    def config_options(self):
        if self.settings.os == "Windows":
            del self.options.fPIC

    def configure(self):
        if self.options.shared:
            self.options.rm_safe("fPIC")

    def layout(self):
        cmake_layout(self)

    def validate(self):
        if self.settings.compiler.get_safe("cppstd"):
            check_min_cppstd(self, self._min_cppstd)
        # INFO: Conan 1.x does not specify cppstd by default, so we need to check the compiler version instead.
        minimum_version = self._compilers_minimum_version.get(str(self.settings.compiler), False)
        if minimum_version and Version(self.settings.compiler.version) < minimum_version:
            raise ConanInvalidConfiguration(f"{self.ref} requires C++{self._min_cppstd}, which your compiler doesn't support")

    def generate(self):
        tc = CMakeToolchain(self)
        tc.cache_variables["BUILD_TESTING"] = False
        tc.cache_variables["CATCH_INSTALL_DOCS"] = False
        tc.cache_variables["CATCH_INSTALL_EXTRAS"] = True
        tc.generate()

        deps = CMakeDeps(self)
        deps.generate()

    def build(self):
        cmake = CMake(self)
        cmake.configure()
        cmake.build()

    def package(self):
        copy(self, "LICENSE.txt", src=str(self.recipe_folder), dst=os.path.join(self.package_folder, "licenses"))
        cmake = CMake(self)
        cmake.install()
        rmdir(self, os.path.join(self.package_folder, "share"))
        rmdir(self, os.path.join(self.package_folder, "lib", "cmake"))
        copy(self, "*.cmake", src=os.path.join(self.export_sources_folder, "extras"),
             dst=os.path.join(self.package_folder, "lib", "cmake", "Catch2"))

    def package_info(self):
        lib_suffix = "d" if self.settings.build_type == "Debug" else ""

        self.cpp_info.set_property("cmake_file_name", "Catch2")
        self.cpp_info.set_property("cmake_target_name", "Catch2::Catch2WithMain")
        self.cpp_info.set_property("pkg_config_name", "catch2-with-main")

        # Catch2
        self.cpp_info.components["catch2base"].set_property("cmake_file_name", "Catch2::Catch2")
        self.cpp_info.components["catch2base"].set_property("cmake_target_name", "Catch2::Catch2")
        self.cpp_info.components["catch2base"].set_property("pkg_config_name", "catch2")
        self.cpp_info.components["catch2base"].libs = ["Catch2" + lib_suffix]
        self.cpp_info.components["catch2base"].builddirs.append("lib/cmake/Catch2")

        # Catch2WithMain
        self.cpp_info.components["catch2main"].set_property("cmake_file_name", "Catch2::Catch2WithMain")
        self.cpp_info.components["catch2main"].set_property("cmake_target_name", "Catch2::Catch2WithMain")
        self.cpp_info.components["catch2main"].set_property("pkg_config_name", "catch2-with-main")
        self.cpp_info.components["catch2main"].libs = ["Catch2Main" + lib_suffix]
        self.cpp_info.components["catch2main"].requires = ["catch2base"]
Binary file not shown (before: 10 KiB)
Binary file not shown (before: 33 KiB)
Binary file not shown (before: 25 KiB)
Some files were not shown because too many files have changed in this diff