- Status: Completed
- Status date: 2023-07-12
- Activity Code: 3A.122P1
Artificial Intelligence (AI) has proven its value in several fields. In satellite telecommunications, it is a promising tool for many applications, for instance spectrum sharing or cognitive radio. AI algorithms, however, can be very computationally intensive, and thus power hungry and slow when run on standard processors. These issues seriously limit their use in a variety of devices (from mobile phones to satellite payloads) and therefore in many applications. To solve this, industry has developed dedicated AI processors capable of running complex algorithms in real time while consuming a fraction of the power needed by traditional processors. These AI processors are now available on the market, even as Off-the-Shelf AI chipsets, allowing inferencing to be performed in an effective manner.
The availability of such chipsets has enabled the practical use of AI for satellite applications in both on-ground and on-board scenarios. AI chips have already been investigated by ESA for Earth Observation (EO) satellites. This project develops AI-based signal processing techniques for satellite telecommunications – such as signal identification, spectrum monitoring, spectrum sharing and signal demodulation – using Off-the-Shelf AI accelerator(s). This includes the development of a laboratory testbed to validate and demonstrate the developments in both on-ground and on-board scenarios.
Key challenges are to define an appropriate scenario, establish representative training data, develop efficient machine learning algorithms and hardware, and use an efficient AI accelerator.
The learning process requires a huge amount of data and processing power. The data needs to be created in a representative way with “sufficient” statistical variation – in terms of noise, interference, self-overlaps, distortion etc. Within the training framework, a clear model combining training data and “ground truth” is essential.
Furthermore, meaningful metrics are required to indicate effective training and inferencing.
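As an illustration (not project code; all parameter choices and names below are assumptions), a minimal Python sketch of generating labelled IQ training bursts with statistical variation in noise, carrier offset and interference could look as follows:

```python
import numpy as np

rng = np.random.default_rng(42)

def synth_burst(n=1024, snr_db=10.0, cfo=0.0, interferer=False):
    """One labelled IQ burst with controlled impairments (illustrative only)."""
    bits = rng.integers(0, 2, n)
    signal = (2.0 * bits - 1.0).astype(complex)          # BPSK symbols
    t = np.arange(n)
    signal *= np.exp(2j * np.pi * cfo * t)               # carrier frequency offset
    if interferer:
        f_i = rng.uniform(-0.25, 0.25)
        signal += 0.5 * np.exp(2j * np.pi * f_i * t)     # CW interferer
    noise_power = 10 ** (-snr_db / 10)
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(n)
                                        + 1j * rng.standard_normal(n))
    return signal + noise, int(interferer)               # sample + ground truth

# A data set with “sufficient” statistical variation in SNR, CFO and interference
dataset = [synth_burst(snr_db=rng.uniform(0, 20),
                       cfo=rng.uniform(-1e-3, 1e-3),
                       interferer=bool(rng.integers(0, 2)))
           for _ in range(1000)]
```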
Previous ARTES Future Preparation studies such as MLSAT or SATAI have already identified beneficial applications of AI in telecommunication scenarios, especially with many links and under unexpected conditions such as interferers. However, in most cases the AI inference can be implemented efficiently in the ground segment, obviating the need for AI inferencing in space.
By selecting the Internet-of-Things (IoT) use case, where highly unpredictable traffic and interferer scenarios impose difficult challenges, this project demonstrates a clear benefit of AI algorithms for on-board processing in satellite telecommunications. The targeted improvements using AI are twofold:
- firstly, to optimize the IoT receiver parameter setup depending on traffic and interferer scenarios;
- secondly, to improve signal classification for interferer detection and to apply new AI-based demodulation techniques.
This will improve the overall system capacity, reliability and performance of future IoT systems.
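As a minimal sketch of the second point – signal classification for interferer detection – the following PyTorch example shows the kind of small convolutional network that could classify IQ bursts; the architecture and all names are illustrative assumptions, not the project's actual model:

```python
import torch
import torch.nn as nn

class InterfererClassifier(nn.Module):
    """Tiny 1-D CNN over IQ samples; the 2 input channels are I and Q."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: (batch, 2, n_samples)
        return self.head(self.features(x).squeeze(-1))

model = InterfererClassifier()
iq = torch.randn(8, 2, 1024)              # a batch of synthetic bursts
logits = model(iq)                        # class scores: clean vs. interfered
```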
The laboratory testbed developed in this project makes wide use of COTS elements – especially hardware – keeping development risks low. The hardware module is based on available standard breadboards, with a clear path to flight, fitting in a nanosatellite form factor. This facilitates the integration of this activity into a sustainable roadmap that can be further realised and demonstrated within a few years.
The machine learning setup relies on standard machine learning frameworks that can be deployed both on standard PCs and on servers when more computational power is needed.
The laboratory testbed includes an EM-like payload based on the multiMIND core board, a digitizing frontend and an AI accelerator companion (Myriad X). It is built and used for the specific project scenario. However, the overall design also considers use in different scenarios – mainly different waveforms, modulations and packet formats. The elements of the AISTT are largely generic (a sketch of this processing chain follows the list):
- Spectrum analysis and interference characterization.
- Spectrum pre-processing.
- Packet extraction.
- Bit extraction.
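The following Python sketch (illustrative only; the function names, parameters and thresholds are assumptions) indicates how these generic stages could be chained on a block of IQ samples:

```python
import numpy as np
from scipy.signal import stft

def spectrum_analysis(iq, fs):
    """Spectrum analysis / interference characterization via STFT magnitude."""
    f, t, Z = stft(iq, fs=fs, nperseg=256, return_onesided=False)
    return np.abs(Z)

def preprocess(spec):
    """Spectrum pre-processing: convert to dB and clip to the noise floor."""
    db = 20 * np.log10(spec + 1e-12)
    return np.clip(db - db.max(), -60, 0)

def extract_packets(spec, threshold_db=-30):
    """Packet extraction: time bins whose peak power exceeds a threshold."""
    return np.where(spec.max(axis=0) > threshold_db)[0]

def extract_bits(iq, packet_bins, hop=128):
    """Bit extraction: placeholder hard-decision demodulation per time bin."""
    return [(iq[b * hop:(b + 1) * hop].real > 0).astype(int) for b in packet_bins]

fs = 2.4e6                                       # assumed sample rate
iq = np.random.randn(65536) + 1j * np.random.randn(65536)
spec = preprocess(spectrum_analysis(iq, fs))
bits = extract_bits(iq, extract_packets(spec))
```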
multiMIND is a modular payload processing system for micro/nanosatellites developed under an ARTES C&G contract (see link below). It is based on a latest-generation multiprocessor System-on-Chip (Xilinx Zynq UltraScale+) that can be combined with mission-specific complements. It is robustified by rad-hard elements to offer unprecedented processing power and efficiency for a mission. It allows further system miniaturization, system performance enhancement and in-orbit reconfigurability.
The Myriad X AI accelerator is the successor of the well-known Myriad 2. It can provide up to 4 TOPS and integrates dedicated FFT blocks for use in signal processing.
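Myriad-class devices are commonly programmed through Intel's OpenVINO toolkit; the minimal sketch below is an assumption for illustration (the project's actual deployment path on the companion board may differ, and the model file name is hypothetical), showing how a trained network could be compiled for and run on such a VPU:

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("classifier.xml")                    # hypothetical IR model
compiled = core.compile_model(model, device_name="MYRIAD")   # Myriad X VPU target

iq_batch = np.random.randn(1, 2, 1024).astype(np.float32)    # dummy IQ input
result = compiled([iq_batch])[compiled.output(0)]            # run inference on-device
```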
The setup consists of multiple elements that are matched to each other. The two main blocks are the Machine Learning Setup (MLS) and the Artificial Intelligence Satellite Telecommunication Testbed (AISTT), complemented by a suitable scenario and training data.
The MLS is based on widely used machine learning frameworks running on Joanneum servers.
The main blocks of the AISTT – hardware module, data generator, control unit hardware with control software including GUI – are taken from existing designs and units from the field of nanosat IoT payloads.
- The hardware module takes the existing SDR and core data handling framework (multiMIND) from Thales Alenia Space, plus a derivative of the Myriad accelerator companion hardware and framework from Ubotica.
- The data generator consists of Thales Alenia Space’s available ADS-B generator and simulator assets, upgraded by programmable VSG / SDR elements such as the Ettus USRP.
- Control and GUI parts use standard, readily available elements.
The system and scenario chosen for this approach is ADS-B – the air traffic surveillance system that combines many properties of an IoT system with mission- and safety-critical elements, has been operational on the ground for decades, and has been demonstrated from space for ten years. The training data are generated using the existing Thales Alenia Space ADS-B SASKIA simulator.
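For context, ADS-B extended squitter messages are 112-bit Mode S frames. The sketch below rests on the public format definition rather than project code: it extracts the downlink format and ICAO address and checks the CRC-24 parity of a widely published example frame:

```python
# Parse a 112-bit ADS-B (Mode S extended squitter) message given as hex.
MODE_S_POLY = [int(b) for b in "1111111111111010000001001"]  # CRC-24 generator

def hex_to_bits(msg_hex):
    return [int(b) for b in bin(int(msg_hex, 16))[2:].zfill(len(msg_hex) * 4)]

def crc_remainder(bits):
    """Long division over GF(2); remainder is 0 for a valid DF17 frame."""
    bits = bits.copy()
    for i in range(len(bits) - 24):
        if bits[i]:
            for j, p in enumerate(MODE_S_POLY):
                bits[i + j] ^= p
    return int("".join(map(str, bits[-24:])), 2)

msg = "8D4840D6202CC371C32CE0576098"            # widely published example frame
bits = hex_to_bits(msg)
df = int("".join(map(str, bits[0:5])), 2)       # downlink format (17 = ADS-B)
icao = msg[2:8]                                 # 24-bit ICAO aircraft address
print(df, icao, "CRC OK" if crc_remainder(bits) == 0 else "CRC error")
```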
The project is divided into two phases.
Phase 1 defines the scenario and requirements and creates the data sets, with two main milestones:
- Scenario & Requirements Review (SRR),
- Data Sets Review (DSR).
Phase 2 leverages the results of Phase 1 with the machine learning, provision of the testbed (AISTT) and demonstration of the AI accelerator use. It starts with two activity streams running in parallel, leading to the following milestones:
- Machine Learning Review (MLR),
- Hardware Design Review (HWDR).
This is followed by the test campaign for acceptance.
The project was closed in 2023.
In its proof-of-concept phase, the project achieved several steps:
- Streaming the RF data into an FPGA and generating “optical-like” pseudospectrum plots.
- Processing the pseudospectrum plots and identifying the wanted signal.
- Extracting the direction information and feeding it to a beamforming algorithm.
All these achievements were performed in hardware or simulation, demonstrating that their implementation will be feasible on modern space hardware. However, the final optimized implementation on the hardware was not completed, mainly owing to the staffing situation during the global pandemic. Nevertheless, the proof of concept shows that AI-assisted signal processing is possible, and possible on current hardware. It also showed the limits of throughput on the selected platforms, indicating a clear benefit from future hardware.
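To illustrate the final step, a minimal sketch of feeding an extracted direction into a beamformer could look as follows; it assumes a uniform linear array, a simple delay-and-sum beamformer, and a direction estimate taken as given (none of these details come from the project):

```python
import numpy as np

def steering_vector(theta_deg, n_elems=4, spacing=0.5):
    """Steering vector of a uniform linear array (element spacing in wavelengths)."""
    theta = np.deg2rad(theta_deg)
    k = np.arange(n_elems)
    return np.exp(-2j * np.pi * spacing * k * np.sin(theta))

def delay_and_sum(array_iq, theta_deg):
    """Delay-and-sum beamformer; array_iq has shape (n_elems, n_samples)."""
    w = steering_vector(theta_deg, array_iq.shape[0])
    return (np.conj(w)[:, None] * array_iq).sum(axis=0) / array_iq.shape[0]

# Direction extracted from the pseudospectrum stage (assumed to be 20 degrees)
array_iq = np.random.randn(4, 4096) + 1j * np.random.randn(4, 4096)
beamformed = delay_and_sum(array_iq, theta_deg=20.0)
```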