AIoT – AI enhanced IoT

- SATELLITE SIGNAL PROCESSING USING AN OFF-THE-SHELF AI CHIPSET

STATUS | Completed
STATUS DATE | 12/07/2023
ACTIVITY CODE | 3A.122P1

Objectives

Artificial Intelligence (AI) has proven its value in several fields. In satellite telecommunications, it is a promising tool for many applications, for instance spectrum sharing or cognitive radio. AI algorithms, however, can be very computationally intensive, and thus power-hungry and slow when run on standard processors. These issues seriously limit their use in a variety of devices (from mobile phones to satellite payloads) and therefore in many applications. To solve this, industry has developed dedicated AI processors capable of running complex algorithms in real time while consuming a fraction of the power needed by traditional processors. These AI processors are now available on the market, even as off-the-shelf AI chipsets, allowing inference to be performed efficiently.

The availability of such chipsets has enabled the practical use of AI for satellite applications in both on-ground and on-board scenarios. AI chips have already been investigated by ESA for Earth Observation (EO) satellites. This project develops AI-based signal processing techniques for satellite telecommunications, such as signal identification, spectrum monitoring, spectrum sharing and signal demodulation, using off-the-shelf AI accelerator(s). This includes the development of a laboratory testbed to validate and demonstrate the developments in both on-ground and on-board scenarios.
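As an illustration of this kind of technique, the minimal sketch below shows a small convolutional classifier that maps spectrogram tiles to signal classes. It is a PyTorch stand-in for whichever framework is used, not the project's actual network; all layer sizes, class counts and tile dimensions are assumptions.

import torch
import torch.nn as nn

class SignalClassifier(nn.Module):
    """Small CNN mapping a spectrogram tile to a signal-class label
    (e.g. wanted burst vs. interferer vs. noise). Hypothetical sketch."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, n_classes),  # assumes 64x64 input tiles
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# One training step on a batch of spectrogram tiles of shape (N, 1, 64, 64).
model = SignalClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 64, 64)      # placeholder batch
y = torch.randint(0, 3, (8,))      # placeholder labels
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()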

Challenges

Key challenges are to define an appropriate scenario, establish representative training data, develop efficient machine learning algorithms and hardware, and use an efficient AI accelerator.

The learning process requires a huge amount of data and processing power. The data needs to be created in a representative way with “sufficient” statistical variation in terms of noise, interference, self-overlaps, distortion, etc. Within the training framework, a clear model combining training data and “ground truth” is essential.
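The snippet below sketches how such labelled IQ training examples might be synthesised, with additive noise, random carrier offsets and optional self-overlaps as the sources of statistical variation. Sample rate, burst model and parameter ranges are illustrative assumptions, not the project's actual simulator output.

import numpy as np

rng = np.random.default_rng(0)
FS = 2.4e6          # assumed sample rate
BURST_LEN = 1024    # assumed burst length in samples

def make_example(snr_db: float, overlap: bool) -> tuple[np.ndarray, int]:
    """Generate one labelled IQ snippet: a BPSK-like burst with AWGN,
    a random carrier offset, and an optional overlapping second burst."""
    symbols = rng.integers(0, 2, BURST_LEN) * 2 - 1       # +/-1 symbols
    t = np.arange(BURST_LEN) / FS
    cfo = rng.uniform(-50e3, 50e3)                        # carrier offset, Hz
    sig = symbols * np.exp(2j * np.pi * cfo * t)
    if overlap:                                           # self-overlap case
        shift = int(rng.integers(1, BURST_LEN // 2))
        sig[shift:] += np.roll(sig, shift)[shift:] * 0.5
    noise_pwr = 10 ** (-snr_db / 10)
    noise = np.sqrt(noise_pwr / 2) * (rng.standard_normal(BURST_LEN)
                                      + 1j * rng.standard_normal(BURST_LEN))
    label = 1 if overlap else 0                           # the "ground truth"
    return sig + noise, label

dataset = [make_example(rng.uniform(0, 20), rng.random() < 0.3)
           for _ in range(1000)]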

Furthermore, meaningful metrics are required to indicate effective training and inference.
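The hypothetical helpers below illustrate two such metrics: classification quality via a confusion matrix with per-class precision and recall, and average inference latency, which determines real-time suitability on an accelerator.

import time
import numpy as np

def evaluate(y_true: np.ndarray, y_pred: np.ndarray, n_classes: int):
    """Confusion matrix plus per-class precision/recall,
    typical measures of whether training was effective."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    precision = np.diag(cm) / np.maximum(cm.sum(axis=0), 1)
    recall = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)
    return cm, precision, recall

def latency_ms(infer, batch, n_runs: int = 100) -> float:
    """Average wall-clock inference time, the metric that matters
    for real-time use on an accelerator."""
    start = time.perf_counter()
    for _ in range(n_runs):
        infer(batch)
    return 1000 * (time.perf_counter() - start) / n_runs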
 

System Architecture

The setup consists of multiple elements that are matched to each other. The two main blocks are the Machine Learning Setup (MLS) and the Artificial Intelligence Satellite Telecommunication Testbed (AISTT), complemented by a suitable scenario and training data.

The MLS is based on widely used machine learning frameworks running on Joanneum servers. 

The main blocks of the AISTT (hardware module, data generator, and control unit hardware with control software including a GUI) are taken from existing designs and units from the field of nanosat IoT payloads.

  • The hardware module takes the existing SDR and core data handling framework (multiMIND) from Thales Alenia Space, plus a derivative of the Myriad accelerator companion hardware and framework from Ubotica (a deployment sketch follows this list).

  • The data generator consists of Thales Alenia Space’s available ADS-B generator and simulator assets, upgraded with programmable VSG / SDR elements such as the Ettus USRP.

  • The control and GUI parts use standard available elements.
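For illustration, the sketch below shows how a trained network could be deployed on a Myriad-class accelerator through Intel's OpenVINO runtime. The model file, input shape and the "MYRIAD" device name are assumptions made here for illustration; older OpenVINO releases (up to the 2022 line) exposed Myriad devices under that name.

import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("signal_classifier.xml")         # hypothetical IR file
compiled = core.compile_model(model, device_name="MYRIAD")

tile = np.random.rand(1, 1, 64, 64).astype(np.float32)   # one spectrogram tile
result = compiled([tile])[compiled.output(0)]            # class scores
print("predicted class:", int(np.argmax(result)))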

The system and scenario chosen for this approach is ADS-B, the air traffic system, which combines many properties of an IoT system with mission- and safety-critical elements, has been running on the ground for decades, and has been demonstrated from space for ten years. The training data are generated using the existing Thales Alenia Space ADS-B SASKIA simulator.
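For context, the sketch below demodulates the pulse-position modulation of one ADS-B Mode S extended squitter from a magnitude trace. It only illustrates the signal structure; the sample rate is an assumption and this is not the project's actual processing chain.

import numpy as np

FS = 2e6          # assumed sample rate: 2 samples per microsecond
BITS = 112        # ADS-B extended squitter length in bits
PREAMBLE_US = 8   # preamble duration before the data block

def demod_adsb(mag: np.ndarray, start: int) -> np.ndarray:
    """Pulse-position demodulation of one ADS-B frame from a magnitude
    trace: each 1 us bit carries a pulse in its first half for '1' and
    in its second half for '0'. `start` points just past the preamble."""
    bits = np.empty(BITS, dtype=np.uint8)
    for i in range(BITS):
        first = mag[start + 2 * i]        # first 0.5 us half-bit
        second = mag[start + 2 * i + 1]   # second 0.5 us half-bit
        bits[i] = 1 if first > second else 0
    return bits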

Plan

The project is divided into two phases. 

Phase 1 defines the scenario and requirements and creates the data sets, with two main milestones:

  • Scenario & Requirements Review (SRR),

  • Data Sets Review (DSR).

Phase 2 builds on the results of Phase 1, covering the machine learning, the provision of the testbed (AISTT), and the demonstration of the AI accelerator use. It starts with two activity streams running in parallel, leading to the following milestones:

  • Machine Learning Review (MLR),

  • Hardware Design Review (HWDR).

This is followed by the test campaign leading to the Acceptance Review.

Current Status

The project was closed in 2023.

In the proof-of-concept phase, the project achieved several steps (illustrated by the sketch after this list):

  1. Streaming the RF data into an FPGA and generating “optical-like” pseudo-spectrum plots.

  2. Processing the pseudo-spectrum plots, identifying the wanted signal.

  3. Extracting the direction information and feeding it into a beamforming algorithm.
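The sketch below illustrates the signal-processing shape of these three steps with generic stand-ins: an STFT-based pseudo-spectrum, a simple threshold detector in place of the trained network, and delay-and-sum steering weights for a uniform linear array. All parameters are illustrative assumptions.

import numpy as np
from scipy.signal import stft

FS = 2.4e6                                    # assumed sample rate

def pseudospectrum(iq: np.ndarray) -> np.ndarray:
    """Step 1: turn streamed IQ samples into an 'optical-like'
    pseudo-spectrum image (log-magnitude waterfall)."""
    _, _, Z = stft(iq, fs=FS, nperseg=256, return_onesided=False)
    return 20 * np.log10(np.abs(Z) + 1e-12)

def detect_signal(spec: np.ndarray, thresh_db: float = 10.0) -> np.ndarray:
    """Step 2 stand-in: flag time-frequency cells well above the noise
    floor (the project used a trained network for identification)."""
    return spec > (np.median(spec) + thresh_db)

def steering_weights(aoa_rad: float, n_elem: int = 4,
                     spacing_wl: float = 0.5) -> np.ndarray:
    """Step 3: turn an estimated direction of arrival into delay-and-sum
    beamforming weights for a uniform linear array."""
    n = np.arange(n_elem)
    return np.exp(-2j * np.pi * spacing_wl * n * np.sin(aoa_rad)) / n_elem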

All these achievements were performed in hardware or simulation, demonstrating that their implementation will be feasible on modern space hardware. However, the final optimised implementation on the hardware was not done for several reasons (mainly the staffing situation during the global pandemic). Nevertheless, the proof of concept shows that AI-assisted signal processing is possible, even on current hardware. It also showed the throughput limits of the selected platforms, indicating a clear benefit from future hardware.