AI4AIT Artificial Intelligence For Augmented Reality In Satcom Assembly, Integration And Test (ARTES AT 4A.084)

  • Status
    Completed
  • Status date
    2025-01-28
  • Activity Code
    4A.084
Objectives

The project objective is to enhance autonomy in decision-making for quality assurance during spacecraft Assembly, Integration, and Testing (AIT). The project focuses on developing an innovative system that integrates augmented reality (AR) and artificial intelligence (AI). By using sensor data from AR devices and AI-based image recognition, the goal is to compare spacecraft CAD models, pictures and videos with the physical assembly and provide the operator with cues to ensure efficiency, precision and accuracy. Collaboration with AI, AR and AIT experts and the use of representative facilities ensure the solution aligns with real-world operational needs.

Challenges

The project faced some challenges worth noting. OCR struggles with complex backgrounds and arbitrary alphanumeric sequences. Object detection’s reliance on real-world data limits flexibility when only CAD models are available. Similarly, 6D pose estimation lacks the precision required for certain high-accuracy tasks. Real-time AI processing introduced further issues, including a reduced HoloLens application framerate, stream delays causing lag in AI responses, and degraded performance in features such as voice dictation.

Additionally, streaming video for external sharing impacted reliability.

Benefits

The solution offers significant advantages and value to spacecraft Assembly, Integration, and Testing (AIT) processes by combining AI
and AR technologies to streamline workflows, reduce human error, and improve efficiency. A key benefit is the automatic, hands-free collection of data from measurement devices, allowing operators to focus on tasks without pausing to manually record values. This reduces AIT procedure execution time and enhances productivity.
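
As an illustration of this kind of hands-free capture, the following sketch extracts a numeric reading from a cropped image of a measurement device display. It assumes the Tesseract OCR engine accessed through pytesseract; the function name, character whitelist and regular expression are illustrative and not the project's actual pipeline.

```python
# Minimal sketch: read a numeric value from a cropped image of a measurement
# device display. Assumes Tesseract OCR via pytesseract; everything here is
# illustrative rather than the project's actual implementation.
import re

import pytesseract
from PIL import Image


def read_measurement(display_crop: Image.Image) -> float | None:
    """Return the first numeric value found on the device display, if any."""
    text = pytesseract.image_to_string(
        display_crop,
        config="--psm 7 -c tessedit_char_whitelist=0123456789.-",  # single line of digits
    )
    match = re.search(r"-?\d+(\.\d+)?", text)
    return float(match.group()) if match else None


if __name__ == "__main__":
    value = read_measurement(Image.open("multimeter_display.png"))  # hypothetical image
    print(f"Logged value: {value}")
```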

The system also includes automatic detection of kitting components, ensuring operators have the correct tools and parts before starting a procedure. This reduces human error and further improves efficiency.
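
A minimal sketch of such a kitting check is shown below: detected part labels are compared against the parts list a procedure requires, and anything still missing is reported. The detector output format and the part names are hypothetical.

```python
# Minimal sketch of a kitting check: compare object-detection labels against
# the parts list required by a procedure. Part names are placeholders.
from collections import Counter


def kitting_check(detected_labels: list[str], required_parts: dict[str, int]) -> dict[str, int]:
    """Return the parts (and quantities) still missing from the kit."""
    present = Counter(detected_labels)
    return {
        part: qty - present[part]
        for part, qty in required_parts.items()
        if present[part] < qty
    }


missing = kitting_check(
    detected_labels=["torque_wrench", "bolt_m5", "bolt_m5"],
    required_parts={"torque_wrench": 1, "bolt_m5": 4, "washer_m5": 4},
)
print(missing)  # {'bolt_m5': 2, 'washer_m5': 4}
```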

Additionally, contextual instructions are provided directly on components without requiring AR markers, allowing operators to
receive precise, real-time guidance on assembly locations or measurement points, eliminating the need for extra tagging.
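
The sketch below illustrates, under simplifying assumptions, how an instruction anchor defined in a component's CAD frame could be placed in the headset image once the component's estimated 6D pose is available, using a standard pinhole camera model. The intrinsics and pose values are placeholders, not values from the project.

```python
# Minimal sketch of markerless guidance: project an instruction anchor defined
# in a component's CAD frame into the headset camera image, given the
# component's estimated 6D pose (R, t) and assumed camera intrinsics K.
import numpy as np


def project_anchor(p_cad: np.ndarray, R: np.ndarray, t: np.ndarray, K: np.ndarray) -> tuple[float, float]:
    """Project a 3D point in the CAD frame to pixel coordinates in the camera image."""
    p_cam = R @ p_cad + t      # CAD frame -> camera frame (from the 6D pose estimate)
    u, v, w = K @ p_cam        # camera frame -> homogeneous pixel coordinates
    return u / w, v / w


K = np.array([[800.0, 0.0, 640.0],            # assumed headset camera intrinsics
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 1.5])   # placeholder pose: 1.5 m in front of the camera
print(project_anchor(np.array([0.05, 0.02, 0.0]), R, t, K))
```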

Another valuable feature is the automatic detection of procedural errors, such as incorrect bolting sequences or overlooked "Remove Before Flight" tags. This minimizes costly mistakes and enhances quality assurance.
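
As a simple illustration of this kind of procedural check, the sketch below compares an observed bolt-tightening order against a prescribed sequence and reports deviations. The bolt identifiers and the source of the observations are hypothetical; in the actual system the observations would come from object detection and tracking.

```python
# Minimal sketch of a procedural check: flag every observed bolting step that
# deviates from the prescribed sequence. Bolt IDs are placeholders.
def check_bolting_sequence(observed: list[str], prescribed: list[str]) -> list[str]:
    """Return a warning for each observed step that deviates from the prescribed order."""
    warnings = []
    for step, bolt in enumerate(observed):
        if step >= len(prescribed):
            warnings.append(f"Unexpected extra step: {bolt}")
        elif bolt != prescribed[step]:
            warnings.append(f"Step {step + 1}: expected {prescribed[step]}, observed {bolt}")
    return warnings


print(check_bolting_sequence(observed=["B1", "B3", "B2"], prescribed=["B1", "B2", "B3"]))
# ['Step 2: expected B2, observed B3', 'Step 3: expected B3, observed B2']
```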

Features

The product combines AR and AI technologies to optimize spacecraft Assembly, Integration, and Testing (AIT) processes. It features a commercially available AR headset for capturing spatial and image data, paired with custom-developed software. AI algorithms enable Optical Character Recognition (OCR), object detection, and tracking, seamlessly integrated with the AR system for real-time, hands-free guidance and error detection. Object detection and tracking provide contextual overlays on physical components, eliminating the need for markers, and enhance productivity by detecting errors such as incorrect sequences or missing parts. OCR automates data logging from measurement devices while validating operator actions.

The AI algorithms were trained using synthetic data (e.g., CAD models) and real-world data (e.g., images and videos), ensuring high accuracy and adaptability. This integration delivers a scalable, efficient solution that streamlines AIT operations, enhances quality assurance, and reduces execution time, making it a robust tool for error-free assembly processes.
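
The sketch below shows one way such a mixed synthetic/real training set could be assembled, assuming PyTorch and torchvision. The dataset paths, sampling weights and image size are assumptions for illustration only, not the project's actual training configuration.

```python
# Minimal sketch: mix synthetic (CAD-rendered) and real images in one training
# loader, oversampling the smaller real set so each batch contains both domains.
from torch.utils.data import ConcatDataset, DataLoader, WeightedRandomSampler
from torchvision import transforms
from torchvision.datasets import ImageFolder

tf = transforms.Compose([transforms.Resize((640, 640)), transforms.ToTensor()])
synthetic = ImageFolder("data/synthetic_cad_renders", transform=tf)  # hypothetical path
real = ImageFolder("data/real_ait_images", transform=tf)             # hypothetical path

dataset = ConcatDataset([synthetic, real])
weights = [1.0] * len(synthetic) + [len(synthetic) / max(len(real), 1)] * len(real)
sampler = WeightedRandomSampler(weights, num_samples=len(dataset))
loader = DataLoader(dataset, batch_size=16, sampler=sampler)
```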

System Architecture

The AI4AR system architecture combines a computer and an AR headset to support augmented reality in complex assembly tasks like
satellite integration. 

The computer manages the core computational tasks through the following modules:

  • Detection Module: identifies and locates objects in the assembly environment using advanced algorithms.
  • 6D Pose Estimation Module: ensures precise object and headset positioning for accurate virtual overlays.
  • OCR Module: extracts text from labels or instruments for validation and contextual guidance.
  • Communication Module: enables fast, low-latency data exchange with the AR headset.
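
A minimal sketch of how these modules could be chained for each camera frame is shown below. The class and method names are hypothetical; only the module roles come from the architecture description above.

```python
# Minimal sketch of the per-frame pipeline on the computer side. The detector,
# pose estimator, OCR and communication objects are hypothetical interfaces.
from dataclasses import dataclass


@dataclass
class FrameResult:
    detections: list   # objects found by the Detection Module
    poses: dict        # object_id -> 6D pose from the Pose Estimation Module
    readings: dict     # label/instrument text from the OCR Module


def process_frame(frame, detector, pose_estimator, ocr, comms) -> FrameResult:
    detections = detector.detect(frame)                                                  # Detection Module
    poses = {d.object_id: pose_estimator.estimate(frame, d) for d in detections}         # 6D Pose Estimation Module
    readings = {d.object_id: ocr.read(frame, d.box) for d in detections if d.has_text}   # OCR Module
    result = FrameResult(detections, poses, readings)
    comms.send_to_headset(result)                                                        # Communication Module
    return result
```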

The AR headset acts as the operator’s interface. It provides augmented visualizations and guidance, captures real-time visual and depth data for detection and pose estimation, aligns virtual overlays with the user’s perspective, and synchronizes data with the computer for real-time feedback.
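
As an illustration of this exchange, the sketch below models the computer side as a server sending overlay guidance back to the headset as JSON messages over a WebSocket, using the websockets library. The transport, port and message schema are assumptions; the architecture only specifies fast, low-latency data exchange.

```python
# Minimal sketch of the computer-to-headset link as JSON over a WebSocket.
# The port, message fields and overlay content are illustrative assumptions.
import asyncio
import json

import websockets


async def handle_headset(connection):
    async for message in connection:                  # frame metadata / sensor data from the headset
        frame_meta = json.loads(message)
        overlay = {                                   # in practice produced by the AI modules
            "frame_id": frame_meta.get("frame_id"),
            "overlays": [{"object": "bracket_A", "u": 412, "v": 288, "hint": "Torque to 4 Nm"}],
        }
        await connection.send(json.dumps(overlay))    # guidance back to the headset


async def main():
    async with websockets.serve(handle_headset, "0.0.0.0", 8765):
        await asyncio.Future()                        # run until cancelled


if __name__ == "__main__":
    asyncio.run(main())
```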

Plan

The project was initially planned to have a full duration of 24 months.

The following work packages were pursued:

  • WP1 Preliminary Design: included the “Output 0 (Defined System Scenario) Review” and the “Output 1 (Finalised Technical Specification) Review”.
  • WP2 Detailed Design: included the “Output 2 (Selected Technical Baseline) Review”, the “Output 3 (Verified Detailed Design) Review” and the “Output 4 (Implementation and Verification Plan) Review”.
  • WP3 Implementation: included the Applications Review.
  • WP4 Validation and Way-Forward: included the “Output 5 (Verified Deliverable Items and Compliance Statement) Review” and the “Output 6 (Technology Assessment and Development Plan) Review”.

Current status

Project completed, all goals achieved.

Prime Contractor

Subcontractors