Objectives
The project's objective is to enhance autonomy in decision-making for quality assurance during spacecraft Assembly, Integration, and Testing (AIT). It focuses on developing a system that integrates augmented reality (AR) and artificial intelligence (AI): using sensor data from AR devices and AI-based image recognition, the system compares spacecraft CAD models, pictures and videos with the physical assembly and provides the operator with cues to ensure efficiency, precision and accuracy. Collaboration with AI, AR and AIT experts and the use of representative facilities ensure the solution aligns with real-world operational needs.
Challenges
The project faced several challenges worth noting. OCR struggles with complex backgrounds and with arbitrary alphanumeric sequences. Object Detection's reliance on real-world training data limits flexibility when only CAD models are available, and 6D Pose Estimation lacks the precision required for certain high-accuracy tasks. Real-time AI processing introduced further issues: a reduced HoloLens application framerate, stream delays that lag the AI responses, and degraded performance in features such as voice dictation. Video streaming for external sharing also reduced reliability.
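For context on the OCR difficulty: cluttered backgrounds defeat off-the-shelf text recognition, and two common mitigations are adaptive binarization and restricting the recognized character set. The sketch below illustrates these mitigations using OpenCV and Tesseract; it is an assumption-laden illustration, not the project's actual OCR module, and the function name read_label is invented here.

```python
import cv2
import pytesseract

def read_label(image_bgr):
    """Illustrative OCR preprocessing for labels on busy backgrounds."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Adaptive thresholding copes better with uneven lighting and
    # cluttered backgrounds than a single global threshold would.
    binary = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY, 31, 10)
    # Whitelisting plain alphanumerics suppresses spurious characters,
    # though arbitrary serial-number-like sequences remain hard because
    # no language context is available to disambiguate them.
    config = ("--psm 7 "
              "-c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-")
    return pytesseract.image_to_string(binary, config=config).strip()
```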
System Architecture
The AI4AR system architecture combines a computer and an AR headset to support augmented reality in complex assembly tasks such as satellite integration.
The computer manages the core computational tasks with the following modules (a sketch of how they might fit together appears after this list):
- Detection Module: identifies and locates objects in the assembly environment using advanced algorithms.
- 6D Pose Estimation Module: ensures precise object and headset positioning for accurate virtual overlays.
- OCR Module: extracts text from labels or instruments for validation and contextual guidance.
- Communication Module: enables fast, low-latency data exchange with the AR headset.
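The page does not specify how these modules interact at runtime, so the following is only a minimal Python sketch of one plausible per-frame orchestration. Every name in it (Detection, FrameResult, ComputerPipeline, and the injected detector, pose_estimator, ocr and link components) is hypothetical, not the project's actual API.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One object found by the Detection Module (hypothetical structure)."""
    name: str
    box: tuple            # (x, y, w, h) bounding box in image pixels
    has_text: bool = False

@dataclass
class FrameResult:
    """Per-frame payload sent back to the AR headset."""
    detections: list      # Detection instances
    poses: dict           # object name -> estimated 6D pose
    label_texts: list     # strings recovered by the OCR Module

class ComputerPipeline:
    """Sketch of the computer-side processing for one captured frame."""

    def __init__(self, detector, pose_estimator, ocr, link):
        self.detector = detector              # Detection Module
        self.pose_estimator = pose_estimator  # 6D Pose Estimation Module
        self.ocr = ocr                        # OCR Module
        self.link = link                      # Communication Module endpoint

    def process(self, rgb_frame, depth_frame):
        # 1. Find candidate objects in the incoming image.
        detections = self.detector(rgb_frame)
        # 2. Refine each detection into a 6D pose usable for anchoring
        #    virtual overlays on the physical assembly.
        poses = {d.name: self.pose_estimator(rgb_frame, depth_frame, d)
                 for d in detections}
        # 3. Read any text regions (labels, instrument markings).
        texts = [self.ocr(rgb_frame, d.box) for d in detections if d.has_text]
        # 4. Push the results to the headset with low latency.
        result = FrameResult(detections, poses, texts)
        self.link.send(result)
        return result
```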
The AR headset acts as the operator's interface: it provides augmented visualizations and guidance, captures real-time visual and depth data for detection and pose estimation, aligns virtual overlays with the user's perspective, and synchronizes data with the computer for real-time feedback.
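To give a concrete sense of what aligning virtual overlays involves: given the 6D pose produced by pose estimation and the intrinsics of the headset camera, a virtual cue anchored to a CAD-model point is placed by a standard pinhole projection. The sketch below assumes this conventional formulation; the function name and the intrinsic values are illustrative, not taken from the project.

```python
import numpy as np

def project_overlay_point(point_cad, R, t, K):
    """Project a point defined in the CAD-model frame into image pixels.

    R (3x3) and t (3,) are the object's estimated 6D pose in the camera
    frame; K (3x3) is the camera intrinsic matrix. Returns the (u, v)
    pixel at which an overlay cue anchored to point_cad should be drawn.
    """
    p_cam = R @ np.asarray(point_cad) + t    # CAD frame -> camera frame
    uvw = K @ p_cam                          # camera frame -> image plane
    return uvw[0] / uvw[2], uvw[1] / uvw[2]  # perspective division

# Illustrative values only: identity rotation, object 1.5 m ahead.
K = np.array([[1500.0,    0.0, 640.0],
              [   0.0, 1500.0, 360.0],
              [   0.0,    0.0,   1.0]])
u, v = project_overlay_point([0.0, 0.0, 0.0],
                             np.eye(3), np.array([0.0, 0.0, 1.5]), K)
print(f"overlay cue at pixel ({u:.1f}, {v:.1f})")  # -> (640.0, 360.0)
```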
Plan
The project was initially planned to have a full duration of 24 months.
The following work packages were pursued:
- WP1 Preliminary Design, which included the “Output 0 (Defined System Scenario) Review” and the “Output 1 (Finalised Technical Specification) Review”;
- WP2 Detailed Design, which included the “Output 2 (Selected Technical Baseline) Review”, the “Output 3 (Verified Detailed Design) Review” and the “Output 4 (Implementation and Verification Plan) Review”;
- WP3 Implementation, which included the Applications Review;
- WP4 Validation and Way-Forward, which included the “Output 5 (Verified Deliverable Items and Compliance Statement) Review” and the “Output 6 (Technology Assessment and Development Plan) Review”.
Current Status
Project completed; all goals achieved.



