Boris Benedikter, Ph.D. | Postdoc in Optimal Control & AI for Aerospace Autonomy

Research

"How do we build autonomous systems that are safe, certifiable, and worthy of our trust?"

My work unifies the rigor of optimal control and convex optimization with the adaptability of modern AI to design verifiable Guidance, Navigation, and Control (GNC) algorithms for spacecraft, rockets, UAVs, and robotic systems. Traditional control offers guarantees but relies on simplified models; purely data-driven approaches capture complexities but lack formal assurances. I bridge these worlds by using data-driven models to inform mathematically sound optimizers, combining learning with guarantees for operation in the uncertain, dynamic, and resource-constrained environments characteristic of aerospace applications.

Past & Ongoing Research

I develop computationally efficient optimization methods, advanced stochastic control frameworks, and learning-in-the-loop techniques to enhance adaptability and real-time feasibility in aerospace settings, from spacecraft and rocket guidance to space domain awareness and orbit determination.

Convex Optimization for Nonlinear GNC

I reformulate hard, nonlinear optimal control problems into convex subproblems via lossless convexification and sequential convex programming. These reformulations deliver orders-of-magnitude speed-ups and improved reliability over general-purpose nonlinear programming (NLP) solvers, enabling onboard, real-time planning for safety- and mission-critical GNC tasks.

Keywords: Lossless convexification, sequential convex programming, real-time guidance.

Selected Publications: Benedikter et al. (2021), Benedikter et al. (2022)
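The core SCP recipe (linearize the nonconvexity about a reference trajectory, solve the convex subproblem, re-simulate the true dynamics, and repeat) can be sketched on a toy one-dimensional problem. The dynamics, horizon, and numbers below are purely illustrative, not drawn from any flight application:

```python
import numpy as np

def scp_step(x_ref, x0, target, N, dt):
    """One sequential convex programming iteration for the toy problem
    min sum(u**2)  s.t.  x[k+1] = x[k] + dt*(u[k] - x[k]**2),  x[N] = target.
    Linearizing the quadratic drag term about the reference trajectory makes
    the terminal state affine in u, so the equality-constrained subproblem
    has a closed-form minimum-norm solution."""
    A = 1.0 - 2.0 * dt * x_ref[:-1]    # d/dx of x + dt*(u - x**2) at the reference
    c = dt * x_ref[:-1] ** 2           # constant term from the Taylor expansion
    alpha, beta = x0, np.zeros(N)      # affine map: x[N] = alpha + beta @ u
    for k in range(N):
        beta = A[k] * beta
        beta[k] = dt
        alpha = A[k] * alpha + c[k]
    # Minimum-norm u satisfying the linearized terminal constraint
    return beta * (target - alpha) / beta.dot(beta)

def solve_scp(x0=0.0, target=1.0, N=20, dt=0.05, iters=20):
    x_ref = np.full(N + 1, x0)
    for _ in range(iters):
        u = scp_step(x_ref, x0, target, N, dt)
        # Re-simulate the *true* nonlinear dynamics to get the next reference
        for k in range(N):
            x_ref[k + 1] = x_ref[k] + dt * (u[k] - x_ref[k] ** 2)
    return u, x_ref

u, traj = solve_scp()
print(abs(traj[-1] - 1.0))  # terminal-constraint residual after convergence
```

At the fixed point the linearization is exact along its own reference, so the converged trajectory satisfies the original nonlinear terminal constraint; this is the property that makes successive convexification attractive for onboard use.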


MPC for Real-Time Guidance

Embedding the convexified optimal control problem (OCP) inside a Model Predictive Control (MPC) loop yields a receding-horizon optimal controller that adapts to the measured state and disturbances. Monte Carlo analyses under uncertainty validated real-time feasibility for rocket guidance, planetary landing, and on-orbit servicing scenarios.

Keywords: Model predictive control, real-time optimization, guidance & control.

Selected Publications: Benedikter et al. (2020), Benedikter et al. (2021)
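A minimal receding-horizon loop, assuming a toy double-integrator model and an unconstrained linear-quadratic subproblem (real GNC stacks solve a constrained convex program at each step), might look like:

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])    # position/velocity dynamics
B = np.array([[0.0], [dt]])
H = 15                                    # prediction horizon
r = 0.01                                  # control penalty

# Batch prediction matrices: stacked states X = Phi @ x0 + Gamma @ U
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(H)])
Gamma = np.zeros((2 * H, H))
for i in range(H):
    for j in range(i + 1):
        Gamma[2*i:2*i+2, j] = (np.linalg.matrix_power(A, i - j) @ B).ravel()

def mpc_control(x):
    """Solve the finite-horizon LQ problem min ||X||^2 + r*||U||^2 in one
    least-squares step, then apply only the first control (receding horizon)."""
    U = np.linalg.solve(Gamma.T @ Gamma + r * np.eye(H), -Gamma.T @ Phi @ x)
    return U[0]

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])                  # initial offset from the origin
for _ in range(100):
    u = mpc_control(x)
    w = np.array([0.0, 0.01 * rng.standard_normal()])  # unmodeled disturbance
    x = A @ x + B.ravel() * u + w
print(np.linalg.norm(x))                  # regulated close to the origin
```

Because the optimization is re-solved from the measured state at every step, the disturbance injected into the simulation is rejected rather than accumulating, which is the receding-horizon adaptation the paragraph above describes.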

Stochastic Optimal Control

To explicitly manage and control uncertainty, my research advances the field of stochastic optimal control through Covariance Control (CC), which co-optimizes a nominal trajectory and feedback policy to steer the full state distribution (i.e., mean and covariance) subject to chance constraints. I introduced a lossless convex reformulation of CC, enabling low-complexity solutions with rigorous probabilistic safety guarantees for missions involving low-thrust interplanetary transfers, UAV path planning in cluttered environments, station-keeping near Earth–Moon libration points, and in-space close-proximity operations.

Keywords: Covariance control, chance constraints, stochastic MPC, uncertainty.

Selected Publications: Benedikter et al. (2022), Garzelli et al. (2025)
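The mean/covariance split at the heart of covariance control can be illustrated on a hypothetical scalar system, where the nominal input steers the mean and the feedback gain shapes the variance (all numbers are made up for the sketch):

```python
import numpy as np

# Scalar system x+ = a*x + b*u + w.  The policy u = u_bar + K*(x - x_bar)
# decouples the two moments:
#   mean:      m+ = a*m + b*u_bar
#   variance:  P+ = (a + b*K)**2 * P + W
a, b, W = 1.02, 1.0, 0.01        # open-loop unstable; process-noise variance W
K = (0.5 - a) / b                # feedback places the closed-loop pole at 0.5
m, P = 5.0, 4.0                  # initial mean and variance

for _ in range(50):
    u_bar = -(a / b) * m         # nominal input drives the mean to zero
    m = a * m + b * u_bar
    P = (a + b * K) ** 2 * P + W

# The variance converges to W / (1 - 0.5**2) regardless of where it starts,
# so a 3-sigma chance constraint such as |m| + 3*sqrt(P) <= 0.5 can be
# verified on the steered distribution rather than on a single trajectory.
print(m, P)
```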


Physics-Informed Neural Networks

To address the "black box" problem in machine learning, I leverage Physics-Informed Neural Networks (PINNs), which embed the governing dynamics and optimality conditions directly into the training loss. Pontryagin Neural Networks (PoNNs) incorporate Pontryagin's Maximum Principle to accurately solve path-constrained OCPs. I also embed attitude kinematics to estimate spacecraft attitude from light curves for space domain awareness (SDA), improving physical consistency and data efficiency.

Keywords: PINNs, Pontryagin's Maximum Principle (PMP), indirect methods, SDA.

Selected Publications: D’Ambrosio et al. (2025), Benedikter et al. (2025)
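The PINN idea of learning a solution from the physics residual alone can be miniaturized by swapping the neural network for a polynomial basis, which turns the residual minimization into a linear least-squares problem. This is a deliberately simplified analogue, not the actual PINN/PoNN training setup:

```python
import numpy as np

# "Learn" x(t) satisfying  dx/dt + x = 0,  x(0) = 1  (solution: exp(-t))
# purely from the physics residual at collocation points, with no solution
# data at all, mirroring how a PINN embeds the dynamics in its loss.
deg = 8
t = np.linspace(0.0, 1.0, 40)            # collocation points

# Ansatz x(t) = sum_j c_j t**j; residual rows encode dx/dt + x, i.e.
# sum_j (j*t**(j-1) + t**j) * c_j = 0 at each collocation point.
R = np.stack([j * t ** np.maximum(j - 1, 0) * (j > 0) + t ** j
              for j in range(deg + 1)], axis=1)
rhs = np.zeros(len(t))

# Initial-condition row, weighted like a PINN's boundary-loss term
ic = np.zeros(deg + 1)
ic[0] = 1.0
A = np.vstack([R, 100.0 * ic])
b = np.concatenate([rhs, [100.0]])
c, *_ = np.linalg.lstsq(A, b, rcond=None)

x_fit = np.polyval(c[::-1], t)           # polyval wants highest degree first
err = np.max(np.abs(x_fit - np.exp(-t)))
print(err)                               # small approximation error
```

Because no labeled solution data appears anywhere, accuracy comes entirely from enforcing the dynamics, which is the physical-consistency and data-efficiency argument made above.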

RL-Enhanced MPC

To blend MPC's constraint-awareness with the adaptability of reinforcement learning (RL), an RL agent learns small cost-shaping corrections to the MPC objective in perturbed simulations. The resulting controller is more robust to unmodeled effects than standalone MPC and generalizes better than standalone RL. I demonstrated this approach in autonomous planetary landing scenarios, where maintaining robustness to uncertainty while satisfying stringent real-time constraints is critical.

Keywords: RL, MPC, robustness, real-time guidance.

Selected Publications: Federici et al. (2025)
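A caricature of the idea, with a grid search standing in for the RL agent and a hypothetical one-step controller facing an unmodeled bias as the perturbed simulation:

```python
import numpy as np

# The "MPC" uses a bias-free model x+ = x + u, while the true plant adds an
# unmodeled bias.  A learned shaping term s in the stage cost,
#   J(u) = (x + u)**2 + r*u**2 + s*u,
# tilts the optimizer so the closed loop compensates for the model error.
r, bias = 0.1, 0.1

def rollout_cost(s, steps=60):
    """Closed-loop cost of the shaped controller on the perturbed plant."""
    x, cost = 1.0, 0.0
    for _ in range(steps):
        u = -(2.0 * x + s) / (2.0 * (1.0 + r))   # argmin of the shaped cost
        x = x + u + bias                         # true, perturbed dynamics
        cost += x * x
    return cost, x

# "RL" here is just a search over the shaping parameter on rollouts
best_s = min(np.linspace(-1.0, 1.0, 201), key=lambda s: rollout_cost(s)[0])
_, x_shaped = rollout_cost(best_s)
_, x_plain = rollout_cost(0.0)
print(best_s, abs(x_shaped), abs(x_plain))  # shaping removes the steady offset
```

The unshaped controller settles at a nonzero offset caused by the bias; the learned correction drives that offset toward zero while the optimization-based inner loop is left intact, which is the division of labor the paragraph above advocates.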


Learning Warm-Starts for Optimization

I investigate the use of imitation learning to train complex control policies from datasets of expert demonstrations. Rather than deploying these learned policies directly as black-box controllers, their output is used to generate high-quality initial guesses, or "warm starts," for rigorous, optimization-based algorithms, dramatically reducing solver convergence time while retaining safety and performance guarantees. This approach is particularly valuable for enabling fast and reliable onboard flight computing in resource-constrained aerospace systems.

Keywords: imitation learning, Transformers, real-time optimization.
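The warm-starting pattern (fit a cheap regressor on expert solutions, then use its prediction to initialize the optimizer) can be sketched on a toy quadratic family; the problem, regressor, and solver below are illustrative stand-ins for the learned policies and flight-grade solvers described above:

```python
import numpy as np

# Family of problems  min_x 0.5*x.T H x - a.T x  parameterized by a.
H = np.array([[3.0, 0.5], [0.5, 1.0]])

def solve_gd(a, x0, tol=1e-8, lr=0.2, max_iter=10_000):
    """Plain gradient descent; returns the solution and iteration count."""
    x = x0.copy()
    for it in range(max_iter):
        g = H @ x - a
        if np.linalg.norm(g) < tol:
            return x, it
        x -= lr * g
    return x, max_iter

# "Expert demonstrations": exact solutions for a few training parameters
A_train = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_train = np.linalg.solve(H, A_train.T).T

# Learn the map a -> x* (here it is exactly linear, so lstsq recovers it)
Wmap, *_ = np.linalg.lstsq(A_train, X_train, rcond=None)

a_new = np.array([0.7, -0.3])
x_cold, it_cold = solve_gd(a_new, np.zeros(2))      # cold start
x_warm, it_warm = solve_gd(a_new, a_new @ Wmap)     # learned warm start
print(it_cold, it_warm)   # the warm start needs far fewer iterations
```

Both starts reach the same optimum, so the guarantees of the underlying solver are untouched; the learned component only buys convergence speed, which is exactly the safety-preserving role argued for above.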

Space Domain Awareness & Orbit Determination

I developed TRACER (Tracking, Recognition, Analysis for Celestial Ephemeris Retrieval), a Space4 Center tool integrated with a telescope network for initial orbit determination, tracking, and cataloging of resident space objects, supporting resilient SDA pipelines.

Keywords: estimation, tracking, cataloging, SDA.
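As one representative ingredient of an initial-orbit-determination pipeline (not necessarily the method TRACER uses), the classical Gibbs method recovers the velocity at the middle of three coplanar position observations:

```python
import numpy as np

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def gibbs(r1, r2, r3, mu=MU):
    """Classical Gibbs initial orbit determination: recover the velocity at
    the middle observation from three coplanar position vectors (km)."""
    z12, z23, z31 = np.cross(r1, r2), np.cross(r2, r3), np.cross(r3, r1)
    n1, n2, n3 = np.linalg.norm(r1), np.linalg.norm(r2), np.linalg.norm(r3)
    N = n1 * z23 + n2 * z31 + n3 * z12
    D = z12 + z23 + z31
    S = (n2 - n3) * r1 + (n3 - n1) * r2 + (n1 - n2) * r3
    L = np.sqrt(mu / (np.linalg.norm(N) * np.linalg.norm(D)))
    return L * (np.cross(D, r2) / n2 + S)

# Sanity check on a circular orbit: three positions 30 degrees apart
R = 7000.0
angles = np.radians([0.0, 30.0, 60.0])
rs = [R * np.array([np.cos(a), np.sin(a), 0.0]) for a in angles]
v2 = gibbs(*rs)
print(np.linalg.norm(v2), np.sqrt(MU / R))  # circular speed, both in km/s
```

With the velocity at one epoch in hand, the full state vector feeds downstream propagation, tracking, and cataloging of the object.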


Research Vision

I aim to establish the scientific and algorithmic foundations for trustworthy autonomy in flight and space systems. My vision is organized around three mutually reinforcing research directions:


1. Safe-by-Design Hybrid GNC

The goal is to fuse the adaptability of learning with the formal guarantees of optimal control theory. The central question is: how can AI components improve performance without compromising verifiability? My key insight is that data-driven models should serve as structured guidance rather than black-box decision-makers. This means using neural networks to generate high-quality warm starts for rigorous optimizers, or embedding physical laws and safety constraints directly into PINNs/PoNNs, ensuring that GNC architectures are both adaptive and provably safe.

2. Real-Time Learning & Adaptation

To operate in the real world, autonomous systems must adapt to unpredictable and dynamic environments in real time. My goal is to move beyond static, pre-programmed behaviors by training controllers as general-purpose Foundation Models. By leveraging transfer and meta-learning, these models can be pre-trained on broad knowledge and then rapidly specialized for new tasks and scenarios. Target use-cases include UAVs in gusty urban flows and deep-space probes adapting to poorly known fields.


3. Verification & Assurance

A key challenge is ensuring that AI components behave reliably when transitioning from simulation to the real world. My vision is to establish the theoretical foundations and practical tools needed to bridge this sim-to-real gap. This involves developing runtime monitoring systems and stochastic reachability–based assurance envelopes that formally quantify risk in real time, enabling proactive safety measures and verifiable guarantees for deployed systems. This work is targeted toward certification frameworks, mission assurance, and hardware-in-the-loop testing.

Together, these efforts aim to advance trustworthy autonomy across the full spectrum of aerospace applications, from deep-space exploration to autonomous flight in Earth’s atmosphere.