Jonathan Gornet

AI Systems Engineer · Ph.D. in Systems Science & Mathematics, Washington University in St. Louis

Reinforcement Learning · AI Infrastructure · Optimization & Control

Modern AI systems are extraordinarily capable but strikingly inefficient. Training and deployment consume massive computational and labor resources, creating barriers to iteration and scalability. As noted in Scientific American, the energy used to power large AI models is projected to reach 85.4 terawatt-hours per year by 2027 — enough to power more than ten million Tesla Model 3s annually. These costs highlight a deeper issue: AI progress is now constrained less by capability than by efficiency.

My work focuses on lowering those costs through automation and efficiency. I design systems that make AI training faster, more stable, and easier to deploy. My portfolio includes AI-Mission-Control, infrastructure for rapid training deployment across GPU server clusters and edge devices; RobotBandit, a suite of algorithms for black-box optimization; and HyperController, an autonomous trainer for efficient learning that achieves 1000× faster convergence while surpassing state-of-the-art baselines.

The result: AI models that can be trained, tested, and deployed at scale in days, not months. By stabilizing learning and automating infrastructure, I make AI not just smarter — but field-ready.

Experience

Stealth Mode AI Startup

Lead Reinforcement Learning Engineer

August 2025 – September 2025

  • Delivered multiple AI prototypes from concept to deployment in an initial 3-week sprint, including LLM-based voice assistants and multi-agent coordination systems to help establish the startup’s early technical moat.
  • Designed containerized multi-agent coordination API enabling simulation-to-hardware deployment within hours instead of days.

Washington University in St. Louis

Ph.D. Candidate – Systems Science & Mathematics

2019 – August 2025

  • Developed scalable AI infrastructure combining control-theoretic methods (Kalman filtering, stochastic systems) with deep learning to accelerate training and ensure reproducible deployment.
  • Built and released HyperController — automated AI agent training system achieving 1000× speedup with 52% median performance improvement across five benchmarks, reducing training time from days to hours.
  • Built and released RobotBandit — deployable optimization API featuring 7+ novel algorithms, with full test coverage and documentation.
  • Delivered FDMonitor — full-duplex spectrum monitoring system for 1,500 users with 27-month uptime and 95% overall prediction accuracy for mission-critical communications (IEEE DySPAN 2024).
  • Presented research at IEEE CDC and L4DC; contributed to Army Research Laboratory MURI program meetings and collaborative projects.
  • Led reproducibility efforts across 3+ research projects, ensuring consistent benchmarking and deployment.

IQT Lab41

Data Scientist Intern

Summer 2020

  • Developed online adversarial detection system with strong predictive performance (AUC=0.84); optimized compute efficiency for real-time image monitoring and field deployment.

New York University

Undergraduate Researcher – Neural Modeling

2016 – 2019

  • Built large-scale neural simulations to study learning mechanisms during sleep and adaptation — foundational to neuroadaptive autonomy.

HHMI Janelia

Undergraduate Scholar – Computational Neuroscience

Summer 2017

  • Built reduced-order neural visual models enabling 100× faster simulations; modeling principles applicable to edge-deployed autonomous systems.

MITRE

Aviation Systems Analyst Intern

Summer 2016

  • Built a classifier combining NLP and trajectory analysis to infer aircraft intent, delivering actionable intelligence to stakeholders.

Education

Washington University in St. Louis

Ph.D. in Systems Science & Mathematics · 2019 – 2025

New York University

B.A. in Mathematics · 2016 – 2018

Loyola University Chicago

Pre-medical coursework · 2014 – 2015

Learn more about my Projects or view my Publications.