Research Engineer for Generative AI, Multimodal Learning, and Scalable ML Systems
I build research-grade ML systems that connect careful experimentation with deployable engineering. My work spans generative AI, multimodal learning, efficient training, and production inference, with experience across NSF- and DoD-funded research as well as real-world manufacturing and geospatial AI systems. I care most about reproducible experiments, strong empirical evidence, and turning good research ideas into reliable code.
Led applied ML systems work across training infrastructure, synthetic data, predictive modeling, and deployment for manufacturing and imaging applications, consistently emphasizing reproducible experimentation, throughput-aware engineering, and production relevance.
Developed a socially intelligent robot navigation policy using reinforcement learning, integrating natural language instructions and human intent signals to enable effective human-robot collaboration.
Built a simulation-based synthetic data pipeline to accelerate training and deployment of object detection models for industrial tools on edge devices.
Designed and implemented an AI-driven system that optimizes in-cabin climate control in passenger vehicles using vision-based occupant detection and pose estimation.
PyTorch, distributed training, mixed precision, checkpointing, throughput tuning.
Benchmarks, ablations, reproducible experiments, and technical analysis.
Docker, Triton, ONNX, experiment orchestration, and deployment pipelines.
Transformers, TRL, TorchTitan, vLLM, W&B, MLflow, Ray, DVC.