
Machine Learning Engineer - Training & Infrastructure

P-1 AI

Software Engineering, Other Engineering
San Francisco, CA, USA
Posted on Jul 8, 2025

About P-1 AI:

We are building an engineering AGI. We founded P-1 AI with the conviction that the greatest impact of artificial intelligence will be on the built world: helping mankind conquer nature and bend it to our will. Our first product is Archie, an AI engineer that reasons quantitatively and spatially over physical product domains and performs at the level of an entry-level design engineer. We aim to put an Archie on every engineering team at every industrial company on earth.

Our founding team includes top minds in deep learning, model-based engineering, and the industries we serve. We just closed a $23 million seed round led by Radical Ventures, with participation from a number of other AI and industrial luminaries (from OpenAI, DeepMind, etc.).

About the Role:

We’re looking for an experienced engineer to take ownership of LLM training operations across our applied research team. Your focus will be on making large-scale GPU training run reliably, efficiently, and fast on a dedicated mid-size GPU cluster and possibly on cloud platforms as well.

You’ll work closely with researchers and ML engineers developing new models and agentic systems, ensuring their experiments scale smoothly across multi-node GPU clusters. From debugging NCCL deadlocks to optimizing FSDP configs, you’ll be the go-to person for training infrastructure and performance.
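
To make that concrete: below is a minimal sketch, assuming a PyTorch stack launched with torchrun, of the kind of FSDP configuration this role would own. The model, dtypes, and sharding choices are illustrative only, not P-1 AI's actual setup.

```python
import os

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision, ShardingStrategy


def setup_fsdp_model() -> FSDP:
    # torchrun populates RANK / LOCAL_RANK / WORLD_SIZE; NCCL is the usual
    # backend for GPU jobs (NCCL_DEBUG=INFO is a first stop when a
    # collective hangs).
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    # Illustrative stand-in for a real LLM.
    model = torch.nn.TransformerEncoderLayer(d_model=1024, nhead=16).cuda()

    # FULL_SHARD minimizes per-GPU memory at the cost of extra all-gathers;
    # bf16 params and gradient reduction cut memory and network traffic.
    return FSDP(
        model,
        sharding_strategy=ShardingStrategy.FULL_SHARD,
        mixed_precision=MixedPrecision(
            param_dtype=torch.bfloat16,
            reduce_dtype=torch.bfloat16,
        ),
    )
```

Choices like FULL_SHARD vs. SHARD_GRAD_OP, or which submodules to wrap, are exactly the knobs this role tunes.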

What You’ll Do:

  • Own the training pipeline for large-scale LLM fine-tuning and post-training workflows

  • Configure, launch, monitor, and debug multi-node distributed training jobs using FSDP, DeepSpeed, or custom wrappers

  • Contribute to upstream and internal forks of training frameworks like TorchTune, TRL, and Hugging Face Transformers

  • Tune training parameters, memory footprints, and sharding strategies for optimal throughput (a small measurement sketch follows this list)

  • Work closely with infra and systems teams to maintain the health and utilization of our GPU clusters (e.g., Infiniband, NCCL, Slurm, Kubernetes)

  • Implement features or fixes to unblock novel use cases in our LLM training stack
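
As a hedged illustration of the throughput tuning mentioned above (the helper and its names are hypothetical, not from P-1 AI's codebase): two numbers drive most batch-size and sharding decisions, tokens per second and peak GPU memory.

```python
import time

import torch


def log_step_stats(step: int, tokens_in_batch: int, step_start: float) -> None:
    # Hypothetical helper: report the two numbers most tuning decisions
    # hinge on. Call torch.cuda.reset_peak_memory_stats() at step start.
    torch.cuda.synchronize()  # wait for queued kernels before timing
    elapsed = time.perf_counter() - step_start
    peak_gib = torch.cuda.max_memory_allocated() / 2**30
    print(f"step {step}: {tokens_in_batch / elapsed:,.0f} tok/s, "
          f"peak mem {peak_gib:.2f} GiB")
```

Comparing these two figures across runs is usually enough to judge whether, say, SHARD_GRAD_OP plus a larger micro-batch beats FULL_SHARD.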

About You:

  • 3+ years working with large-scale ML systems or training pipelines

  • Deep familiarity with PyTorch, especially distributed training via FSDP, DeepSpeed, or DDP

  • Comfortable navigating training libraries like TorchTune, Accelerate, or Trainer APIs

  • Practical experience with multi-node GPU training, including profiling, debugging, and optimizing jobs (a profiler sketch follows this list)

  • Understanding of low-level components like NCCL, Infiniband, CUDA memory, and model partitioning strategies

  • You enjoy bridging research and engineering—making messy ideas actually run on hardware
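
To sketch the profiling side of the bullet above (toy model and trace directory; PyTorch's built-in profiler is one common choice, not necessarily the team's):

```python
import torch
from torch.profiler import ProfilerActivity, profile, schedule

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(4096, 4096).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
batch = torch.randn(32, 4096, device=device)

activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():
    activities.append(ProfilerActivity.CUDA)

# wait/warmup/active = 1/1/3: skip one step, warm up one, then record three.
with profile(
    activities=activities,
    schedule=schedule(wait=1, warmup=1, active=3),
    on_trace_ready=torch.profiler.tensorboard_trace_handler("./traces"),
) as prof:
    for _ in range(5):
        loss = model(batch).pow(2).mean()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        prof.step()  # advance the profiler schedule once per training step
```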

Nice to Have:

  • Experience maintaining Slurm, Ray, or Kubernetes clusters

  • Past contributions to open-source ML training frameworks

  • Exposure to model scaling laws, checkpointing formats (e.g., HF sharded safetensors vs. distcp), or mixed precision training (a checkpoint sketch follows this list)

  • Familiarity with on-policy reinforcement learning setups with inference (policy rollouts) as part of the training loop, such as GRPO, PPO, or A2C

  • Experience working at a startup
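
On the checkpoint-format point: a minimal sketch of the safetensors side, with toy tensor names and a toy file name. distcp (torch.distributed.checkpoint) instead writes per-rank shards that are resharded on load, which is why converting between the two comes up.

```python
import torch
from safetensors.torch import load_file, save_file

# Toy single-shard checkpoint; real LLM checkpoints split the state dict
# across many such files plus a JSON index mapping tensor names to shards.
state = {"embed.weight": torch.randn(1000, 512)}
save_file(state, "model-00001-of-00001.safetensors")

restored = load_file("model-00001-of-00001.safetensors")
assert torch.equal(restored["embed.weight"], state["embed.weight"])
```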

Interview Process:

  • Initial screening - Head of Talent (30 mins)

  • Hiring manager interview - Head of AI (45 mins)

  • Technical interview - AI Chief Scientist and/or Head of AI (45 mins)

  • Culture fit / Q&A (possibly in person) - with co-founder & CEO (45 mins)