Simplifying Distributed AI Training

We are building tools that make it easier to train and fine-tune large language models (LLMs) on distributed infrastructure.

Backed by

  • Y Combinator

Key Features of Felafax

  • Effortless Setup

    Get started quickly on any cloud or on-premises environment with a solution that works out of the box and deploys seamlessly.

  • Optimized for Large-Scale Models

    Specifically engineered for fine-tuning massive models like Grok-1 and Llama 3, with high GPU utilization and cost efficiency.

  • Continuous Training Support

    Automatically fine-tune models on the latest data every day, keeping them current and effective.

  • Modular Training Pipelines

    Customize your training process with ease using our flexible, component-based architecture (see the first sketch after this list).

  • JAX-Powered Performance

    Benefit from the speed and flexibility of JAX, whose compilation and parallelism primitives make training faster and more efficient (see the second sketch after this list).

  • Fast Inference Endpoints

    Serve model predictions rapidly and reliably through our optimized inference endpoints.
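
To give a concrete flavor of the component-based design, here is a minimal sketch of what a modular training pipeline might look like. The names and interfaces below are illustrative assumptions, not Felafax's actual API.

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable

# Hypothetical interfaces for illustration only; the real API may differ.
@dataclass
class Pipeline:
    load_data: Callable[[], Iterable[Any]]  # e.g. stream batches from cloud storage
    train_step: Callable[[Any, Any], Any]   # e.g. a jit-compiled JAX update
    checkpoint: Callable[[Any], None]       # e.g. persist weights after the run

    def run(self, params: Any) -> Any:
        for batch in self.load_data():
            params = self.train_step(params, batch)
        self.checkpoint(params)
        return params

# Swapping out any one component (a new dataset, optimizer, or checkpointer)
# leaves the rest of the pipeline untouched.
pipeline = Pipeline(
    load_data=lambda: [1.0, 2.0, 3.0],
    train_step=lambda params, batch: params - 0.1 * batch,
    checkpoint=lambda params: print("final params:", params),
)
pipeline.run(params=10.0)
```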
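And here is a minimal sketch of the data-parallel pattern JAX enables via `pmap`: the same update step runs on every accelerator, with gradients averaged by a collective. The toy loss and update step are illustrative, not Felafax's training code.

```python
import functools

import jax
import jax.numpy as jnp

# Toy linear-regression loss standing in for a real training objective.
def loss_fn(params, x, y):
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

# pmap replicates the step across local accelerators; pmean averages the
# gradients across them, the core collective of data-parallel training.
@functools.partial(jax.pmap, axis_name="devices")
def update(params, x, y):
    grads = jax.grad(loss_fn)(params, x, y)
    grads = jax.lax.pmean(grads, axis_name="devices")
    return jax.tree_util.tree_map(lambda p, g: p - 0.01 * g, params, grads)

n = jax.local_device_count()
# Replicate parameters on every device and shard the batch across devices.
params = {"w": jnp.zeros((8, 1)), "b": jnp.zeros((1,))}
params = jax.tree_util.tree_map(lambda a: jnp.stack([a] * n), params)
x = jnp.ones((n, 32, 8))
y = jnp.ones((n, 32, 1))
params = update(params, x, y)  # params now carries a leading device axis
```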

Need infra for fine-tuning?

Please reach out to us, and we'll work with you personally to get you set up. 🙂

Reach out

Meet our team

  • Nikhil Sonti

    Co-Founder & CEO

    With over 6 years at Meta and 3+ years at Microsoft, Nikhil worked on ML inference infrastructure for Facebook Feed, focusing on performance and efficiency.

  • Nithin Sonti

    Co-Founder & CTO

    With over 5 years at Google and experience at Nvidia, Nithin specializes in building large-scale ML training infrastructure. At Google, he built the trainer platform for all YouTube recommender models and worked on fine-tuning Gemini for YouTube.

Built by engineers with experience at

  • Google
  • Facebook
  • Nvidia
  • Microsoft

Let’s connect

We’re here to help and answer any questions you might have. We look forward to hearing from you.