We are seeking highly experienced and motivated Platform Engineers to build scalable services and deliver a highly available experience that empowers our AI research team in their daily work.
Responsibilities
As an ML Platform Engineer, you balance day-to-day operations on production systems with long-term software engineering improvements that reduce operational toil and improve the reliability, availability, and performance of these systems.
Operations (50%)
- Operate systems and troubleshoot issues in production environments (interrupts, on-call responses, user administration, data extraction, infrastructure scaling, etc.)
- Ensure our model training environments are always highly available and enable seamless replication of work environments across several HPC clusters
- Implement and maintain workflows and tools (CI/CD, containerization, orchestration, monitoring, logging and alerting systems) for both our large training runs and our client-facing APIs
- Participate occasionally in on-call rotations to resolve out-of-hours incidents
Engineering (50%)
- Collaborate with AI/ML researchers to develop and implement solutions that enable safe and reproducible model-training experiments
- Build a cloud-agnostic platform offering an abstraction layer between science and infrastructure
- Design and develop new workflows and tooling to improve the reliability, availability, and performance of our systems (automation scripts, refactoring, new API-based features, web apps, dashboards, etc.)
- Ensure compliance with security best practices and industry standards
- Document processes and procedures to ensure consistency and knowledge sharing across the team
- Contribute to open-source projects, research publications, blog articles, and conference attendance
About you
- Master’s degree in Computer Science, Engineering or a related field
- 5+ years of experience in a DevOps/SRE role
- Strong experience with cloud computing and highly available distributed systems
- Exposure to site reliability issues in critical environments (issue root cause analysis, in-production troubleshooting, working against reliability KPIs, participating in on-call rotations...)
- Hands-on experience with CI/CD, containerization and orchestration tools
- Knowledge of monitoring, logging, alerting and observability tools like Prometheus, Grafana, ELK Stack or Datadog
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation
- Proficiency in scripting languages such as Python, Bash, or PowerShell and knowledge of software development best practices
- Strong understanding of networking, security, and system administration concepts
- Excellent problem-solving and communication skills
- Self-motivated and able to work well in a fast-paced startup environment
Your application will stand out if you also have:
- experience in AI/ML environments
- experience with high-performance computing (HPC) systems and workload managers (Slurm)
- experience with modern AI-oriented cloud providers such as Fluidstack, CoreWeave, Vast...
What We Offer
- The opportunity to shape the exciting journey of AI and be part of the very early days of one of Europe’s hottest startups
- A fun, young, multicultural team and collaborative work environment — based in Paris and London
- Competitive salary and bonus structure
- Comprehensive benefits package
- Opportunities for professional growth and development
About Mistral AI
We're a small team of seasoned researchers and engineers in the AI field. We like to work hard and be at the edge of science. We are creative, low-ego, team-spirited, and have been passionate about AI for years. We hire people who thrive in competitive environments, because they find them more fun to work in. We hire passionate women and men from all over the world.
Developers are using our API via la Plateforme to build incredible AI-first applications powered by our models, which can understand and generate natural language text and code. We are multilingual at our core. More recently, we released le Chat as a demonstrator of our models.