About the Team
Reliable services are what enable OpenAI to train the best AI models and to bring the promise of safe, effective AI to the world. The SRE team in research is responsible for defining, measuring, and improving the reliability of the research platform. The team works closely with the supercomputing and hardware health teams to improve the existing research platform and to build the future one. The research platform is where we conduct basic AI research and train the next generation of models.
This is the team that helps build the infrastructure enabling progress at the world's leading AI lab.
About the Role
As OpenAI continues to grow, we are looking for experienced, problem-solving engineers to ensure our systems scale. Our success depends on our ability to quickly iterate on research ideas while also ensuring that the underlying platform is performant, usable, and reliable.
You will work in a deeply iterative, collaborative, fast-paced environment to bring our technology to millions of users around the world, and ensure it’s delivered with safety and reliability in mind.
As a Reliability Engineer, you will play a crucial role at the forefront of maintaining and enhancing the stability, scalability, and performance of our rapidly evolving infrastructure as we continue to expand. You will work closely with cross-functional teams, including software engineers, data scientists, and ML researchers, to build and maintain resilient systems that can handle our growing user base and workload.
In this role, you will:
Collaborate with researchers, data scientists and platform developers to specify the availability, performance, correctness, and efficiency requirements of the current and future versions of the research platform.
Design and implement solutions to ensure the scalability of our infrastructure to meet rapidly increasing demands.
Implement and manage monitoring systems to proactively identify issues and anomalies in our production environment.
Develop and maintain service level objectives (SLOs) and service level indicators (SLIs) to measure and ensure system reliability.
Implement fault-tolerant and resilient design patterns to minimize service disruptions.
Build and maintain automation tools to streamline repetitive tasks and improve system reliability.
Participate in an on-call rotation, alongside other infrastructure developers, to respond to critical incidents and ensure 24/7 system availability.
You might thrive in this role if you:
Enjoy seeking out and addressing bottlenecks and areas for performance improvement in our systems.
Utilize Infrastructure as Code (IaC) principles to automate infrastructure provisioning and configuration management.
Are experienced in collaborating with cross-functional teams to ensure that reliability and scalability are considered in the design and development of new features and services.
Have a track record of accelerating engineering reliability by empowering your fellow engineers with excellent tooling and systems.
Help create a diverse, equitable, and inclusive culture that makes everyone feel welcome while enabling radical candor and the challenging of groupthink.
Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed.
Own problems end-to-end, and are willing to pick up whatever knowledge you're missing to get the job done.
Have excellent communication skills. Expressing ideas clearly and listening carefully are among the most important requirements for success in this role.
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
Proven experience as a reliability engineer, production engineer, infrastructure software engineer, or a similar role in a fast-paced, rapidly scaling company.
Strong proficiency in cloud infrastructure, including the underlying concepts of scheduling, scaling, cloud storage, networking and security.
Proficiency in programming/scripting languages.
Experience with containerization technologies and container orchestration platforms like Kubernetes or equivalent.
Knowledge of IaC tools such as Terraform or CloudFormation or equivalent.
Excellent problem-solving and troubleshooting skills.
Strong communication and collaboration skills.
Experience with observability tools such as Datadog, Prometheus, Grafana, Splunk, or the ELK stack.
Experience maximizing bare-metal performance in a Linux environment, as well as with hardware device performance and troubleshooting, especially for GPUs.
Knowledge of security best practices in cloud environments.
Bonus: Experience as an SRE within the AI/ML space.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that artificial general intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status.
For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.