Blockchain.com is the world's leading software platform for digital assets. Offering the largest production blockchain platform in the world, we share the passion to code, create, and ultimately build an open, accessible, and fair financial future, one piece of software at a time.
We are looking for a talented Senior or Staff Data Engineer to join our FinPlatform team and work from our office in London. The group is part of a larger Data Science team, informing all product decisions and creating models and infrastructure to improve efficiency, growth, and security. To do this, we use data from various sources and of varying quality. Our automated ETL processes serve both the broader company (in the form of clean, simplified tables of aggregated statistics and dashboards) and the Data Science team itself (cleaning and processing data for analysis and modeling purposes, ensuring reproducibility).
We are looking for someone with experience in designing, building, and maintaining scalable and robust data infrastructure that makes data easily accessible to the Data Science team and the broader audience via different tools. As a data engineer, you will be involved in all aspects of the data infrastructure, from understanding current bottlenecks and requirements to ensuring the quality and availability of data. You will collaborate closely with data scientists, platform engineers, and front-end engineers, defining requirements and designing new data processes for both streaming and batch processing, as well as maintaining and improving existing ones. We are looking for someone passionate about high-quality data who understands its impact in solving real-life problems. Being proactive in identifying issues, digging deep into their source, and developing solutions is at the heart of this role.
SENIOR
What You Will Need
Bachelor’s degree in Computer Science, Applied Mathematics, Engineering or any other technology-related field
Previous experience working in a data engineering role
Fluency in Python
Experience in both batch processing and streaming data pipelines
Experience working with Google Cloud Platform
In-depth knowledge of SQL and NoSQL databases
In-depth knowledge of coding principles, including Object-Oriented Programming
Experience with Git
Nice to have
Experience with code optimisation and parallel processing
Experience with Airflow, Google Cloud Composer, or Google Kubernetes Engine
Experience with other programming languages, such as Java, Kotlin, or Scala
Experience with Spark or other big data frameworks
Experience with distributed and real-time technologies (e.g., Kafka)
5-8 years of commercial experience in a related role
STAFF
What You Will Do
Maintain the current data infrastructure and evolve it to meet new requirements
Maintain and extend our core data infrastructure, existing data pipelines, and ETLs
Establish best practices and frameworks for data testing and validation, ensuring the reliability and accuracy of data
Design, develop, and implement data visualization and analytics tools and data products
Play a critical role in setting the direction and goals for the team
Build and ship high-quality code; provide thorough code reviews, testing, and monitoring; and make proactive changes to improve stability
Implement the hardest parts of the system or feature
What You Will Need
Bachelor’s degree in Computer Science, Applied Mathematics, Engineering or any other technology-related field
Previous experience working in a data engineering role
Fluency in Python
Experience in both batch processing and streaming data pipelines
Experience working with Google Cloud Platform
In-depth knowledge of SQL and NoSQL databases
In-depth knowledge of coding principles, including Object-Oriented Programming
Experience with Git
Ability to solve technical problems that few others can
Ability to lead/coordinate rollout and releases of major initiatives
Nice to have
Experience with code optimisation and parallel processing
Experience with Airflow, Google Cloud Composer, or Google Kubernetes Engine
Experience with other programming languages, such as Java, Kotlin, or Scala
Experience with Spark or other big data frameworks
Experience with distributed and real-time technologies (e.g., Kafka)
8+ years of commercial experience in a related role
COMPENSATION & PERKS
Full-time salary based on experience and meaningful equity in an industry-leading company
Hybrid working model from home & our office in Central London (Soho)
Work from Anywhere Policy - up to 20 days to work remotely
ClassPass
Budgets for learning & professional development
Unlimited vacation policy; work hard and take time when you need it
Apple equipment
The opportunity to be a key player and build your career at a rapidly expanding, global technology company in an emerging field
Flexible work culture
Blockchain is committed to diversity and inclusion in the workplace and is proud to be an equal opportunity employer. We prohibit discrimination and harassment of any kind based on race, religion, color, national origin, gender, gender expression, sex, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law. This policy applies to all employment practices within our organization, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, and apprenticeship. Blockchain makes hiring decisions based solely on qualifications, merit, and business need at the time.