We’re looking for a Senior Data Engineer to help build and scale an enterprise-wide Centralised Data Platform on Databricks. This role sits within a global financial services environment where data engineering is a top technical priority, underpinning analytics, APIs and future AI initiatives across the organisation.
The role
- Build and optimise data pipelines on the Databricks Lakehouse Platform
- Design scalable ETL/ELT and Structured Streaming pipelines
- Develop enterprise-grade data processing and analytics solutions
- Optimise Spark jobs and Databricks clusters for performance and cost
- Implement data quality, monitoring and governance standards
- Apply security, access control and cataloguing best practices
- Work closely with data scientists, analysts and business stakeholders
- Contribute to Agile delivery, code reviews and technical knowledge sharing
Experience
- 6+ years’ experience in data engineering roles
- Hands-on experience with Databricks and Apache Spark
- Strong Python and SQL skills with solid data modelling knowledge
- Experience building ETL/ELT pipelines and lakehouse architectures
- Cloud experience, ideally AWS
- Familiarity with Delta Lake, Unity Catalog and governance frameworks
- Experience with real-time or streaming data is a plus
- Exposure to AI/ML use cases, or experience using AI tools in development, is advantageous
- Strong problem-solving skills and the confidence to work in complex, regulated environments