Role: Data Engineer
Location: Dublin
Employment: 6-month fixed-term contract initially
Superb opportunity for a hands-on Data Engineer to design, build, and optimise scalable data solutions on Databricks and AWS within a global financial services company. You’ll work across data engineering, analytics, and cloud architecture, building robust pipelines, supporting data products, and contributing to the modernisation of enterprise data platforms.
Responsibilities
- Develop, enhance, and maintain Data Lakehouse and Data Warehouse environments on AWS using Databricks.
- Design and implement ETL/ELT pipelines for large-scale data processing (Python, SQL, Spark).
- Create and optimise database schemas, tables, indexes, and stored procedures.
- Work closely with business stakeholders to gather requirements and deliver reliable, production-grade data solutions.
- Collaborate with cross-functional teams to develop and maintain data pipelines from source systems to analytics layers.
- Manage orchestration, version control (Git), and CI/CD processes within an Agile delivery model.
- Apply best practices in data modelling, security, and performance tuning.
Skills & Experience
- 5+ years’ experience in data engineering, BI, or data warehouse development.
- 3+ years’ hands-on experience with Databricks on AWS.
- Strong skills in Python, SQL, and modern data frameworks (Spark, Delta Lake).
- Solid understanding of data modelling, ETL, and data architecture principles.
- Experience with AWS services (S3, Glue, Lambda, Step Functions, IAM, VPC).
- Exposure to machine learning / AI initiatives is a plus.
- Background in financial services or regulated data environments desirable.
#LI-LDM