We are seeking a talented and experienced Data Warehouse Engineer – DBT to join the Data Solutions domain in EMEA.
As a key contributor, you will design and implement scalable data warehouse and transformation architectures that empower the organization to make data-driven decisions. You will collaborate with cross-functional teams to build robust ELT pipelines, optimize warehouse performance, and ensure data availability, reliability, and quality across all layers of the platform.
Key Responsibilities:
• Contribute to the design and implementation of warehouse integration architectures, including data flows, process flows, and ingestion/ELT patterns.
• Build, maintain, and optimize data pipelines across warehouse layers, ensuring efficient ingestion, processing, and storage.
• Develop, test, and document dbt models (staging, intermediate, marts) following best practices in modularity, versioning, and lineage.
• Implement and uphold data governance, security, quality, and compliance throughout the data lifecycle.
• Monitor, troubleshoot, and optimize warehouse performance, including query tuning, storage optimization, and materialization strategies.
• Provide production support to ensure system reliability, observability, and rapid issue resolution.
• Work closely with analytics, platform, and engineering teams to support data modeling, warehouse design, and transformation best practices.
Technical Skills You'll Bring:
• Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related technical field.
• 7+ years of experience in a similar role within a fast-paced, enterprise-scale environment.
• Strong proficiency in SQL and hands-on experience with dbt (Core or Cloud) or other traditional ETL tools (ODI, Informatica, Talend, SSIS, etc.).
• A solid understanding of end-to-end data ingestion and transformation processes.
• Strong understanding of data modeling methodologies such as Inmon, Kimball, and Data Vault.
• Experience with AWS services, including IAM, Lambda, EKS, S3 (Data Lake), EMR (PySpark), Lakehouse/Iceberg, MWAA, and other analytics-relevant components.
• Expertise in Snowflake, including performance tuning, resource optimization, and efficient query design.
• Nice to have: experience with batch ingestion tools (StreamSets) and real-time streaming technologies (Kafka).
• Exposure to CI/CD pipelines and Infrastructure as Code tools (Terraform or equivalent).
• Experience with Application Performance Monitoring tools is a plus.
• Knowledge of data quality frameworks and security best practices.
• Ability to interpret complex architecture patterns and produce accurate implementation estimates.
Non-Technical Skills:
• Demonstrated commitment, ownership, and accountability.
• Strong attention to detail, with a proactive approach to identifying and resolving problems.
• A genuine interest in emerging data technologies and curiosity about continuous improvement.
• Excellent verbal and written communication skills.
• Experience using AI-powered tools and workflows.
To submit your CV for consideration, click 'Apply' or contact [email protected] on 091 507515.
#LI-RR1