Data Architect - Databricks
6 month contract
Dublin - Hybrid
€600-700 per day
The Role
A Data Architect is required to lead the evolution of our enterprise data platform infrastructure and engineering frameworks. As a key technical leader, you will work cross-functionally with data engineers, software developers, product teams, and enterprise architects to ensure the platform supports current and future data-driven initiatives. You will be instrumental in shaping the architecture of our Azure-based data ecosystem, driving AI enablement, and ensuring scalable, secure, and high-performance data solutions across the organisation.
Principal Responsibilities
- Architectural Leadership: Lead the selection and implementation of data infrastructure technologies, such as data lakes, warehouses, lakehouses, orchestration tools, and streaming platforms.
- Enterprise Data Architecture: Design and evolve the enterprise data architecture, including integration with Databricks, Data Lake, Data Factory, and other core Azure services.
- AI Enablement: Collaborate with Data Science and AI teams to architect solutions that support machine learning operations (MLOps), predictive analytics, and intelligent automation.
- Platform Strategy: Define and implement engineering standards, best practices, governance frameworks, and architectural blueprints for the data platform ecosystem.
- Stakeholder Engagement: Partner with business units, product owners, and engineering teams to translate strategic goals into scalable data solutions.
- Innovation & Optimisation: Evaluate emerging technologies and recommend enhancements to improve performance, cost-efficiency, and resilience of the data platform.
- Security & Compliance: Ensure data architecture adheres to compliance standards, including GDPR and EEA residency requirements.
- Enterprise Scale: Work with Enterprise DevOps and Cloud Engineering teams to ensure infrastructure-as-code, CI/CD, and monitoring practices are in place.
- Mentorship: Guide and upskill data engineers and analysts in best practices for data architecture, pipeline design, and cloud-native development.
Knowledge, Qualifications, Experience & Skills
Skills & Experience
• Proven experience architecting data platforms on Microsoft Azure, including Databricks, Data Lake Storage Gen2 & Azure Data Factory.
• Strong understanding of Lakehouse architecture, Delta Lake, PySpark & SQL.
• Experience with AI/ML workflows, including MLOps, model lifecycle management, and integration with data pipelines.
• Strong understanding of data governance, lineage, security, and privacy frameworks.
• Proficiency in data modelling, ETL/ELT design, and metadata management.
• Familiarity with Power BI, Azure DevOps or Jira, GitHub and CI/CD pipelines.
• Experience in large-scale enterprise environments with complex data ecosystems.
• Proven track record of delivering innovation and continuous improvement.
For more information, please call Michael on 01-6146058 or email [email protected]