Experience:
• Bachelor's/Master's degree in IT, Computer Science, Software Engineering, or a related field
• 8+ years of relevant experience in a complex technical environment
• AWS Certified Solutions Architect - Associate/Professional level
• Experience building and optimizing "big data" pipelines, architectures, and data sets on an AWS data lake.
• Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
• Strong analytic skills related to working with unstructured datasets.
• Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management.
• A successful history of manipulating, processing and extracting value from large disconnected datasets.
• Must have hands-on experience with at least two of the following: Spark/Scala, Kafka, Elasticsearch, Python
• Experience with AWS cloud services such as Redshift, RDS, EMR, Kinesis, S3, Glue, DMS (batch/CDC), Athena, EC2, Lambda, SQS, and SNS
• Experience working on AWS Data Lakes
• Experience working with AWS Data Pipeline and CI/CD processes
• Exposure to the Hadoop ecosystem (preferably on AWS EMR) and to NoSQL and SQL-like technologies
• Experience with data science tools and technologies on the AWS cloud is a plus
• Experience supporting and working with cross-functional teams in a dynamic environment.
• Excellent communication skills and the ability to work with stakeholders
All listed duties, requirements, and responsibilities are deemed essential functions of this position; however, business conditions may require the assignment of additional tasks and responsibilities.