Senior Data Engineer - Dublin
Gibbs Hybrid is looking for an accomplished Data Engineer to support a Dublin-based finance client on a large risk consolidation project. Hybrid working: 2-3 days per week onsite in Dublin.
- Pipeline Development: Develop and maintain data pipelines that extract, transform, and load (ETL) data from various sources into a centralized data storage system.
- Data Integration: Integrate data from multiple sources and systems, including databases, APIs, log files, streaming platforms, and external data providers.
- Data Transformation and Processing: Develop data transformation routines to clean, normalize, and aggregate data.
- Frameworks and Governance: Contribute to common frameworks and best practices in code development, deployment, and automation/orchestration of data pipelines. Implement data governance in line with company standards.
- Monitoring and Support: Monitor data pipelines and data systems to detect and resolve issues promptly. Develop monitoring tools, alerts, and automated error handling mechanisms to ensure data integrity and system reliability.
Essential:
- Extensive experience designing data solutions including data modeling.
- Extensive hands-on experience developing data processing jobs (PySpark / SQL) that demonstrate a strong understanding of software engineering principles.
- Experience orchestrating data pipelines using technologies such as Azure Data Factory (ADF) or Apache Airflow.
- Experience working with both real-time and batch data, knowing the strengths and weaknesses of each and when to apply one over the other.
- Experience building data pipelines on AWS, Azure, or GCP, following cloud deployment best practices.
- Experience working with Hive / HBase / Presto.
- Fluent in SQL (any dialect), with experience using window functions and other advanced features.
- Understanding of DevOps tools, Git workflows, and building CI/CD pipelines.
- Experience supporting big data pipelines.
- Experience applying data governance controls within a highly regulated environment.
- Data engineering experience.
- Experience working on projects using agile/Scrum methodologies.
- Familiarity with Azure Data Factory or Apache Airflow.
- Familiarity with Azure Databricks or Snowflake.
- Experience with shell scripting languages.
- Well versed in Python across multiple general-purpose use cases, including but not limited to developing data APIs and pipelines.
- Experience with Apache Spark and related big data technologies.
- Experience working with Apache Kafka, building appropriate producer/consumer applications.
- Familiarity with production quality ML and/or AI model development and deployment.
- Experience working with Kubernetes and Docker, with knowledge of cloud infrastructure automation and management (e.g., Terraform).
Click Apply Now or contact Lianne to be considered for the Senior Data Engineer - Dublin role.