Data Engineer
- Location: Central London
- Hybrid working: 2 days per week at home, 3 days in the office.
- Job Type: Full-time
We are seeking a Data Engineer to play a key role in executing our data strategy. The role is central to strengthening data-driven decision-making across the organisation, growing client value, driving ROI, and streamlining internal processes. As a Data Engineer, you will develop and maintain sophisticated data pipelines, consolidate disparate data sources into a unified data lake, and collaborate with teams across the business to lay the foundation for data visualisation, AI, and machine learning within a robust governance framework.
Day-to-Day of the Role:
- Construct and expand the data lake infrastructure to facilitate efficient data access, retrieval, and analysis, while proactively resolving bottlenecks.
- Design, develop, and maintain data pipelines that handle large volumes of structured and unstructured data from diverse sources, including on-premises and cloud platforms.
- Continuously monitor and enhance the performance and scalability of the data lake and pipelines to meet business needs and data processing demands.
- Implement data quality and governance protocols to ensure data integrity, consistency, and adherence to standards such as GDPR.
- Keep up-to-date with the latest technologies, best practices, and trends in data engineering and analytics to drive ongoing improvements.
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Mathematics, Software Engineering, Computer Engineering, or a related field.
- At least 2 years of experience in business analytics, data science, software development, data modelling, or data engineering, preferably in the Tech or Financial Services/FinTech sectors.
- A minimum of 1 year of experience as a Data Engineer with expertise in Spark SQL, PySpark, or Spark Scala.
- A minimum of 1 year of experience with T-SQL and translating business needs into technical solutions.
- Proficiency in Python, Microsoft Power Apps, Google Analytics, BigQuery, and Power BI is highly desirable.
- Solid understanding of data modelling, ETL/ELT processes, and data warehousing principles.
- Knowledge of the Azure cloud platform, with experience in Azure Data Factory, Azure Synapse, and Azure Databricks.
- Familiarity with Git operations, GitHub Copilot, and CI/CD workflows.
- Experience with data visualisation tools, particularly Power BI, for creating interactive dashboards and reports.
Benefits:
- Competitive salary and benefits package.
- Opportunities for professional growth in a cutting-edge technology environment.
- A collaborative workplace culture that encourages innovation and continuous learning.