£52K/yr to £65K/yr
London, England
Contract, Variable

Data Engineer

Posted by Reed.

Main Duties of the Job

You will draw on an extensive background in core data engineering. Your skills will enable the delivery of data and analytical solutions on our analytics and data platforms. You will act as a senior technical expert across all aspects of delivery and be prepared to provide leadership and guidance to small project teams in a collaborative way.

You will develop and maintain the technology infrastructure that integrates new and existing sources of data into Data Analytics and Surveillance processes and workflows, so that it can be exploited with maximum efficiency and efficacy. Key skills required include data engineering, data architecture, data analysis, security architecture, DevOps and business partnering.

  • Programming and build - You can design, write and iterate code from prototype to production-ready. You understand security, accessibility and version control. You can use a range of coding tools and languages. You can develop code that generates its own documentation to support Data Scientists and Data Analysts.
  • Technical understanding - You know about the specific technologies that underpin your ability to deliver the responsibilities and tasks of the role. You can apply the required breadth and depth of technical knowledge.
  • Testing - You can plan, design, manage, execute and report tests, using appropriate tools and techniques, and work within regulations. You know how to ensure that risks associated with deployment are adequately understood and documented.
  • Problem resolution - You know how to log, analyse and manage problems in order to identify and implement the appropriate solution. You can ensure that the problem is fixed.

Essential Criteria

  • Extensive knowledge/experience at an enterprise scale (5+ years) of the Python programming language, including unit testing (pytest) and PEP 8 standards
  • Pandas data validation, manipulation, merging, joining and, at times, visualisation
  • Extensive knowledge/experience at an enterprise scale (5+ years) using Python, SQL, Spark and AWS
  • Hands-on ETL development experience using the Microsoft enterprise stack / Azure and AWS Glue
  • Extensive knowledge/experience at an enterprise scale (5+ years) of data management platforms and development with SQL Server
  • Writing robust data pipeline code that can run unattended
  • Unix environment, server health and management of ongoing running processes
  • GitHub, git, pull requests, CI and code review
  • Ability to work as part of a team to develop and deliver end-to-end data warehouse solutions
  • Analytical skill set with an ability to understand data requirements and support the development of data solutions
  • Experience with publishing data sets for visualisation and analysis
  • Experience with supporting design of data models / data flows
  • Pragmatic logging and reporting
  • Ability to troubleshoot and solve numerical and technical problems
  • High attention to detail
  • Excellent communication and facilitation skills evidenced through verbal and written means to a wide range of stakeholders
  • Experience with or knowledge of Agile software development methodologies
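As a rough illustration of the pandas and pytest skills listed above (function names, column names and data are hypothetical, not taken from the role), a candidate might be expected to write something like:

```python
import pandas as pd


def merge_orders_with_customers(orders: pd.DataFrame,
                                customers: pd.DataFrame) -> pd.DataFrame:
    """Left-join orders onto customers, validating key uniqueness.

    validate="many_to_one" makes pandas raise a MergeError if
    customer_id is not unique on the customers side - a cheap
    data-validation guard inside the merge itself.
    """
    return orders.merge(customers, on="customer_id",
                        how="left", validate="many_to_one")


# pytest-style unit test: run with `pytest <this_file>.py`
def test_merge_preserves_all_orders():
    orders = pd.DataFrame({"order_id": [1, 2, 3],
                           "customer_id": [10, 10, 20]})
    customers = pd.DataFrame({"customer_id": [10, 20],
                              "name": ["Ada", "Bea"]})
    merged = merge_orders_with_customers(orders, customers)
    assert len(merged) == len(orders)      # left join keeps every order
    assert merged["name"].notna().all()    # every order matched a customer
```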

Desirable Criteria

  • Machine learning for engineering practices, such as metadata-driven intelligent ETL and pipeline processes
  • Experience of working with JIRA (or Azure DevOps or similar tools) within an Agile/Scrum environment
  • Experience/Understanding of software and data lifecycle management
  • Educated to degree level (not essential; experience is key). A relevant numerate, technical or computer science discipline would be an advantage
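Several of the criteria above mention robust, unattended pipeline code with pragmatic logging. A minimal sketch of that idea (all names and data are hypothetical) logs and skips bad records rather than crashing mid-run:

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("pipeline")


def run_pipeline(records: list[dict]) -> int:
    """Process records unattended: bad rows are logged and skipped,
    and a summary line is emitted for scheduler/monitoring tools."""
    processed = 0
    for rec in records:
        try:
            if "id" not in rec:
                raise ValueError(f"missing id: {rec!r}")
            processed += 1
        except ValueError:
            log.exception("skipping bad record")
    log.info("processed %d of %d records", processed, len(records))
    return processed


if __name__ == "__main__":
    run_pipeline([{"id": 1}, {}, {"id": 2}])
```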

