Job description
We are seeking a talented and experienced Data Engineer to join our dynamic
team. As a Data Engineer, you will play a crucial role in designing, building, and
maintaining our data infrastructure to support our growing business needs. Your
role will include building a new data warehouse alongside the data team,
building data pipelines that provide data to both internal and external clients,
and maintaining and ensuring data quality across the business.
Key Responsibilities:
- Design and build a new data warehouse alongside the data team.
- Design, develop, and maintain data pipelines and ETL processes to ingest, process, and transform large volumes of data from various sources into our data warehouse.
- Build data pipelines to third-party reporting and automation tools.
- Work with clients to provide clean, reliable data in a secure environment.
- Build and optimize data models, schemas, and databases to support analytical and reporting requirements.
- Implement data quality checks, monitoring, and error handling mechanisms to ensure the integrity and reliability of our data.
- Collaborate with the data team and business departments to understand data requirements and deliver data solutions that meet business needs.
- Continuously optimize and improve data infrastructure, performance, and scalability to support growing data volumes and evolving business requirements.
- Stay up-to-date with emerging technologies and best practices in data engineering, cloud computing, and data management.
Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 4+ years of experience as a Data Engineer or in a similar role, with a strong background in designing and building data pipelines, ETL processes, and data warehouses.
- Hands-on experience with cloud-based data platforms and services, particularly AWS (Amazon Web Services), including but not limited to S3, Redshift, Lambda, and Glue.
- Proficiency in programming languages such as Python and SQL, and experience with data processing frameworks/libraries such as Apache Spark, Pandas, or similar.
- Experience managing and scaling compute workloads with tools such as Kubernetes.
- Solid understanding of database concepts, data modelling, and relational databases (e.g., PostgreSQL, MySQL).
- Strong analytical and problem-solving skills, with the ability to work effectively in a fast-paced and collaborative environment.
- Experience working in the fintech or financial services industry is a plus.
- Excellent communication and collaboration skills, with the ability to interact effectively with both technical and non-technical stakeholders.