Data Engineer (Python) - Top Asset Manager!

Do you want to work in a brand-new team with full autonomy? Are you driven and commercial? Do you like working in a fast-paced environment? Do you want to work in a company where you can make a big impact?

The Data Engineer must come from a FinTech, financial services, insurance, PE/VC fund or banking background. This role is based in London - 3 days onsite and 2 days from home, with more flexibility when and if needed. You will be working with a pragmatic hiring manager who has a good understanding of emotional intelligence.

Vision for this role:
The Data Engineer will join a brand-new team and play a pivotal role in the current and future data strategy. You will work with a high-end technology stack supporting a robust data pipeline for a data lake infrastructure, which allows portfolio managers to collect, validate and analyse large datasets.

Qualifications/experience required:
- Bachelor's degree in Computer Science, Maths, Software Engineering, Computer Engineering, or a related field
- 2 years' experience in business analytics, data science, software development, data modelling or data engineering, ideally in Tech or Financial Services/FinTech
- 1 year's experience as a Data Engineer manipulating and transforming data in Spark SQL, PySpark, or Spark Scala
- 1 year's experience manipulating and transforming data in T-SQL
- 1 year's experience translating business requirements into technical requirements
- Proficiency in Python, Microsoft Power Apps, GA, BigQuery and Power BI highly recommended

Competencies/skill set:
- Proficiency in programming languages such as Python and SQL for data processing, manipulation, and analysis
- Experience with big data technologies and frameworks
- Proficiency in Apache Spark and experience with Spark SQL and PySpark for distributed data processing and storage
- Strong understanding of data modelling concepts, ETL and ELT processes, and data warehousing principles
- Knowledge of cloud computing platforms, in particular Azure, and experience with Microsoft Fabric, Azure Data Factory, Azure Synapse, and Azure Databricks for data storage, processing, and analytics
- Knowledge and experience with Git operations, GitHub Copilot and CI/CD flows
- Familiarity with data visualisation tools and techniques, especially Power BI, for creating interactive dashboards and reports
- Passion for data and the desire to learn and adopt new technologies

This role offers a competitive base salary, 10-20% bonus potential, 25 days' holiday, pension and medical care.

Don't miss out on this opportunity to work with one of the best in the industry! If you're interested in this opportunity, submit your CV as soon as possible. Interviews will be arranged ASAP!

Robert Half Ltd acts as an employment business for temporary positions and an employment agency for permanent positions. Robert Half is committed to equal opportunity and diversity. Suitable candidates with equivalent qualifications and more or less experience can apply. Rates of pay and salary ranges are dependent upon your experience, qualifications and training. If you wish to apply, please read our Privacy Notice describing how we may process, disclose and store your personal data: gb/en/privacy-notice

Security alert: scammers are currently targeting jobseekers. Robert Half does not ask candidates for a fee or request candidates to send applications through instant messaging services such as WhatsApp or Telegram. Learn how to protect yourself by visiting our website: gb/en/how-spot-recruitment-scams-and-protect-yourself
Bloomberg runs on data. Our products are fueled by powerful information. We combine data and context to paint the whole picture for our clients, around the clock - from around the world. In Data, we are responsible for delivering this data, news and analytics through innovative technology - quickly and accurately. We apply problem-solving skills to identify innovative workflow efficiencies, and we implement technology solutions to enhance our systems, products and processes - all while providing platinum customer support to our clients.

Our Team:
Our Securities Credit Data teams are responsible for the timely and accurate entry of new bonds and loans being brought to market, secondary market review and updates, and extensive client-facing support. Responsibilities also include developing new data pipelines, implementing data validations, reviewing processes for efficiencies, and finding opportunities to expand datasets. Our Private Credit team works closely with internal partners including Engineering, Core Product, News and Enterprise. We are committed to delivering best-in-class quality and to improving and expanding our Private Credit offerings through a deep understanding of market dynamics alongside our clients' current and future needs.

What's the role?
As a Data Engineer for Private Credit, you will use your domain expertise to handle and improve the financial data that feeds Bloomberg products, identify innovative workflow efficiencies, implement technical solutions to enhance our systems, products and processes, and establish links with key players in the financial market. You will be responsible for the data management of Private Credit data publicly filed by BDCs (Business Development Companies), as well as creating pipelines from additional sources and establishing and implementing our automation, data alignment and data quality strategies.
You will ensure the best use of our existing Private Credit data and lead the technical implementation of onboarding new datasets. You will use your experience to define what a fit-for-purpose data product looks like, and you will plan and deliver the data modeling and technology strategy and roadmap, which should align with the overall product vision.

We'll trust you to:
- Develop data-driven strategies, balancing the best of technical, product, financial and dataset knowledge, and work with our engineering and product departments to craft solutions
- Be responsible for the end-to-end data ingestion of Ownership datasets, including long-position and short-position holdings data, transactional data, and derived datasets such as the free float field
- Analyze internal processes to find opportunities for improvement, and implement innovative solutions
- Communicate and collaborate across team members and complementary teams including Engineering, Product, Enterprise Data, and News
- Design data models to create data storage solutions for our raw and enriched data, and contribute to our data alignment strategy by following the FAIR data principles
- Develop data quality strategies for the Ownership datasets and implement scalable, lasting solutions
- Use data visualization skills to report on results of ongoing operations and projects, as required

You'll need to have:
Please note we use years of experience as a guide, but we will certainly consider applications from all candidates who can demonstrate the skills necessary for the role.
- 3 years of programming experience in a development and/or production environment
- Proficiency using scripting languages to build pre-processing services that can be integrated into our data pipelines
- A proven grasp of data modeling principles and technologies to perform requirements analysis as well as conceptual, logical, and physical modeling
- Experience profiling new datasets and implementing high-volume, low-latency ETL pipelines
- Understanding of data quality standard methodologies to improve the value of the dataset
- A proven track record of effective project management and a customer-focused mentality

We'd like to see:
- An advanced degree/Master's degree or equivalent experience in a Finance or STEM subject, and/or a CFA designation (or working towards it)
- Familiarity with use cases of sophisticated statistical methods such as Machine Learning, Artificial Intelligence, and Natural Language Processing
- Experience designing optimal database structures
- Experience working in an Agile development team

Does this sound like you? Apply if you think we're a good match. We'll get in touch to let you know what the next steps are!

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, colour, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. Bloomberg is a disability-inclusive employer. Please let us know if you require any reasonable adjustments to be made for the recruitment process. If you would prefer to discuss this confidentially, please email (Americas), (Europe, the Middle East and Africa), or (Asia-Pacific), based on the region you are submitting an application for. Alternatively, you can get support from our disability partner EmployAbility, please contact or .uk
Bloomberg runs on data. Our products are fueled by powerful information. We combine data and context to paint the whole picture for our clients, around the clock - from around the world. In Data, we are responsible for delivering this data, news and analytics through innovative technology - quickly and accurately. We apply problem-solving skills to identify innovative workflow efficiencies, and we implement technology solutions to enhance our systems, products and processes.

The Team:
The Physical Assets Data Team maintains databases on renewable projects and their corresponding financing instruments and parties, covering financing deals including transactions for the development, refinancing or acquisition of clean energy assets. New investment in renewable energy reached a record in 2023 and is a crucial component of the energy transition and the achievement of net-zero targets. The asset financing data is aggregated to create BloombergNEF flagship products like the Renewable Energy Investment Tracker and Energy Transition Investment Trends, as well as monthly, quarterly, and annual league table reports published by the Data team.

What's the role?
As a Data Product Owner for the Asset Financing Dataset, you'll help grow Bloomberg's asset financing databases to further deliver an industry-leading data product offering. You will be responsible for developing essential elements of our data product, including its data model, data pipelines, and data quality specifications. You will be expected to monitor market and industry trends to influence Bloomberg's standard methodologies, tools, and technical infrastructure for data lifecycle management. You will engage with clients to understand their business needs and partner with Engineering and Product teams to propose, develop, and implement solutions for our clients.
You will also define the data strategy, analyze data gaps and opportunities, profile data content, and translate business needs into requirements for a data model. You'll collaborate closely with both our Data counterparts and the BloombergNEF Product team in Princeton, New York, London, and Asia.

We trust you to:
- Direct a comprehensive, outcome-driven data strategy for the asset financing dataset by owning the schema design and suggesting enhancements in line with market trends
- Engage with market participants and internal product partners to ensure all client needs and trends are accounted for within our product design
- Establish relationships with banks and other involved parties to increase the volume of data submissions
- Use your market knowledge to assess whether the current asset financing databases are fit for purpose and identify potential needs for expansion (e.g. beyond clean energy)
- Build quality data workflows to verify and validate third-party data submitted directly by clients
- Analyze internal processes to find opportunities for improvement, and devise and implement innovative solutions
- Maintain workflow configurations for critical functions such as acquisition, worklist management, and quality control
- Contribute to the creation of standard methodologies and guidelines for governance
- Apply data visualization skills to report on results of ongoing operations and projects, as required
- Collaborate with a wide variety of partners from Engineering to Sales on product development
- Partner with Engineering and Product to propose, develop and implement market-leading solutions for our clients
- Understand customer needs and markets to ensure our datasets are fit for purpose and integrate seamlessly with other data products when developing data product strategies
- Stay updated on market, industry and dataset developments related to asset and project financing
- Make well-informed decisions in a fast-paced, ever-changing environment
- Report on results of ongoing operations and projects, as required
- Plan and adapt your work using an agile framework

You'll need to have:
Please note we use years of experience as a guide, but we will certainly consider applications from all candidates who can demonstrate the skills necessary for the role.
- 4 years of data management experience, for example improving data quality, accuracy, efficiency or timeliness
- A Bachelor's degree or higher in a relevant data technology field, or equivalent professional work experience
- Demonstrated experience in data profiling/analysis using tools such as Python, SQL or Excel, or experience using analytical/dashboarding applications (such as Qlik Sense or Tableau)
- A strong passion for data and the overall energy transition movement
- A good understanding of different financing instruments, project financing and clean energy investments
- The ability to think creatively and provide out-of-the-box solutions, with an eagerness to learn and collaborate
- Comfort with a high degree of autonomy and a proven ability to handle priorities from multiple internal and external partners
- Demonstrated project management experience, for example leading processes or designing dashboards
- Excellent written and oral communication skills
- Interest and/or experience in client and partner management
- Familiarity with data processing paradigms and associated tools and technologies

We'd love to see:
- A Master's degree in Sustainability, Public Policy, Economics, Finance, Engineering, an MBA, or a related field
- Knowledge of statistical analysis, or programming and applied data analysis skills, with proficiency in R, Python or similar coding languages
- Experience in semantic structures or data modeling
- Experience profiling datasets and deriving the necessary requirements

Does this sound like you? Apply if you think we're a good match.
We'll get in touch to let you know what the next steps are!

Bloomberg is an equal opportunity employer and we value diversity at our company. We do not discriminate on the basis of age, ancestry, color, gender identity or expression, genetic predisposition or carrier status, marital status, national or ethnic origin, race, religion or belief, sex, sexual orientation, sexual and other reproductive health decisions, parental or caring status, physical or mental disability, pregnancy or parental leave, protected veteran status, status as a victim of domestic violence, or any other classification protected by applicable law. Bloomberg provides reasonable adjustments/accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable adjustment.