Data Scientist

Our client is seeking a highly skilled Data Scientist to join their dynamic and innovative team. This role offers the opportunity to work on complex, global, and high-profile projects, combining end-to-end risk advisory, investigative, and disputes expertise to deliver holistic solutions. The successful candidate will utilise sophisticated mathematical, statistical, and computer science techniques, including artificial intelligence (AI), machine learning (ML), and algorithmic approaches. This is an exciting opportunity to be at the forefront of technological developments in this space, evolving with cutting-edge advancements and playing a pivotal role in the company's research and development arm.

- Work on complex, global, and high-profile projects
- Utilise sophisticated AI/ML methodologies
- Be at the forefront of technological developments

What you'll do:
As a Data Scientist, you will be involved in every stage of a project, from data collection to platform development. You will perform data availability assessments, write SQL queries or scripts to query APIs, carry out exploratory data analysis, develop robust models and statistical analyses, build complete pipelines (potentially working alongside engineers and developers), communicate findings to clients, and incorporate their feedback and domain knowledge to tune models. You will also support the engineering, development, and UI teams in building end-to-end solutions.

- Perform data availability/quality assessments and own ETL processes
- Research and acquire data from public sources
- Perform exploratory data analysis; visualise and communicate key findings
- Develop robust models and statistical analyses
- Build complete pipelines and test performance at scale
- Communicate findings to clients and incorporate their feedback to tune models

What you bring:
The ideal candidate for the Data Scientist role will bring a wealth of technical skills and experience.
The candidate should have a degree in a technical discipline such as Mathematics, Physics, Computer Science, Statistics, or Economics, or demonstrated practical technical experience, along with at least three years' experience in a data analytics or data-centric role. Proficiency in SQL and Python is essential. Knowledge of AI/ML concepts and techniques is required, along with experience of data wrangling/cleaning in a dataframe package such as Python's pandas. The ability to work with Linux/*nix systems, bash scripting, and SSH is also necessary.

- Degree in a technical discipline such as Mathematics, Physics, Computer Science, Statistics, or Economics, or demonstrated practical technical experience
- At least three years' experience in a data analytics or data-centric role
- Proficiency in SQL and Python
- Knowledge of AI/ML concepts and techniques
- Experience with data wrangling/cleaning in a dataframe package such as Python's pandas
- Ability to work with Linux/*nix systems, bash scripting, and SSH

What sets this company apart:
Our client is a global leader in data science and analytics, committed to delivering innovative solutions to complex business challenges for a diverse range of clients. Their team is at the forefront of technological developments, continuously evolving with cutting-edge advancements. They offer a supportive and inclusive work environment where every team member's contribution is valued. This is an exciting opportunity to be part of a company that values innovation, collaboration, and professional growth.

What's next:
If you're ready to take your career to the next level with a role that offers challenge, innovation, and professional growth, don't hesitate: apply today by clicking on the link!

Robert Walters Operations Limited is an employment business and employment agency and welcomes applications from all candidates.
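Purely as an illustration of the kind of pandas data wrangling/cleaning this role calls for, a minimal sketch follows; the records, column names, and cleaning rules are invented for the example, not taken from any client dataset.

```python
import pandas as pd

# Hypothetical raw records showing the kinds of quality issues a
# data availability/quality assessment typically surfaces:
# duplicates, inconsistent string formatting, and missing values.
raw = pd.DataFrame({
    "client_id": [1, 2, 2, 3, 4],
    "revenue": ["1,200", "950", "950", None, "2,400"],
    "region": ["UK ", "us", "us", "UK", None],
})

# Typical cleaning steps: drop exact duplicates, normalise text
# columns, and coerce numeric strings to numbers (invalid or
# missing entries become NaN rather than raising).
clean = (
    raw.drop_duplicates()
       .assign(
           region=lambda d: d["region"].str.strip().str.upper(),
           revenue=lambda d: pd.to_numeric(
               d["revenue"].str.replace(",", ""), errors="coerce"
           ),
       )
)
```

After these steps the duplicate row is gone and `revenue` is numeric, so aggregations such as `clean["revenue"].sum()` behave correctly.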
Data Scientist/Engineer - Hybrid/Bristol

Job Title: Data Scientist/Engineer
Location: Hybrid/Bristol
Remuneration: £50,000 - £60,000 per annum
Contract Type: Permanent

The role is based within the modelling team and splits roughly 25% data submission and cleansing (data engineering) and 75% process improvement (data science). Ideally, we are seeking someone with strong data science exposure who also brings data engineering skills.

Responsibilities:
- Play a pivotal role in crafting, enhancing, and managing intricate data ingestion pipelines, ensuring their efficiency, scalability, and dependability.
- Integrate advanced machine learning elements into data pipelines to elevate data processing and analytical capabilities.
- Devise and implement data quality metrics and dashboards for monitoring, ensuring the fidelity and precision of ingested data.
- Collaborate closely with data scientists, engineers, and cross-functional team members in an agile development setting to meet project objectives and deadlines.
- Harness modern NLP techniques to enrich data comprehension and processing.
- Stay abreast of the latest advancements in data science and engineering to continuously refine systems and methodologies.

About Us:
Situated in Bristol, UK, our client operates at the forefront of cyber risk assessment and management. Their pioneering strategies and exclusive models are reshaping the understanding and mitigation of cyber risk within the reinsurance sector. Their dedication to excellence and innovation positions them as pioneers in their field.

Why Join?
- Influence: Drive significant enhancements in data ingestion capabilities, directly contributing to the company's efficacy in managing cyber risk.
- Innovation: Engage with state-of-the-art technologies and approaches in data science and engineering.
- Development: Access opportunities for professional growth within a nurturing and forward-looking environment.
- Culture: Become part of a cooperative, inventive team that values each member's contribution toward shared objectives.

Qualifications:
- Experience: A minimum of 3 years' experience crafting intricate data ingestion pipelines incorporating advanced ML elements. Prior exposure to the insurance domain is advantageous.
- Education: A degree in Computer Science, Data Science, Engineering, or a related discipline.
- Technical Skills: Proficiency in Python, SQL, PySpark, and Databricks; demonstrated proficiency in modern NLP techniques and tools; a proven track record in developing and managing data quality metrics and dashboards; experience collaborating within a cross-functional agile development team; competence in using Git for version control.
- Industry Knowledge: Familiarity with the insurance sector and its associated data challenges is highly desirable.
- Soft Skills: Exceptional problem-solving abilities, effective communication skills, and a collaborative mindset.

Perks:
- Hybrid work options
- Health insurance coverage
- Sponsored training opportunities
- Employee discounts

We seek talented and seasoned mid-level Data Scientists passionate about building sophisticated data ingestion pipelines. If you thrive in a dynamic, agile setting and possess a robust foundation in Python, Databricks, and SQL, we invite you to connect with us. Our office is conveniently located in the vibrant Broadmead area of Bristol, with nearby parking at Mall Galleries just a 2-minute walk away; Bristol Temple Meads train station is an 18-minute walk from the office. Join our esteemed organisation and contribute to our culture of innovation and foresight. Apply now and embark on the next phase of your career journey as a Mid-Level Data Scientist.

Adecco is a disability-confident employer. It is important to us that we run an inclusive and accessible recruitment process to support candidates of all backgrounds and all abilities to apply.
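As a sketch of the data quality metrics mentioned in the responsibilities above, the snippet below computes the kind of per-batch completeness and validity figures a monitoring dashboard might track. The field names, sample values, and validity rule are hypothetical, not the client's actual schema.

```python
import pandas as pd

# Hypothetical ingested batch; columns are illustrative only.
batch = pd.DataFrame({
    "policy_id": ["P1", "P2", None, "P4"],
    "premium": [1200.0, -50.0, 300.0, 800.0],
})

def quality_metrics(df: pd.DataFrame) -> dict:
    """Compute simple per-batch metrics: row count, share of rows
    with a policy_id present, and share of non-negative premiums."""
    return {
        "row_count": len(df),
        "id_completeness": df["policy_id"].notna().mean(),
        "premium_validity": (df["premium"] >= 0).mean(),
    }

metrics = quality_metrics(batch)
```

In practice such metrics would be computed per ingestion run and written to a dashboard or alerting system, so drops in completeness or validity surface immediately.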
Adecco is committed to building a supportive environment for you to explore the next steps in your career. If you require reasonable adjustments at any stage, please let us know and we will be happy to support you.

KEYWORDS: Python / Databricks / Google Cloud Platform / GCP / SQL / PostgreSQL / Pandas / SQLAlchemy / Apache Airflow / Snowflake / Luigi / BigTable / Redis / CouchDB / RethinkDB / Elasticsearch / Insurance / Cyber Security / Cyber Risk Pipelines / Cyber Reinsurance / Risk Management / Asset Management / Reinsurance / Big Data