Clarity AI is a global tech company founded in 2017 committed to bringing social impact to markets. We leverage AI and machine learning technologies to provide investors, governments, companies, and consumers with the right data, methodologies, and tools to make more informed decisions.
We are now a team of more than 300 highly passionate individuals from all over the world. Together, we have established Clarity AI as a leading sustainability tech company backed by investors and strategic partners such as SoftBank, BlackRock, and Deutsche Börse, who believe in us and share our goals. We have plans to continue growing our teams in Spain, the UK, and the US this year, so if you would like to join us on this rocket ship, keep reading! Your work will shape and guide the sustainable decisions of consumers and investors worldwide.
Role Description
We are looking for a Data Engineer to join the Tech team. If you are a Software or Data Engineer who specializes in data, enjoys solving complex problems with code, and is not afraid of learning new things, we are looking for you.
You will be part of the team delivering the different parts of our production-ready product while co-designing and implementing an architecture that can scale up with the product and the company.
Our tech stack is documented here: https://stackshare.io/clarity-ai/clarity-ai-data
Location
The role is based in Madrid.
Way of Working: Remote / Hybrid
Key Responsibilities
- You will help build a data platform that enables other teams to self-serve high-quality data, while performing data operations to support their day-to-day work.
- You will join the data engineering team, which is responsible for developing the models, practices, and systems that support the data lifecycle.
- You’ll help design and build all aspects of our ever-growing set of external and internal data pipelines, understanding the underlying problems and tying them back to data engineering solutions.
- You will be responsible for designing and building our ETLs.
- You’ll transform raw data from different sources (batch and near-real-time) into intuitive data models, using a diverse set of tools (such as Spark, Hadoop, Redshift, Kafka, etc.) to build robust, high-quality data pipelines that scale in both volume and maintainability.
- To develop product features, you will be part of cross-functional squads that include people from many different teams.
- You will work closely with the Platform, Data Science, and Backend teams, as well as other product and tech teams.
- You will troubleshoot and fix bugs and issues.
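To give candidates a concrete (and purely illustrative) feel for the "raw data into intuitive data models" work described above, here is a minimal sketch of a batch transformation step. It is plain Python, not Clarity AI's actual codebase; the source records, field names, and cleaning rules are all hypothetical.

```python
from datetime import datetime, timezone

# Hypothetical raw records, as an external provider might deliver them.
# Real sources, schemas, and tooling (Spark, Kafka, etc.) will differ.
RAW_ROWS = [
    {"company": " Acme Corp ", "co2_tonnes": "1200.5", "reported": "2023-04-01"},
    {"company": "Beta Ltd", "co2_tonnes": None, "reported": "2023-05-12"},
]

def transform(row):
    """Clean one raw row into a typed record; return None for unusable rows."""
    if row["co2_tonnes"] is None:
        return None  # discard rows with missing measurements
    return {
        "company": row["company"].strip(),
        "co2_tonnes": float(row["co2_tonnes"]),
        "reported_at": datetime.strptime(
            row["reported"], "%Y-%m-%d"
        ).replace(tzinfo=timezone.utc),
    }

def run_pipeline(rows):
    """A tiny batch ETL pass: extract -> transform -> load (here, a list)."""
    return [clean for row in rows if (clean := transform(row)) is not None]

models = run_pipeline(RAW_ROWS)
print(models[0]["company"])  # "Acme Corp"
print(len(models))           # 1
```

In a production pipeline the same extract-transform-load shape would be expressed with distributed tooling and proper data-quality checks; this sketch only shows the kind of reasoning involved.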