Hi! At Frontiers we want to give a boost to our IT department in Madrid and reinforce it with an enthusiastic, hands-on Data Engineer who will help us improve the quality, scalability, and resilience of our data-intensive applications. Are you interested?
Data is one of our key assets; in fact, it is the main deliverable of some of our IT teams. And not just any data: data at scale, built from different external and internal sources, with Data Science models applied on top of it. Depending on the application, we manage streaming data, batch data, bulk data, processed data, data lakes, data pipelines, ETLs, machine learning models… it's an amazing playground for an eager Data Engineer, isn't it?
The ideal candidate will have a strong technical background and excellent IT skills. Hands-on experience designing data-intensive applications and big data solutions is key, and we also expect good knowledge of all types of data storage: structured files, relational databases, document databases, graph databases, etc.
And what about us? Well, I invite you to check out Frontiers as a company, our mission, and how we envision society and the world. It's so important to work in a company you can be proud of! By joining us you will play a key role in shaping the future of science and academic publishing, and the way it affects people, the planet, and the evolution of our society. If this resonates with you, please check the job description below; maybe you are the person who can help us!
What you do
- Understand the functional requirements in order to define the best data models and data flows between our applications, services, data stores, and synchronization mechanisms.
- Integrate, transform, and consolidate data from various structured and unstructured data systems into structures that are suitable for building analytics solutions.
- Ensure that our data applications/processes are scalable, reliable, secure, extensible, traceable, available and manageable.
- Design, implement, monitor, and optimize our data platforms to meet the data pipeline needs.
- Support the different software development teams in the modelling, design, construction, evolution, and decommissioning of their data-intensive applications and data models.
- Understand and promote the best Data frameworks and solutions, technical standards and key technologies, to effectively support existing and future business requirements. Develop guidelines and procedures to achieve this goal.
How you do it
- Work closely with IT Architects to provide consistent and reliable data solutions across the entire application ecosystem.
- Create a partnership with Scrum teams and Product Owners: understand the application and business requirements, and help them understand the data through exploration, while building and maintaining secure and compliant data processing pipelines.
- Collaborate closely with Machine Learning and Data Science Team to improve the performance of our ML pipelines.
- Create models and prototypes that validate your ideas, before bringing them to the development teams.
- When necessary, provide Technical Support and Training to other IT teams.
- Create and keep up to date the documents describing the data strategy of your application domain, as well as all relevant guidelines and standards.
- Attend IT conferences and participate in relevant Data Engineering forums where trends and innovations are discussed.
What we are looking for
- Solid knowledge of data querying and processing languages, such as SQL, Python, or Scala.
- Spark and PySpark are also a must.
- Solid understanding of big data principles (volume, velocity, variety, veracity, and value) and the ability to apply them.
- Familiarity with data quality requirements and their implementation.
- Excellent understanding of parallel processing and data architecture patterns, with a focus on designing for efficiency in querying, processing, pruning, and archiving.
- Solid knowledge of Databricks, Data Factory, SQL Server, MongoDB, and data file formats (for storage and for analytical queries). Knowledge of Elasticsearch and Delta Lake is a nice-to-have.
- Experience building data lakes.
- Expertise in data processing: data ingestion and transformation, batch processing, streaming data processing, distributed processing, monitoring, optimization, and logging.
- Experience troubleshooting data processing and data storage issues.
- Knowledge of data security standards.
- Knowledge of serving-layer design: star schemas, dimensions, incremental loading, data stores, etc.
- Knowledge of physical data storage structures: compression, partitioning, sharding, redundancy, distribution, archiving, etc.
- Comfortable handling uncertainty in evolving scenarios, able to understand priorities and balance them.
- Familiarity with Agile framework
- BS in computer science, engineering, or a relevant field.
- You are willing to travel to Frontiers' headquarters in Switzerland occasionally.
- You have good English skills; we are an international company and English is our working language.
What we’re offering
- Competitive salary.
- 25 days of annual leave.
- Great work-life balance.
- A top-notch office in an awesome location
- Great flexibility for working from home.
- Fresh fruit, snacks and coffee.
- English classes.
- Flexible remuneration scheme (nursery vouchers, restaurant pass, transportation).
- Team building/sport activities and monthly social events.
- Lots of opportunities to work with exciting technologies and solve challenging problems.
- Joining a company that can really boost the beneficial impact of science on people, society, and the planet.
- Please submit your application in English (cover letter and resume).
- Applicants must be Spanish or EU citizens, or hold a valid Spanish work permit.
- Agencies must first contact email@example.com and confirm agreement to our T&C’s, failing which any exclusivity and/or candidate representation right will be considered to be waived.
Thanks for taking the time to read this job description. If you think you are a good match for the role, don't hesitate to send us your CV. We're looking forward to hearing from you!