Filled
This offer is not available anymore

Data Scientist in Madrid

Geoblink

Workplace
Onsite
Hours
Full-Time
Internship
No

Job Description

About Geoblink

We’re a fast-growing startup that has already raised close to $8 million in investment from leading venture capital firms and has been named by Bloomberg as one of the 50 most promising startups in the world to look out for. Our goal is to revolutionise the world of Location Intelligence and the way businesses think about, and act upon, location intelligence data.

At Geoblink we use the latest technologies to find solutions to real-world problems businesses face when trying to expand or increase efficiency. We leverage GIS technologies and Big Data to create a beautiful map-based user interface that not only provides lots of awesome statistics but also a great user experience.

We are proud of the environment of collaboration and diversity we have built and continue to foster, with plenty of opportunities to have a real impact on the business.

About Geoblink Tech

Our systems are built using an SOA approach that allows us to perform multiple deployments per day. We <3 monitoring, pull requests, iteration, continuous deployment and automated testing. The core of our stack is Python, Node.js, Vue.js, PostgreSQL and Spark, but our architecture is language-agnostic. We move fast but put a lot of thought into the design of our architecture so that it’s simple and scalable. We write clean, modular code to produce great software that solves the needs of our clients.

Our Tech&Data culture is based on the high standards we try to achieve in everything we build and on the personal development of our team. We foster an inclusive atmosphere of non-ego and respect where ideas are shared and feedback is used to promote quality and innovation. Some of the initiatives we have in place are hackathons twice a year, bi-weekly Tech&Data talks, a personal development budget for books, training and conferences, and time for side projects every other Friday.

You can visit our Tech blog to learn more about the projects and technologies at Geoblink.

About the DataLab team

Data is at the heart of all the technical challenges at Geoblink. As a Data Scientist at Geoblink you will be part of a team called DataLab, responsible for answering, based on data, business questions vital to our clients. Geoblink relies on a large number of spatial datasets to model urban behaviour, which can be used to answer questions such as: Where are my competitors? Which is the best location for a new site? Which features are driving my sales? Is my point of sale under- or overperforming with respect to its potential?

Our data streams come from both internal data (coming directly from our customers for their own use) and external data (retrieved, cleaned and prepared internally at Geoblink from over 60 sources). We use them to deliver analyses and insights that allow our customers to understand their current and past business situation (descriptive analytics), to provide them with models and tools to predict the impact and effects of potential business actions and decisions (predictive analytics) and, at a higher level, to recommend which of those actions and decisions should be taken to maximize their final revenue (prescriptive analytics). To do so we rely heavily on Python, and more precisely on three different types of technology: data management and analysis tools (e.g. Pandas, Matplotlib, Seaborn, Plotnine, Jupyter…), machine learning frameworks (e.g. scikit-learn, LIME…) and software-development-oriented libraries (e.g. Flask, Pytest…).
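As a rough illustration of that descriptive-to-predictive flow (a minimal sketch only: the dataset, column names and model choice below are invented for the example and are not Geoblink’s actual data or code), a few lines of Pandas and scikit-learn might look like this:

    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    # Hypothetical point-of-sale dataset; every name and value is illustrative only.
    df = pd.DataFrame({
        "store_id": range(8),
        "footfall": [1200, 800, 1500, 600, 2000, 950, 1100, 1700],
        "competitors_500m": [3, 1, 4, 0, 5, 2, 2, 4],
        "avg_income_area": [28000, 31000, 26000, 35000, 24000, 30000, 29000, 25000],
        "monthly_sales": [52000, 41000, 60000, 38000, 71000, 45000, 50000, 66000],
    })

    # Descriptive analytics: summarise past performance by level of nearby competition.
    df["competition"] = pd.cut(df["competitors_500m"], bins=[-1, 1, 3, 10],
                               labels=["low", "medium", "high"])
    print(df.groupby("competition", observed=True)["monthly_sales"].mean())

    # Predictive analytics: model sales as a function of location features.
    X = df[["footfall", "competitors_500m", "avg_income_area"]]
    y = df["monthly_sales"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
    print("MAE on held-out stores:", mean_absolute_error(y_test, model.predict(X_test)))

A prescriptive layer would then sit on top of models like this one, for example by simulating candidate actions (opening, closing or relocating a point of sale) and ranking them by predicted revenue.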

Who we’re looking to recruit

We are looking for a Data Scientist passionate about finding, processing and modelling data to solve real-world problems. Given a business problem, you would be one of the main points of reference for figuring out how to solve it with the existing datasets, or for finding new ones that could usefully complement them.

Here are some other things we’re looking for:

  • BS or MS degree in Physics, Math, Computer Science or a related field, or equivalent experience.
  • Hands-on experience working on the whole life cycle of predictive model development: data cleansing/preparation, feature selection, feature engineering, outlier detection, model selection, optimization, evaluation and final deployment (a minimal sketch of such a pipeline follows this list).
  • Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and their proper usage, etc.) and experience applying them.
  • Excellent understanding of machine learning techniques and algorithms, such as k-NN, SVM, Decision Trees, Random Forests, XGBoost, etc.
  • Good Python coding skills, high standards for good quality code that is elegant, well structured and easy to understand.
  • Deep knowledge of the main Python libraries and tools for data analysis and machine learning (e.g. NumPy, Pandas, Jupyter, scikit-learn, Matplotlib, Seaborn, etc.).
  • You craft elegant, structured and tested code (e.g. PEP8, Pytest…) and are used to working with code repositories as part of a team (e.g. Git, peer code review…).
  • Some experience with relational databases.
  • Ability to craft simple and elegant solutions to complex problems.
  • You have experience working with business or product stakeholders and organizing your work to meet deadlines with high-quality deliverables.
  • You are passionate about different realms of data: statistics, databases, data engineering, data mining, geolocated data, Big Data, Machine Learning, Deep Learning, neural networks, etc. You have experience with some, have read about others, and feel curious and interested in all of them. You try to read about new trends and may incorporate some when there are good reasons for it.
  • Comfortable working in a startup environment.
  • You are a curious person and love solving challenges.
  • Able to explain in English what you did over the weekend.
  • Passionate about what you do, you care deeply about the things you build.
  • Excellent written and verbal communication skills, you are able to explain complex analysis to non-technical business-driven people. You would be able to explain a project to a client so they can understand the value it adds.
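To make the model life-cycle bullet above more concrete, here is a minimal, self-contained sketch (invented data and parameters, not a Geoblink pipeline) showing how preparation, feature engineering, model selection and evaluation can be kept together in a single scikit-learn pipeline, which is also what keeps the final deployment step simple:

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    # Hypothetical training data; names and values are illustrative only.
    df = pd.DataFrame({
        "footfall": [1200, 800, 1500, 600, 2000, 950, 1100, 1700, 1300, 900],
        "avg_income_area": [28, 31, 26, 35, 24, 30, 29, 25, 27, 32],
        "location_type": ["mall", "street", "mall", "street", "mall",
                          "street", "mall", "mall", "street", "street"],
        "monthly_sales": [52, 41, 60, 38, 71, 45, 50, 66, 55, 43],
    })
    X, y = df.drop(columns="monthly_sales"), df["monthly_sales"]

    # Data preparation and feature engineering live inside the pipeline, so the
    # exact same transformations are applied at training time and after deployment.
    preprocess = ColumnTransformer([
        ("num", StandardScaler(), ["footfall", "avg_income_area"]),
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["location_type"]),
    ])
    pipeline = Pipeline([
        ("preprocess", preprocess),
        ("model", GradientBoostingRegressor(random_state=0)),
    ])

    # Model selection and optimization via cross-validated grid search.
    search = GridSearchCV(
        pipeline,
        param_grid={"model__n_estimators": [50, 100], "model__max_depth": [2, 3]},
        cv=3,
        scoring="neg_mean_absolute_error",
    )
    search.fit(X, y)
    print(search.best_params_, -search.best_score_)

A pipeline like this can be serialised as a single artefact and served behind a small Flask endpoint, and its individual steps are easy to cover with Pytest.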

You will get extra kudos if you have:

  • Previous experience with Linux, Bash, Git, NoSQL databases or distributed technologies like Spark.
  • A deep interest in Geomarketing and Location Intelligence.
  • Experience interacting with clients.
  • Experience working with spatial data or GIS systems and/or mobility data (GPS, etc.); see the spatial-data sketch after this list.
  • Experience building data pipelines and/or with related tools (like Airflow).
  • Experience in spatial econometrics.
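As a flavour of the spatial-data work mentioned above (again a minimal sketch: the coordinates, buffer distance and GeoPandas-based approach are assumptions for the example, not a description of Geoblink’s internals), counting competitors inside a 500 m catchment around each store could look like this:

    import geopandas as gpd
    from shapely.geometry import Point

    # Hypothetical stores and competitor locations around Madrid (made-up coordinates).
    stores = gpd.GeoDataFrame(
        {"store_id": [1, 2]},
        geometry=[Point(-3.7038, 40.4168), Point(-3.6900, 40.4200)],
        crs="EPSG:4326",
    )
    competitors = gpd.GeoDataFrame(
        {"brand": ["A", "B", "C"]},
        geometry=[Point(-3.7040, 40.4170), Point(-3.6500, 40.4000), Point(-3.6910, 40.4195)],
        crs="EPSG:4326",
    )

    # Reproject to a metric CRS (ETRS89 / UTM 30N) so buffers are in metres,
    # then build a 500 m catchment polygon around each store.
    stores_m = stores.to_crs(epsg=25830)
    competitors_m = competitors.to_crs(epsg=25830)
    catchments = gpd.GeoDataFrame(
        {"store_id": stores_m["store_id"]},
        geometry=stores_m.geometry.buffer(500),
        crs=stores_m.crs,
    )

    # Spatial join: which competitors fall inside which catchment?
    # (predicate= requires GeoPandas >= 0.10; older versions use op=)
    within = gpd.sjoin(competitors_m, catchments, how="inner", predicate="within")
    print(within.groupby("store_id").size())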

What you can expect from the job:

You will:

  • Build mathematical models to answer business questions in retail. You will design, develop, monitor and maintain a wide range of explanatory and predictive models. In addition, you will participate in the integration of those models into the final app to allow our customers to better understand their current and future business. And, of course, you will be expected to follow modelling best practices to create solutions that are correct, accurate, scalable and performant.
  • Analyse the most extensive and complete spatial database to understand the retail ecosystem and consumer behaviour patterns.
  • Create ad-hoc solutions that help our clients get the most out of the Geoblink platform: creating/loading specific data, map layers or indicators for them.
  • Enrich our database of points of interest and points of sale: master web-scraping techniques and manage exclusive providers of spatial data.
  • Learn as much as you can.

Other tasks and areas of responsibility:

  • Constantly review and update existing systems, finding better solutions or technologies that make them more flexible, scalable and/or performant.
  • Coach and mentor other team members to create a culture that fosters collaboration and personal growth.
  • Work with stakeholders across our customers and within Geoblink to identify opportunities for leveraging both external and internal data to drive business solutions.
  • Participate in the design and implementation of new features that evolve Geoblink’s app from descriptive to predictive and, finally, to prescriptive analytics.
  • Work closely with the rest of the team, including Product Owners, Data Scientists and Software Engineers, to understand everyone’s needs and develop optimal solutions.
  • Actively collaborate in the different brand-awareness initiatives the company works on (e.g. blog, meetups, talks, etc.).

Why work for Geoblink?

We operate a “zero-policy” approach, which means there are no restrictions on vacation days, office hours, working-from-home days, etc. We believe everyone here is a “mini-CEO” and should have the opportunity to make their own decisions about their work schedule.

Everyone at Geoblink is passionate about their job, whether that’s growing businesses’ ROI or building complex data systems. People join Geoblink not just for the flexibility we offer but because we have worked hard to foster a collaborative environment filled with plenty of opportunities to have a real impact on the business and collaborate with some of the best minds in the industry.

  • Plenty of training initiatives to help your career progression
  • Unlimited coffee, tea, soft drinks, fruit, etc.
  • Plenty of chill-out space with the typical start-up ping-pong table and office yoga
  • Plenty of quiet space for you to work in peace to produce your best work
  • The greatest start-up culture with fun initiatives and company events for all to enjoy.
  • For now, we are a fully remote company