About Geoblink
We’re a fast-growing startup that has already raised close to $8 million in investment from leading venture capital firms, and has been named by Bloomberg as one of the 50 most promising startups in the world to look out for. Our goal is to revolutionise the world of Location Intelligence and the way businesses think about, and act upon, location intelligence data.
At Geoblink we use the latest technologies to find solutions to the real-world problems businesses face when trying to expand or increase efficiency. We leverage GIS technologies and Big Data to create a beautiful map-based user interface that not only provides lots of awesome statistics but also a great user experience.
We are proud of the environment of collaboration and diversity we have built and continue to foster, with plenty of opportunities to have a real impact on the business.
About Geoblink Tech
Our systems are built using an SOA approach that allows us to perform multiple deployments per day. We <3 monitoring, pull requests, iteration, continuous deployment and automated testing. The trunk of our stack is Python, Node.js, Vue.js, PostgreSQL and Spark, but our architecture is language-agnostic. We move fast but put a lot of thought into the design of our architecture so that it’s simple and scalable. We write clean, modular code to produce great software that solves the needs of our clients.
Our Tech&Data culture is based on the high standards we try to achieve in everything we build and on the personal development of our team. We foster an inclusive atmosphere of non-ego and respect where ideas are shared and feedback is used to promote quality and innovation. Some initiatives we have in place are hackathons twice a year, bi-weekly Tech&Data talks, a personal development budget for books, training and conferences, and time for side projects every other Friday.
You can visit our Tech blog to learn more about the projects and technologies at Geoblink.
About the Data-App team
Data is at the heart of all the technical challenges at Geoblink. The Data-App team is in charge of cleaning, normalizing and transforming data from over 60 sources and deploying the result into our production systems to feed our SaaS solution, used by our customers across different countries. We call that our “data cooking” process, which consists of several stages where we use different techniques and technologies, including ETLs, job schedulers like Airflow and Jenkins, message brokers for cross-service communication like RabbitMQ, Spark for parallel processing, geospatial databases like Neo4j and PostGIS, GIS tools like Geoserver for geolocation operations, and Python and Bash as general scripting languages.
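To give a flavour of the extract → clean → load idea behind “data cooking”, here is a deliberately simplified sketch. This is not Geoblink’s actual code: in production these stages would run as separate scheduled jobs (e.g. Airflow tasks) against real sources, and the field names below are illustrative assumptions.

```python
# Toy "data cooking" sketch: each stage takes the output of the previous one.
# In a real pipeline these would be separate tasks operating on real sources,
# not in-memory dicts.

def extract(raw_sources):
    """Pull raw records from each (hypothetical) source."""
    return [record for source in raw_sources for record in source]

def clean(records):
    """Drop records missing required fields and normalise key casing."""
    required = {"name", "lat", "lon"}
    return [
        {key.lower(): value for key, value in record.items()}
        for record in records
        if required <= {key.lower() for key in record}
    ]

def load(records, sink):
    """Feed the cleaned records into a destination (here, a plain list)."""
    sink.extend(records)
    return len(records)

# Example run with two toy "sources" using inconsistent key casing:
sources = [
    [{"Name": "Store A", "Lat": 40.43, "Lon": -3.70}],
    [{"name": "Store B", "lat": 41.39, "lon": 2.17}, {"name": "bad row"}],
]
sink = []
loaded = load(clean(extract(sources)), sink)
```

The incomplete “bad row” record is filtered out during cleaning, so only two uniform records reach the sink.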
This requires a mix of two main roles in the team: Data Scientists for process ideation and analysis, and Data Engineers to implement some of the most complex parts of the data treatment and plug the results into our testing and production systems.
Who we’re looking to recruit
We are looking for a Data Scientist passionate about finding, processing and modelling data to solve real-world problems. Given a product requirement, you would be one of the main points of reference for figuring out how to obtain the different types of data that make it possible, and for transforming them into meaningful insights that can be consumed by the back-end systems that fuel our multi-country solution.
Here are some other things we’re looking for:
- BS or MS degree in Physics, Math, Computer Science or a related field, or equivalent experience.
- Experience with the full cycle of data analysis, from exploration techniques and cleaning/transformation of the data to statistical analysis and modelling.
- Advanced knowledge of Statistics, including Machine Learning techniques, hypothesis tests and interpretation of results.
- At least 1 year of experience with relational databases.
- Great coding skills (some Python skills required), with high standards for good-quality code that is elegant, well structured and easy to understand.
- Ability to craft simple and elegant solutions to complex problems.
- You have experience working with business or product stakeholders and organizing your work to meet deadlines with high-quality deliverables.
- You are passionate about different realms of data: statistics, databases, data engineering, data mining, geolocated data, Big Data, Machine Learning, Deep Learning, neural networks, etc. You have experience with some, have read about others, but feel curious and interested in all of them.
- Comfortable working in a startup environment.
- You are a curious person who loves solving challenges.
- Able to explain in English what you did over the weekend.
- Passionate about what you do, you care deeply about the things you build.
You will get extra kudos if you have:
- Previous experience with Linux, Bash, Git, NoSQL databases or distributed technologies like Spark or Hadoop.
- A deep interest in Geomarketing and Location Intelligence.
- Experience managing a team of Data Scientists.
- Hands-on experience working with Machine Learning techniques and algorithms.
- You have published open source code.
- Experience working with spatial data or GIS systems and/or mobility data (GPS, etc).
- Experience building data pipelines and related tools (like Airflow).
- Experience in spatial econometrics.
What you can expect from the job:
As a Data Scientist your main task will be to work closely with the rest of the Tech and Product teams, including developers and product owners, to understand the needs of our clients and figure out how to fulfil them by transforming existing data or tapping into new sources.
To achieve this, you will work side by side with our Data Engineers to design and implement the automation of processes and models that transform unstructured raw data from dozens of sources into normalized, uniform and precise data to fuel an advanced Location Intelligence solution.
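As one tiny, hypothetical example of what turning raw source data into “normalized, uniform” data can mean in practice: different sources may report the same figure in different locale formats, and a normalization step has to reconcile them. The helper below is an illustrative sketch under that assumption, not Geoblink’s actual code.

```python
def parse_number(value):
    """Parse a numeric string in either '1,234.56' (US) or
    '1.234,56' (European) format into a float.

    Heuristic: if the right-most separator is a comma, treat the
    comma as the decimal mark and the dots as thousands separators;
    otherwise treat commas as thousands separators.
    """
    value = value.strip()
    if "," in value and value.rfind(",") > value.rfind("."):
        # European notation: strip thousands dots, comma becomes decimal point.
        value = value.replace(".", "").replace(",", ".")
    else:
        # US notation (or no separators): strip thousands commas.
        value = value.replace(",", "")
    return float(value)
```

Both `parse_number("1.234,56")` and `parse_number("1,234.56")` then come out as the same uniform value, ready to feed downstream systems.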
Other tasks and areas of responsibility:
- Set up and maintain the processes and systems that take care of the data cycle: integrate with data sources, create data pipelines and ultimately feed the data, once it’s cleaned and formatted, into the Geoblink back-end systems that connect to the front-end applications.
- Analyze a variety of sources and plan how to squeeze data from them (maybe applying statistical techniques or Machine Learning algorithms) so that we can provide our users with useful insights about the real world to help their businesses.
- Constantly review and update existing systems to find better solutions or technologies that improve them and make them more flexible, scalable and/or performant.
- Get involved in DevOps and Infrastructure tasks along with the Data Engineers to automate the data pipeline as much as possible, making the team and the processes more efficient.
- Coach and mentor other team members to create a culture that fosters collaboration and personal growth.
The salary bracket for this position is 30.000€ to 45.000€ a year.
Perks of the job
We have something called the “zero-policy”, which means there are no restrictions on vacation days, office hours, working-from-home days, etc. We believe everyone here is a “mini-CEO” and should have the flexibility and the opportunity to make their own decisions about their work schedule.
Other perks we offer:
- Flexible work schedule and ability to work from home
- Stock options package after the first year
- Attractive compensation package
- Personal annual budget to spend on training and conferences, anything that will help you get to the next level
- Great central office located in the heart of the Chamberí area of Madrid, close to great food, fun and Línea 2 of the metro
- Unlimited coffee, tea and Coke to keep you going
- Chillout space with ping-pong, table football, draft beer, etc
- Lots of space for you to work in peace and produce your best work
- Opportunity to work side by side with smart and humble peers
- Great career progression
- And much much more!