Picnic is an app-only supermarket. We rely on our Data Engineers to glean insights from large data sets and promote business intelligence. Working with next-generation technologies, they write the future of in-app grocery ordering. We're on a quest for new Data Engineers to join our all-star team.
Where you fit in
Picnic is data-driven. As one of our engineers, you play a critical role in every aspect of our business, from route planning and grocery delivery to supply-chain analysis. You ensure that every level of our operation is supported, adjusted, and predicted with data.
There's no two ways about it: you're a Data Wizard (or Witch). While manipulating data, you surface detailed information and quirky insights. You find what others can't and glean business opportunities from numbers. Collaborating with our analysts, you find practical solutions to persistent problems.
By building reliable data pipelines, you enable the team to mine and crunch data. You analyse, experiment, and promote the statistics that pique your interest. Think you've spotted how to use our large fleet of electric vehicles more efficiently? Test, evaluate, and evolve your ideas alongside our dedicated Distribution team.
More interested in customer behaviour? That's fine: work on in-app analytics to keep our mobile store smooth, speedy, and robust. You have the opportunity to work on what you love while writing the future of in-app grocery shopping!
We're using the Snowplow framework to improve our store. If you want more motivation to apply, check out our Snowplow case study.
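To give a flavour of the work: in-app analytics pipelines ingest streams of self-describing events. The sketch below builds such an event in plain Python; the field names and helper are illustrative inventions, not Snowplow's actual SDK or wire format.

```python
import json
import time
import uuid

def make_event(event_name: str, user_id: str, properties: dict) -> dict:
    """Build a minimal self-describing analytics event (illustrative only,
    not the Snowplow payload schema)."""
    return {
        "event_id": str(uuid.uuid4()),      # unique per event
        "event_name": event_name,           # what happened
        "user_id": user_id,                 # who it happened to
        "timestamp_ms": int(time.time() * 1000),
        "properties": properties,           # event-specific detail
    }

# Example: a shopper adds an item to their basket in the app.
event = make_event(
    "add_to_basket",
    user_id="user-123",
    properties={"product_id": "sku-42", "quantity": 2},
)
print(json.dumps(event, indent=2))
```

Downstream, events like this are collected, validated, and loaded into the warehouse, where they feed the reporting and experimentation described above.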
What you do
- Design, implement, and maintain scalable data pipelines
- Collaborate with domain experts and analysts to solve data challenges
- Develop advanced data reporting and visualizations
- Apply data modelling methodologies and contribute to a robust data platform
What you need
- Master’s degree in Computer Science or equivalent
- Experience with SQL and relational databases
- Experience with one or more programming languages (Python or Java preferred)
- Strong understanding of data models (Data Vault and Kimball) and data warehouses in general
- Quality control: Commitment to excellence, performance, and efficiency
- Critical thinking and initiative: Hands-on, nothing-is-impossible mindset
- Mental athleticism: Highly analytical and curious intellect
- Superb communication: Ability to articulate technical problems and projects to all teams
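For candidates less familiar with the Kimball approach mentioned above: in a dimensional model, a fact table holds measures plus surrogate keys into dimension tables, and reporting means aggregating a measure by a dimension attribute. A toy sketch (all table names and numbers invented for illustration):

```python
# Dimension table: one row per delivery route.
dim_route = {
    1: {"route_key": 1, "city": "Amsterdam", "vehicle": "EPV"},
    2: {"route_key": 2, "city": "Utrecht", "vehicle": "EPV"},
}

# Fact table: measures keyed by route_key (the surrogate key).
fact_delivery = [
    {"route_key": 1, "orders_delivered": 38, "km_driven": 21.5},
    {"route_key": 1, "orders_delivered": 41, "km_driven": 23.0},
    {"route_key": 2, "orders_delivered": 35, "km_driven": 30.2},
]

def orders_per_city(facts: list, routes: dict) -> dict:
    """Aggregate a fact-table measure by a dimension attribute --
    the bread and butter of dimensional reporting."""
    totals = {}
    for row in facts:
        city = routes[row["route_key"]]["city"]
        totals[city] = totals.get(city, 0) + row["orders_delivered"]
    return totals

print(orders_per_city(fact_delivery, dim_route))
# {'Amsterdam': 79, 'Utrecht': 35}
```

In production this lives in SQL over the warehouse rather than Python dicts, but the shape of the model is the same.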
Technologies we use
- Python, Pentaho Data Integration with custom components developed in Java
- Amazon Redshift (incl. Spectrum), Amazon Athena, MongoDB, PostgreSQL, Tableau
- Spark, Elastic MapReduce, Snowplow, Kinesis
- AWS, Docker, Kubernetes, Terraform, Vault