What We Do:
DataWallet is on a bold mission to give people full control over
their data through a self-sovereign wallet paired with our decentralized
C2B data marketplace.
Our platform allows people to control who can access their data and
for what purpose, enabling them to profit from an asset that is
rightfully theirs and to make their data actionable, e.g., by
personalizing web services through powering companies' AI algorithms
with their data. By harnessing the blockchain to empower users to
control and profit from the data they create, we will disrupt the
$300 billion data brokerage market, as envisioned by our early
investors Tim Draper and Marc Benioff.
The Role:
We are looking for a full-time data engineer. You will be at the core
of making people's data work for them. You will design and maintain
the ETL data pipeline: pulling and parsing data from various APIs,
populating normalized RDBs, and calculating cached views (usually in
NoSQL form) to power our various data products and services. While you
are not constrained in your tools, our current stack includes Airflow,
Python, JavaScript/Node.js, PostgreSQL, MongoDB, and AWS hosting. You
will be a core part of a highly skilled and motivated team, split
between Berlin and New York City, that is changing one of the most
unethical sectors of the modern economy.
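As a rough illustration of the pull/normalize/cache pattern the role centers on, here is a minimal sketch in plain Python, with SQLite standing in for the relational store and a dict standing in for the NoSQL cached view. All data, table names, and schemas below are hypothetical, not part of DataWallet's actual system.

```python
import json
import sqlite3

# Hypothetical raw payload, as it might arrive from an upstream API.
RAW_EVENTS = [
    {"user": {"id": 1, "name": "Ada"}, "service": "search", "purpose": "personalization"},
    {"user": {"id": 1, "name": "Ada"}, "service": "ads", "purpose": "targeting"},
    {"user": {"id": 2, "name": "Grace"}, "service": "search", "purpose": "personalization"},
]

def load_normalized(conn, events):
    """Parse nested raw events into a normalized relational schema."""
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("CREATE TABLE grants (user_id INTEGER, service TEXT, purpose TEXT)")
    for e in events:
        conn.execute("INSERT OR IGNORE INTO users VALUES (?, ?)",
                     (e["user"]["id"], e["user"]["name"]))
        conn.execute("INSERT INTO grants VALUES (?, ?, ?)",
                     (e["user"]["id"], e["service"], e["purpose"]))

def cached_view(conn):
    """Aggregate a denormalized, document-style view per user,
    the kind of cached view that would be materialized in NoSQL."""
    rows = conn.execute(
        "SELECT u.name, g.service FROM users u "
        "JOIN grants g ON g.user_id = u.id ORDER BY u.id, g.service"
    )
    view = {}
    for name, service in rows:
        view.setdefault(name, []).append(service)
    return view

conn = sqlite3.connect(":memory:")
load_normalized(conn, RAW_EVENTS)
print(json.dumps(cached_view(conn)))
```

In production this shape would typically be expressed as Airflow tasks (extract, load, materialize) against PostgreSQL and MongoDB rather than in-process calls.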
What We are Looking For:
[Minimum Qualifications]
At least 5 years of software engineering experience (Python or
JavaScript), with at least 2 years of experience in a data-focused role
Expertise in building data pipelines: efficient ETL design, implementation, and maintenance
Mastery of RDBs and the ability to derive normalized schemas from datasets
Experience with NoSQL databases (such as MongoDB)
Passion for creating data infrastructure technologies from scratch using the right tools for the job
Experience building and maintaining a data warehouse in production environments
Ability to turn vague requirements into clear deliverables with minimal guidance
[Other desirable qualifications]
Experience with Apache Airflow, AWS tooling, Git, and Linux
Experience with systems for transforming large datasets such as Spark or Hadoop
Familiarity with Python-based data science tools (e.g., pandas)
Compensation:
Highly competitive wages
Flexible work hours
New equipment: MacBook Pro, nice monitors, and Bose headphones
Opportunities to travel to both the Berlin and New York City HQ