A few months ago we processed more than 200 million messages in a
single month, and fast delivery of those messages is critical to our
15,000+ customers. Every message goes through many states, and all of these
transitions are valuable both internally and to our customers.
We are growing fast, but so is the number of products we offer, and as a consequence so is the data.
To deliver the best product, our systems and our employees
need the best possible insight into this ever-growing flood of information:
accurate, real-time, and strongly connected information.
Responsibilities
- You will design and operate scalable solutions for processing, analysing and managing large quantities of data
- Build real-time pipelines and their infrastructure that aggregate data across all of our products (SMS, voice, chat, video, etc.)
- Deliver insights and predictions internally and to our partners and customers
- Bring machine learning to every nook and cranny of the company
- Treat our data systems as just another data-producing system: foster monitoring as a basis for architectural decisions
Requirements
- 2+ years experience in a relevant role
- Experience with the Hadoop ecosystem, e.g. Hive, HBase, Spark, Kafka
- Experience implementing data warehouse solutions, ETL, analytics, and reporting
- Experience with building and maintaining data pipelines
- Experience with NoSQL databases such as Cassandra, MongoDB, or DynamoDB
- You are excited about the opportunity to work with large data sets
- You love to code
Bonus points for
- Extensive experience with one or more of: Spark, Apache Beam (Dataflow), ClickHouse, ZooKeeper
- Experience with production-grade machine learning
- Experience with cloud platforms (AWS, Google Cloud, Azure)
- Exposure to Kubernetes or similar container orchestration systems
- Experience with scaling data pipelines
Come work with us and build the data-driven telecom of the future.