JUMP is a Data as a Service platform for OTT that allows internet video and TV distributors to increase monetization and make the most of their content and technology investments.
JUMP offers an easy-to-use cloud platform built on the principles of Big Data, Artificial Intelligence, and Experimentation.
The Data Engineer at JUMP is responsible for the retrieval, improvement, cleaning, and manipulation of data in the business's operational and analytics databases.
The Data Engineer works with JUMP software engineers and data scientists to understand and help implement data requirements, analyze performance, and troubleshoot any existing issues.
The Data Engineer also plays a key role in the development and deployment of innovative big data platforms for advanced analytics and data processing.
The Data Engineer defines and builds the data pipelines that will enable faster, better, data-informed decision-making within the JUMP platform.
- The Data Engineer manages their own role, and that of future junior data engineering support personnel, by creating databases optimized for performance, implementing schema changes, and maintaining data architecture standards across all of the business's databases.
- The Data Engineer leads innovation through exploration, benchmarking, recommendation, and implementation of big data technologies for platforms. They are also tasked with developing and implementing scripts for database maintenance, monitoring, performance tuning, and so forth.
- The Data Engineer is additionally tasked with designing and developing scalable ETL packages from the business source systems, and with developing ETL routines to populate databases from those sources and to create aggregates.
- Furthermore, the Data Engineer will plan and design data systems for near-real-time (NRT) data analytics and for machine learning and AI data pipelines.
- The Data Engineer will also plan data processing orchestration and scheduling tasks.
- It is also the role of the Data Engineer to oversee large-scale Hadoop data platforms and to support the business's fast-growing data.
- The Data Engineer is also responsible for performing thorough testing and validation to ensure the accuracy of data transformations and of the data verification used in machine learning models.
- The Data Engineer also plays a key role in the implementation of the data warehouse for the new big data platforms.
- It is also the Data Engineer's duty to keep up with industry trends and best practices, advising senior management on new and improved data engineering strategies that strengthen data governance across the business, promote informed decision-making, and ultimately improve overall business performance.
Qualifications, work experience, and education
- The Data Engineer must have a bachelor's degree in Computer Science, Applied Mathematics, Engineering, or another technology-related field. Equivalent working experience is also accepted for the position.
- At least 3 years of working experience as a data engineer within a fast-paced and complex business setting.
- Experience working with large and complex data sets, as well as experience analyzing large volumes of data.
- Strong working and conceptual knowledge of building and maintaining physical and logical data models.
- Communication skills: the ability to convey messages and instructions clearly to supporting personnel, ensuring efficient execution of duties within the junior team.
- Technological savvy/analytical skills: exceptional analytical skills, with a strong command of Apache Spark, Hadoop-based systems, Java, databases, and data architecture (data warehousing, data marts, data lakes, Kappa/Lambda architecture, etc.), as well as fluency in MySQL, Python, Shell, and T-SQL.
- Knowledge of data orchestration, Hadoop clustering, and data process scheduling is also required.
- The candidate must also be technologically adept, demonstrating strong computer skills.
- Ability to design, build, and maintain the business's ETL pipelines and data warehouse. The candidate will also demonstrate expertise in data modeling and query performance tuning on SQL Server, MySQL, PostgreSQL, Redshift, Parquet-based platforms, or similar.
- An analytical and creative thinker; an innovative problem solver; self-motivated and proactive; highly organized; able to handle multiple simultaneous tasks while meeting aggressive deadlines; a team player; and exceptionally calm and composed in the face of adversity.
- An approachable and relatable individual who forms strong connections with others and inspires trust and confidence in both junior and senior colleagues.