
Specialist Data Engineer (Big Data)

Safaricom
On-site
Addis Ababa, Ethiopia

Responsibilities



  • Create and maintain optimal data pipeline architectures in Apache Kafka, Apache Spark, Apache NiFi, etc.

  • Assemble large, complex data sets that meet functional and non-functional business requirements.

  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and ‘big data’ technologies.

  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.

  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

  • Keep our data separated and secure across national boundaries through multiple data centers for disaster recovery (DR).

  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.

  • Work with data and analytics experts to strive for greater functionality in our data systems.

  • Understanding of microservices architectures.

  • Experience with Java technologies and frameworks, mainly Spring and Hibernate.

  • Experience with containerization platforms like Kubernetes, Docker Swarm, or Red Hat OpenShift.

  • Experience with event-based and message-driven distributed systems like Apache Kafka, ActiveMQ, RabbitMQ, or TIBCO EMS.


 


Qualifications



  • BS or MS degree in Computer Science (or related fields like Electronic Engineering, Physics or Mathematics).

  • 3+ years of software design, development, implementation & deployment experience with backend data services.

  • 4+ years of hands-on experience in any object-oriented programming or scripting language such as C++, C#, Java, or Python.

  • Hands-on experience in private cloud computing (Docker, Kubernetes/K8s).

  • Knowledge of data structures and algorithms.

  • Advanced working knowledge of SQL/NoSQL and experience with relational and non-relational databases.

  • Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets.




How To Apply


 


If you feel that you are up to the challenge and possess the necessary qualifications and experience, kindly update your candidate profile on the career portal and then click the Apply button. Remember to attach your resume.


 


The closing date for receiving applications is Thursday, December 05, 2024.