Reporting to the Senior Specialist – Big Data, we are pleased to announce the vacancy for Specialist Data Engineer (Big Data) – Band H within the Information and Technology Division at Safaricom Telecommunication Ethiopia PLC. The role holder will work closely with all stakeholders to ensure that all data ingestion pipelines are optimally built, deployed, and operationally supported. In keeping with our current business needs, we are looking for a person who meets the criteria indicated below.
Responsibilities
- Create and maintain optimal data pipeline architectures in Apache Kafka, Apache Spark, Apache NiFi, etc.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers for disaster recovery (DR).
- Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications
- BS or MS degree in Computer Science (or related fields such as Electronic Engineering, Physics, or Mathematics).
- 3+ years of software design, development, implementation, and deployment experience with backend data services.
- 4+ years of hands-on experience in object-oriented programming and scripting languages such as C++, C#, Java, or Python.
- Hands-on experience with private cloud computing (Docker, Kubernetes/K8s).
- Knowledge of data structures and algorithms.
- Advanced working knowledge of SQL and NoSQL, and experience working with relational and non-relational databases.
- Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytical skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Experience with big data tools: Hadoop, Spark, Kafka, NiFi, etc.
- Experience with relational SQL databases (Oracle and PostgreSQL) and NoSQL databases (HDFS, Hive, HBase, Cassandra).
- Experience with data pipeline and workflow management tools such as Airflow.
- Understanding of microservices architectures.
- Experience with Java technologies and frameworks, mainly Spring and Hibernate.
- Experience with containerization platforms such as Kubernetes, Docker Swarm, or Red Hat OpenShift.
- Experience with event-based and message-driven distributed systems such as Apache Kafka, ActiveMQ, RabbitMQ, or TIBCO EMS.
How To Apply
If you feel that you are up to the challenge and possess the necessary qualifications and experience, kindly proceed to update your candidate profile on the career portal and then click on the Apply button. Remember to attach your resume.
The closing date for receiving applications is Thursday, October 10, 2024.