Data Engineer job at Absa Bank

Vacancy title:
Data Engineer

[Type: Full-time, Industry: Banking, Category: Computer & IT]

Jobs at:
Absa Bank

Deadline of this Job:
Tuesday, July 15, 2025

Duty Station:
Kampala | Kampala | Uganda

Summary
Date Posted: Tuesday, July 8, 2025 | Base Salary: Not Disclosed


JOB DETAILS:

Data Engineer at Absa Bank

With over 100 years of rich history and strongly positioned as a local bank with regional and international expertise, a career with our family offers the opportunity to be part of this exciting growth journey, to reset our future and shape our destiny as a proudly African group.

My Career Development Portal: Wherever you are in your career, we are here for you. Design your future. Discover leading-edge guidance, tools and support to unlock your potential. You are Absa. You are possibility.

Job Summary

Responsible for designing and maintaining secure, scalable ETL pipelines that integrate data from various banking systems, while managing data warehouses and lakes to ensure efficient storage, backup, and replication. Will support regulatory compliance through automated reporting and real-time processing for fraud detection and collaborate with analysts and data scientists to deliver clean, high-quality data. The role is grounded in strong data governance and architecture principles, ensuring that all systems are aligned, reliable, and optimized for performance and compliance.

Job Description

Accountability:  Data Pipeline & Integration – 30%

  • Design and implement automated ETL (Extract, Transform, Load) pipelines to collect data from core banking systems, mobile apps, ATMs, and third-party APIs (see the sketch after this list).
  • Standardize and transform raw data into consistent formats for downstream systems.
  • Ensure secure, encrypted data transfer and enforce access controls to protect sensitive financial information.
  • Contribute to the data architecture by defining how data flows across systems, ensuring scalability, modularity, and maintainability.
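To make the first bullet concrete, here is a minimal Python ETL sketch. It is illustrative only: the API endpoint, bearer token, column names, and SQLite target are assumptions for the example, not Absa's actual systems, and a production pipeline would add the encryption, access controls, and orchestration the bullets describe.

    # Minimal ETL sketch (illustrative; endpoint, token, schema, and
    # SQLite target are assumed for the example, not Absa's real stack).
    import sqlite3

    import pandas as pd
    import requests

    TRANSACTIONS_URL = "https://api.example.com/v1/transactions"  # hypothetical endpoint

    def extract() -> list[dict]:
        # Extract: pull raw transaction records over TLS (placeholder auth header).
        resp = requests.get(
            TRANSACTIONS_URL,
            headers={"Authorization": "Bearer <token>"},  # placeholder credential
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    def transform(records: list[dict]) -> pd.DataFrame:
        # Transform: standardize raw payloads into a consistent downstream schema.
        df = pd.DataFrame(records)
        df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
        df["posted_at"] = pd.to_datetime(df["posted_at"], utc=True, errors="coerce")
        return df.dropna(subset=["amount", "posted_at"])

    def load(df: pd.DataFrame) -> None:
        # Load: append the cleaned batch to a warehouse table.
        with sqlite3.connect("warehouse.db") as conn:
            df.to_sql("transactions", conn, if_exists="append", index=False)

    if __name__ == "__main__":
        load(transform(extract()))

In practice each stage would run under a scheduler or orchestrator with retries and monitoring; the three-function split simply mirrors the extract/transform/load separation named in the bullet.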

Accountability:  Data Warehousing & Management – 25%

  • Build and manage data warehouses and data lakes to store structured and unstructured data efficiently.
  • Apply data modeling techniques and optimize storage using indexing, partitioning, and compression (see the sketch after this list).
  • Implement data lifecycle management, including retention, archival, and deletion policies.
  • Set up data backup and replication strategies to ensure high availability, disaster recovery, and business continuity.
  • Align storage solutions with the bank’s enterprise data architecture, ensuring compatibility with analytics, reporting, and compliance systems.
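The partitioning and compression mentioned above can be sketched in a few lines. The directory layout and column names are assumptions for illustration, and the snippet relies on pandas with the pyarrow engine installed.

    # Date-partitioned, compressed columnar storage (illustrative layout).
    # Requires pandas with pyarrow; the path and columns are assumed.
    import pandas as pd

    df = pd.DataFrame({
        "txn_id": [1, 2, 3],
        "amount": [120_000, 54_500, 9_900],  # UGX
        "txn_date": ["2025-07-01", "2025-07-01", "2025-07-02"],
    })

    # Writes one directory per date, e.g. lake/transactions/txn_date=2025-07-01/,
    # so queries filtered on txn_date can skip irrelevant files entirely.
    df.to_parquet("lake/transactions", partition_cols=["txn_date"], compression="snappy")

Partition pruning plus columnar compression is one common way the indexing, partitioning, and compression goals in this section show up in practice.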

Accountability:  Compliance & Real-Time Processing – 25%

  • Automate data preparation for regulatory reporting (e.g., KYC, AML, Basel III) using governed ETL workflows.
  • Build real-time data processing systems using tools like Apache Kafka or Spark Streaming for fraud detection and transaction monitoring (see the consumer sketch after this list).
  • Ensure data lineage, auditability, and traceability to support compliance audits and internal controls.
  • Design real-time processing components as part of the broader data architecture, ensuring they integrate seamlessly with batch systems and reporting tools.
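As one possible shape for the Kafka work described above, here is a minimal monitoring consumer using the kafka-python client. The topic name, broker address, message schema, and flagging threshold are all hypothetical; a real fraud pipeline would apply far richer rules and route alerts to a case-management system rather than printing them.

    # Minimal real-time monitoring consumer (kafka-python client).
    # Topic, broker, schema, and threshold are hypothetical examples.
    import json

    from kafka import KafkaConsumer

    LARGE_TXN_UGX = 50_000_000  # assumed flagging threshold

    consumer = KafkaConsumer(
        "transactions",                        # hypothetical topic
        bootstrap_servers=["localhost:9092"],  # assumed broker address
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    for message in consumer:
        txn = message.value
        # Flag unusually large transactions for downstream fraud review.
        if txn.get("amount", 0) > LARGE_TXN_UGX:
            print(f"ALERT: transaction {txn.get('txn_id')} of {txn['amount']} UGX")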

Accountability:  Collaboration, Data Quality & Governance – 20%

  • Work with data scientists and analysts to deliver clean, reliable datasets for modeling and reporting.
  • Apply validation rules, anomaly detection, and monitoring to maintain high data quality across ETL pipelines (see the validation sketch after this list).
  • Maintain metadata catalogs, data dictionaries, and lineage tracking to support transparency and governance.
  • Collaborate with data stewards and architects to enforce data governance policies and ensure alignment with the bank’s overall data strategy.
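A minimal sketch of the kind of batch validation this section describes follows; the column names, thresholds, and baseline statistics are illustrative assumptions, not actual bank rules.

    # Illustrative validation rules and a simple anomaly check.
    # Columns, thresholds, and the baseline are assumed for the example.
    import pandas as pd

    def validate(df: pd.DataFrame) -> list[str]:
        issues = []
        # Completeness: required identifiers must be populated.
        if df["txn_id"].isna().any():
            issues.append("null txn_id values")
        # Validity: amounts must be positive.
        if (df["amount"] <= 0).any():
            issues.append("non-positive amounts")
        # Anomaly check: flag batches whose mean amount drifts more than
        # three standard deviations from a (hypothetical) historical baseline.
        baseline_mean, baseline_std = 250_000.0, 80_000.0
        if abs(df["amount"].mean() - baseline_mean) > 3 * baseline_std:
            issues.append("batch mean amount outside expected range")
        return issues

    batch = pd.DataFrame({"txn_id": [1, 2], "amount": [100_000, 320_000]})
    print(validate(batch) or "batch passed all checks")

Checks like these typically gate a pipeline stage: a non-empty issue list would fail the run and surface in monitoring rather than letting bad data flow downstream.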

Role/person specification:

Preferred Education

  • Bachelor’s degree in Computer Science, Software Engineering, Information Technology, Data Science, Computer Engineering, Mathematics, Statistics, or a related field (a Master’s degree is an added advantage).
  • Relevant professional certifications in data engineering (e.g., Google Cloud Data Engineer, Azure Data Engineer (DP-203), AWS Data Analytics Specialty, Databricks Data Engineer, Snowflake, Kafka, Kubernetes) and credentials in analytics, machine learning, artificial intelligence, and cloud platforms (GCP, AWS, Azure) are considered added advantages.

Preferred Experience

  • At least 3-5 years’ experience building data pipelines, working with big data and cloud platforms, managing real-time and warehouse data systems, and collaborating with cross-functional teams.
  • Financial domain knowledge is an added advantage.

Knowledge and Skills

  • Technical Proficiency: Skilled in data modeling, ETL/ELT, big data tools, programming (Python, R, SQL), data visualization, and cloud platforms.
  • Analytical & Problem-Solving: Able to manage complex datasets, optimize pipelines, and ensure data quality.
  • Communication & Collaboration: Effective in documenting workflows and working with cross-functional teams.

Education

Bachelor's Degree: Information Technology (Required)

 

Work Hours: 8

Experience in Months: 36

Level of Education: Bachelor’s degree

Job application procedure

Interested and qualified? Click Here to Apply

 


Job Info
Job Category: Data, Monitoring, and Research jobs in Uganda
Job Type: Full-time
Deadline of this Job: Tuesday, July 15, 2025
Duty Station: Kampala | Kampala | Uganda
Posted: 08-07-2025
No of Jobs: 1
Start Publishing: 08-07-2025
Stop Publishing: 15-07-2025