Vacancy title: Data Analytics Engineer
[Type: Full-time, Industry: Professional Services, Category: Science & Engineering]
Jobs at: Peek Vision (https://peekvision.org/)
Deadline of this Job: Monday, October 6, 2025
Duty Station: Kampala, Uganda
Summary
Date Posted: Wednesday, September 24, 2025 | Base Salary: Not Disclosed
JOB DETAILS:
You will be responsible for delivering impactful projects leveraging Peek’s sector-leading data sources. You will collaborate closely with a talented, mission-driven team, including internationally respected public health analysts, DevOps engineers, and data scientists, as well as external data and scientific partners. As the organisation’s data engineering expert, you will have wide scope to determine the architecture of our data stack, from the interface with the primary database layer through to data visualisation.
Peek’s culture promotes individual ownership, accountability, and collaboration across and within teams, with team leads playing a supportive role in wellbeing, development, and prioritisation. Peek’s staff are distributed around the globe, and our customers and software users operate and deliver programmes in multiple countries. Travel to programmes using Peek in different countries will be part of the role (in line with Peek’s Travel Safety Policy).
Responsibilities and Attributes
The 5 key responsibilities of the role are:
- Analytics Platform Development: Building and maintaining analytics tools, dashboards (using Preset / Apache Superset), and reporting systems that enable business users to access and analyse data. This includes creating automated reporting solutions, developing self-service analytics capabilities, and ensuring data accessibility across the organisation.
- Data Modelling and Schema Design: Creating logical and physical data models that support analytical workloads, including dimensional modelling for data warehouses, defining data schemas, and ensuring data is structured optimally for querying and analysis.
- Data Pipeline Development and Management: Designing, building, and maintaining robust data pipelines that extract, transform, and load (ETL/ELT) data from various sources into data warehouses or lakes. This includes ensuring data quality, handling data validation, and optimising pipeline performance for reliability and scalability (a minimal sketch follows this list).
- Data Architecture and Infrastructure: Designing and implementing scalable data architecture solutions, including data warehouses and analytics platforms, and selecting appropriate technologies that scale and comply with our security standards.
- Performance Optimisation and Monitoring: Continuously monitoring data systems for performance issues, optimising query performance, troubleshooting data quality problems, and implementing monitoring solutions to ensure data reliability and system uptime.
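For illustration, a minimal sketch of the extract-validate-load step described in the pipeline responsibility above, assuming pandas; the field names, value ranges, and sample data are hypothetical, not part of Peek’s actual schema:

```python
# Minimal ETL sketch: take raw screening records, apply data-quality rules,
# and return a frame ready for loading. All names here are hypothetical.
import pandas as pd

def validate_screenings(df: pd.DataFrame) -> pd.DataFrame:
    """Apply basic data-quality checks before loading to the warehouse."""
    required = ["patient_id", "visual_acuity", "screened_at"]
    missing = [c for c in required if c not in df.columns]
    if missing:
        raise ValueError(f"Missing required columns: {missing}")
    # Drop rows with null identifiers and parse timestamps strictly.
    df = df.dropna(subset=["patient_id"]).copy()
    df["screened_at"] = pd.to_datetime(df["screened_at"], errors="raise")
    # Hypothetical rule: acuity (logMAR) should fall in a plausible range.
    bad = ~df["visual_acuity"].between(-0.3, 3.0)
    if bad.any():
        raise ValueError(f"{bad.sum()} rows outside plausible acuity range")
    return df

if __name__ == "__main__":
    raw = pd.DataFrame({
        "patient_id": ["p1", "p2"],
        "visual_acuity": [0.2, 1.0],
        "screened_at": ["2025-09-01", "2025-09-02"],
    })
    print(validate_screenings(raw))
```

In practice a step like this would run inside an orchestrated job (Airflow, Prefect, Dagster, or similar), with the cleaned frame loaded into the warehouse.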
The 5 key attributes we have identified for the role are:
- Technically Proficient: Deep expertise in Python and SQL, along with expertise in data engineering tools and platforms (such as dbt, SQLMesh, Airflow, Prefect, Dagster, or similar), cloud data services (AWS or similar), Business Intelligence platforms (Preset / Apache Superset, Tableau, Looker, or similar), and database technologies (such as Redshift, Athena, PostgreSQL, or similar).
- Agile Delivery: Demonstrated expertise across the full software development life cycle in Agile settings, including hands-on experience with CI/CD practices and tooling.
- APIs and Automated Testing: Deep expertise in developing and maintaining robust APIs, the capability to build internal testing frameworks, and strong advocacy for automated testing (a test sketch follows this list).
- Stakeholder Collaboration: Ability to understand business requirements, translate technical concepts for non-technical stakeholders, and collaborate effectively with product managers, data analysts, data scientists, and business users to ensure data solutions meet organisational needs.
- Adaptability and Continuous Learning: Staying current with rapidly evolving data technologies (particularly LLMs), being comfortable with ambiguity, and adapting quickly to new tools and methodologies. The data landscape changes frequently, requiring continuous skill development and flexibility in approach.
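As a companion to the testing attribute above, a minimal pytest sketch; the Snellen-to-logMAR helper is a hypothetical example of the kind of small, well-tested transformation this role involves:

```python
# Illustrative only: pytest-style unit tests for a small transformation,
# the kind of automated check a CI pipeline would run on every commit.
import math

import pytest

def to_logmar(snellen_denominator: int) -> float:
    """Convert a 6/x Snellen fraction to logMAR (hypothetical helper)."""
    if snellen_denominator <= 0:
        raise ValueError("denominator must be positive")
    return round(math.log10(snellen_denominator / 6), 2)

def test_known_values():
    assert to_logmar(6) == 0.0    # 6/6 vision corresponds to logMAR 0.0
    assert to_logmar(60) == 1.0   # 6/60 vision corresponds to logMAR 1.0

def test_rejects_bad_input():
    with pytest.raises(ValueError):
        to_logmar(0)
```

Running pytest in CI means every commit exercises the same checks automatically.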
Desirable attributes for the role are:
- Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field and/or postgraduate training in Data Engineering, Analytics, or Generative AI.
- Familiarity with integrating with one or more AI chatbots (ChatGPT, Gemini, Grok, Perplexity, Anthropic’s Claude, Mistral, etc.).
- Familiarity with machine learning.
- Familiarity with statistical techniques.
- Knowledge of NoSQL technology (e.g. MongoDB).
- Familiarity with integrating healthcare information management systems, such as DHIS2 (a minimal integration sketch follows this list).
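For the DHIS2 item above, a minimal sketch of pulling aggregate data through the DHIS2 Web API using the requests library; the base URL, credentials, and UIDs below are placeholders, not a real instance:

```python
# Illustrative sketch: fetch aggregate data from a DHIS2 instance via its
# Web API. The base URL, credentials, and UIDs are placeholders.
import requests

BASE_URL = "https://dhis2.example.org"   # hypothetical instance
AUTH = ("api_user", "api_password")      # placeholder credentials

def fetch_data_value_set(data_set: str, org_unit: str, period: str) -> dict:
    """Fetch one data value set (DHIS2's aggregate exchange format)."""
    resp = requests.get(
        f"{BASE_URL}/api/dataValueSets.json",
        params={"dataSet": data_set, "orgUnit": org_unit, "period": period},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # UIDs below are hypothetical 11-character DHIS2 identifiers.
    payload = fetch_data_value_set("abcdefghij1", "abcdefghij2", "202509")
    print(len(payload.get("dataValues", [])), "data values fetched")
```

The same pattern extends to pushing data back (a POST to the same endpoint) once mappings between Peek and DHIS2 identifiers are defined.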
Work Hours: 8
Experience: 12 months
Level of Education: Bachelor’s degree
Job application procedure
Interested in applying for this job? Submit your application online now.