South African Computer Journal
On-line version ISSN 2313-7835
Print version ISSN 1015-7999
SACJ vol.35 n.1 Grahamstown Jun. 2023
http://dx.doi.org/10.18489/sacj.v35i1.1189
COMMUNICATION
Namibia's first high performance computer
Jimmy Shapopi(I); Anton Limbo(II); Michael Backes(III, IV)
(I) Department of Environmental Science, University of Namibia, Ogongo, Namibia. Email: jshapopi@unam.na
(II) Department of Computing, Mathematics and Statistics, University of Namibia, Windhoek, Namibia. Email: alimbo@unam.na
(III) Department of Physics, Chemistry & Material Science, University of Namibia, Windhoek, Namibia. Email: mbackes@unam.na
(IV) Centre for Space Research, North-West University, Potchefstroom, South Africa
ABSTRACT
High performance computing (HPC) refers to the practice of aggregating the computing power of several computing nodes in a way that delivers much higher performance than one could achieve with a typical desktop computer, in order to solve large problems in business, science, or engineering. The University of Namibia has so far received two HPC racks from the Centre for High Performance Computing in South Africa, of which one is operational. The primary use of the rack was foreseen to be human capacity development and awareness in HPC and to form part of Namibia's readiness to participate in the Square Kilometre Array (SKA) and the African Very Long Baseline Interferometry Network (AVN) projects, but it is now also being used for research in multi-wavelength astronomy and beyond. This is one of the first HPC services set up and operated by an entirely African team. We perform tests to benchmark the computational power and data transfer capabilities of the system and find that each node, on average, has a peak performance of 82.4 ± 1.1 GFLOPS. We also summarise all the projects that have enlisted the HPC facility.
1 INTRODUCTION
Over the past two decades, the infrastructure and human capacity for high performance computing (HPC) have been steadily increasing in African countries. The Centre for High Performance Computing (CHPC) in South Africa has provided HPC facilities to researchers in Africa (Amolo, 2018). This was done in conjunction with the African School on Electronic Structure Methods and Applications (ASESMA), a biennial school that introduces researchers and graduate students to computational modelling (Chetty et al., 2010). A number of countries in Africa now use HPC facilities for weather and climate modelling and, in most cases, this is done on the facilities provided by the CHPC (Bopape et al., 2019; Somses et al., 2020). The Southern African Development Community (SADC) Cyber-Infrastructure Framework, which aims to build capacity in regional research and education networks, data sharing infrastructure and trained human capital, has an infrastructure development pillar that has commissioned a few sites and deployed HPC systems (Bopape et al., 2019; Motshegwa et al., 2018). To date, most of the HPC systems that have been deployed have also been used for weather modelling, and a few countries have taken steps to integrate high performance computing into university curricula (Mwasaga et al., 2015; Narasimhan & Motshegwa, 2018). The Abdus Salam International Centre for Theoretical Physics (ICTP) ran a 3-year project from 2008 to 2011 with the purpose of developing infrastructure and human capacity. Within this project, one HPC system with 80 cores was donated to Addis Ababa University in Ethiopia (Abiona et al., 2011).
With steady developments in astronomy in Africa (Povic et al., 2018) and Namibia (Backes et al., 2018), in 2016 the University of Namibia (UNAM) received one rack of computing nodes that was previously part of the Ranger supercomputer of the Texas Advanced Computing Center (TACC; University of Texas, Austin), which had its debut in 2008 as the 5th most powerful computer in the world (Black, 2014; Erich et al., 2008b). The Ranger was re-purposed to function as single racks that were distributed to a select number of institutes in the African Square Kilometre Array (SKA) partner countries (Black, 2014), one of which is the University of Namibia. This was a result of the efforts made by the CHPC in South Africa and the Namibian National Commission on Research, Science, and Technology (NCRST), which facilitated the delivery. The CHPC coordinates the HPC Ecosystems Project (Johnston, 2019), which has the objective of facilitating readiness in advanced research computing for the upcoming African Very Long Baseline Interferometry Network (AVN) (Gaylard et al., 2011) and Square Kilometre Array (Carilli & Rawlings, 2004) projects. Since 2017, this single rack, together with a manager node, has been operational as the first HPC system in Namibia, known as the UNAM HPC (UHPC).
1.1 The UHPC/Head Node
The manager node of the cluster at UNAM is a Dell T430 server, running two central processing units (CPUs), specifically Intel Xeon E5-2603s. Each CPU has six cores with a peak frequency of 1.7 GHz. This server has 32 GB of random access memory (RAM) available and has the capacity to accommodate 384 GB, leaving sufficient room for expansion. The server currently has 13 TB of storage installed, which is set up as central shared storage for the whole cluster. This node will serve as the manager node for two entirely different computing racks and a storage rack, i.e. it will manage a heterogeneous collection of non-identical hardware.
1.2 The UHPC/Ranger
The rack of the former Ranger HPC hosts four shelves, each with 12 server modules ('nodes'), for a total of 48 server modules, as can be seen in Figure 1. It is a Sun Blade 6048 modular system and is designed so that it is easy to service (Sun Microsystems, Inc., 2009). Further details about the dimensions can be found in the comprehensive site planning guide (Oracle and/or its affiliates, 2012). For this system, each compute node contains 4 AMD Opteron 8356 quad-core CPUs with a peak frequency of 2.0 GHz. Each core has 2 GB of RAM, for an aggregate memory of 32 GB per node (Limbo et al., 2019). The peak performance of the UHPC/Ranger is calculated from the peak performance of each server module as shown in Equation 1:

R_PEAK = (number of nodes) × (cores per node) × (cycles per second) × (floating point operations per cycle). (1)

R_PEAK is the peak theoretical performance and the cycles/second is also known as the frequency of each core. Applying Equation 1 to the UHPC/Ranger using the details indicated in Table 1 yields R_PEAK = 6.1 TFLOPS1. A test of this performance is demonstrated in Section 2.4.
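For illustration, the arithmetic behind Equation 1 can be reproduced in a few lines of C++ (the language also used to prepare the test data in Section 2.5). The node count, core count and frequency are taken from the text above, and the 4 floating point operations per cycle from Table 1; this sketch only reproduces the calculation, not the benchmark itself.

```cpp
#include <cstdio>

// Reproduces the theoretical peak of the UHPC/Ranger (Equation 1),
// using the hardware figures quoted in Section 1.2 and Table 1.
int main() {
    const double nodes           = 48;      // server modules in the rack
    const double cores_per_node  = 4 * 4;   // 4 AMD Opteron 8356 CPUs x 4 cores
    const double cycles_per_sec  = 2.0e9;   // 2.0 GHz peak frequency
    const double flops_per_cycle = 4;       // floating point operations per cycle (Table 1)

    const double rpeak_node = cores_per_node * cycles_per_sec * flops_per_cycle;
    const double rpeak_rack = nodes * rpeak_node;

    std::printf("Per-node peak: %.0f GFLOPS\n", rpeak_node / 1e9);   // 128 GFLOPS
    std::printf("Rack peak:     %.1f TFLOPS\n", rpeak_rack / 1e12);  // ~6.1 TFLOPS
    return 0;
}
```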
2 OPERATION
This section outlines the current operational procedures used for the UHPC cluster. At present, 24 computing nodes are powered; for sustainability of the HPC cluster and ease of maintenance, the other half of the 48 modules are kept as spares. Capacity building activities are ongoing, as will be outlined in Section 2.7.
2.1 Setup
The setup and operational details of the UHPC can be seen in Figure 2. Users connect to the UHPC via a secured protocol, mostly Secure Shell (SSH), to the UHPC/Head Node, passing through the university firewall. 1 Gbit/s fibre links connect the firewall to the internet and the head node to the firewall. As outlined in Section 1.1, the UHPC/Head Node currently hosts 13 TB of storage and will soon be bolstered by a storage server, as indicated in the figure and outlined in Section 3.1. The UHPC/Head Node is connected to the computing nodes through a 48-port switch. The connection between the UHPC/Head Node and the switch is a 1 Gbit/s Ethernet link, as are the connections between the switch and each computing node.
2.2 Software stack
The UHPC cluster employs open source software for both the server and computing nodes. The software being used is mainly from the OpenHPC stack, with CentOS as the base operating system for both the server and computing nodes and the Portable Batch System (PBS) as the scheduling software (Johnston, 2019). The UHPC site was the first in the entire SADC HPC Ecosystems Project to deploy the OpenHPC stack, running CentOS version 7.6 on the UHPC/Ranger. The cluster uses the Ganglia software to monitor the load on the hardware, such as CPU and RAM usage. Ganglia is a scalable distributed monitoring system for HPC systems such as clusters (Massie et al., 2004). It takes advantage of a few widely used technologies for data representation, compact data transport, storage and visualisation.
2.3 Using the cluster
Users access the cluster using Secure Shell. New users can request a user account from the internal UHPC website2. The website also has guides for users to familiarise themselves with the cluster, such as uploading and downloading data, submitting a job to the scheduler, monitoring a submitted job, and retrieving the results of a submitted job. For users that are not familiar with using a cluster, the Virtual Institute of Scientific Computing and Artificial Intelligence (VI-SCAI) offers regular workshops and training aimed at equipping users with the necessary skills to work with an HPC cluster. Usage of the cluster is currently free of charge; however, users are required to acknowledge VI-SCAI in published articles that used the UHPC.
2.4 Performance
The performance of an HPC cluster is influenced by a number of factors, and the performance of any given processor may differ from instance to instance. The number of floating point operations per second (FLOPS) has long been a standard benchmark for estimating the computational power of an HPC system, and the LINPACK Benchmark software is the most recognised approach for ranking HPC systems (J. Dongarra & Heroux, 2013; J. J. Dongarra et al., 2003). Stating FLOPS is rarely useful for comparing real world performance (Vetter et al., 2005); however, it does give an overview of a cluster's possible capabilities and is a quick way to quantify them. Additionally, the wide use of the test allows for easy comparability.
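To illustrate the idea behind the metric, the sketch below times a fixed number of floating point operations on a single core and divides by the elapsed time. This is a deliberately naive example and not the LINPACK Benchmark (which solves a dense linear system and is the basis of all figures quoted in this section); it merely shows what "floating point operations per second" means in practice.

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

// Naive single-core FLOPS estimate via an N x N matrix multiplication
// (2*N^3 floating point operations). Illustration only; the values in
// the text come from the LINPACK Benchmark.
int main() {
    const int N = 512;
    std::vector<double> a(N * N, 1.5), b(N * N, 2.5), c(N * N, 0.0);

    const auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i)
        for (int k = 0; k < N; ++k)
            for (int j = 0; j < N; ++j)
                c[i * N + j] += a[i * N + k] * b[k * N + j];
    const auto t1 = std::chrono::steady_clock::now();

    const double seconds = std::chrono::duration<double>(t1 - t0).count();
    const double flops = 2.0 * N * N * N;  // one multiply and one add per inner step
    // c[0] is printed so the compiler cannot discard the computation.
    std::printf("checksum %.1f, approx. %.2f GFLOPS on one core\n",
                c[0], flops / seconds / 1e9);
    return 0;
}
```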
At its inauguration, the Ranger was ranked the fifth fastest supercomputer in the world, with a measured computational power peaking at RMAX = 326 TFLOPS against a theoretical RPEAK = 503.8 TFLOPS for the full system of 3,936 nodes with 62,976 cores (Erich et al., 2008a, 2008b). This corresponds to an efficiency of 64.7%. It can be estimated that the power of each rack, i.e. 48 nodes, was then 3.9 TFLOPS, and that of each node 82.8 GFLOPS. This is a good estimate, as the LINPACK Benchmark scales linearly with the number of CPU cores. It is noted that Erich et al. (2008a) state a different value for the theoretical computing power of the Ranger (RPEAK = 579.4 TFLOPS) than the one stated above. This stems from the fact that Erich et al. (2008a) also record a different value for the cycles per second: 2.3 GHz as opposed to 2.0 GHz. Here, we use the 2.0 GHz value as stated in Erich et al. (2008b), as this leads to consistently reproducing the cited results as described below.
We perform a LINPACK Benchmark for the currently active nodes of the UHPC/Ranger and obtain an average of 82.4 ± 1.1 GFLOPS per node. Figure 3 shows the performance of each node in the UHPC/Ranger. Theoretically, at 4 operations per cycle and a processor speed of 2.0 GHz (see Table 1), one would expect 128 GFLOPS of processing power. Thus, the UHPC/Ranger nodes are operating at 64.40 ± 0.88% efficiency, on average, which is consistent with the 64.7% efficiency stated in Erich et al. (2008b). In total, with all nodes operating, the UHPC/Ranger is capable of 3.955 ± 0.052 TFLOPS, compared to the peak theoretical performance RPEAK = 6.1 TFLOPS calculated in Section 1.2.
2.5 Data exchange capabilities
Data transfer is an increasingly demanding aspect of modern science, and large data transfers are now routine. This work was done in an effort to understand how easily data can be transferred between the UHPC and other HPC systems regionally (in South Africa) and internationally (in Germany). This section presents the results of data transfers between the High Performance Computer at the University of Namibia, an HPC operated by the astronomy group of the School of Physics at the University of the Witwatersrand (Wits) in Johannesburg, South Africa, and an HPC at the Max-Planck-Institute for Nuclear Physics in Heidelberg (HD), Germany. These tests serve as a benchmark for further big data work, including large data transfers, to be conducted with the UHPC.
For this, a 1 GB data set was prepared using C++: 1.25 × 10^8 random 64-bit floating point numbers were generated in an ASCII file, arranged in two columns. This file was exactly 1 GB in size and served as the test data set. The data was transferred to and from the clusters in Germany and South Africa using the scp command. The output of scp was written to a text file. Using sed, the text file was reformatted and appended to a standard data file recording the modified Julian date (MJD), the time taken for the transfer, and the average transfer speed. The transfer was repeated for a full week, every hour of every day.
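A minimal sketch of how such a test file can be produced is shown below. The exact formatting, file name and random seed used for the original data set are not documented, so those details here are illustrative; the only constraints taken from the text are that 1.25 × 10^8 random 64-bit floating point numbers are written in two columns to an ASCII file of exactly 1 GB.

```cpp
#include <cstdio>
#include <random>

// Generates a 1 GB ASCII test file: 1.25e8 random doubles in two columns.
// Each value in [0,1) is written as 7 characters, so every two-column row
// occupies 16 bytes and 6.25e7 rows give exactly 1e9 bytes = 1 GB.
int main() {
    std::mt19937_64 gen(42);  // fixed seed, chosen arbitrarily
    std::uniform_real_distribution<double> dist(0.0, 1.0);

    std::FILE* out = std::fopen("transfer_test_1GB.dat", "w");
    if (!out) return 1;

    const long rows = 125000000L / 2;  // 1.25e8 numbers, two per row
    for (long i = 0; i < rows; ++i) {
        // "0.xxxxx 0.yyyyy\n" -> 7 + 1 + 7 + 1 = 16 bytes per row
        std::fprintf(out, "%.5f %.5f\n", dist(gen), dist(gen));
    }
    std::fclose(out);
    return 0;
}
```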
Data transfers between HD and UNAM were conducted in the week of 19 February 2019 (MJD: 58533.441910) to 26 February 2019 (MJD: 58540.819460), and those between Wits and UNAM were performed in the week of 11 February 2019 (MJD: 58525.611143) to 18 February 2019 (MJD: 58532.652806). Both pairs of transfers were done in both directions. Table 2 summarises the results obtained for these data transfers.
The upload and download speeds between UNAM and HD appear to be limited at about 10 MB/s, whereas the download speed from Wits to UNAM seems throttled at below 2 MB/s. The upload speed of the UNAM cluster appears to be limited only at a minimum of 50 MB/s. In Table 2, medians are reported because the distributions observed for the transfer times and speeds are skewed, often with distant outliers. Figure 4 shows the distributions of the speeds and times of transfer. In these figures, it can be seen how the outliers affect the average values; the median is therefore a more appropriate summary statistic.
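The choice of the median over the mean can be illustrated with a short example. The speeds below are hypothetical values chosen only to mimic a skewed distribution with one distant outlier; they are not measurements from Table 2.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Median of a set of transfer speeds (MB/s). Unlike the mean, the median
// is insensitive to distant outliers, which is why Table 2 reports medians.
// The function takes a copy so the caller's data stays unsorted.
double median(std::vector<double> v) {
    std::sort(v.begin(), v.end());
    const size_t n = v.size();
    return (n % 2 == 1) ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
}

int main() {
    std::vector<double> speeds = {9.8, 10.1, 9.9, 10.0, 2.3, 10.2, 55.0};  // hypothetical
    double sum = 0.0;
    for (double s : speeds) sum += s;
    std::printf("mean   = %.2f MB/s\n", sum / speeds.size());  // pulled up by the outlier
    std::printf("median = %.2f MB/s\n", median(speeds));       // ~10 MB/s
    return 0;
}
```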
2.6 Ongoing HPC projects
2.6.1 Land degradation assessment baseline report
A land degradation assessment baseline study was conducted for the Omusati region, a region in the northern part of Namibia. The project was carried out by the Ministry of Environment and Tourism in conjunction with UNAM's Department of Geography Information System and the Deutsche Gesellschaft für Internationale Zusammenarbeit GmbH (GIZ). The assessment was to quantify land degradation in the region and to provide recommendations on reducing further degradation of the land. To achieve this, data was collected in the region in the form of soil samples for analysis of soil organic carbon. The analysis of these samples was carried out on the UHPC using the R programming language. The results were represented as a map showing the different percentages of soil organic carbon across the Omusati region, outlining which parts of the region still have soil suitable for agriculture (Hengari et al., 2019).
2.6.2 Modelling of broadband emission of globular clusters
Globular clusters (GCs), spherically bound collections of stars, are among the most ancient bound stellar systems in the cosmos and consist of about 10^4-10^6 stars (Ndiyavala et al., 2018). Terzan 5 is the only Galactic globular cluster that has plausibly been detected in the very-high-energy range. Data from the Fermi Large Area Telescope was used to calculate the broadband spectral energy distribution (SED), and this SED was then modelled. The emission is separated into pulsed and un-pulsed components: the pulsed emission is attributed to the embedded pulsars in the GC, and the un-pulsed emission to the interaction of the leptonic winds with the ambient magnetic and soft-photon fields (Ndiyavala et al., 2019). The HPC at UNAM was used to study the uncertainty in the model parameters and to demonstrate that this uncertainty leads to a large spread in the model-predicted flux (Ndiyavala-Davids et al., 2021; Venter et al., 2022).
2.6.3 Analysis of gamma ray data of active galactic nuclei
The University of Namibia is part of the High Energy Stereoscopic System (H.E.S.S.) collaboration (de Naurois, 2018) and thus has access to data from the array of five imaging atmospheric Cherenkov telescopes (IACTs). This data consists, in part, of images of the Cherenkov radiation produced when a highly energetic particle is incident on the atmosphere. The analysis of the data involves a pixel-by-pixel comparison of the recorded images with simulated images so as to estimate the parameters of the incident gamma ray. A log-likelihood approach is taken for the parameter estimation. This is a computationally intensive analysis, given that there are five telescopes (four with 960 pixels (Ashton et al., 2020) and one with 2048 pixels), each recording a large set of variables during observation, which have to be compared to a large number of simulations performed with different parameters, such as the energy of the incident particle, the depth of first interaction, the direction, etc. The UHPC has been configured to perform such analyses, and research projects for two master's theses (Nanghonga, 2020; Shapopi, 2019) and a bachelor's thesis (Brand, 2020) have already been completed in this context.
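As a rough illustration of the kind of computation involved, the sketch below compares one recorded camera image to a set of simulated template images by summing a per-pixel log-likelihood and keeping the best-matching template. It assumes a simple Poisson model per pixel and entirely made-up image values; the actual H.E.S.S. analysis chain is far more elaborate (instrument response, night-sky background, combination of all five telescopes, etc.), so this is not the collaboration's code.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <limits>
#include <vector>

// Minimal sketch of a pixel-wise log-likelihood template comparison.
// Each template is a simulated camera image for one set of shower
// parameters (energy, direction, depth of first interaction, ...).
struct Template {
    double energy_tev;             // hypothetical parameter label
    std::vector<double> expected;  // expected intensity per pixel
};

double logLikelihood(const std::vector<double>& observed,
                     const std::vector<double>& expected) {
    double logl = 0.0;
    for (size_t i = 0; i < observed.size(); ++i) {
        const double mu = std::max(expected[i], 1e-9);  // avoid log(0)
        // Poisson log-probability of the observed pixel content given mu
        logl += observed[i] * std::log(mu) - mu - std::lgamma(observed[i] + 1.0);
    }
    return logl;
}

int main() {
    const size_t npix = 960;               // pixels of a H.E.S.S. 12 m camera
    std::vector<double> image(npix, 1.0);  // placeholder observed image
    image[100] = 40; image[101] = 35;      // a bright patch, for illustration

    std::vector<Template> templates = {
        {0.5, std::vector<double>(npix, 1.0)},
        {1.0, std::vector<double>(npix, 1.2)},
    };
    templates[1].expected[100] = 38; templates[1].expected[101] = 33;

    double best = -std::numeric_limits<double>::infinity();
    double best_energy = 0.0;
    for (const auto& t : templates) {
        const double logl = logLikelihood(image, t.expected);
        if (logl > best) { best = logl; best_energy = t.energy_tev; }
    }
    std::printf("Best-matching template: E = %.1f TeV (log L = %.1f)\n",
                best_energy, best);
    return 0;
}
```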
2.6.4 Case study weather modelling
The Namibia Meteorological Services enlisted the UHPC to conduct weather modelling for particular weather events that occurred in Namibia. One such event was heavy rainfall that occurred in October 2018 around the north-western part of Namibia in the Kunene region. This particular event was of interest as it occurred after a long period of dry conditions and resulted in more than 50 mm of rainfall in less than 3 hours, causing floods and the death of a number of animals. The Namibia Meteorological Services used the UHPC to simulate this event employing the Weather Research and Forecasting (WRF) model (Somses et al., 2020).
2.7 Human capital development
In Namibia, the concept of high performance computing in its universal meaning is relatively new. Thus, to take advantage of the new facilities at UNAM, training must be provided to build capacity in high performance computing. With this goal in mind, efforts have been made to provide training in the form of workshops and schools. The first workshop was held in February 2017 at the Namibia University of Science and Technology. The workshop was aimed at training system administrators in HPC and was facilitated by trainers from the South African CHPC. It was attended by members of both universities, drawn from computer science, physics, mathematics, and statistics. A second workshop was held at the University of the Witwatersrand, South Africa, in August 2017. This workshop was also aimed at training system administrators in HPC and was attended by members from the different SADC countries that are part of the SADC HPC Ecosystems Project. A series of sponsorships followed, in which individuals were sponsored by STEM-Trek and the CHPC to attend the Supercomputing conference held annually in the United States of America.
Another workshop for capacity building was held in September 2018 at UNAM's School of Computing. In addition, there have been multiple schools that also gave introductory lessons on high performance computing, such as the biennial African School of Fundamental Physics and Applications, held in 2018 at UNAM (Acharya et al., 2018), and the Development in Africa with Radio Astronomy (DARA) project (Hoare, 2018), supported by the CHPC in South Africa, which runs a yearly intensive programme at the Hartebeesthoek Radio Astronomy Observatory (HartRAO) in South Africa.
There are now plans to hold more workshops aimed at creating awareness of the potential uses of HPC in Namibia, as well as at training more people in using and administering HPC systems. It is noteworthy that capacity building in high performance computing has been bolstered by the DARA Big Data project, which provides bursaries for students from the partner countries of the AVN (Scaife & Cooper, 2020). This project is made possible by a partnership between the UK Newton Fund, the Global Challenges Research Fund programme, and the South African Department of Science & Technology. Given the computationally intensive nature of the field of Big Data, it is expected that many of the graduates from these scholarships will be conversant in high performance computing.
3 CONCLUSIONS AND OUTLOOK
Namibia's first HPC system is steadily growing with contributions and efforts from different organisations and is actively being used to develop human capacity. The UHPC facility intends to further leverage the relationship with the South African Centre for High Performance Computing, in terms of support and training, to further human capital development in HPC. Extensions to the UHPC that will be realised in the near future are listed below.
3.1 The UHPC/H.E.S.S. storage server
The High Energy Stereoscopic System in Namibia recently upgraded its on-site storage server (Zhu et al., 2021) and donated part of the former one (Balzer et al., 2014) to the University of Namibia. The donation consists of four modules, each taking 16 hard disk drives of 1 TB capacity, three modules, each taking 16 hard disk drives of 3 TB capacity, and 10 computing modules hosting Intel Xeon E5450 processors. In total, this amounts to a storage capacity of 202 TB, which is a sizeable addition to the 13 TB of storage space already available and positions UNAM well to host a subset of the entire H.E.S.S. data locally.
3.2 The UHPC/Stampede
UNAM also received a Dell PowerEdge C8220 Stampede rack with 40 computing nodes. Each compute node of the Stampede has two Intel Sandy Bridge 80623 CPUs (eight cores each) with 32 GB RAM and 250 GB of on-board storage. Once operational, the UHPC/Stampede will complement the UHPC/Ranger in boosting the capacity of the UHPC facility to offer state of the art computational resources not only to the UNAM community but to the Namibian community at large.
ACKNOWLEDGEMENTS
We want to acknowledge the HPC Ecosystems Project at the Centre for High Performance Computing (CHPC) in South Africa for the provision of the Ranger and Stampede HPC racks as well as the Head Node, and the Namibian National Commission on Research, Science, and Technology (NCRST) for facilitating the transport. We also want to acknowledge the donation of the H.E.S.S. storage server by the H.E.S.S. collaboration. We want to thank the Max-Planck-Institute for Nuclear Physics in Heidelberg, Germany, and the Centre for Astrophysics at the University of the Witwatersrand in Johannesburg, South Africa, for granting us access to their clusters to perform data transfer tests. The support of Jim Hinton and Nukri Komin in this is highly appreciated. The Virtual Institute for Scientific Computing and Artificial Intelligence (VI-SCAI) is gratefully acknowledged for operating the High Performance Computing (HPC) cluster at the University of Namibia (UNAM). VI-SCAI is partly funded through a UNAM internal research grant.
References
Abiona, O., Onime, C., Cozzini, S., & Hailemariam, S. (2011). Capacity building for HPC infrastructure setup in Africa: The ICTP experience. 2011 IST-Africa Conference Proceedings, 1-8. https://ieeexplore.ieee.org/abstract/document/6107383
Acharya, B., Assamagan, K., Backes, M., Cecire, K., Dabrowski, A., Darve, C., Ellis, J., Gray, J., Kasai, E., Muanza, S., Ndjamba, J., Philander, A., Shahungu, M., Simon, G., Singh, D., Steenkamp, R., Voss, R., & Zulu, A. (2018). Activity Report on the Fifth Biennial African School of Fundamental Physics and Applications (tech. rep.). African School of Physics. https://www.africanschoolofphysics.org/wp-content/uploads/2019/08/ASP2018.pdf
Amolo, G. O. (2018). The growth of high-performance computing in Africa. Computing in Science & Engineering, 20(03), 21-24. https://doi.org/10.1109/MCSE.2018.03221926
Ashton, T., Backes, M., Balzer, A., Berge, D., Bolmont, J., Bonnefoy, S., Brun, F., Chaminade, T., Delagnes, E., Fontaine, G., Füßling, M., Giavitto, G., Giebels, B., Glicenstein, J., Gräber, T., Hinton, J. A., Jahnke, A., Klepser, S., Kossatz, M., ... Vincent, P. (2020). A NECTAr-based upgrade for the Cherenkov cameras of the H.E.S.S. 12-meter telescopes. Astroparticle Physics, 118, Article 102425. https://doi.org/10.1016/j.astropartphys.2019.102425
Backes, M., Evans, R., Kasai, E. K., & Steenkamp, R. (2018). Status of astronomy in Namibia. The African Review of Physics, 13, 90-95. https://doi.org/10.48550/arXiv.1811.01440
Balzer, A., Füßling, M., Gajdus, M., Göring, D., Lopatin, A., de Naurois, M., Schlenker, S., Schwanke, U., & Stegmann, C. (2014). The H.E.S.S. central data acquisition system. Astroparticle Physics, 54, 67-80. https://doi.org/10.1016/j.astropartphys.2013.11.007
Black, D. (2014). TACC Ranger finds new life in South Africa. insideHPC, (July 14). https://insidehpc.com/2014/07/tacc-ranger-finds-new-life-south-africa/
Bopape, M.-J., Sithole, H., Motshegwa, T., Rakate, E., Engelbrecht, F., Archer, E., Morgan, A., Ndimeni, L., & Botai, O. (2019). A regional project in support of the SADC cyber-infrastructure framework implementation: Weather and climate. Data Science Journal, 18(34), 1-10. https://doi.org/10.5334/dsj-2019-034
Brand, A. (2020). Gamma-hadron separation in VHE gamma-ray astronomy using a multivariate analysis method (Bachelor's thesis).
Carilli, C. L., & Rawlings, S. (2004). Motivation, key science projects, standards and assumptions. New Astronomy Reviews, 48(11-12), 979-984. https://doi.org/10.1016/j.newar.2004.09.001
Chetty, N., Martin, R. M., & Scandolo, S. (2010). Material progress in Africa. Nature Physics, 6, 830-832. https://doi.org/10.1038/nphys1842
de Naurois, M. (2018). Blue light in the desert night. Nature Astronomy, 2, 593. https://doi.org/10.1038/s41550-018-0513-1
Dongarra, J., & Heroux, M. (2013). Toward a new metric for ranking high performance computing systems (tech. rep.). Office of Scientific and Technical Information. https://doi.org/10.2172/1089988
Dongarra, J. J., Luszczek, P., & Petitet, A. (2003). The LINPACK benchmark: Past, present and future. Concurrency and Computation: Practice and Experience, 15(9), 803-820. https://doi.org/10.1002/cpe.728
Erich, S., Jack, D., Horst, S., & Martin, M. (2008a). Ranger - SunBlade x6420, Opteron Quad 2Ghz, Infiniband. https://www.top500.org/system/175589/
Erich, S., Jack, D., Horst, S., & Martin, M. (2008b). Top 500 list. https://top500.org/lists/top500/2008/06/
Gaylard, M. J., Bietenholz, M. F., Combrinck, L., Booth, R. S., Buchner, S. J., Fanaroff, B. L., MacLeod, G. C., Nicolson, G. D., Quick, J. F. H., Stronkhorst, P., & Venkatasubramani, T. L. (2011). An African VLBI network of radio telescopes. Proceedings of SAIP2011: the 56th Annual Conference of the South African Institute of Physics, 473-478. https://events.saip.org.za/event/7/page/227-proceedings
Hengari, S., Angombe, S., Katjioungua, G., Fabiano, E., Zauisomue, E., Nakashona, N., Ipinge, S., Andreas, A., Muhoko, E., Emvula, E., Mutua, J., Kempen, B., & Nijbroek, R. (2019). Land degradation assessment baseline report: Omusati region, Namibia (tech. rep.). Harvard Dataverse. https://hdl.handle.net/10568/100643
Hoare, M. G. (2018). UK aid for African radio astronomy. Nature Astronomy, 2, 505-506. https://doi.org/10.1038/s41550-018-0515-z
Johnston, B. (2019). HPC Ecosystems Project: Facilitating advanced research computing in Africa. Proceedings of the Practice and Experience in Advanced Research Computing on Rise of the Machines (Learning). https://doi.org/10.1145/3332186.3333264
Limbo, A., Shapopi, J. N. S., & Backes, M. (2019). Overview of the University of Namibia High Performance Computer. 7th Annual Science Research Conference, University of Namibia. https://www.researchgate.net/publication/344609297_Overview_of_the_University_of_Namibia_High_Performance_Computer
Massie, M. L., Chun, B. N., & Culler, D. E. (2004). The Ganglia distributed monitoring system: Design, implementation, and experience. Parallel Computing, 30(7), 817-840. https://doi.org/10.1016/j.parco.2004.04.001
Motshegwa, T., Wright, C., Sithole, H., Ngolwe, C., & Morgan, A. (2018). Developing a cyber-infrastructure for enhancing regional collaboration on education, research, science, technology and innovation. 2018 IST-Africa Week Conference (IST-Africa), 1-9. https://ieeexplore.ieee.org/document/8417349
Mwasaga, M. N., Apiola, M., Suhonen, J., & Joy, M. (2015). Integrating high performance computing into a Tanzanian IT engineering curriculum. 2015 IEEE International Conference on Engineering, Technology and Innovation/ International Technology Management Conference (ICE/ITMC), 1-9. https://doi.org/10.1109/ICE.2015.7438646
Nanghonga, T. (2020). Data analysis of Markarian 421 observed by the high energy stereoscopic system (H.E.S.S.) in January 2017 (Master's thesis). University of Namibia.
Narasimhan, L., & Motshegwa, T. (2018). Parallel & high performance computing education - a Botswana perspective. Workshop on Education for High-Performance Computing. https://hipc.org/eduhipc_bak/
Ndiyavala, H., Venter, C., Johnson, T. J., Harding, A. K., Smith, D. A., Eger, P., Kopp, A., & van der Walt, D. J. (2019). Probing the pulsar population of Terzan 5 via spectral modeling. The Astrophysical Journal, 880(1), 53. https://doi.org/10.3847/1538-4357/ab24ca
Ndiyavala, H., Krüger, P. P., & Venter, C. (2018). Identifying the brightest Galactic globular clusters for future observations by H.E.S.S. and CTA. Monthly Notices of the Royal Astronomical Society, 473(1), 897-908. https://doi.org/10.1093/mnras/stx2336
Ndiyavala-Davids, H., Venter, C., Kopp, A., & Backes, M. (2021). Assessing uncertainties in the predicted very high energy flux of globular clusters in the Cherenkov Telescope Array era. Monthly Notices of the Royal Astronomical Society, 500(4), 4827-4836. https://doi.org/10.1093/mnras/staa3588
Oracle and/or its affiliates. (2012). Sun blade 6048 modular system: Site planning guide. https://docs.oracle.com/cd/E19926-01/E28555/E28555.pdf
Povic, M., Backes, M., Baki, P., Baratoux, D., Tessema, S. B., Benkhaldoun, Z., Bode, M., Klutse, N. A. B., Charles, P., Govender, K., van Groningen, E., Jurua, E., Mamo, A., Manxoyi, S., McBride, V., Mimouni, J., Nemaungani, T., Nkundabakura, P., Okere, B., ... Yilma, A. (2018). Development in astronomy and space science in Africa. Nature Astronomy, 2, 507-510. https://doi.org/10.1038/s41550-018-0525-x
Scaife, A. M. M., & Cooper, S. E. (2020). The DARA Big Data Project. Proceedings of the International Astronomical Union, 14(A30), 569-569. https://doi.org/10.1017/S174392131900543X
Shapopi, J. N. S. (2019). A hybrid analysis approach to the high energy stereoscopic system phase II mono-analysis (Master's thesis). University of Namibia. http://hdl.handle.net/11070/2722
Somses, S., Bopape, M.-J. M., Ndarana, T., Fridlind, A., Matsui, T., Phaduli, E., Limbo, A., Maikhudumu, S., Maisha, R., & Rakate, E. (2020). Convection parametrization and multi-nesting dependence of a heavy rainfall event over Namibia with the Weather Research and Forecasting (WRF) model. Climate, 8(10), 112. https://doi.org/10.3390/cli8100112
Sun Microsystems, Inc. (2009). Sun blade 6048 modular system service manual (Revision A). https://docs.oracle.com/cd/E19926-01/820-2863-13/820-2863-13.pdf
Venter, C., Davids, H., Kopp, A., & Backes, M. (2022). Modelling uncertainties in GeV - TeV flux predictions of Galactic globular clusters. PoS, ICRC2021, 927. https://doi.org/10.22323/1.395.0927
Vetter, J. S., de Supinski, B. R., Kissel, L., May, J., & Vaidya, S. (2005). Evaluating high-performance computers. Concurrency and Computation: Practice and Experience, 17(10), 1239-1270. https://doi.org/10.1002/cpe.892
Zhu, S. J., Murach, T., Ohm, S., Fuessling, M., Krack, F., Mosshammer, K., Lindemann, R., Holch, T. L., & de Naurois, M. (2021). The upgraded Data Acquisition System of the H.E.S.S. telescope array. PoS, ICRC2021, 759. https://doi.org/10.22323/1.395.0759
1 1 TFLOPS = 10^12 floating point operations per second (FLOPS)
2 uhpc.unam.na (Only accessible within the local UNAM network)