Journal of Contemporary Management

On-line version ISSN 1815-7440

JCMAN vol.12 n.1 Meyerton  2015

 

RESEARCH ARTICLES

 

Synchronisation of the process of quantitative and qualitative monitoring and evaluation of activities in public sector organisations

 

 

A Drotskie (I); B Okanga (II)

(I) Department of Business Management, University of Johannesburg. adrid@iburst.co.za
(II) Department of Business Management, University of Johannesburg. Okanga@smresearch.co.za

 

 


ABSTRACT

This article examines how the synchronisation of quantitative and qualitative M&E methods influences the effectiveness of activities' monitoring and evaluation, and thereby the successful implementation of different government projects.
Using a qualitative research method, with meta-synthesis as the technique of conceptual analysis, the critical analysis was accomplished in three main stages: (1) analysis of core M&E theories, (2) critical review of M&E practices in the South African public sector, and (3) comparison of the findings on core M&E theories with practices in the South African public sector.
Findings indicated that linking quantitative to qualitative M&E measures influences the effectiveness of monitoring and evaluation. However, no single theory or conceptual framework was found to elucidate the essence of linking quantitative to qualitative M&E measures. Such conceptual shortfalls were also found to be evident in the South African public sector, in which the evolution of monitoring and evaluation has been mainly quantitative.
The study fills this gap by postulating a strategic framework for enhancing the synchronisation of the processes of quantitative and qualitative monitoring and evaluation.

Key phrases: monitoring and evaluation; public sector; quantitative & qualitative M&E; synchronisation


 

 

1. INTRODUCTION

Monitoring is an ongoing process of assessing the extent to which project implementation is contributing towards the achievement of the desired results (Adato 2011:4). Evaluation is a more programmed periodic analysis of efficiency, impact, relevance and sustainability of the process of project implementation (Adato 2011:4). The extent to which a policy framework for government-wide monitoring and evaluation is able to influence effectiveness of monitoring and evaluation depends on whether it permits a more synchronised approach in the application of quantitative and qualitative M&E methods (Bamberger, Rao & Woolcock 2010:5; Plano & Creswell 2008:19).

Quantitative M&E methods deploy statistical processes that involve defining quantitative indicators and applying techniques such as surveys, KAP (knowledge, attitude and practices) surveys and the analysis of existing statistics (Adato 2011:4; Bamberger et al. 2010:5; Plano & Creswell 2008:19).

Qualitative M&E methods are non-statistical and rely on techniques like focus group discussions, interviews, performance management and benchmarking to elicit detailed narratives of participants' feelings, perceptions and experiences about the implementation of a particular government programme (Adato 2011:4; Bamberger et al. 2010:5; Plano & Creswell 2008:19).

A synchronised approach for using quantitative and qualitative M&E measures enhances the eliciting of statistical facts on the number of units achieved or not achieved, as well as the detailed exploration and identification of critical underlying inhibitors or influencers in the process of project implementation (Adato 2011:4). This influences the ability of public sector managers to apply accurate intervention measures to correct the identified deviations and ensure that the implementation of different government projects is successful (Adato 2011:4; Bamberger et al. 2010:5; Plano & Creswell 2008:19).

Unfortunately, empirical facts indicate that the interpretation of the outcome-based approach in the framework for South African government-wide monitoring and evaluation has been skewed towards the use of quantitative M&E measures rather than the application of a combination of quantitative and qualitative measures for monitoring and evaluation (Department of Performance Management & Evaluation - DPME 2012:19; Public Service Commission - PSC 2012:8).

 

2. M&E POLICY FRAMEWORKS IN THE SOUTH AFRICAN PUBLIC SECTOR

The DPME's (2007:6) founding policy framework on government-wide monitoring and evaluation prescribes that the main steps for monitoring and evaluation involve identification of the issue of concern, policy decisions, determining the outcomes to be achieved and measuring whether such outcomes are being achieved. Although the DPME's (2007:6) flowchart is a general framework which must still be interpreted at the provincial and municipal levels, it is vague on the essence of synchronising quantitative and qualitative M&E measures. This limits its positive effects on the effectiveness of monitoring and evaluation in South African public sector organisations.

DPME (2012:19) links the process for defining indicators to the national development plan and the provincial and municipal integrated development plans. However, it falls short of highlighting the specific quantitative and qualitative indicators and the techniques that can be used for monitoring and evaluating such outlined indicators. This ambiguity seems to have also affected the interpolation of quantitative and qualitative M&E measures at provincial and municipal levels (Kane & Trochim 2007:29).

Although the Department of Western Cape Provincial Government (2012:3) provides clear M&E guidelines on how indicators can be defined and linked to relevant techniques, its approach is also largely skewed towards the use of only quantitative M&E measures. The City of Johannesburg's (2012:37) M&E framework elucidates more clearly the key indicators that must be defined and from which objectives are derived, but it still fails to specify the quantitative and qualitative techniques that can be used in the monitoring and evaluation of such indicators.

The same shortfalls are also apparent in the Eastern Cape and Kwa-Zulu Natal Provincial Governments in which the process of M&E is still characterised by lack of a single M&E framework that synchronises quantitative and qualitative M&E measures (Eastern Cape Provincial Government 2007:10; Eastern Cape Provincial Government 2012:229; Kwa-Zulu Natal Provincial Government 2009:1). Such ambiguity undermines the overall effectiveness of activities' monitoring and evaluation at the provincial and municipal levels.

It is therefore against that backdrop that a meta-synthesis of relevant M&E theories is undertaken in this article to highlight how the synchronisation of the process of quantitative and qualitative monitoring and evaluation would influence effective activities' monitoring and evaluation, and the successful implementation of different government projects in the South African public sector organisations. In a bid to accomplish this objective, the entire process of theoretical evaluation in this conceptual article is guided by the fundamental reasoning in Figure 1.

 

 

3. THEORETICAL FRAMEWORK

It is argued in Figure 1 that an M&E policy framework that synchronises quantitative and qualitative methods for monitoring and evaluation predicts effectiveness of activities' monitoring and evaluation to impact positively on the successful implementation of different government projects.

Figure 1 highlights that embracement of a more synchronised approach for using quantitative and qualitative M&E measures is critical for avoiding misinterpretation at provincial and municipal levels that the concept of outcome-based monitoring and evaluation only implies the application of quantitative M&E measures.

In other words, the specific M&E challenges that motivate this research are as succinctly stated in the next section.

 

4. PROBLEM STATEMENT

Poor synchronisation between the quantitative and qualitative processes of monitoring and evaluation undermines the ability of South African public sector organisations to ensure that the process of activities' monitoring and evaluation influences the identification and elimination of all deviations, and thereby impacts positively on the successful implementation of different government projects (DPME 2012:19; PSC 2012:8).

 

5. PURPOSE OF THE RESEARCH

The main purpose of this research is to postulate a strategic framework that can be adopted to improve the extent to which the synchronisation of the processes of quantitative and qualitative monitoring and evaluation is able to impact positively on the effectiveness of activities' monitoring and evaluation. Subsequently, the improvement of the process for the implementation of different government projects in contemporary South African public sector organisations is also highlighted.

 

6. RESEARCH METHODOLOGY

In line with the motive of the study indicated above and Moore's (1899:59) founding theory on analytical philosophy (Cronin, Ryan & Coughlan 2008:38), this article uses conceptual analysis as the principal qualitative research technique to seek answers to two critical research questions:

Which M&E framework would influence effective synchronisation of quantitative and qualitative methods for monitoring and evaluation in the contemporary South African public sector organisations?

How would the use of such a framework improve activities' monitoring and evaluation so as to influence the successful implementation of different government projects by South African public sector organisations?

In a bid to seek appropriate responses to these two research questions, meta-synthesis was used as the technique for conceptual analysis, following three main steps: (1) analysis of core M&E theories, (2) critical review of M&E practices in the South African public sector, and (3) comparison of the findings on core M&E theories with the M&E practices in the South African public sector (Blanchette 2012:29).

This three-stage process enabled logical conclusions to be reached on the inhibitors that mar effective synchronisation of quantitative and qualitative methods for monitoring and evaluation in contemporary South African public sector organisations.

Such conclusions informed the decision on the strategic M&E framework in Figure 3 that must be adopted by the managers in the South African public sector organisations to improve synchronisation of quantitative and qualitative M&E methods. The detailed results of these analyses are as presented and discussed in the next section.

 

 

 

 

7. FINDINGS

Evaluation of the theoretical findings in this section is accomplished according to two main sub-sections that encompass the analysis of core theories on quantitative and qualitative M&E measures, and the assessment of M&E practices in South African public sector organisations.

7.1 Theories on quantitative and qualitative M&E methods

Figure 2 indicates that implicit consensus exists among different authors that the five key steps that define effective synchronisation of quantitative and qualitative methods for monitoring and evaluation in public sector organisations include (Bamberger et al. 2010:5; Garbarino & Holland 2009:7; Public Service Transformation Network 2014:11):

analysis and understanding of objectives and targets in the development plans;

outline of quantitative and qualitative indicators;

selection of a combination of quantitative and qualitative data collection methods;

outline of quantitative and qualitative techniques for data analysis and interpretation;

interpolation of quantitative findings with qualitative results to identify deviations and intervention measures that can be undertaken.

7.1.1 Understanding objectives and targets in the development plans

A development plan is a framework outlining critical activities that must be accomplished to facilitate the successful implementation of different government projects and programmes (Rugg 2010:19).

It is a prelude to effective monitoring and evaluation for the reason that it provides critical objectives, targets and processes that influence how activities associated with the implementation of different public sector programmes must be implemented (Rugg 2010:19). All these signify that understanding objectives and targets in the development plan is critical for the overall effectiveness of monitoring and evaluation.

It is from the objectives in the development plan that key quantitative and qualitative indicators are derived for guiding the process of activities' monitoring and evaluation (Bamberger et al. 2010:5). In the event of misinterpretation of key objectives and targets in the development plan or a project plan, the overall effectiveness of the indicators used in the process of monitoring and evaluation may also be affected (Bamberger et al. 2010:5).

In other words, analysis of the development plan and its critical objectives and targets must be undertaken with reflection on the expected key indicators that must be outlined (Patton 2011:43; Taryn et al. 2013:7).

7.1.2 Indicators

Indicators are qualitative and quantitative symbols used in monitoring and evaluation to enable public sector managers to reach conclusions on whether the implementation of a particular programme has been successful (Kusek & Rist 2013:12; Rugg 2010:19). It is from such conclusions that the intervention measures that can be undertaken are determined. Indicators can be input, process, output, outcome or impact indicators (Kusek & Rist 2013:12; Rugg 2010:19).

Input indicators are used for measuring the amount of resources used in programme implementation. Such inputs may include, among others, labour, equipment, financial resources, materials required for project implementation and physical facilities (Marais, Human & Botes 2008:376).

Process indicators are meant for evaluating the overall efficiency and effectiveness of the process of project implementation.

Output indicators measure the overall results of the effects of project implementation in relation to the overall levels of inputs into a particular project (Kent 2011:18). Output indicators supplement roles played by process indicators in the assessment of the overall efficiency and effectiveness of the process for project implementation (Picciotto 2011:165).

Outcome indicators assess the effects of project implementation on the improvement of areas such as the quality of services, and factors like conditions and standards of living of the population in the region where the project was implemented (Picciotto 2011:165).

Impact indicators are often classified by certain authors as outcome indicators. However, impact indicators are different from output and outcome indicators on the basis that impact indicators provide guidelines for monitoring and evaluation of long term effects associated with the implementation of a particular programme (Picciotto 2011:165).

As quantitative indicators are being set, it is also important that public sector managers take corresponding action by ensuring that qualitative indicators are outlined to facilitate the assessment of the what, why, when, who and how of the outcome of a particular indicator (Marais et al. 2008:376; Rugg 2010:19; Lavela & Galland 2014:28). Clearly outlined qualitative indicators enhance effective evaluation of the detailed aspects of quantitative indicators to influence the identification and elimination of all deviations (Kusek & Rist 2013:12). However, the theoretical illustration in Figure 2 suggests that all this must be followed by determining a combination of quantitative and qualitative M&E data collection methods that public sector M&E practitioners can use (Public Service Transformation Network 2014:11; Woolcock 2009:1).
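To make this pairing concrete, the short sketch below is a minimal illustration only; the class name, fields and figures are assumptions for demonstration, not drawn from any departmental framework. It shows one way a quantitative target and its accompanying qualitative probes could be recorded for a single indicator.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OutcomeIndicator:
    """One indicator pairing a quantitative target with qualitative probes (illustrative)."""
    name: str                     # e.g. "Babies immunised"
    unit: str                     # unit of measurement for the quantitative target
    target: float                 # quantitative target derived from the development plan
    achieved: float = 0.0         # value observed during monitoring
    qualitative_probes: List[str] = field(default_factory=list)  # why/when/who/how questions

    def deviation(self) -> float:
        """Shortfall (negative) or surplus (positive) against the quantitative target."""
        return self.achieved - self.target

# Hypothetical example of a health-sector outcome indicator
immunisation = OutcomeIndicator(
    name="Babies immunised",
    unit="percentage of target population",
    target=90.0,
    achieved=78.5,
    qualitative_probes=[
        "Why did some districts fall short of the target?",
        "Who was responsible for cold-chain logistics?",
        "How did clinic staffing levels affect coverage?",
    ],
)
print(immunisation.deviation())  # -11.5: the gap the qualitative probes should explain
```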

7.1.3 Data collection methods

Data collection methods connote the quantitative and qualitative M&E tools that are used for gathering primary quantitative and qualitative data (Rugg 2010:19). Data collection elicits relevant information for relevant analysis to be conducted to reach logical conclusions on whether the implementation of a particular government programme has been successful (Rugg 2010:19).

The commonly used quantitative methods for data collection include surveys, KAP (knowledge, attitude and practices) surveys (Rugg 2010:19), case studies, and the analysis and interpretation of existing statistics to reach relevant conclusions on inhibitors and intervention measures for enhancing programme implementation (Bamberger et al. 2010:5; Garbarino & Holland 2009:7; Vladut 2014:64). Quantitative data collection also involves actual auditing or the analysis of audit reports (Rugg 2010:19).

Qualitative M&E data collection methods encompass the use of focus group discussions, rapid appraisals, performance measurement, benchmarking, letters, citizens' reports and telephone hotlines (Berg 2007:16; Maxwell 2012:29; Woolcock 2009:1). Other qualitative M&E methods include documents' analysis and interpretation, case studies, interviews and online complaints' portals (Berg 2007:16; Maxwell 2012:29; Woolcock 2009:1).

Some public sector managers use only a few of either quantitative or qualitative data collection methods (Berg 2007:16). However, theories indicate that an integrated approach that facilitates application of a combination of qualitative and quantitative data collection methods enhances effectiveness of activities' monitoring and evaluation and the identification of all the inhibitors in the process of project implementation (Creswell 2009:49; Garbarino & Holland 2009:7). This also impacts positively on effective determining of the intervention measures that can be undertaken.

With relevant data and information obtained from different sources, the selection of techniques for data analysis and interpretation depends on the kind of data and information that managers have collected (Public Service Transformation Network 2014:17; Woolcock 2009:1).

7.1.4 Data analysis

Data analysis refers to the process of organising the collected data and conducting relevant assessments and critical evaluation to determine whether the process of the implementation of different government programmes has been successful (Corder & Foreman 2014:214).

Whether or not the implementation of such programmes has been successful, the process of data analysis also extends to analysis of what could have been the major influencers or inhibitors to provide public sector managers with critical information that can be used to inform intervention measures that can be undertaken (Corder & Foreman 2014:214).

Techniques for quantitative data analysis involve the use of either parametric or non-parametric tests (Bagdonavicius, Kruopis & Nikulin 2011:102; Corder & Foreman 2014:214). Parametric tests assume a normal distribution of the data, with test statistics judged against the two critical limits of -1.96 and +1.96 to determine whether a result is acceptable at the 5% significance level (Hollander, Wolfe & Chicken 2014:16). The commonly used parametric tests include the t-test, analysis of variance (ANOVA) and multivariate analysis encompassing confirmatory or exploratory factor analysis (Corder & Foreman 2014:214).

Parametric tests also involve the use of correlation analysis techniques such as Pearson's correlation coefficient (r), curvilinear correlation, outlier analysis and simple linear regression analysis (Corder & Foreman 2014:214). In contrast to parametric tests, which rely on distributional assumptions, non-parametric tests are not constrained by such parameters (Hollander et al. 2014). Non-parametric tests are often used for assessing a change from a fixed factor (Hollander et al. 2014). The commonly used techniques for non-parametric tests encompass the sign test and chi-square (χ2) analysis (Bagdonavicius et al. 2011:102).

Non-parametric tests have the advantage that the obtained results are concise and to the point, but some authors caution that parametric tests must be used more circumspectly (Morgan & Winship 2007:19; Vladut 2014:64). If the assumptions underlying parametric tests are not well founded, the negative implications are often reflected in wrong conclusions about the situation being monitored and evaluated (Vladut 2014:64; Morgan & Winship 2007:19). This explains why tailoring the use of quantitative measures with qualitative techniques is of significant importance for enhancing the veracity and validity of the results attained (Garbarino & Holland 2009:7; Public Service Transformation Network 2014:11).
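As a minimal illustration of the two families of tests discussed above, the sketch below runs a parametric t-test and a non-parametric chi-square test with standard scipy routines; the figures are hypothetical and the workflow is an assumption for demonstration, not part of any prescribed M&E toolkit.

```python
import numpy as np
from scipy import stats

# Hypothetical quantitative M&E data: service-delivery scores for two districts
district_a = np.array([3.1, 3.4, 2.9, 3.8, 3.2, 3.5, 3.0, 3.6])
district_b = np.array([2.6, 2.8, 3.0, 2.5, 2.9, 2.7, 3.1, 2.4])

# Parametric test: independent-samples t-test (assumes roughly normal data)
t_stat, t_p = stats.ttest_ind(district_a, district_b)
print(f"t = {t_stat:.2f}, p = {t_p:.3f}")  # p < 0.05 suggests a real difference in means

# Non-parametric test: chi-square goodness of fit on observed vs expected counts,
# e.g. housing units completed per quarter against an even quarterly target
observed = np.array([220, 180, 150, 250])
expected = np.array([200, 200, 200, 200])
chi2, chi_p = stats.chisquare(observed, expected)
print(f"chi2 = {chi2:.2f}, p = {chi_p:.3f}")  # p < 0.05 flags a deviation from the plan
```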

Qualitative data analysis involves repeated perusal of the collected data, the identification of the commonly occurring themes linked to the outlined indicators, and the assessment and identification of sub-themes that explain the why, when, who and how of input, process, outcome and impact indicators (Creswell 2009:49; Lavela & Galland 2014:28; Maxwell 2012:29). With all the main themes and sub-themes identified, the next step involves crafting a thematic network that provides accurate sources of successes or failures and the implications of such successes or failures for the implementation of government projects.
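A minimal sketch of the theme-identification step is given below, assuming transcripts have already been coded into short theme labels; the labels and snippets are hypothetical.

```python
from collections import Counter

# Hypothetical coded segments from focus-group and interview transcripts,
# each tagged with the theme it was coded under
coded_segments = [
    ("staff shortages", "Clinic nurses reported being unable to cover rural wards."),
    ("procurement delays", "Materials for the new classrooms arrived two months late."),
    ("staff shortages", "Only one official was available to verify housing beneficiaries."),
    ("community buy-in", "Ward committees felt excluded from project planning."),
    ("procurement delays", "Tender disputes stalled the water-reticulation contract."),
    ("staff shortages", "Vacant posts meant immunisation outreach visits were cancelled."),
]

# Tally how often each theme occurs; frequent themes become candidate
# explanations for the deviations seen in the quantitative indicators
theme_counts = Counter(theme for theme, _ in coded_segments)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} segment(s)")
```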

Although most public sector managers often reach decisions based on quantitative or qualitative results only, the collation of the results of the quantitative and qualitative processes of M&E is critical for public sector managers to reach relevant logical decisions on the actual challenges and the appropriate intervention measures that can be undertaken.

7.1.5 Collation of results

Collation or interpolation of quantitative and qualitative results is an important aspect of monitoring and evaluation. However, it is often the most ignored part of the process of accomplishing monitoring and evaluation (Kane & Trochim 2007:49; Patton 2008:3). The reason is that, as analysts hurry to provide reports, interpolation is often skipped (Kane & Trochim 2007:49; Patton 2008:3). This results in instances where only quantitative or only qualitative results are used.

Yet, it is apparent from theories that quantitative and qualitative M&E measures for monitoring and evaluation are linked to unique inherent weaknesses that can only be offset through interpolation (Kane & Trochim 2007:49; Patton 2008:3). Interpolation is the process of interpreting the quantitative results in the context of the available qualitative results or vice versa.

Quantitative M&E measures facilitate the eliciting of hard and rather rigid numerical facts that provide an accurate reflection of the nature of the problem and the intervention measures that can be undertaken. However, they largely miss the positive effects of in-depth qualitative assessment, which facilitates an understanding of the web of interrelations between the critical factors explaining failures or successes in project implementation (Adato 2011:4; Plano & Creswell 2008:19).

In an M&E approach that enhances the interplay between quantitative and qualitative measures, such quantitative M&E limitations are often offset by the values of qualitative M&E methods, which lie in facilitating in-depth analysis and the eliciting of flexible non-numerical information on the successes and failures of programme implementation (Adato 2011:4; Maluccio et al. 2010:26).

Interpolation is therefore a critical aspect of public sector M&E because its application enables managers and decision makers to gain a wider picture and understanding of the scenario before assessing the remedial measures or actions that must be undertaken (Cook, Scriven, Coryn & Evergreen 2010:105). It is not axiomatic that managers and executives who do not interpolate results from these two perspectives will make wrong decisions, but over time it has been established that interpolation of the available qualitative data with quantitative results enhances the accuracy of the intervention measures that can be applied to resolve the identified challenges (Cook et al. 2010:105).

7.2 M&E practices in the South African public sector

The outcome-based M&E approach espoused by the DPME (2012:6) and PSC (2012:8) implies that the approach for activities' monitoring and evaluation in the entire South African public sector organisations must be quantitative as well as qualitative. However, from 2009 onwards, it is more apparent that the approach undertaken in the monitoring and evaluation of activities in most of the South African public sector organisations has been mainly quantitative (Mogaswa & Moodley 2012:19).

In the rating scale prescribed by the PSC (2012:8), which is more quantitative than qualitative in nature, it is posited that for performance management in the public service to be effectively accomplished, it must be based on a rating scale comprising five performance bands. In performance band 1, a score of 0.25-1.00 in the range of 0%-20% implies that the department has not achieved good performance against all the standards. Under performance band 2, a score of 1.25-2.00 in the range of 21%-40% suggests that the departmental performance is quite poor against most of the standards.

PSC (2012:8) notes that adequate performance falls under performance band 3, which is achieved if the score is 2.25-3.00 in the range of 41%-60%. Under performance band 4, PSC (2012:8) notes that performance is treated as good against most of the standards if the obtained score is 3.25-4.00 in the range of 61%-80%. Finally, PSC (2012:8) highlights that excellent performance is attained under performance band 5 when the department's performance is considered excellent against all the standards, scoring 4.25-5.00 in the range of 81%-100%. The embrace of an M&E approach that is largely quantitative has not only led to the skewed use of quantitative indicators, but also to a stronger preponderance among M&E managers in the South African public sector to use M&E techniques that are largely quantitative (Marais et al. 2008:376).
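Purely for illustration, the prescribed five-band scale can be expressed as a simple lookup; the sketch below restates the band boundaries described above, while the function name and its handling of in-between scores are assumptions, not part of the PSC framework.

```python
from typing import Tuple

def performance_band(score: float) -> Tuple[int, str]:
    """Map a PSC-style rating score (0.25-5.00) to its performance band (illustrative)."""
    bands = [
        (1.00, 1, "not achieved good performance against all the standards (0%-20%)"),
        (2.00, 2, "quite poor against most of the standards (21%-40%)"),
        (3.00, 3, "adequate performance (41%-60%)"),
        (4.00, 4, "good against most of the standards (61%-80%)"),
        (5.00, 5, "excellent against all the standards (81%-100%)"),
    ]
    for upper, band, description in bands:
        if score <= upper:
            return band, description
    raise ValueError("score falls outside the 0.25-5.00 rating scale")

print(performance_band(2.75))  # (3, 'adequate performance (41%-60%)')
```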

7.2.1 Skewed outline of quantitative indicators

As compared to qualitative indicators, some of the quantitative indicators that the Department of Health uses to measure its achievements include, among others, the percentage reduction in mortality rates, the percentage increase or reduction in HIV infections, the number of new clinics, the number of new patients put on ART and the number of babies immunised (Department of Health 2013:19). Just as in the Department of Health, analysis of the annual performance plan of the Department of Basic Education also reveals stronger use of quantitative indicators as compared to qualitative indicators (Department of Basic Education 2014:44). This is reflected in the use of indicators such as enrolment rates, the rate of early childhood development, pass rates, the rate of adult literacy, and the number of new school infrastructure facilities built as measures for gaining insight into the overall progress of the different activities being accomplished by the Department of Basic Education.

Without the accompanying use of qualitative measures, such figures may not necessarily indicate the factors explaining progress or failures in the process of the implementation of certain public education programmes (Department of Basic Education 2014:44). This skewed use of quantitative indicators also influences the skewed application of quantitative techniques, for the reason that indicators define how monitoring and evaluation are accomplished (Marais et al. 2008:376). If indicators are qualitative, M&E techniques may also tend to be qualitative, and vice versa.

7.2.2 Skewed use of quantitative techniques

In the South African Department of Health, the analysis of different documents implies that, with indicators being largely quantitative, the common method of data collection involves the use of surveys to gain insight into the overall process for the implementation of different healthcare programmes (Department of Health 2013:19). The same approach is also applied in the Department of Housing and Human Settlement, in which findings indicated that there is often a preponderance to use surveys as the principal data collection technique (Department of Human Settlements 2014:56; Gauteng Department of Human Settlements 2014:17).

It is also highly apparent that monitoring and evaluation in the South African Department of Basic Education (2014:44) has usually been accomplished using quantitative measures. In a bid to circumvent the often limited meaning attached to quantitative results, the analysis indicates that some of the specialists in government departments tend to use certain parametric and non-parametric tests as measures for discerning causal relationships between variables (National School of Government - NSG 2014:89).

However, it is still evident in M&E theories that an M&E approach which is largely quantitative limits effective analysis of how and why problems have occurred, to the detriment of the effectiveness of the intervention measures which could have been put in place (Public Service Transformation Network 2014:17; Woolcock 2009:1).

 

8. DISCUSSION

The overall findings of this article support the underpinning reasoning in Figure 1 that the adoption of an M&E policy framework that enhances synchronisation between quantitative and qualitative M&E measures influences the effectiveness of monitoring and evaluation. It is also confirmed in the study that, subsequently, this enhances the identification and elimination of all deviations so as to improve the process for the implementation of different government projects. However, the findings also imply that practices in the South African public sector signify that misinterpretation of the notion of the outcome-based approach to monitoring and evaluation is undermining the extent to which government departments are able to adopt an approach that favours synchronisation between quantitative and qualitative M&E measures.

Analysis of documents reflecting M&E practices in DPME (2012:5), Department of Human Settlements (2014:56), Department of Basic Education (2014:44), and Department of Health (2013:19) signifies that to most of the managers and executives in the public sector, outcome based approach to monitoring and evaluation seems to have been constrained to quantitative monitoring and evaluation.

This practice contradicts the prescriptions in core theories on M&E that, in public sector monitoring and evaluation, expected outcomes include both quantitative and qualitative outcomes (Public Service Transformation Network 2014:17; Woolcock 2009:1). This misinterpretation has caused South African public sector managers to rely significantly on quantitative M&E measures, to the detriment of the positive in-depth results which are often obtained through qualitative M&E (Health Systems Trust 2014:1).

The findings also imply that the approach that emphasises the skewed use of quantitative measures affects the indicators which are put in place. Since M&E is skewed towards the use of quantitative measures, the outlined indicators and the techniques applied may also tend to be mainly quantitative. This limits the extent to which the detailed information usually elicited through qualitative M&E measures can be used to enhance a detailed understanding of quantitative results and, subsequently, the effective identification of the challenges marring the effectiveness of project implementation. With a poor understanding of the quantitative and qualitative facts, it often becomes difficult for public sector managers to determine the appropriate intervention measures that can be undertaken to ensure the successful implementation of different government projects.

 

9. MANAGERIAL IMPLICATIONS

If public sector managers are to reverse the current situation in which significant reliance is placed on quantitative measures to the detriment of effective monitoring and evaluation, then, the narrow interpretation of the concept of outcome-based monitoring and evaluation must be reviewed.

While drawing from the view in Figure 1 that the adoption of an M&E policy framework that synchronises quantitative and qualitative M&E methods enhances the effectiveness of activities' monitoring and evaluation, it is argued in Figure 3 that such a review will involve the application of the following seven critical steps:

Step 1: Embracement of outcome-based quantitative and qualitative M&E

M&E managers in the South African public sector will have to ensure that outcome-based monitoring and evaluation is construed to imply the achievement of quantitative as well as qualitative outcomes.

As it is indicated in Figure 3, the overall process will commence from thorough analysis and understanding of key objectives, goals and targets in the planning instruments such as the national development plan, the integrated provincial development plan and the integrated municipal development plan. This renders it easier for M&E managers to internalise and determine the quantitative and qualitative indicators that must be put in place.

Step 2: Outline of quantitative and qualitative indicators

Core quantitative indicators can be set by stipulating percentages or the expected units of achievement on input, process, output, impact and outcome indicators.

Qualitative indicators must also be outlined to explain why, when, who and how of input, process, outcome and impact indicators. Outlining quantitative and qualitative indicators must be followed by the prescription of the quantitative and the qualitative data collection methods that can be used.

Step 3: Determine quantitative and qualitative M&E data collection methods

To facilitate synchronisation of quantitative and qualitative M&E methods, Figure 3 indicates that the quantitative methods that public sector managers can use include surveys, KAP (knowledge, attitude and practices) surveys, case studies, the analysis and interpretation of existing statistics to reach relevant conclusions, and auditing. The accompanying qualitative methods encompass the use of focus group discussions, rapid appraisals, performance measurement, benchmarking, letters, citizens' reports, telephone hotlines, documents' analysis and interpretation, case studies, interviews, and online complaints' portals.

Step 4: Integrate key success factors for M&E in the public sector

Before actual data collection from the field or documents can commence, training and development programmes must be conducted to show prospective evaluators and monitors how the different available quantitative and qualitative techniques can be used effectively. Management must also consistently ensure that sufficient funds are allocated in their budgets for the accomplishment of activities related to monitoring and evaluation. Such initiatives will enhance the extent to which M&E can be effectively accomplished. With M&E staff equipped with relevant skills for data collection and analysis, the actual data collection can commence.

Step 5: Data collection

The common sources of primary data include the larger population, drawn either from throughout the country or from selected geographical locations. The secondary sources of data may include the analysis of existing documents such as audit reports, annual reports, findings of previously conducted studies and the results of previous monitoring and evaluation exercises. The completion of the process of data collection marks the beginning of the process of data analysis.

Step 6: Data analysis and interpretation

For quantitative data, the techniques that can be used include planning the overall process of data analysis in the context of the outlined indicators; the calculation of percentages, means and standard deviations; presentation in tables, charts or graphs; and the use of cross-tabulation in making the necessary interpretations. Parametric tests that public sector managers can use encompass the t-test, analysis of variance (ANOVA), multivariate analysis and correlation analysis. Non-parametric tests that can also be applied in quantitative data analysis and interpretation encompass the sign test and chi-square (χ2) analysis. For qualitative data analysis, evaluators and monitors can use key logical steps that include repeated perusal of the collected data, the identification of the commonly occurring themes linked to the outlined indicators, the identification of sub-themes that explain the why, when, who and how of input, process, outcome and impact indicators, and the crafting of a thematic framework that explains the situation being monitored and evaluated.
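A brief sketch of the quantitative side of this step is given below, using pandas on hypothetical monitoring records; the column names and figures are assumptions for demonstration only. It computes the descriptive statistics, percentages and cross-tabulation referred to above.

```python
import pandas as pd

# Hypothetical monitoring records: one row per project site
records = pd.DataFrame({
    "province": ["Gauteng", "Gauteng", "Eastern Cape",
                 "Eastern Cape", "KwaZulu-Natal", "KwaZulu-Natal"],
    "target_met": ["yes", "no", "no", "no", "yes", "yes"],
    "units_delivered": [120, 85, 60, 72, 110, 130],
})

# Descriptive statistics for the outlined indicator (units delivered)
print(records["units_delivered"].agg(["mean", "std"]))

# Percentage of sites meeting the target
print((records["target_met"] == "yes").mean() * 100, "% of sites met the target")

# Cross-tabulation of province against target achievement, used in interpretation
print(pd.crosstab(records["province"], records["target_met"]))
```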

Step 7: Comparison of quantitative and qualitative results

Comparison and corroboration between qualitative findings and quantitative results must be undertaken to facilitate the identification of all deviations and the underlying explaining factors.

After all the results of quantitative M&E are interpreted and compared with the findings of qualitative M&E to enhance the detailed understanding of the facts of the situation being monitored and evaluated, public sector managers can assess whether the implementation of the project has been successful or not. Even in instances where the implementation of the project is found to be going on as planned, it is still critical that public sector managers must assess the relevant improvement measures that can still be undertaken.

 

10. CONCLUSION

A major challenge identified during this research is a conceptual deficiency anchored in the lack of an appropriate strategic framework for enhancing the synchronisation of quantitative and qualitative monitoring and evaluation in contemporary South African public sector organisations.

By postulating the strategic framework in Figure 3, this conceptual paper remedies such a deficiency. This signifies that if the strategic framework in Figure 3 is adopted, the executives and managers in the South African public sector will be able to eliminate some of the limitations that have been undermining the effectiveness of their M&E frameworks and improve the performance of their organisations.

However, it is critical to note that, as much as the strategic framework in Figure 3 is posited to improve the efficacy of the process of monitoring and evaluation, its direct positive effects on the improvement of organisational performance have not been tested. It is on that basis that it is suggested that further research should examine how the use of the strategic framework of quantitative and qualitative M&E measures in Figure 3 would influence the improvement in the performance of contemporary public sector organisations.

 

REFERENCES

ADATO M. 2011. Combining quantitative and qualitative methods for programme monitoring and evaluation: why are mixed-method designs best. Special Series No. 9. Geneva, CH: The World Bank.

AMBE IM & BADENHORST WJ. 2012. Procurement challenges in the South African public sector. Journal of Transport and Supply Chain Management 1(1):242-261.

AMIRKHANYAN AA. 2010. Monitoring across sectors: examining the effect of non-profit and for-profit contractor ownership on performance monitoring in state and local contracts. Public Administration Review 1(1):742-755.

BAGDONAVICIUS V, KRUOPIS J & NIKULIN MS. 2011. Non-parametric and parametric tests for complete data. London, UK: Wiley.

BAMBERGER M, RAO V & WOOLCOCK M. 2010. Using mixed methods in monitoring and evaluation: experience from international development. Policy Research Working Paper 5245. Geneva, CH: The World Bank Development Research Group. Pages 1-28. [Internet: www.openknowledge.worldbank.org; downloaded on 2014-12-16.]

BLANCHETTE P. 2012. Frege's concept of logic. New York, NY: Oxford University Press.

BOGHOSSIAN P. 2011. Williamson on the a priori and the analytic. Philosophy and Phenomenological Research 82(1):488-497.

BERG BL. 2007. Qualitative research methods for the social sciences. 6th ed. Boston, MA: Allyn & Bacon.

CAIDEN GE & CAIDEN NJ. 2004. Measuring performance in public sector programmes. Public Administration and Public Policy 1(1):1-29.

CITY OF JOHANNESBURG. 2012. Annexure 3: City of Johannesburg's monitoring and evaluation framework. [Internet: www.joburg.org.za/images/stories/2013/June/monitoringandevaluationframework; downloaded on 2015-01-05.]

COOK TD, SCRIVEN M, CORYN CL & EVERGREEN SD. 2010. Contemporary thinking about causation in evaluation. American Journal of Evaluation 31(1):105-117.

CORDER GW & FOREMAN DI. 2014. Non-parametric and parametric statistics: a step-by-step approach. London, UK: Wiley.

CRESWELL JW. 2009. Research design: qualitative, quantitative and mixed methods approaches. Thousand Oaks, CA: Sage.

CRONIN P, RYAN F & COUGHLAN M. 2008. Undertaking a literature review: a step-by-step approach. British Journal of Nursing 17(1):38-43.

DEPARTMENT OF HEALTH. 2013. Annual report 2012/2013. Pretoria: Republic of South Africa. [Internet: www.africacheck.org; downloaded on 2015-01-09.]

DEPARTMENT OF PERFORMANCE MANAGEMENT AND EVALUATION. 2007. Policy framework for government-wide monitoring and evaluation. [Internet: www.thepresidency.gov.za; downloaded on 2015-01-09.]

DEPARTMENT OF PERFORMANCE MANAGEMENT AND EVALUATION. 2011. National evaluation policy framework. Pretoria: The Presidency, Republic of South Africa.

DEPARTMENT OF PERFORMANCE MANAGEMENT AND EVALUATION. 2012. Development indicators. [Internet: www.thepresidency.gov.za; downloaded on 2015-01-09.]

DEPARTMENT OF BASIC EDUCATION. 2014. Annual performance plan 2013-2014. Pretoria: Department of Basic Education. [Internet: www.education.gov.za; downloaded on 2015-01-09.]

DEPARTMENT OF WESTERN CAPE PROVINCIAL GOVERNMENT. 2010. Monitoring and results framework for the 6 month report card. Cape Town: Western Cape Provincial Government. [Internet: www.westerncape.gov.za; downloaded on 2015-01-09.]

DEPARTMENT OF HUMAN SETTLEMENTS. 2014. Annual report for the year ended 31 March 2014. Pretoria: Government Printer. [Internet: www.gov.za; downloaded on 2015-01-05.]

EASTERN CAPE PROVINCIAL GOVERNMENT. 2012. Eastern Cape Provincial strategic plan 2012-2016. Port Elizabeth: Eastern Cape Provincial Government.

EASTERN CAPE PROVINCIAL GOVERNMENT. 2007. Government-wide monitoring and evaluation framework: getting from policy vision to operational reality. Port Elizabeth: Eastern Cape Provincial Government.

EDMUNDS R & MARCHANT T. 2008. Official statistics and monitoring and evaluation systems in developing countries: friends or foes? Paris, FR: The World Bank Development Data Group.

EUROPEAN UNION. 2012. Introduction to monitoring and evaluation using the logical framework approach. Brussels: European Union.

GARBARINO S & HOLLAND J. 2009. Quantitative and qualitative methods in impact evaluation and measuring results. Research paper. London, UK: Governance and Social Development Resource Centre. Pages 7-18. [Internet: www.gsdrc.org; downloaded on 2014-12-17.]

GAUTENG DEPARTMENT OF HUMAN SETTLEMENTS. 2014. Strategic plan 2014/15-2018/19. Gauteng Provincial Government. Pretoria: Government Printer. [Internet: www.gdhs.gpg.gov.za/Policies; downloaded on 2015-01-05.]

HEALTH SYSTEMS TRUST. 2014. Key health indicators. Pretoria: Health Systems Trust. [Internet: www.hst.org.za; downloaded on 2015-01-09.]

HOLLANDER M, WOLFE DA & CHICKEN E. 2014. Non-parametric statistical methods. London, UK: Wiley.

KANE M & TROCHIM W. 2007. Concept mapping for planning and evaluation. Thousand Oaks, CA: Sage.

KENT MM. 2011. Monitoring and evaluation tool kit: HIV, tuberculosis, malaria, and health and community system strengthening. Geneva, CH: The Global Fund.

KUSEK JZ & RIST RC. 2013. Ten steps to a results-based monitoring and evaluation system: a handbook for development practitioners. The World Bank. [Internet: www.performance.gov.in/sites/all/document; downloaded on 2014-12-09.]

KWA-ZULU NATAL PROVINCIAL GOVERNMENT. 2009. Improving public sector performance through monitoring & evaluation. Pietermaritzburg: Kwa-Zulu Natal Provincial Government.

LAVELA SL & GALLAND AS. 2014. Evaluation and measurement of patient experience. Journal of Patient Experience (PXJ) 1(1):28-36.

MALUCCIO JM, ADATO M & SKOUFIAS E. 2010. Combining quantitative and qualitative research methods for the evaluation of conditional cash transfer programs. Baltimore, MD: Johns Hopkins University Press.

MARAIS L, HUMAN E & BOTES L. 2008. Measuring what? The utilisation of development indicators in the integrated development process. Journal of Public Administration 43(3):376-400.

MAXWELL JA. 2012. Qualitative research design: an interactive approach. Thousand Oaks, CA: Sage.

MOGASWA E & MOODLEY E. 2012. The relationship between planning and budgeting and monitoring and evaluation in the public sector. In: Public Service Commission, Evolution of monitoring and evaluation in the South African Public Service. Official Magazine of the Public Service Commission. Pretoria: Public Service Commission. [Internet: www.psc.gov.za; downloaded on 2015-01-09.]

MOORE GE. 1899. The nature of judgment. (Reprinted in 1986.) Philadelphia, PA: Temple University Press. pp. 59-80.

MORGAN S & WINSHIP C. 2007. Counterfactuals and causal inference: methods and principles for social research. New York, NY: Cambridge University Press.

PARLIAMENTARY MONITORING GROUP. 2014. Deliberation of human settlements strategic plan: committee researcher's briefing. [Internet: www.pmg.org.za/report; downloaded on 2015-01-09.]

PATTON MQ. 2011. Developmental evaluation: applying complexity to enhance innovation and use. New York, NY: Guilford.

PATTON MQ. 2008. Utilisation-focused evaluation. Thousand Oaks, CA: Sage.

PICCIOTTO R. 2011. The logic of evaluation professionalism. Evaluation 17(2):165-180.

PLANO CVL & CRESWELL JW. 2008. The mixed-methods reader. Thousand Oaks, CA: Sage.

PSC see PUBLIC SERVICE COMMISSION

PUBLIC SERVICE COMMISSION. 2012. Evolution of monitoring and evaluation in the South African Public Service. Official Magazine of the Public Service Commission. Pretoria: Public Service Commission. [Internet: www.psc.gov.za; downloaded on 2015-01-09.]

PUBLIC SERVICE TRANSFORMATION NETWORK. 2014. Public service transformation: introductory guide to evaluation. London, UK: Public Service Transformation Network. [Internet: www.mycommunityrights.org.uk/wp-content; downloaded on 2015-01-10.]

RUGG D. 2010. An introduction to indicators. Geneva, CH: UNAIDS.

TARYN A, BAMBERGER M, PUCILOWSKI DGM & DUTHIE M. 2013. Evaluation: some tools, methods and approaches. Washington, DC: US Department of State. [Internet: www.socialimpact.com/resource-center/downloads/evaluation-toolkit.pdf; downloaded on 2015-01-05.]

VLADUT SL. 2014. Risk management and evaluation and qualitative methods within the project. ECOFORUM 3(4):60-67.

WOOLCOCK M. 2009. Toward a plurality of methods in project evaluation: a contextualised approach to understanding impact trajectories and efficacy. Journal of Development Effectiveness 1(1):1-4.

 

 
