
built into information gathering, decision making, and implementation. Similarly, guidelines should include a brief explanation of the key concept of institutional economics: that behavior is governed by the (formal and informal) norms and rules prevailing in an organization and can be changed only by altering those rules, especially those concerning material and non-material individual incentives.

The intention is not to create ersatz institutional specialists, but simply to make the assessment teams aware of the systemic factors affecting efficient and effective public expenditure management, and hence know if and when to ask for help and advice. To the extent possible, assessments should draw on existing information from sources such as anticorruption surveys and the Bank's Institutional and Governance Reviews (IGRs) that have been carried out in some countries. There is no reason for every instrument to have the same approach or extent of coverage in this area. Indeed, it is far better for instruments to remain focused on their key objectives (as with CPARs and Fiscal ROSCs) than to attempt superficial and probably misleading institutional analyses that would best be carried out by other means. If it is decided that institutional or governance problems are central to the assessment, the relevant expertise should be added to the team. Moreover, though institutional and governance considerations are relevant to every assessment and instrument, they are best handled by PERs because of their broad scope and their links to the macroeconomic environment and to public administration effectiveness.

But putting the onus on PERs is not an entirely satisfactory solution. Stronger efforts must be made to analyze the political and governance underpinnings of the budget process, and doing so requires a good understanding of the political economy of the countries concerned. It may be difficult to obtain such an understanding by inserting governance components into PERs, which are already overloaded. A more fundamental, thorough analysis is required. Resources and time permitting, IGRs could be used to conduct such analysis, with the findings then incorporated into other instruments. Politics and governance arrangements usually do not change much over the medium term, so such reviews would only need to be done every five years or so.

In response to concerns about overloading missions and overburdening governments, these reviews could be combined with PERs (as with Turkey's recent Public Expenditure and Institutional Review and a similar review in Bolivia). Moreover, successive IGRs would not have to be as extensive as the first. To support this development, consideration should be given to

developing guidelines for IGRs that identify the main concerns of the budget cycle and propose remedies to address them. Doing so would establish a closer link between IGRs and assessments of public expenditure management. If this approach were taken, it would also be critical to form the right team of specialists for IGRs, including political scientists (or political economists) with knowledge of the countries reviewed.

FOLLOW-UP AND PERFORMANCE MONITORING

As noted, the recommendations and action plans in assessment reports are often weak. If assessments gave greater emphasis to institutional and governance analysis, in the ways suggested above, action plans would have a stronger basis in reality and be more valuable in promoting reform and building capacity. In addition, many countries pay insufficient attention to implementing the recommendations made in assessments. Thus experiences with joint donor reviews in countries such as Cambodia, Mozambique, and Tanzania, linked to the budget cycle or the Poverty Reduction Strategy Paper process, could be a useful model for other countries.

A related issue involves the need for assessments to develop a framework for setting benchmarks, measuring performance, and monitoring improvements in public expenditure management over time. This gap has partly been filled by HIPC AAPs, which use 15 indicators to evaluate public expenditure management. (DFID is piloting a similar framework to evaluate fiduciary risk.) Further work is being done by the Bank under the Governance Operations Progress Indicators (GOPIs) program to develop, among other things, public expenditure management indicators and test them in selected countries. In addition, the HIPC AAP indicators are being updated by the Bank and IMF with PEFA support, and new indicators (for example, on procurement) are being introduced for the round of HIPC assessments in fiscal year 2003. The OECD's Development Assistance Committee is interested in following up on its good practice paper on conducting diagnostic assessments and measuring performance (see DAC 2003 and annex 2). Among the questions that need to be addressed in such work are:

What are the primary objectives in measuring the performance of public expenditure management? Risk assessment, long-term development requirements, or both?

What conceptual framework should be used to capture the required dimensions of public expenditure management performance? Should political economy factors and issues such as corruption be explicitly included, or are measures of systems, institutions, and budget outcomes adequate?

Should there be a standard set of indicators that can be applied to many countries, or should the aim be to develop a larger menu of indicators that can be drawn on by governments and their partners in developing national frameworks for measuring performance?

International standards and codes exist for issues such as accounting (IFAC), external and internal audit (IIA and INTOSAI), fiscal classification (IMF), and fiscal transparency (IMF). Should international standards be developed for other issues, such as public procurement, and if so, what bodies could be given responsibility for taking forward such work?

How should a widely accepted set of performance indicators be developed? And how should the information be collected (for example, from World Bank and IMF assessments and EC audits and compliance tests)?

WHO ASSESSES THE ASSESSORS?

In the end, the teams that conduct assessments and ensure their quality are more important than the content of any guideline. The research and interviews conducted for this study suggest that assessment teams do not always have the right skills, or sufficient restraint to focus on the issues most relevant to the instrument in question. This problem seems to be most severe for PERs, CFAAs, and Fiscal ROSCs.

Some PER teams have engaged recipient governments in discussions about complex accounting or performance budgeting issues without any team member having significant international experience in these areas or an understanding of the technical issues involved. In addition, there have been several instances of recommendations that were either wrong or advanced without a sense of proper sequencing and practical requirements. If such recommendations are adopted, the damage can be, and sometimes has been, considerable.

Similarly, some recent CFAA teams have ranged far from their core competencies, especially into budget preparation and expenditure pro-
