Monitoring is the regular observation and recording of activities taking place in a project or programme. It is a process of routinely gathering information on all aspects of the project. To monitor is to check on how project activities are progressing: it is systematic and purposeful observation. Monitoring also involves giving feedback about the progress of the project to the donors, implementers and beneficiaries of the project. Reporting enables the gathered information to be used in making decisions for improving project performance.
Monitoring is the systematic collection and analysis of information as a project progresses. It is aimed at improving the efficiency and effectiveness of a project or organisation. It is based on targets set and activities planned during the planning phases of work. It helps to keep the work on track, and can let management know when things are going wrong. If done properly, it is an invaluable tool for good management, and it provides a useful base for evaluation.
It enables you to determine whether the resources you have available are sufficient and are being well used, whether the capacity you have is sufficient and appropriate, and whether you are doing what you planned to do.
Purpose of Monitoring: Monitoring is very important in project planning and implementation. It is like watching where you are going while riding a bicycle; you can adjust as you go along and ensure that you are on the right track. Monitoring provides information that will be useful in:
• Analyzing the situation in the community and its project;
• Determining whether the inputs in the project are well utilized;
• Identifying problems facing the community or project and finding solutions;
• Ensuring all activities are carried out properly by the right people and in time;
• Using lessons from one project experience on to another; and
• Determining whether the way the project was planned is the most appropriate way of solving the problem at hand.
Planning, Monitoring and Controlling Cycle: (diagram omitted)
Importance of Monitoring: Monitoring is important because:
• it provides the only consolidated source of information showcasing project progress;
• it allows actors to learn from each other's experiences, building on expertise and knowledge;
• it often generates (written) reports that contribute to transparency and accountability, and allows for lessons to be shared more easily;
• it reveals mistakes and offers paths for learning and improvements;
• it provides a basis for questioning and testing assumptions;
• it provides a means for agencies seeking to learn from their experiences and to incorporate them into policy and practice;
• it provides a way to assess the crucial link between implementers and beneficiaries on the ground and decision-makers;
• it adds to the retention and development of institutional memory;
• it provides a more robust basis for raising funds and influencing policy.
WHY DO MONITORING? Monitoring enables you to check the "bottom line" (see Glossary of Terms) of development work: not "are we making a profit?" but "are we making a difference?" Through monitoring and evaluation, you can:
_ Review progress;
_ Identify problems in planning and/or implementation;
_ Make adjustments so that you are more likely to "make a difference".
In many organisations, "monitoring and evaluation" is seen as a donor requirement rather than a management tool. Donors are certainly entitled to know whether their money is being properly spent, and whether it is being well spent. But the primary (most important) use of monitoring and evaluation should be for the organisation or project itself to see how it is doing against objectives, whether it is having an impact, whether it is working efficiently, and to learn how to do it better. Plans are essential, but they are not set in concrete (totally fixed). If they are not working, or if the circumstances change, then plans need to change too.
Monitoring and evaluation are both tools which help a project or organisation know when plans are not working, and when circumstances have changed. They give management the information it needs to make decisions about the project or organisation, about changes that are necessary in strategy or plans. Through this, the constants remain the pillars of the strategic framework: the problem analysis, the vision, and the values of the project or organisation. Everything else is negotiable. (See also the toolkit on strategic planning) Getting something wrong is not a crime. Failing to learn from past mistakes because you are not monitoring and evaluating, is.
The effect of monitoring and evaluation can be seen in the following cycle. Note that you will monitor and adjust several times before you are ready to evaluate and replan. Monitoring involves:
_ Establishing indicators (see Glossary of Terms) of efficiency, effectiveness and impact;
_ Setting up systems to collect information relating to these indicators;
_ Collecting and recording the information;
_ Analysing the information;
_ Using the information to inform day-to-day management.
Monitoring is an internal function in any project or organisation.
WHAT DO WE WANT TO KNOW? What we want to know is linked to what we think is important. In development work, what we think is important is linked to our values.
Most work in civil society organisations is underpinned by a value framework. It is this framework that determines the standards of acceptability in the work we do. The central values on which most development work is built are:
_ Serving the disadvantaged;
_ Empowering the disadvantaged;
_ Changing society, not just helping individuals;
_ Sustainability;
_ Efficient use of resources.
So, the first thing we need to know is: Is what we are doing and how we are doing it meeting the requirements of these values? In order to answer this question, our monitoring and evaluation system must give us information about:
_ Who is benefiting from what we do? How much are they benefiting? Are beneficiaries passive recipients or does the process enable them to have some control over their lives?
_ Are there lessons in what we are doing that have a broader impact than just what is happening on our project?
_ Can what we are doing be sustained in some way for the long-term, or will the impact of our work cease when we leave?
_ Are we getting optimum outputs for the least possible amount of inputs?
MONITORING When you design a monitoring system, you are taking a formative viewpoint and establishing a system that will provide useful information on an ongoing basis so that you can improve what you do and how you do it. On the next page, you will find a suggested process for designing a monitoring system.
For a case study of how an organisation went about designing a monitoring system, go to the section with examples, and the example given of designing a monitoring system.
DESIGNING A MONITORING SYSTEM Below is a step-by-step process you could use in order to design a monitoring system for your organisation or project.
Step 1: At a workshop with appropriate staff and/or volunteers, and run by you or a consultant:
_ Introduce the concepts of efficiency, effectiveness and impact (see Glossary of Terms).
_ Explain that a monitoring system needs to cover all three.
_ Generate a list of indicators for each of the three aspects.
_ Clarify what variables (see Glossary of Terms) need to be linked. So, for example, do you want to be able to link the age of a teacher with his/her qualifications in order to answer the question: Are older teachers more or less likely to have higher qualifications?
_ Clarify what information the project or organisation is already collecting.
Step 2: Turn the input from the workshop into a brief for the questions your monitoring system must be able to answer. Depending on how complex your requirements are, and what your capacity is, you may decide to go for a computerised data base or a manual one.
If you want to be able to link many variables across many cases (e.g. participants, schools, parent involvement, resources, urban/rural, etc.), you may need to go the computer route. If you have a few variables, you can probably do it manually. The important thing is to begin by knowing what variables you are interested in and to keep data on these variables. Linking and analysis can take place later. (These concepts are complicated. It will help you to read the case study in the examples section of the toolkit.) From the workshop you will know what you want to monitor. You will have the indicators of efficiency, effectiveness and impact that have been prioritised.
You will then choose the variables that will help you answer the questions you think are important. So, for example, you might have an indicator of impact which is that "safer sex options are chosen" as an indicator that "young people are now making informed and mature lifestyle choices". The variables that might affect the indicator include:
_ Age
_ Gender
_ Religion
_ Urban/rural
_ Economic category
_ Family environment
_ Length of exposure to your project's initiative
_ Number of workshops attended.
By keeping the right information you will be able to answer questions such as:
_ Does age make a difference to the way our message is received?
_ Does economic category make a difference, i.e. do young people in richer areas respond better or worse to the message, or does it make no difference?
_ Does the number of workshops attended make a difference to the impact?
Answers to these kinds of questions enable a project or organisation to make decisions about what they do and how they do it, to make informed changes to programmes, and to measure their impact and effectiveness. Answers to questions such as:
_ Do more people attend sessions that are organised well in advance?
_ Do more schools participate when there is no charge?
_ Do more young people attend when sessions are over weekends or in the evenings?
_ Does it cost less to run a workshop in the community, or to bring people to our training centre to run the workshop?
enable the project or organisation to measure and improve their efficiency.
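As a sketch of how such questions might be answered from monitoring data, the fragment below groups hypothetical participant records by one variable (number of workshops attended) and reports the share meeting an impact indicator in each group. All field names and figures here are illustrative assumptions, not real project data or part of the toolkit itself.

```python
from collections import defaultdict

# Hypothetical participant records; the fields and values are invented
# for illustration only.
participants = [
    {"workshops": 1, "safer_sex_choice": False},
    {"workshops": 1, "safer_sex_choice": True},
    {"workshops": 3, "safer_sex_choice": True},
    {"workshops": 3, "safer_sex_choice": True},
    {"workshops": 5, "safer_sex_choice": True},
    {"workshops": 5, "safer_sex_choice": False},
    {"workshops": 5, "safer_sex_choice": True},
]

def impact_by_variable(records, variable, indicator):
    """Group records on one variable and return, per group, the share of
    records meeting the impact indicator."""
    groups = defaultdict(list)
    for record in records:
        groups[record[variable]].append(record[indicator])
    return {value: sum(flags) / len(flags) for value, flags in groups.items()}

rates = impact_by_variable(participants, "workshops", "safer_sex_choice")
for n, rate in sorted(rates.items()):
    print(f"{n} workshop(s) attended: {rate:.0%} chose safer sex options")
```

Because the records keep each variable separately, the same `impact_by_variable` call can be repeated for age, gender, or urban/rural without changing the data, which is the point of deciding on the variables before collection starts.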
Step 3: Decide how you will collect the information you need (see collecting information) and where it will be kept (on computer, in manual files). Step 4: Decide how often you will analyse the information – this means putting it together and trying to answer the questions you think are important. Step 5: Collect, analyse, report. PURPOSE OF MONITORING AND EVALUATION What development interventions make a difference? Is the project having the intended results? What can be done differently to better meet goals and objectives? These are the questions that monitoring and evaluation allow organizations to answer.
Monitoring and evaluation are important management tools to track your progress and facilitate decision making. While some funders require some type of evaluative process, the greatest beneficiaries of an evaluation can be the community of people with whom your organization works. By closely examining your work, your organization can design programs and activities that are effective, efficient, and yield powerful results for the community. Definitions are as follows: Monitoring can be defined as a continuing function that aims primarily to provide the management and main stakeholders of an ongoing intervention with early indications of progress, or lack thereof, in the achievement of results.
An ongoing intervention might be a project, program or other kind of support to an outcome. Monitoring helps organizations track achievements by a regular collection of information to assist timely decision making, ensure accountability, and provide the basis for evaluation and learning. STRATEGIC QUESTIONS In conducting monitoring and evaluation efforts, the specific areas to consider will depend on the actual intervention, and its stated outcomes. Areas and examples of questions include: • Relevance: Do the objectives and goals match the problems or needs that are being addressed?
• Efficiency: Is the project delivered in a timely and cost-effective manner?
• Effectiveness: To what extent does the intervention achieve its objectives? What are the supportive factors and obstacles encountered during the implementation?
• Impact: What happened as a result of the project? This may include intended and unintended positive and negative effects.
• Sustainability: Are there lasting benefits after the intervention is completed?
COMMON TERMS Monitoring and evaluation take place at different levels. The following box defines the common terms with examples.
INPUTS: The financial, human, and material resources used for the development intervention. Examples: technical expertise, equipment, funds.
ACTIVITIES: Actions taken or work performed. Example: training workshops conducted.
OUTPUTS: The products, capital goods, and services that result from a development intervention. Examples: number of people trained, number of workshops conducted.
OUTCOMES: The likely or achieved short-term and medium-term effects or changes of an intervention's outputs. Examples: increased skills, new employment opportunities.
IMPACTS: The long-term consequences of the program, which may include positive and negative effects. Example: improved standard of living.
STEP-BY-STEP: Planning for Monitoring and Evaluation. Steps for designing a monitoring and evaluation system depend on what you are trying to monitor and evaluate. The following is an outline of some general steps you may take in thinking through your activities at the planning stage:
1. Identify who will be involved in the design, implementation, and reporting. Engaging stakeholders helps ensure their perspectives are understood and feedback is incorporated. 2. Clarify the scope, purpose, intended use, audience, and budget for the evaluation. 3. Develop the questions to answer what you want to learn as a result of your work. 4. Select indicators. Indicators are meant to provide a clear means of measuring achievement, to help assess performance, or to reflect changes. They can be quantitative, qualitative, or both. A process indicator is information that focuses on how a program is implemented. 5.
Determine the data collection methods. Examples of methods are: document reviews, questionnaires, surveys, and interviews. 6. Analyze and synthesize the information you obtain. Review the information obtained to see if there are patterns or trends that emerge from the process. 7. Interpret these findings, provide feedback, and make recommendations. The process of analyzing data and understanding findings should provide you with recommendations about how to strengthen your work, as well as any mid-term adjustments you may need to make. 8. Communicate your findings and insights to stakeholders and decide how to use the results to strengthen your organization’s efforts.
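The eight steps above can be sketched as a simple plan record with a completeness check before data collection begins. Everything in this fragment, including the field names, the sample values, and the `check_plan` helper, is a hypothetical illustration rather than part of the toolkit itself.

```python
# A monitoring-and-evaluation plan captured as a plain data structure,
# with comments mapping fields to the planning steps above.
plan = {
    "stakeholders": ["staff", "volunteers", "donor representative"],    # step 1
    "purpose": "track a pilot training programme against objectives",   # step 2
    "questions": ["Are skills increasing?", "Who benefits most?"],      # step 3
    "indicators": {"skills_test_pass_rate": "quantitative",             # step 4
                   "participant_feedback": "qualitative"},
    "methods": ["questionnaires", "interviews", "document review"],     # step 5
}

def check_plan(plan):
    """Steps 6-8 (analyse, interpret, communicate) depend on the plan being
    complete before collection starts: return any empty or missing fields."""
    return [key for key in ("questions", "indicators", "methods")
            if not plan.get(key)]

missing = check_plan(plan)
print("ready to collect" if not missing else f"still missing: {missing}")
```

A check like this is only a convenience; the substantive work is agreeing the questions, indicators and methods with stakeholders in steps 1-5.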
Monitoring and evaluation not only help organizations reflect on and understand past performance, but also serve as a guide for constructive changes during the period of implementation. Why have a detailed toolkit on monitoring and evaluation? If you don't care about how well you are doing or about what impact you are having, why bother to do the work at all? Monitoring and evaluation enable you to assess the quality and impact of your work against your action plans and your strategic plan. In order for monitoring and evaluation to be really valuable, you do need to have planned well. Planning is dealt with in detail in other toolkits on this website. Who should use this toolkit?
This toolkit should be useful to anyone working in an organisation or project who is concerned about the efficiency, effectiveness and impact of the work of the project or organisation.
When will this toolkit be useful? This toolkit will be useful when:
_ You are setting up systems for data collection during the planning phases of a project or organisation;
_ You want to analyse data collected through the monitoring process;
_ You are concerned about how efficiently and how effectively you are working;
_ You reach a stage in your project, or in the life of your organisation, when you think it would be useful to evaluate what impact the work is having;
_ Donors ask for an external evaluation of your organisation and/or work.
DESIGNING A MONITORING SYSTEM – CASE STUDY
What follows is a description of a process that a South African organisation called Puppets against AIDS went through in order to develop a monitoring system which would feed into monitoring and evaluation processes. The main work of the organisation is presenting workshopped plays and/or puppet shows related to lifeskills issues, especially those lifeskills to do with sexuality, in schools across the country. The organisation works with a range of age groups, with different "products" (scripts) being appropriate at different levels. Puppets against AIDS wanted to develop a monitoring and evaluation system that provided useful information on the efficiency, effectiveness and impact of its operations. To this end, it wanted to develop a data base that:
_ Provided all the basic information the organisation needed about clients and services given;
_ Produced reports that enabled the organisation to inform itself and other stakeholders, including donors, partners and even schools, about the impact of the work, and what affected the impact of the work.
The organisation made a decision to go for a computerised monitoring system. Much of the day-to-day information needed by the organisation was already on a computerised data base (e.g. schools, regions, services provided and so on), but the monitoring system would require a substantial upgrading and the development of data base software specific to the organisation's needs.
The organisation also made the decision to develop a system initially for a pilot project, but with the intention of extending it to all the work over time. This pilot project would work with about 60 schools, using different scripts each year, over a period of three years. In order to raise the money needed for this process, Puppets against AIDS needed some kind of a brief for what was required so that it could be costed. At an initial workshop with staff, facilitated by consultants, the staff generated a list of indicators for efficiency, effectiveness and impact, in relation to their work. These were the things staff wanted to know from the system about what they did, how they did it, and what difference it made. The terms were defined as follows:
Efficiency Here what needed to be assessed was how quickly, how correctly, how cost effectively and with what use of resources the services of the organisation were offered. Much of this information was already collected and was contained in reports which reflected planning against achievement. It needed to be made “computer friendly”. Effectiveness Here what needed to be assessed was getting results in terms of the strategy and shorter-term impact. For example, were the puppet shows an effective means of communicating messages about sexuality? Again, this information was already being collected and just needed to be adapted to fit the computerised system.
Impact Here what needed to be assessed was whether the strategy worked: did it change the behaviour of individuals (in this case the students), and did that change in behaviour have a positive wider impact?
Monitoring and Evaluation, by Janet Shapiro (email: nellshap@hixnet.co.za)
Far from being something that happens only when a donor insists on it, monitoring and evaluation are invaluable internal management tools. If you don't assess how well you are doing against targets and indicators, you may go on using resources to no useful end, without changing the situation you have identified as a problem at all. Monitoring and evaluation enable you to make that assessment.