Leverage Learning Data to Monitor Progress
This section will guide you through the building blocks required to effectively use learning data to gather insights and target investments.
Measuring Proficiency Levels
Minimum proficiency levels (MPLs) refer to the essential skills and knowledge that students should achieve in a specific subject, such as reading or math, to ensure their academic success. They are not an externally imposed standard, but rather a fundamental benchmark that helps educators and policymakers in each country identify areas for improvement and support each student’s learning journey. MPLs were developed through a collaborative process involving global experts, educators, and stakeholders, ensuring they reflect a shared understanding of the foundational skills students should achieve in reading and mathematics. See the definitions of minimum proficiency levels for reading and math.
Approaches to measure minimum proficiency typically involve using assessment tools and methodologies that:
- Identify essential skills and knowledge in reading or math.
- Develop benchmarks or performance indicators that reflect the desired minimum proficiency levels.
- Administer assessments to students, which can include standardized tests, classroom-based assessments, or performance tasks.
- Analyze and interpret the collected data to determine the proportion of students achieving the minimum proficiency levels (a simple calculation of this proportion is sketched after this list).
- Utilize the findings to inform educational policies, improve curricula, and provide targeted support to enhance student learning outcomes.
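To make the last two steps concrete, the sketch below shows one minimal way to compute the proportion of students at or above a proficiency benchmark. The cut score, scores, and function name are hypothetical illustrations, not part of any official methodology.

```python
# Illustrative only: compute the share of students at or above an assumed
# MPL cut score from a list of assessment scores. The cut score and data
# below are hypothetical; real reporting follows nationally agreed benchmarks.

def share_meeting_mpl(scores, cut_score):
    """Return the proportion of students scoring at or above the cut score."""
    if not scores:
        raise ValueError("No scores provided")
    return sum(score >= cut_score for score in scores) / len(scores)

# Hypothetical grade 3 reading scores and an assumed cut score of 400
reading_scores = [362, 415, 388, 450, 402, 377, 431, 395]
print(f"Share meeting MPL: {share_meeting_mpl(reading_scores, 400):.0%}")
```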
One approach, the Assessment for Minimum Proficiency Levels (AMPL), evaluates how well students meet basic skill levels in reading and math at specific stages of education. AMPL tools determine the percentage of students reaching or surpassing these minimum skill levels at each stage.
Using AMPL instead of other foundational learning assessments allows countries to produce comparable learning outcomes data to report on the global indicator SDG 4.1.1.
Build / Strengthen National Learning Assessment Program
Building a national learning assessment program develops technical capacity, increases country autonomy and control over the assessment cycle, and supports the long-term success and sustainability of the entire education system.
Most education ministries regularly conduct national learning assessments to gather critical information about the effectiveness of an education system and identify areas for improvement. Monitoring and assessing learning data is the only way to know how many children meet the MPLs for reading and math.
A robust national assessment program ensures that high-quality learning data are used to identify priority interventions, optimizing outcomes at any level of investment. These data enable policymakers to recognize trends and patterns that inform targeted interventions and investments, and trend analysis over successive rounds further supports informed decision-making. National assessments can be linked across different test administrations or versions using standard procedures, with common items (test questions) included in each version to achieve consistency and comparability.
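As a heavily simplified, hypothetical sketch of how common items can support comparability, the code below applies mean linking: the difference in average performance on the shared (anchor) items is used as a constant adjustment that places a later administration on the earlier scale. Operational programs use far more rigorous equating procedures, and every value here is invented for illustration.

```python
# Illustrative only: simple mean linking of two assessment administrations
# through a set of common (anchor) items. Real programs use more rigorous
# equating methods; all numbers here are hypothetical.

def linking_shift(anchor_means_ref, anchor_means_new):
    """Constant that places the new administration on the reference scale."""
    mean_ref = sum(anchor_means_ref) / len(anchor_means_ref)
    mean_new = sum(anchor_means_new) / len(anchor_means_new)
    return mean_ref - mean_new

# Hypothetical average score earned on each anchor item, by administration
anchor_2021 = [0.62, 0.55, 0.71, 0.48]
anchor_2023 = [0.58, 0.50, 0.66, 0.44]

shift = linking_shift(anchor_2021, anchor_2023)  # positive: 2023 cohort scored lower on anchors
print(f"Add {shift:.3f} points per item to 2023 scores before comparing trends")
```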
Standalone Country Assessments
In some cases, a country may not have previously used assessments, its assessments may have been disrupted, or other obstacles may prevent it from accessing reliable data. This should not be a barrier to moving forward: standalone country assessments offer practical options for data collection. These are usually conducted by external organizations, on a one-time basis, to provide an in-depth analysis of the education system.
The AMPL can serve this purpose in two ways:
- A Standalone AMPL can be administered on its own as an assessment of proficiency in reading and math when a country does not have a national assessment.
- An Integrated AMPL can be incorporated into a national assessment, either as a whole booklet or as a rotating booklet across national forms.
The ALIGN For Minimum Proficiency process is an evidence-based gap analysis. This data-driven process uses the Global Proficiency Framework (GPF) as a reference to identify whether the pedagogical supports offered to learners will ensure they meet global norms in reading and math. The process focuses on four pedagogical components that lie within the jurisdiction of Ministries of Education:
1. Curriculum and standards
2. Teaching and learning resources
3. Teacher training
4. Assessment
Foundational Learning Assessments
Foundational learning assessments enrich the empirical basis for understanding education systems, and provide benchmarks to monitor over time. These assessments can be useful if a country cannot access reliable administrative data, and/or faces strong challenges in access and quality at the primary level.
Countries are encouraged to consider these options as they build and strengthen their national assessment programs. Note that these assessments do not link to the GPF or MPLs and therefore will not help to report on SDG 4.1.1.
An Early Grade Reading Assessment (EGRA) measures students’ skills in the pre-reading and reading subtasks, such as letter names and letter sounds, that children need to master in order to read fluently. EGRAs are usually given to students from kindergarten through primary school, and the test is typically administered orally by a teacher, one-on-one with a student. More on EGRA.
The Early Grade Mathematics Assessment (EGMA), in its Core form, assesses early mathematics learning with an emphasis on number and operations. The Core EGMA consists of six subtests (referred to as “tasks” in the instrument) that, taken together, produce a snapshot of children’s knowledge of the competencies fundamental to early grade mathematics. See the EGMA Toolkit.
The Multiple Indicator Cluster Surveys (MICS) is one of the largest global sources of statistically sound and internationally comparable data on children. MICS data are gathered during face-to-face interviews in representative samples of households. The surveys are typically carried out by government organizations, with technical support from UNICEF.
The Foundational Learning Skills (FLS) module can be used in areas where data are currently lacking, particularly in low-income countries. The FLS module captures basic literacy and numeracy skills at grades 2 and 3 (SDG 4.1.1a), targeting children aged 7 to 14, in order to monitor learning and the quality of education.
The module was developed for use in household surveys and is well suited to standardized instruments such as MICS and Demographic and Health Surveys that already focus on the well-being of children. As the guidance below demonstrates, the Foundational Learning module could also be adapted to any household survey if the following overall requirements are met:
- The sample is representative and of sufficient size (an illustrative sample-size sketch follows this list)
- There is the ability to identify all members of a household, including age and sex
- National education authorities are involved in the key steps of the process
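As a rough illustration of what “sufficient size” can mean in practice, the sketch below applies a standard sample-size formula for estimating a proportion, inflated by a design effect and an assumed response rate. Every number is an assumption chosen for illustration; actual MICS and DHS sample designs follow their own detailed sampling guidance.

```python
# Illustrative only: a rule-of-thumb sample-size calculation for estimating
# a proportion (e.g., the share of children with foundational reading skills)
# from a household survey. All parameter values below are assumptions.

import math

def required_sample(p=0.5, margin=0.05, z=1.96, deff=1.5, response_rate=0.9):
    """Children to sample for a given margin of error at 95% confidence."""
    n_srs = (z ** 2) * p * (1 - p) / margin ** 2    # simple random sample size
    return math.ceil(n_srs * deff / response_rate)  # design effect + non-response

print(required_sample())             # baseline assumptions
print(required_sample(margin=0.03))  # tighter precision needs a larger sample
```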
Want to learn more about how your country can use learning data? Contact the Coalition for Foundational Learning to discuss your specific country needs and interests.