
Latin American Journal of Economics

Online version ISSN 0719-0433

Lat. Am. J. Econ. vol. 51 no. 1, Santiago, May 2014



Manuel Gómez-Zaldívar**

* The author gratefully acknowledges the data analysis assistance provided by Lizeth García-Belmonte.

** Department of Economics and Finance, Universidad de Guanajuato, DCEA - Campus Marfil, Fracc. I, El Establo, Guanajuato, Guanajuato, C.P. 3625, Mexico. Telephone/fax: +52 (473) 735-2925, ext. 2831. Email:

In 1992, Mexico's federal government signed the ANMEB agreement as part of a series of strategic public education reforms. The agreement decentralized the education system, making state governments directly responsible for providing basic public education, in an attempt to reduce marked regional disparities in educational levels. Now that sample sizes are large enough to allow reasonable empirical analysis, I examine several indicators used to measure the characteristics of education in each state. The aim is to assess whether there is sufficient empirical evidence to affirm that the agreement has contributed to improving education levels and reducing disparities among the states.

JEL classification: H4, H75, O1

Keywords: ANMEB, difference-in-differences analysis, Mexican education system.


1. Introduction

The average illiteracy level in Mexico prior to the 1910 revolution was alarming: According to the 1910 population census, 72.3% of Mexicans aged 10 or older were unable to read and write. In order to bring down the illiteracy rate, the Constituent Congress of 1917 established that public education was to be free.1 Nevertheless, this constitutional right failed to achieve the desired effect, and the 1921 population census showed that illiteracy remained relatively high at 66.2%. Therefore, in September of that year the federal government created the Secretariat of Public Education (Secretaría de Educación Pública, or SEP). The main objective of this new institution was to reduce illiteracy and increase the gross enrollment ratio nationally. To do this, the federation implemented a school construction program and made it possible for local governments (state and municipal) to build and operate their own schools, in effect creating a two-tier public education system that functioned independently on two different government levels: one federal, the other local.

However, this two-tier education system did not work as well as hoped. Although from 1921 to 1930 the illiteracy rate dropped by almost 5 percentage points, it is also the case that, in absolute terms, the number of illiterate Mexicans increased. This may have been the result of the heterogeneity of development and income levels among the Mexican states combined with a homogeneous education policy implemented by the federal government. In other words, given that the federation covered the operations and maintenance costs of the schools it built while states and municipalities paid for their own, wealthier local governments built more schools and, at the same time, increased their gross enrollment ratio and raised literacy rates within their states. In 1930, in the states along the country's southern Pacific coast (Guerrero, Oaxaca, and Chiapas) only one in five people aged 10 and older knew how to read and write, whereas in the northeastern states (Coahuila, Nuevo León, and Tamaulipas) almost half of the population was literate.

Efforts to attenuate educational differences among the states have been intense, as evidenced by the secondary education laws passed by Congress at various times. Nevertheless, although education indicators have been improving throughout the period for which data is available (i.e., since 1976), disparities among the states remain. During the 1991-1992 school year, the national illiteracy rate stood at 11.7% and the gross enrollment ratio at the elementary level was 95.4%. At the state level, the illiteracy rate in Chiapas was 28.5%, whereas in Nuevo León it was just 4.3%; meanwhile, the gross enrollment ratio reached 100% in Guerrero but only 89.9% in Tamaulipas.

In order to reduce these differences, in May 1992 the federal government and the governors of the 31 Mexican states signed the National Agreement for the Modernization of Basic Education (Acuerdo Nacional para la Modernización de la Educación, or ANMEB). The decentralization of educational services (abolishing of the two-tier system established in 1921) was intended, at least in part, to eliminate such differences. In order to achieve this goal, the SEP would, in conjunction with the states, take all actions necessary to reduce and overcome disparities, paying particular attention to those regions with the greatest deficits in terms of enrollment rates and educational achievement. The agreement also established that the Federation would guarantee that more resources would be allocated to those states with economic limitations and more pressing educational deficiencies.

The conceptual framework for addressing the questions of how and why the agreement might help to improve the quality of education and reduce inter-state inequalities in Mexico is complex, primarily due to the extensive changes that the reform entailed.2 Nevertheless, I summarize below the three main strategies and explain the channels through which this development was expected to improve basic education.

First, the central aim of the agreement was to raise the standards for teacher training, as the educational level of teachers has been seen as one of the primary drawbacks of the Mexican education system. The reform sought to achieve this goal through: i) a revised curriculum and revision of the courses studied in teacher-training programs; ii) new in-service programs for all teachers, principals, and supervisors; iii) the creation of a more effective system for assessing teacher-training programs; iv) the creation of a single teacher-training system; and v) the development of a radio and TV-based "distance training" program for teachers in rural and indigenous schools.

Second, a merit pay system was developed to link professional performance to salaries. The career ladder, known as the carrera magisterial, was expected to help raise educational levels by aligning government and teacher incentives and recognizing and stimulating teacher performance by renewing their interest in ongoing improvement. The measurement of professional performance would take into account experience, professional skills, educational attainment, and the completion of accredited courses.

Third, the reform established technical councils made up of teachers, principals, and supervisors to improve teacher participation in the education process. As stated by Tatto (1999), the goal was "[to] promote teachers' analytic and critical views of their own teaching practice and [to serve] as forums for discussing teaching and learning, curriculum and teacher education." Furthermore, these self-governing mechanisms were expected to develop short- and long-term projects to solve the particular issues of each school.

With the enactment of the ANMEB, the federal government transferred 100,000 schools (66.3% of the total), 500,000 teachers (66.1%), and almost 13.5 million students (68.5%) to the purview of state governments. The administration and organization of the transferred schools posed a very different challenge from one state to another. During the period of the two-tier public education system (1921-1992), some states had highly developed local education systems whereas others had no experience at all with providing such a service. Table 1 shows state participation in public education in the year the ANMEB was signed. As can be seen, in states such as the State of Mexico, Nuevo León, Baja California, and Jalisco, over 40% of public-school students were enrolled in state-run schools, whereas the provision of education in Tamaulipas, Oaxaca, and Hidalgo depended almost entirely on the federal government.


Table 1. State participation in public education, 1991-1992 school year
(in percent)


In addition to being responsible for providing education services, local governments also received resources to cover the teachers' payroll and to maintain and operate the schools they took over after the ANMEB went into effect.

Previous research on the ANMEB has focused mainly on this last issue, i.e., analyzing which factors have determined the allocation of resources to pay for education in the states.3 The objective of this paper differs somewhat. I am interested in analyzing whether, as a result of the signing of the Agreement, state education indicators have improved and whether regional disparities have been reduced. For this purpose, I apply the difference-in-differences (DD) technique. The results indicate that progress has been made in terms of the indicators, primarily those related to elementary schools. Nevertheless, further analysis suggests that it is difficult to consider such improvement a consequence of the ANMEB.

The remainder of the paper is organized as follows: Section 2 briefly describes the decentralization processes of a number of other countries and their relationship to that of Mexico. Section 3 succinctly illustrates the difference-in-differences methodology. In Section 4 I present a description of the variables and their basic statistics. Section 5 contains the empirical analysis of different estimated models. Finally, Section 6 outlines the main conclusions.

2. Education decentralization in Mexico and elsewhere

Many countries around the world have undertaken education decentralization reforms. These reforms differ along so many dimensions that a clear-cut comparison of all of them would be difficult; doing so would require analytical frameworks that reduce the dimensionality of such reforms and enable a reasonably fair comparison.

One attempt at such a comparison was made by Tatto (1999) for the purpose of analyzing the Mexican reform. Her approach has two dimensions: the first is characterized by two different methods of teaching, i.e., didactic/routine vs. interactive/conceptual; the second is characterized by two different authority structures, i.e., formal versus organic control. Using this methodology, Tatto compares teaching dynamics in Mexico with those at schools in Brazil, China, France, Japan, and the United States. She concludes that the characteristics of Mexico's educational reform caused teaching to shift from being didactic/routine in nature towards being interactive/conceptual. Furthermore, the reform is expected to change teaching practices to make them more organic, provided teachers are able to work together and see themselves as actors playing a key role in finding solutions to education problems through their own practices.

This paper does not present a general analytical framework to contrast the decentralization experience in different countries, nor to compare different theories of what makes for more or less "successful" education reforms. Nevertheless, I believe that it is important to point out the similarities and differences between Mexico's decentralization experience and analogous reforms in other countries.

According to the Inter-American Development Bank, or IDB (1994), almost every country in Latin America implemented some sort of education decentralization policy during the 1980s and early 1990s. The diversity of the reforms is broad, as are the strategies with which they were executed. Moreover, implementation of such reforms has been influenced by the political and economic context in which they occurred, and there has been little research carried out to evaluate their success. The following are studies that relate the Mexican experience to that of other countries, and studies connected with the work I propose.

Gershberg (1999) analyzes the costs and benefits of two alternative methods of implementing education reforms: the first is to enact legislation to define and support the reforms, and the second is to implement those changes without any legal framework. Specifically, he compares Mexico to Nicaragua,4 arguing that given Mexico's size and the power of its national teachers' union, the strategy implemented by the government—i.e., creating a legislative basis for the reform first—is more appropriate for that particular country. The Nicaraguan strategy, on the other hand, mitigates some of the pitfalls associated with the legislative approach by fostering citizen participation, giving a great deal of power to parents and local stakeholders. He concludes that countries that want to apply education decentralization reforms should use a combination of both strategies in order to achieve better results.

Faguet and Sánchez (2008) study the impact of the decentralization of education funding by evaluating a number of education statistics in Bolivia and Colombia. In Bolivia, they find evidence that after decentralization, investment in education became more responsive to local needs, especially in rural areas. Although they are unable to make a formal comparison between the situation before and after the program due to a lack of data, they find improvements in class enrollment in the post-reform period. In the case of Colombia, the availability of data made it possible to study school enrollment at a municipal level. The authors argue that in municipalities in which educational financing and policymaking are most free from central influence, enrollment increased. They suggest that it would have been more interesting to study other variables besides enrollment, such as standardized test results, but that data limitations make this unfeasible.

Lane and Murray's (1985) description of the policy of education decentralization in Sweden highlights one characteristic that made it similar to the policy followed in Mexico. Both countries' reforms were intended to strengthen local government participation by transferring central decisions and responsibilities to regional and local state bodies. Nevertheless, the Swedish reform included universities and colleges, whereas Mexico's reform included only primary and middle schools. In general, we can say that both reforms are similar in that their goals were determined by central authorities but how these reforms were achieved was decided at the local level.

3. Methodology: difference-in-differences analysis

The DD approach has been widely used to analyze the effects of policy changes. This procedure helps to examine the effect of some sort of "treatment" by comparing the performance of a treatment group to the performance of a control group. In the basic set-up, the researcher analyzes the outcomes of the two groups during two periods of time: before and after the treatment. It is assumed that one of the groups has been exposed to a treatment in the second period but not in the first. The control group is not exposed to the treatment in either period.5

If the researcher focuses on the treatment group alone, before and after, in order to infer the consequences of the policy change, an erroneous conclusion may be reached since there may be other factors influencing events at the same time as the treatment. Therefore, the DD methodology utilizes a control group to remove the possible effects of other factors. The implicit assumption is that if there are other factors affecting both groups at the same time they will have the same effect on the treatment as on the control group.

The baseline DD model to be estimated in this study takes the following form:

yit = α + β dg + γ dt + δ (dg dt) + θ'Xit + eit,    (1)

where yit is the value of the variable of interest in state i (i = 1, 2, …, 31) at time t; dg is a dummy variable that indicates the group, taking the value of zero for the control group and one for the treatment group; and dt is a time dummy that indicates the period, taking the value of zero for the pre-treatment period and one for the post-treatment period. The coefficient of interest is δ, which captures the behavior of the variable of interest for the treatment group after the agreement is implemented; the interaction term (dg dt) takes the value of one when the observation belongs to the treatment group in the post-treatment period. The variables denoted as X's are: real gross state product per capita, percentage of population living in urban areas, and state fiscal independence. They are included in the model because they are important characteristics that may influence the level of education provision in the states, especially before the implementation of the ANMEB.

The independent variables employed in model (1) are defined as follows: The time dummy variable, dt, takes the value of zero for observations up to 1992, the year in which the agreement was signed, and the value of one for the years after 1992. The group dummy variable, dg, which distinguishes between those states that were exposed to a treatment and those that were not, deserves a more detailed explanation.

The ANMEB is a federal agreement and therefore one that affects all Mexican states at the same time. Strictly speaking, all states receive the public policy treatment; consequently, it should not be possible to separate the states into two distinct groups: control and treatment. Nevertheless, in order to analyze the impact of the ANMEB with this method, I use the following line of reasoning: García-Pérez (2008) calculates the percentage of students who attended public schools operated by state governments in the 1991-1992 school year, the year in which the agreement was signed. This variable can serve as a proxy to measure the "amount of experience" that each state had in administering and providing public education, and therefore, I believe that it can be used to establish the two different groups needed to apply DD. On the one hand, for those states that had little or no experience in providing education services, the agreement would impose a new responsibility, one with which they were unfamiliar. Thus, the agreement represents a change of policy; I place these states in the treatment group. On the other hand, for those states that already offered this service to a high percentage of students, we can regard the agreement as having had little impact and not representing a change of policy, since these states had already assumed responsibility for providing public education. I place these states in the control group. García-Pérez calculates that the range of participation by state governments in public education was wide. Among the states with the most experience were the State of Mexico, Nuevo León, Baja California, and Jalisco, with over 40% of students enrolled in public schools under the control of the state government. Among those states with least experience were Querétaro, Tamaulipas, Oaxaca, and Hidalgo, with less than 1%. In fact, the state government of Hidalgo had no participation, i.e., 100% of the students were enrolled in the federal system.

In addition, there is another issue that needs to be clarified in the construction of dg, namely, the threshold value of the level of participation of the state governments to be placed in either group.

Since it is difficult or impossible to argue that a specific value of this variable should determine whether to place each state in one of the groups, the analysis is carried out for different levels, i.e., 30% and 40%. In each case, dg takes the value of zero—the state is added to the control group—if the state has "sufficient" experience in providing education; otherwise, the value of the variable is one.
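As a minimal sketch, the construction of dg at a given threshold can be written as follows. The participation shares below are illustrative placeholders: only the facts that states such as Nuevo León exceeded 40% and that Hidalgo had essentially 0% come from the text; the remaining values are invented.

```python
# Share of students enrolled in state-run public schools, 1991-1992.
# Illustrative values only; see Garcia-Perez (2008) for the actual data.
participation = {
    "Nuevo Leon": 45.0,   # text reports over 40%
    "Jalisco": 42.0,      # text reports over 40%
    "Guanajuato": 15.0,   # invented
    "Oaxaca": 0.5,        # text reports under 1%
    "Hidalgo": 0.0,       # text reports 100% federal enrollment
}

def group_dummy(share_pct, threshold_pct):
    """dg = 0 (control) if the state had 'sufficient' experience
    providing education; dg = 1 (treatment) otherwise."""
    return 0 if share_pct >= threshold_pct else 1

# The analysis is repeated for the two thresholds, 30% and 40%.
dg_30 = {s: group_dummy(p, 30.0) for s, p in participation.items()}
dg_40 = {s: group_dummy(p, 40.0) for s, p in participation.items()}
```

With the 40% threshold, only the states above that share fall into the control group; lowering the threshold to 30% moves borderline states from treatment to control, which is why the paper reports both cut-offs.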

The unknown coefficients, α, β, and γ, represent the constant term, the group-specific effect, and the time effect, respectively. The purpose of the DD methodology is to obtain a good estimator of δ, denoted δ̂. Equation (1) can be estimated using the data for both groups in both periods of time using ordinary least squares (OLS), by assuming that the error term eit has the properties ordinarily required.
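The OLS estimation of equation (1) can be sketched numerically as follows. This is an illustration only, not the paper's data: the panel, the "true" coefficient values, and the group assignment are all invented, and the X controls are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic balanced panel: 31 states x 2 periods (all values invented).
n_states = 31
alpha, beta, gamma, delta = 1.0, 0.5, 1.5, 2.0   # "true" coefficients
dg = np.array([i % 2 for i in range(n_states)])  # group dummy per state

rows = []
for i in range(n_states):
    for dt in (0, 1):                            # pre / post periods
        y = (alpha + beta * dg[i] + gamma * dt
             + delta * dg[i] * dt + rng.normal(scale=0.1))
        rows.append((y, dg[i], dt))
y, g, t = (np.array(col, dtype=float) for col in zip(*rows))

# Design matrix for equation (1): constant, dg, dt, dg*dt (X's omitted).
X = np.column_stack([np.ones_like(g), g, t, g * t])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
alpha_hat, beta_hat, gamma_hat, delta_hat = coef
```

On this synthetic panel, the estimated interaction coefficient delta_hat recovers the true treatment effect up to sampling noise.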

Determining the expected value of the variable of interest in each of the four groups, denoted YT0, YT1, YC0, and YC1, is straightforward, where the subscripts T and C refer to the treatment and control group, respectively, and the subscripts 0 and 1 differentiate between the pre- and post-treatment period. The expected values for each group are defined as follows:

E[YC0] = α,  E[YC1] = α + γ,
E[YT0] = α + β,  E[YT1] = α + β + γ + δ.    (2)

From these results, we observe that an unbiased estimator of δ is defined:6

δ̂ = (T1 - T0) - (C1 - C0),    (3)

where T1, T0, C1, and C0 denote the sample means of the corresponding groups. The unbiased estimator that assesses the impact of the treatment is thus the difference in the average response of the treatment group, before and after the treatment, minus the same observed difference in the control group. The name of the method is derived from this formula, i.e., the difference of the differences.
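The difference-of-differences arithmetic can be illustrated with a small worked example; the four cell means below are hypothetical, not taken from the paper's data.

```python
# Hypothetical average values of an indicator (e.g., completion rate):
T0, T1 = 70.0, 85.0   # treatment group mean, before / after 1992
C0, C1 = 80.0, 88.0   # control group mean, before / after 1992

# The DD estimator: the difference of the two before/after differences.
delta_hat = (T1 - T0) - (C1 - C0)
```

Here the treatment group improved by 15 points and the control group by 8, so the estimated treatment effect is the 7-point difference; the control group's improvement is netted out as the common time effect.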

4. Description of variables and basic statistics

The DD methodology is used to analyze the effect of the ANMEB on the various education indicators in all states in the country. These variables are then used as the dependent variables when estimating model (1). The variables of interest are listed and described in Table 2.7


Table 2. Description of variables


Figures 1 to 6 show the evolution of the mean and standard deviation of the indicators, illustrating how the indicators develop and the evolution of disparities among the states. According to the figures, every indicator shows improvement as time goes on. Moreover, the standard deviations of the variables tend to decrease over time, which implies that the disparity among states is getting smaller. Since this reduction is perceptible even before the enactment of the agreement, we need more than a graph to be able to determine whether the ANMEB had a significant impact on the observed decline. In general, it seems that the improvement in elementary school indicators is greater than that in middle schools.
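The series plotted in Figures 1 to 6 are simply the cross-state mean and standard deviation of each indicator, computed year by year. A sketch with invented data (the state-level indicator matrix below is synthetic, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented indicator matrix: rows = years, columns = the 31 states.
years = np.arange(1990, 2000)
trend = 0.5 * np.arange(len(years))[:, None]   # steady improvement
rates = 10.0 - trend + rng.normal(scale=0.3, size=(len(years), 31))

annual_mean = rates.mean(axis=1)  # indicator level (Figures 1, 3, 5)
annual_std = rates.std(axis=1)    # disparity among states (Figures 2, 4, 6)
```

A falling annual_mean for a "negative" indicator such as a dropout rate signals improvement, while a falling annual_std signals shrinking disparities among states, which is the pattern the figures display.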


Figure 1. Annual mean of elementary-school variables

Source: Authors' calculations.
Note: Failure and dropout rates are measured on the right-hand axis.


Figure 2. Annual dispersion of elementary-school variables

Source: Authors' calculations.
Note: Failure and dropout rates are measured on the right-hand axis.


Figure 3. Annual mean of middle-school variables

Source: Authors' calculations.
Note: Dropout rate and gross enrollment ratio are measured on the right-hand axis.


Figure 4. Annual dispersion of middle-school variables

Source: Authors' calculations.
Note: Dropout rate and gross enrollment ratio are measured on the right-hand axis.


Figure 5. Annual mean of illiteracy rate and average schooling

Source: Authors' calculations.


Figure 6. Annual dispersion of illiteracy rate and average schooling

Source: Authors' calculations.


5. Empirical results

Before presenting and discussing the results, it is necessary to briefly explain how the results should be interpreted. The sign of the parameter of interest, δ, depends on the variable being analyzed. We can use equations (2) and (3) to understand this issue.

If the agreement generates the expected results,8 the sign of δ would depend on whether the indicator measures a positive or a negative characteristic.

In the case that the indicator measures a positive characteristic of the education system, cohort survival rate for example, both (T1 - T0) and (C1 - C0) > 0 should be interpreted as an improvement after 1992. Furthermore, (T1 - T0) > (C1 - C0) implies that the improvement was greater for states in the treatment group. For those indicators that measure a negative characteristic, failure rate for example, the reasoning is analogous. Table 3 summarizes these explanations for both cases.


Table 3. Expected sign of parameter


5.1. Results of the basic model

Table 4 shows the results of estimating model (1). The first column lists each of the dependent variables (yit), while the second column shows the change in the expected value of the treatment group. The third column shows the difference from the expected value in the control group. Finally, the last column shows the estimated value of the parameter of interest.


Table 4. Results of DD methodology for model (1)


The results of all the elementary school variables indicate that following enactment of the ANMEB there was a statistically significant decrease in dropout and failure rates and a significant increase in cohort survival rate and completion rate for states in both groups. Moreover, the last column indicates that this progress was greater in states that belong to the treatment group. This implies that after 1992, there was a reduction in disparities among states, at least as regards elementary schools.

The variables that measure the development of middle-school education and the average schooling series showed statistically significant progress in each of the groups, though my estimations do not show that the gap between the states decreased, i.e., the parameter was estimated as statistically insignificant. In contrast, the illiteracy rate declined in both groups and the reduction was estimated as being substantially greater in the treatment group.9

In general, the three variables included to control for other key features in the states that may be important to describe the level of education before and after enactment of the ANMEB were found to be statistically significant, especially so in the case of real gross state product per capita and state fiscal independence. The last variable, percentage of population living in urban areas, was occasionally found to be not significant in explaining the education indicators.

5.2. Was the ANMEB the cause of the improvement in the indicators?

The results in the previous subsection indicate that after enactment of the ANMEB there was an improvement in all of the indicators. Moreover, the computations indicate that there was a reduction in disparities between states in the control and the treatment group, though only for the elementary-school variables and illiteracy rate.

Figures 1 to 6 provide the evolution of the mean and standard deviation of the variables, showing that the indicators' progress is noticeable even before the ANMEB. Therefore, it is evident that further analysis is necessary in order to determine whether the ANMEB was responsible for the performance of these indicators.

For this task, I continue estimating the same model as before (Equation (1)), although with a slight modification to the time dummy variable. I estimate this model repeatedly for the period from 1985 to 2003, changing the year in which the time dummy switches on. My expectation is that if the ANMEB was responsible for improvement in the variables, the estimated value of the parameter of interest, δ̂, would be statistically insignificant for the years in which the time dummy switches on before 1992, and statistically significant for the years in which the time dummy switches on after 1992.

Moreover, I would not expect the results of a policy change of this nature to be immediately apparent, but rather to appear gradually over time. If this is true and the effects of the policy change took some time to materialize, the parameter would be expected to have larger absolute values (with greater statistical significance) when the years immediately following enactment of the ANMEB are excluded, i.e., when the time dummy switches on some years after 1992. Therefore, I perform the computations for all the variables, even those for which the parameter δ is not significant in Table 4.

The results of these computations are shown in Figures 7 to 9. For each variable I show the P-value of the parameter δ (for the null hypothesis of no significance) on the Y-axis; on the X-axis I show the year in which the time dummy variable switches on for that particular estimation. If the ANMEB was the cause of greater improvement in the states in the treatment group than in the states in the control group, the P-value would be expected to decrease over time (the parameter becomes more significant over time), i.e., the P-values would be below 0.05.
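This rolling exercise can be sketched as follows, again on invented data: equation (1) is re-estimated once per candidate switch year, and the p-value of δ̂ (computed here with a normal approximation, controls omitted) is collected. In this synthetic panel the treatment effect is built in to start after 1992, so the switch year that matches the true break yields a significant δ̂.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Synthetic state-year panel, 1985-2003 (all values invented).
years = np.arange(1985, 2004)
n_states = 31
dg = np.array([i % 2 for i in range(n_states)])  # half treatment

panel = []
for i in range(n_states):
    for yr in years:
        post = 1 if yr > 1992 else 0             # true break after 1992
        y = (50 + 2 * dg[i] + 0.5 * (yr - 1985)  # common upward trend
             + 3.0 * dg[i] * post + rng.normal(scale=1.0))
        panel.append((y, dg[i], yr))
yv, gv, yrv = (np.array(col, dtype=float) for col in zip(*panel))

def pvalue_for_switch(switch_year):
    """Re-estimate equation (1) with dt switching on after switch_year
    and return the (normal-approximation) p-value of delta-hat."""
    dt = (yrv > switch_year).astype(float)
    X = np.column_stack([np.ones_like(gv), gv, dt, gv * dt])
    coef, *_ = np.linalg.lstsq(X, yv, rcond=None)
    resid = yv - X @ coef
    s2 = resid @ resid / (len(yv) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    z = coef[3] / math.sqrt(cov[3, 3])
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

pvals = {yr: pvalue_for_switch(yr) for yr in range(1986, 2003)}
```

Plotting pvals against the switch year reproduces the logic of Figures 7 to 9: a curve that only drops below 0.05 for switch years at or after the true policy break supports attributing the improvement to the reform.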


Figure 7. Did the ANMEB cause improvements in elementary-school indicators?

Source: Authors' calculations.

Figure 8. Did the ANMEB cause improvements in middle-school indicators?

Source: Authors' calculations.

Figure 9. Did the ANMEB cause improvements in the illiteracy rate and average schooling?

Source: Authors' calculations.


The P-values of the different elementary-school indicators in Figure 7 do not follow the expected pattern that would indicate that the ANMEB was responsible for the decrease in disparities between the states in the control and the treatment group. Parameter δ was statistically significant long before implementation of the ANMEB, except for the failure rate, which appears to be significant only for the 1991-1996 period. These results indicate that the reduction in disparities among states started in 1985 (or even earlier); this decline lasts for most of the 1990s and then stops, i.e., the P-values increase beyond 0.05.

The results for middle-school indicators in Figure 8 indicate that the ANMEB was not effective in reducing disparities among states. There is evidence of such a reduction only for the transition rate and gross enrollment ratio, and only for a period prior to enactment of the ANMEB, i.e., before the 1990s. Some indicators' P-values do not appear in the graphs; this is because for those variables the parameter δ is not significant, i.e., the P-value is higher than 0.25. Figure 9 shows that the disparity in the illiteracy rate diminished only for a few years after enactment of the ANMEB.

6. Concluding remarks

I analyze various education statistics to evaluate whether there is sufficient empirical evidence to support the assertion that the ANMEB has improved education indicators and contributed to reducing disparities among the states. The results indicate that all of the variables experienced significant improvement after 1992 in both the treatment and the control group. These results are robust to changes in the specification of the treatment and control groups. Nevertheless, the question of whether this progress can be attributed to the ANMEB remains. When the model is modified to examine the period in which the disparities were decreasing, I find that for most of the elementary-school variables, the disparities began to diminish during the mid-1980s and that this trend persisted until the end of the 1990s. For the rest of the variables, evidence of a decrease in disparities is not so straightforward.

Overall, the empirical evidence provides strong support for the existence of an improvement in education indicators during the period analyzed, both in the treatment and control states. Nevertheless, this evidence is insufficient to affirm that the observed improvement was caused by the ANMEB.


1. And from 1934 onwards, also mandatory.

2. More extensive discussion of the conceptual issues underlying the agreement can be found in Bray (1999), Gershberg (1999), Ornelas (1995, 2008), and Tatto (1999), among others.

3. See Ontiveros (2001), Hecock (2006), and Sharma and Cárdenas (2008). For education financing and the distribution of federal resources linked to the ANMEB, see Cárdenas and Luna (2007) and Latapí and Ulloa (2000).

4. According to Gershberg, Mexico followed the former strategy while Nicaragua followed the latter.

5. See Meyer (1995) for a detailed explanation of this approach.

6. Note that

7. Retrieved from (last accessed on August 10, 2011).

8. That is, if the education statistics improve in both groups after 1992 but the improvement is greater in the treatment group states.

9. As a robustness check of the results, the estimation of model (1) is extended by defining the groups differently. In these new cases, three groups are defined, and the middle one is excluded from the sample. As in the first exercise, those states in which a higher percentage of students were enrolled in the state school system in 1992 are placed in the control group and those with a lower percentage are placed in the treatment group. These new results are not significantly different from those obtained originally; they are described and explained in the appendix.



Arnaut, A. (1994), "La federalización de la educación básica y normal (1978-1994)," Política y Gobierno 1(2): 237-74.

Bray, M. (1999), "Control of education: Issues and tensions in centralization and decentralization," in Arnove, R.F. and C.A. Torres, eds., Comparative education: The dialectic of the global and the local. Oxford: Rowman & Littlefield.

Cárdenas, O.J. and F.J. Luna (2007), "El gasto educativo: Una propuesta de financiamiento a la educación básica," Gestión y Política Pública XVI(2): 261-79.

Faguet, J.P. and F. Sánchez (2008), "Decentralization's effects on educational outcomes in Bolivia and Colombia," World Development 36(7): 1294-316.

García-Pérez, A.C. (2008), "Propuesta de un mecanismo de distribución de los recursos del fondo de aportaciones para la educación básica y normal." Undergraduate thesis, Department of Economics and Finance, Universidad de Guanajuato.

Gershberg, A.I. (1999), "Education decentralization process in Mexico and Nicaragua: Legislative versus ministry-led reform strategies," Comparative Education 35(1): 63-80.

Hecock, D.R. (2006), "Electoral competition, globalization, and subnational education spending in Mexico, 1999-2004," American Journal of Political Science 50(4): 950-61.

Inter-American Development Bank (1994), Economic and social progress in Latin America: 1994 report, Washington, D.C.

Lane, J.E. and M. Murray (1985), "The significance of decentralization in Swedish education," European Journal of Education 20(2-3): 163-70.

Latapí, P. and M. Ulloa (2000), El financiamiento de la educación básica en el marco del federalismo. Mexico City: Fondo de Cultura Económica.

Meyer, B.D. (1995), "Natural and quasi-experiments in economics," Journal of Business and Economic Statistics 13(2): 151-61.

Ontiveros, J.M. (2001), "Gasto educativo y políticas distributivas de la educación primaria en México," Revista Latinoamericana de Estudios Educativos 31: 53-77.

Ornelas, C. (1995), El sistema educativo mexicano: La transición de fin de siglo. Mexico City: Fondo de Cultura Económica.

Ornelas, C. (2008), Política, poder y pupitres: Crítica al nuevo federalismo educativo. Mexico City: Siglo XXI Editores.

Sharma, A. and O.J. Cárdenas (2008), "Education spending and fiscal reform in Mexico," Journal of International and Global Economic Studies 1(2): 112-27.

Tatto, M.T. (1999), "Education reform and state power in Mexico: The paradoxes of decentralization," Comparative Education Review 43(3): 251-82.



The table below shows the results when the control group includes those states in which more than 40% of students were enrolled in the state school system in 1992, whereas the treatment group includes those states in which fewer than 30% of students were enrolled in the state school system in 1992. In this case, the states excluded are those in which the percentage is between 30 and 40.


Table A1. Results, first alternative definition of control and treatment groups


The results are nearly identical to those of the original model except for the middle-school dropout rate, which is now estimated as positive and significant. This estimate implies that the decrease in the dropout rate was greater in the control group than in the treatment group. One possible explanation is that the excluded states (those originally in the treatment group) performed very well in terms of reducing the middle-school dropout rate, and once they are excluded, the estimated improvement in the dropout rate for the treatment group as a whole declines.

The table below shows the results when the control group includes those states in which more than 40% of students were enrolled in the state school system in 1992, and the treatment group includes those states in which fewer than 20% of students were enrolled in the state school system in 1992. In this case, the states excluded are those in which the percentage is between 20 and 40.


Table A2. Results, second alternative definition of control and treatment groups


The results differ for only two variables: the dropout rate and the cohort survival rate in middle schools. The change in the result for the first variable has already been discussed. The second, the cohort survival rate in middle schools, is now estimated to be negative and statistically significant. If the states in the treatment group had shown greater improvement after enactment of the agreement than those in the control group, this parameter would be positive. The estimate therefore reflects the fact that after 1992 the cohort survival rate increased more in the control group states than in the treatment group states.

Overall, the new calculations indicate that the estimates are robust and do not depend on how the treatment group is defined. The two groups, i) the states in which a larger share of students were enrolled in the state school system in 1992 (control group), and ii) the states in which a smaller share of students were enrolled in the state school system in 1992 (treatment group), appear to behave homogeneously, so the results do not vary when the groups are modified to include different states.
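The group construction and difference-in-differences comparison described in this appendix can be sketched as follows. This is a minimal illustration with simulated numbers, not the paper's data: the enrollment shares, indicator values, and thresholds are hypothetical stand-ins, and the paper's actual model (1) is a regression specification not reproduced here.

```python
import numpy as np

# Illustrative state-level data (NOT the paper's dataset): for each of
# Mexico's 31 states, the percentage of students enrolled in the state
# (as opposed to federal) school system in 1992, plus the mean of some
# education indicator before and after the 1992 ANMEB agreement.
rng = np.random.default_rng(0)
state_share_1992 = rng.uniform(0.0, 80.0, size=31)   # percent

# First alternative definition from the appendix: control group has
# shares above 40%, treatment group below 30%; states with shares
# between 30% and 40% are excluded from the sample.
control = state_share_1992 > 40.0
treatment = state_share_1992 < 30.0

# Simulated pre- and post-1992 indicator means; both groups improve,
# mirroring the paper's finding of across-the-board improvement.
pre = rng.normal(70.0, 5.0, size=31)
post = pre + rng.normal(8.0, 2.0, size=31)

# Difference-in-differences: the extra post-1992 improvement of the
# treatment group relative to the control group. A positive value
# would indicate greater improvement in the treatment states.
did = (post[treatment] - pre[treatment]).mean() \
    - (post[control] - pre[control]).mean()
print(f"DiD estimate: {did:.3f}")
```

The raw-means comparison above captures only the sign and magnitude of the differential change; the paper's estimation additionally involves a regression model and significance tests, which this sketch omits.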